
Time: Analysis

±∞ ¬∆@ST:


 SUMMARY

INTRODUCTION: Analysis in mathematics.

Definition. Differences of algebra and analysis.

I. RASHOMON EFFECT: ∆@ST perspectives on analysis.

Analysis on Time: the 5 Dimotions of the Universe.

Algebraic operand as mirror of Dimotions. Calculus applied to them.

2nd Dimotion: Analysis of locomotions

3rd Dimotion: Analysis of Reproduction

4th Dimotion: Analysis of Entropy.

Analysis on Stœps and a(nti)symmetries: S=T

5th Dimotion: Analysis on scales: ∆±¡: Polynomials approximating analysis.

Analysis on Mind.

1st Dimotion: Analysis of perception

II. The 3 AGES of Analysis.

1st age:  Limits vs. finitesimals

5D View on ‘finitesimals’:

Finitesimals in time.

Finitesimals in space

Finitesimals in scale.

Power Series: from Archimedes to Newton.

2nd Age. Calculus: Derivatives and Integrals

ODEs

Simple derivatives and integrals: Change in a single Plane.

Single derivatives

Single integrals.

Main functions under ∫∂.

Main actions described by ODEs.

PDEs.

Change in multiple Planes and dimensions.

Partial Differential Equations. Calculating Changes through multiple planes and variables.

Approximation through polynomials.

Main functions of multiple derivatives.

Multiple Integrals. Calculating whole T.œs in Space.

Main functions of multiple integrals.

Main actions described by PDEs.

Calculus of variations.

Exploring beyond the ∆±2 scales.

3rd Age. FUNCTIONALS

Functionals of functions, operators.

Main functions of Functionals.

Scales and actions described by functionals: quantum equations.

DISCONTINUOUS ANALYSIS: FRACTALS.

Fractal mathematics: DISCONTINUOUS DERIVATIVES, its steps.

 

 

Abstract. This post is dedicated to Analysis, with 2 fundamental uses: the study of the ‘Dimotions=changes’ of the Universe, which expands classic analysis from a single ‘Dimotion’ (locomotion) to pentalogic (5 interacting Dimotions), and the study of finitesimals, the parts of spatial organic wholes or the minimal actions of a time worldcycle – the least understood part, which is why we also call Analysis, ∆nalysis.

The 5 Ðimotions of reality, mirrored by all languages and systems, define the main symmetries between the two ‘units’ of mathematics – the geometric, spatial, continuous points (S@) and the sequential, scalar, social numbers (∆ð).

Points are a ‘dominant’ spatial, mental view, which reside in a single plane and which the ‘mind’ perceives in stillness, with no internal scalar form, and with continuity when associated into lines, planes and volumes; even though in reality they have ‘fractal content’ as Non-Euclidean points, and hence breadth. Their lines are therefore waves, able to communicate the external form and internal energy, or fractal networks that branch to connect multiple points; and their planes, intersections of three of such waves or networks, which form topological organisms…

Numbers then are dominant in ‘scalar’ social properties and sequential, temporal, causal properties, best suited to describe the inner ‘vital energy’ of those points, its discrete configurations, and all the functions that express its Dimotions. It follows that numbers are more complete than points, as time-motions are more fundamental than the spatial mental forms we make of them.

However, in the entangled Universe ALL its ST-beings, as fractals of the whole, participate in the 5 Dimensional Motions of existence, ¬∆@st. So they will have limits, ¬, in space and time beyond which they break their form into entropy; they will have ∆-scales internal to the T.œ (cells, atoms, etc.) while being inscribed in a larger world, ∆+1, that sets those limits; they will display the 3 canonical topologies of space of any 5D system in their own organic parts (|imbfields, bodywaves & particheads), evolving through an arrow of increasing information parameters and decreasing motion through their 3 temporal ages. And to survive and function, they need NOT to be blind in that vital, entangled Universe. So the T.œ will have an @ singularity-mind of relative stillness that acts as the center of a referential language, able to perform the 5 @tions of existence, which reflect, in a fractal, minimalist cyclical bit of time quanta and a local space territory, its need to perform those 5 Dimotions of relative exist¡ence within a limited number of perceptive scales: feeding on lower entropy, ∆-4, to move; on lower information, ∆-3, to perceive; on lower energy, ∆-2, to feed; and emitting lower ∆-1 cells to reproduce, evolve palingenetically and emerge as a clone, ∆º, of the being – and thus survive beyond the ¬ limits of time=death and space=decay its T.œ will experience.

As all is a fractal mirror of a larger world-whole, all these properties will not only fully define all that is needed to know of the worldcycle of exist¡ence of a relative T.œ, but will also describe its parts, its wholes, its organs, and even its languages, whose internal structure will reflect in its grammar and parts all those elements as well.

So we will also be able to describe numbers, with the Pentalogic of ¬∆@st, as a mirror reflection of the larger Universe. And yet, because huminds understand next to nothing of the larger picture, of the fractal game of exist¡ences, and are, as all other points, self-centered in their mind-language, limiting the vital properties of all other entities, ab=using them as open systems of entropy from which to extract motion for their @tions (actions performed by the will of a mind), this clear picture of the entangled, self-similar Universe will be distorted by the æntropic principle: anthropomorphism (man as the center and highest point of existence, which therefore must debase everything else, including the intelligence of the Universe, to come on top) and entropic behavior (man as the top predator destroyer of worlds, absorbing energy for his own creative processes). So we must consider all humind languages as mirrors of reality distorted by the æntropic principle, and hence half-truths, which this blog will try to enlighten, so that the picture – nebulous by lack of information, or too clear by simplification – acquires the richness of grey proper of the pentalogic entangled Universe.

Foreword.

The content and limits of those posts. 

As usual, we recommend reading at least the introductory pages of the main post of the web, to understand what the 5-Dimensional fractal Universe and its fundamental elements – Space, Time, ∆-scales and linguistic @-minds, the four components of all Supœrganisms of Time-space – are.

The fundamental purpose of this and all other posts on maths is to UPGRADE the understanding of mathematics as a language mirror of the vital, organic, scalar properties of the space-time Universe. So we are not so much advancing maths beyond the upgrading of Non-Euclidean mathematics, but interpreting maths as an experimental science; that is, as a mirror of the space=geometry, time=logic/algebraic and scalar=digital, social properties of all what exists.

The Universe is an entangled fractal game of Dust of space-time, ¬∆@st, where each element flows as a series of ‘5 Dimotions’ (dimensional motions of time-space), which can be perceived as ‘form=space’, in the stillness of a world mirror or linguistic mind, or as a motion of time, its true nature, since MOTION, not form, is the underlying substance of reality.

So all fleeting forms, ‘a Maya of the senses’, will return to motion and die (¬, the 4th Dimotion of entropy, death and dissolution). Its 4 positive elements – organic scales, topologic planes, and time ages and actions – however, carry the system as a finite superorganism of space with a finite time cycle.

And so those 4-i elements – entropy (¬), Scale (∆), Time (T) and Space (S) – are the elements all languages mirror, either in a ternary grammar (if scale is missed), where often instead of Space we talk of information, and instead of Time we talk of energy or motion, in a single plane:

Light language: red-energy colors, blue-information colors and its green/yellow combinations.

Verbal Language: subject (information) < verb (action-combination)>Object (energy of subject).

Algebra: Y: Future-information < Operand-action > F(x)

Trinity is thus the logic of most beings. However, as humans reached higher and lower scales of observation, a pentalogic became possible, and its mathematical mirror became analysis, with its operands that extract finitesimals (∆-1) or integrate smaller or larger systems into wholes (∆+1).

Thus with the inclusion of Calculus, maths became the most complex, best mirror language of human thought, arguably overcoming with the age of calculus the verbal mirror, which is the natural language of man; especially with the arrival of instruments of measure, which could also cast reality into the digital language of social numbers – identity species that further proved the social, scalar nature of the Universe.

So with the modern age, past the simpler age of geometric maths, a new language of social thought and scale – numbers and calculus – enthroned mathematics as the queen of all languages.

Needless to say, in a field as ginormous as mathematics, we shall keep it simple. To that aim, we shall often use texts from one of my favorite books of my learning years – the 3 volumes of ‘Principles of Mathematics and its Applications’ by Aleksandrov, who, if still alive, would love to see, as a member of the ‘dialectic=dualist’ soviet school of thought, this upgrading of his philosophy of science.

But mostly from Wikipedia, because of the advantage of its format, which can be copy-pasted to WordPress without losing its formulae. Plus the obvious fact that we are trying as much as possible to make the blog respect the new copyright laws (a clear form of censorship of the only thing the web is worth for – to evolve the subconscious collective mind of mankind). So Wikipedia can be used without those problems, and the oldest books of science used here, from the Soviet school, also have ‘Russian copyrights’ made available, as they were state copyrights (so we use Aleksandrov’s for maths and Landau’s 11-book encyclopedia of physics for physics, besides Wikipedia – the books that in my teens, back in the 70s, I used to self-teach myself the essence of both disciplines, which I had annotated with my first insights on 5D, and which will, time permitting, become little jewels like the last Ramanujan notebook (: that will be the day…

We repeat that we cannot go into full depth, as we build this work slowly from 30 years of notebooks, so the work will always be in progress, always improved. And as Descartes – who had a quite adventurous, drifting life, as this writer did – put it at the end of his ‘Geometry’, we shall ‘stop here’ (at my finite point of my worldcycle) for future generations to have something to do (:

So we have selected pieces of Analysis, often copying the data from Wikipedia and Aleksandrov, which I recall having a special vital meaning, to be illustrated slowly in ‘new layers’ of work as we keep building up the web; and we leave out all the work I have not resolved into vital mathematics or don’t remember (as most of my work on 5D was done in my 20s, before I escaped the military service and ruined my life as a scholar)… If I live longer, find my annotated notebooks and care to translate them, we shall improve the web… So far my intention is to reach a level of work at which a well-intentioned, humble scholar soul realizes the discovery of a new gem of thought, to improve with future researchers. This post is dedicated to analysis, the main sub-discipline for the study of ∆t (scalar and temporal motions).

Analysis reflects the 5 Dimotions of the Universe in its mathematical mirror – its 5 changes=timespace events – acting on simplex operands that study change within a single plane. As such, analysis studies change both in multiple elements together in a single plane (ODEs) and across multiple scales (PDEs).

As languages are mirrors of the fractal Universe that follow its same laws, we shall study Analysis as we do any other ‘species’ of reality: first showing it is a mirror of the 5 Dimotions, whose ‘syntax’ is built on the pentalogic elements of all Dust of space-time (¬∆@ST). To ease its study, though, we shall use a sequential ‘humind’ exposition, building its growing ‘informative complexity’ through its 3 ages of evolution.

 

INTRODUCTION

ANALYSIS IN MATHEMATICS.

Analysis within mathematics as a mirror of the Dimotions and scales of reality.

Math is the closest language-mirror of space-time; likely the inner language, in ‘topological’ terms, of the ‘galatom’ symmetric scales (atoms and galaxies). And so all in between can be mirrored by its laws, which are, as mirrors work, the most reduced, synoptic view of the essential properties of scalar organic space and cyclical pentalogic time.

Pentalogic – the fundamental complex entangled structure of a communicative Universe – means that each element, while specialized in one Dimotion and structural function, is connected to the others, and hence performs tasks in the other Dimotions, since all beings, to exist, must participate in the 5 elements/Dimotions of reality.

This in maths means each element and subdiscipline can also be used to ‘approximate’ other functions. I.e., polynomials can approximate scalar evolution even if analysis is better, and one mirrors the other through Maclaurin series; topology can approximate algebraic systems through the S=T symmetry; social numbers can reflect causal time sequences through lineal series, and so on.
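
To make the polynomial-vs-analysis mirror concrete, here is a minimal sketch (plain Python, with hypothetical example values) of how Maclaurin partial sums ‘approximate’ the exponential function – the polynomial mirror converging on the analytic one:

```python
import math

def maclaurin_exp(x, n_terms):
    # Partial sum of the Maclaurin series: e^x ~ sum of x^k / k!
    return sum(x**k / math.factorial(k) for k in range(n_terms))

x = 1.0
for n in (2, 4, 8, 16):
    approx = maclaurin_exp(x, n)
    print(f"{n:2d} terms: {approx:.12f}   error: {abs(math.exp(x) - approx):.2e}")
```

Each doubling of the number of polynomial terms sharply shrinks the error – the unfocused polynomial mirror refining toward the analytic function.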

This said, Maths’ main 4-5 disciplines mirror the 4-5 structural elements of any dust of space-time, ¬∆@s≤≥t – scales, minds, space, time and its entropic dissolution and a(nti)symmetries – see the general post on maths and philosophy of mathematics for details. Let us also recall that entropy, as the negative, destructive element/Dimotion of reality, is often not defined/coded (so we have 4 quantum numbers, 4 genetic letters, 4 dimensions of humind’s space, and so on), but is merely the inverse function of an operand or coding element; which means in mathematics an inverse operand, always present in a good mirror as math is (so sum vs. negative subtraction, product vs. negative division, etc.).

Still, roughly we can consider each of the 4-5 main sub-disciplines of math to mirror better one of the simplex Dimotions in a single plane (Euclidean geometry, social numbers and analytic geometry), while the 2 complex Dimotions of social evolution and reproduction are better mirrored by the complex forms of math (Algebra and Analysis):

  1. S: ¬E Geometry studies fractal points with parts, ∆-1, joined as topological networks of simultaneous space and its ∆º networks, within an ∆+1 world domain. Euclidean geometry, though, ignores the ‘parts of the point’, and so it is best suited to describe the 2nd Dimotion of external form and locomotion. Its unit is therefore the fractal point, whose inner vital, indistinguishable energy and social laws are best described by:
  2. T§: Social Number theory studies time sequences and ∆-1 social numbers, which gather in ∆º functions, parts of ∆+1 functionals. Its inverse, decay processes, are entropic and best described by some of its key ‘universal constants’, such as e. So it is the best mirror for simple herding and entropic 4th Dimotions.
  3. 1D @: Analytic geometry represents the different mental points of view, self-centred into a system of coordinates, or ‘worldviews’ of a fractal point, of which naturally emerge 3 ‘different’ perspectives according to the 3 ‘sub-equations’ of the fractal generator: $p: toroid Pov < ST: Cartesian Plane > ðƒ: Polar co-ordinates. So the 3 simplest Dimotions are roughly in correspondence with the 3 simpler, first forms of maths; while the more complex Dimotions of social evolution and reproduction act across different planes.
  4. S<≈>T: ¬Ælgebra studies through operands the different Dimotions, from single S=T stœps to larger associations of Dimotions in more complex ∆+1 structures (functions). And further on, it performs SIMULTANEOUS ANALYSIS OF superorganisms in space, through the study of the a(nti)symmetries between its space and time dimensions (group theory)… So Algebra is first the science of operands that translate into mathematical mirrors the Dimotions of space-time, and then builds up from them, as the Universe does – building up from actions, simultaneous organisms in space and worldcycles in time, in different degrees of complexity – mirrors for all those elements of the 5D¡ universe.
  5. ∆t: ∆nalysis studies ALL forms of time=change, and hence it can be applied to the 5 Dimotions of any space-time being, as long as we study a ‘social structure’, hence one susceptible to be simplified with ‘social numbers’. We thus differentiate 5 general applications of Analysis according to the Dimotion studied and the ‘level’ of analysis, from the minute STeps of a derivative, to larger social gatherings, to changes of entire planes (functionals). It is then not surprising that despite analysis being first derived from algebraic symmetries between numbers, it grew in complexity to study changes in functions (first derivatives/integrals), and then changes of changes of functions, as motions between scales of the fifth dimension (higher-degree ∫∂ functions, called functionals).

Further on, the elements of mathematical reality reflect in equations the elements of Dust of space-time with its 4 fundamental ‘synoptic’ forms, as all language mirrors reduce reality to parts that carry less information. So a T.œ becomes a point with internal parts, called numbers, whose Dimotions are expressed by operands that enact the actions of the being in ‘mathematical space’ from its point of view or ‘frame of reference’:

  • A point is the first perception of a T.œ in mathematical ‘space’, which as we come closer acquires content. And then its next degree of complex description occurs:
  • A number is a social group of indistinguishable ‘internal’ parts of a point, which represents the point in scale. And then the next degree of complex analysis occurs:
  • An operand expresses the transformation of a point through a Dimotion of timespace; and since there are five Dimotions – 3 continuous Dimotions in a single plane (perception, locomotion, reproduction) and 2 discontinuous Dimotions that start and end in different planes (entropy and social evolution) – we shall find 5 basic operands of those Dimotions in any ‘algebraic structure’ that truly reflects the being in the mathematical mirror.

But only the integral and derivative can study all those Dimotions of space-time; hence they are the king and queen of the operands of algebra, which is why analysis is so extensive.

Comparison between Algebra and Analysis.

As Analysis sprang from Algebra, we must distinguish their fields of inquiry, paralleling the evolution of the humind. Essentially, if you understand pentalogic – in the same way your stomach is an entropic system which actually also thinks, etc. – each subdiscipline specializes in a Dimotion or structural element, but it is also useful for all the others.

So Algebra IS more focused on space-forms and its simultaneous group structures (S<≈>T), and analysis is more focused on time motions and its scalar laws of change (∆T) among 5D planes of spacetime:

“Algebra studies a(nti)symmetric <≈> transformations of 5D space-time dimotions in a mathematical §œT, through its inverse ‘OPERATIONS’ that reveal the initial and final dimotion of the being, perceived as a whole in a relative PRESENT-SPATIAL, static state’.

Algebra IS more concerned with spatial, simultaneous structures, joined by the <=> symbol of equality and symmetry.

Analysis specializes in time-motions and rates of change between planes. So Analysis, ∆t, IS the study of all the Dimotions of space-time beings from the point of view of its mathematical mirror. In 5D, though, we must consider not one but 5 different Dimotions, and so a more thoughtful consideration is needed of each operand, ∂∫, of Analysis, and of how they act on different Dimotions and different scales.

In terms of scales, analysis is concerned with the creative and destructive processes which add or subtract NEW planes of existence, through the integration of multiple small parts that become wholes, or their reduction through derivatives; focused on the VARIABLES, or parameters of change, which is maximised by dimotions between ∆±i planes of the 5th dimension.

For example, in the case of volumes, areas and lines, it studies how to grow or diminish in dimensions of space. In the case of distances, speeds and accelerations, it studies the growing or diminishing ‘tail of past, present and future’ of the system: distance is the summation of ‘past speeds’ that have become distance passed; the present speed is what a derivative leaves unchanged (Lagrangians tending to zero, infinitesimal calculus); and the relative future is ‘forecasted’ by the acceleration of the being.

So when we derive and integrate in space we are subtracting or adding dimensions of static space; and when we do so in time-motion we are adding past tails (the integral of speed that gives us the distance moved), or forecasting future accelerated growths, or limiting our time-span of analysis departing from the present derivative of the being.
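
A minimal numerical sketch of that triad (with an assumed test motion, v(t) = 3t²): integrating the ‘past tail’ of speeds recovers the distance moved, while a finite-difference derivative ‘forecasts’ the acceleration:

```python
dt = 0.001
ts = [i * dt for i in range(10001)]            # 0 .. 10 seconds
speed = [3.0 * t**2 for t in ts]               # v(t) = 3t^2 (hypothetical motion)

# Integral of speed -> distance: summing the tail of past steps (trapezoids)
distance = 0.0
for i in range(1, len(ts)):
    distance += 0.5 * (speed[i] + speed[i - 1]) * dt
print(distance)                                # ~1000.0 = t^3 at t = 10

# Derivative of speed -> acceleration: the forward 'forecast' at t = 5
i = 5000                                       # index of t = 5 s
accel = (speed[i + 1] - speed[i - 1]) / (2 * dt)
print(accel)                                   # ~30.0 = 6t at t = 5
```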

Algebra, instead, is more concerned with ‘STŒPS’ in a single plane – its ST<≈>TS stop-and-go dimotions and their a(nti)symmetric changes.

I. ∆NALYSIS’ PENTALOGIC

 

The Rashomon effect on Analysis as the study of multiple scales and dimensions of space-time.

Analysis reveals the inside workings of the 5 Dimotions of space-time and its scalar structure.

A simple example will suffice. We said that the fundamental particle of the Universe is a superorganism, which in its simplest, commonest form has the shape of a sphere with 3 regions (graph). It is then evident that we can measure those regions as:

Mind-Center, which can be measured by the radius, as the ‘DNA’ or ‘territorial owner’ of the organism moves around the system; Membrane of angular momentum, or clock, which can be measured by the circumference; and volume of vital energy, which can be measured by the area. So we get the value of the 3 elements of a disk; and expanding the concept to a volume, we find a volume for the vital energy, the surface of a sphere for the membrane and a perimeter for the wanderings of the singularity. Alas! As it turns out, the area of a circle is $A = \pi R^2$, and the circumference is $C = 2\pi R$, which is its derivative. The volume of a sphere is $V = \frac{4}{3}\pi R^3$, and the surface area is $S = 4\pi R^2$, which is again its derivative. And inversely, the integral of the circumference is the surface, and the integral of the surface is the volume. So analysis indeed becomes the essential tool to understand the dimotions, parts and growing scales of parts and wholes of the fifth dimension – the reason why it is essential to understand the correspondence between analysis and 5D laws, motions and scales.
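
A quick symbolic check of those statements (a sketch using the SymPy library; not part of the original text):

```python
import sympy as sp

R = sp.symbols('R', positive=True)

# Derivative of the vital area/volume gives the enclosing membrane...
print(sp.diff(sp.pi * R**2, R))                      # 2*pi*R    (circumference)
print(sp.diff(sp.Rational(4, 3) * sp.pi * R**3, R))  # 4*pi*R**2 (surface)

# ...and integrating the membrane re-creates the whole it encloses.
print(sp.integrate(2 * sp.pi * R, (R, 0, R)))        # pi*R**2
print(sp.integrate(4 * sp.pi * R**2, (R, 0, R)))     # 4*pi*R**3/3
```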

The previous example shows the main and obvious usefulness of analysis in static space: to describe through 3 levels the 3 parts of the being, which is the ultimate reason why derivatives are so useful even when considering merely 2 of them.

Yet analysis is not often used in static terms, though this was likely its first use (to calculate volumes from areas). It is in fact used to study motion – change in time – and, we shall argue, also scales. In that sense, as we shall repeat ad nauseam, the entangled Universe, which shows a clear correspondence between the mirror elements of 3 motions in time, 3 topologies of space and 3 scales of size, wholes and parts – which bring together the 3×3 (+2 mental) = 11 dimensions of reality – is fully realized in the fact that analysis works to explain the 3 ‘ternary symmetries’.

Indeed, let us consider the previous example. It is obvious that we can consider the sphere to be the whole sum of parts, where each part is a circumference; so our planet is the sum of all its ‘parallels’ with center in the poles. And then the volume, as each internal sphere has in terms of 5D metric, $ x ð = K, the same co-invariant value, can be considered the sum of all those equal 5D-valued spheres; so again we can talk of ∫∫ ¡-1=circumference -> ∫ ¡0=sphere -> ¡+1=volume.

What about the third ‘ternary symmetry’, that of time-change? This again is the fundamental use analysis has today: to study the rates of change of a system. And it can be seen easily that the 3 elements of the T.œ ARE measures of time-change when we study not a mere locomotion, but the ‘change-rate’ of growth, more proper of the worldcycle of existence, from ‘seed’ (the internal minimal sphere) to emergent system:

If you describe the volume, $V$, in terms of the radius, $R$, then increasing $R$ will result in an increase in $V$ that is proportional to the surface area. If the surface area is given by $S(R)$, then you will find that for a tiny change in the radius, $dR$: $dV = S(R)\,dR$, or $\frac{dV}{dR} = S(R)$.

The increase in volume, $dV$, is the amount of new ‘cellular layers’ the system grows, and the amount of new cells forming the membrane is just the surface area, $S(R)$, times the thickness of the growth, $dR$, where each unit is a layer.

This same argument can be used to show that the volume is the integral of the surface area (just keep adding layer after layer of atoms or cells).
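
Numerically the layer argument reads as follows – a minimal sketch with an assumed radius and layer thickness, checking that the growth per unit layer matches the membrane:

```python
import math

def V(R):                       # the 'whole': volume of the sphere
    return 4.0 / 3.0 * math.pi * R**3

def S(R):                       # the 'membrane': surface area
    return 4.0 * math.pi * R**2

R, dR = 2.0, 1e-6               # hypothetical radius and layer thickness
growth = V(R + dR) - V(R)       # volume of one thin 'cellular layer'
print(growth / dR, S(R))        # both ~50.2655: dV/dR = S(R)
```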

Derivative vs. integral in time and space. 

The conclusion is obvious: DERIVATIVES AND INTEGRALS CAN CALCULATE THE RATE OF CHANGE OF THE FUNDAMENTAL 3 ELEMENTS OF REALITY: ITS SPACE, ITS TIME=CHANGE AND ITS SCALES. As we now know those are the true variables of reality, ∆ST, we can then assess with much more depth the question of what derivatives and integrals are.

The first question was posed by Leibniz vs. Newton, regarding what an infinitesimal part – the UNIT of derivatives – is. The answer, now that we know the Universe is a fractal scalar system in which each SCALE has a minimal quanta in SPACE, antisymmetric by virtue of 5D metric, SxT=K, to its frequency of information, but symmetric to its inverse, 1/ƒ=T, duration in time (so all systems have a similar life span), comes with the realization that an infinitesimal always has a cut-off limit: it is a finitesimal. And we can obtain through a derivative a finitesimal unit of time, space or scale – a minimal action, a minimal point-volume and a minimal cellular quanta.

On the other hand, the inverse Integral will allow us to ‘integrate’ over one of such units of time, space or scale a simultaneous whole, a superorganism, a T.œ: ∫ds, ∫dt, ∫∆-1.

Of those 3 types of derivatives and integrals, obviously the least understood (as frequency and time duration are inverse parameters currently used in all sciences) is ∫∆-1, where ∆-1 is taken to be the infinitesimal or minimal quanta of a whole, ∆º (cell, atom, individual in a society). So we shall dwell more on the interpretation of ANY derivative or integral in terms of parts and wholes, which leads us to the concept of a derivative that mimics the 4D entropic dissolving function, ∆º<∆-1, and the 5D integral, which mimics the creation of wholes.

This other possible use of derivatives and integrals, however, must be taken with certain caution. Indeed, what we have found in the analysis of the sphere by taking two derivatives is THE Point – not any point, but the center of mass or charge in a physical system. Whether a derivative can give us any such point depends on the configuration of the whole we analyze. In a heat equation the whole lacks a center, as it is an entropic flux, and so on.

So what we shall do in the posts on 5D Physics, its dimotions and scales, is to slowly add the main equations of the discipline, with further analysis of the why of their mathematical calculus.

This ternary use of integrals and derivatives is what we must always have in mind to interpret the equations of mathematical physics. Infinities, though, don’t exist, as all has a finite membrane and a finite duration in time. And as we iterate the operations of integral and derivative, it is also evident that beyond the third derivative, as the scalar Universe is a ‘ternary game’, there is no significance to the mathematical operations of derivatives and integrals – a very strong proof indeed that 5D is true, as it limits reality to such ternary scales, topologies and time ages.

It is then essential to clarify the immense possibilities of the use of a derivative, which ‘extracts’ a point from a function, and hence is normally related to the ‘concept’ of an infinitesimal (as in $(\log x)' = 1/x$), but can also lead inversely to the value of a ‘whole’, the ‘center point’ of the system; and the concept of an integral, which adds infinitesimals into a whole, as in the case of a volume, but can also lead to the dissolution of a whole into its integrative parts.
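
A two-line symbolic check of the formula just cited (SymPy sketch):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.diff(sp.log(x), x))    # 1/x: the derivative extracts the '1/n' finitesimal
```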

And so in 5D analysis, depending on ‘motion’, on ‘space or time state’, on scale (up or down), and on whether we apply the integral or the derivative, we shall have very different results that might seem conflicting. And so for each case it will mean a different thing.

How does classic science resolve this conundrum? LOL, classic science doesn’t care to ask the deep questions about the whys of the entangled Universe and the mirrors languages put on larger realities and its inflationary variations. So such a question doesn’t even come up in the ‘ego’ of the humind, who reduces the world of information to that which is useful to its program of survival, and doesn’t give a fuk for the rest of it.

So what we shall do as the blog progresses is to dedicate, in the sub-posts of physics on the 5 Dimotions of the Universe, a thorough analysis of each case of the fundamental laws of physics – which, as the ‘classic science of motion’, uses overwhelmingly derivatives and integrals in its analysis of reality – to specifically consider the meaning of the ‘background language’ used to describe, in mostly reductionist parameters, the dimotion of a physical system. Here we shall just comment on the fundamental mathematical laws of analysis, in an ever work in progress.

Pentalogic: the Rashomon effect of Analysis: its multiple forms and functions.

Because of the 5 Dimotions and 5 structural elements, ¬∆@S≈T, of all systems, pentalogic – which we often call ‘the Rashomon effect’, as 5 points of view are needed in the film to find a higher whole truth – applies especially to ‘complex languages-mirrors’ of reality such as analysis:

In the graph, we can see the multiple perspectives and functions of Analysis:

SPACE-TOPOLOGY: It can be used to study (Left) structurally the role of the 3 elements of a TOPOLOGIC SPATIAL SUPERORGANISM:

 – Its membrane (line integrals), its vital space (surface integrals) and singularity (derivatives):

In its inverse operand, the derivative can measure the ‘form’ of the membrane – its curvature and tangential value. It can give us the value of one of its infinitesimal ‘cells’. The internal volume of spatial energy is studied with integrals. And we can extract information on its central @-singularity, which commands the lineal motion of the whole system.

Time-CYCLES: IT CAN MODEL THE standing points, maxima and minima, WHICH SIGNAL THE changes between ages, where the derivatives become null, as the ‘worldcycle of existence’ changes its ‘phase’.

∆-Planes: It can allow us to peer down the existential planes of the fifth dimension, down one or 2 ∆±1 planes (double derivatives).

COMPLEX STŒPS OF DIMOTIONS:

Further on, as derivatives and integrals are inverse operands, they can combine in algebraic S≈t equations (down) for more complex descriptions of multiple events, even cancelling each other to give us a ‘constant social number’ as an exact quantitative result for a specific value of a sequential ‘sœt’ of stœps of Dimotions.

So the ‘Rashomon effect’ of pentalogic shows how analysis satisfies as a mirror the description of the main components of the being, further enhanced by the fact that ODEs can combine in several stœps the 3 elements – time integrals/derivatives, spatial and scalar integrals/derivatives – in a single function:

– Temporal view, in time curves that often use cyclical time frequencies (Fourier transforms) to describe a combination of time and scale events (wholes decomposed by the transform) – sketched in code at the end of this section.

– S=T: combined Time-Space view, which resolves symmetries between time dimensions and space motions (S≈T), expressed by ODEs of two variables – a parameter of space that changes with a dynamic function/action/motion in time. It can measure through closed membranes information from the inner vital energy of the system as it moves through time (continuity equations).

– Ceteris paribus S=T VIEW, when one of the parameters/dimensions is fixed, belonging to space, while the other, a time motion, is operating, or vice versa – perfect to mimic the stop-and-step discontinuous form of most time-space dimotions.

– Spatial view, through line, surface and volume integrals: forms that measure the 3 ∆±¡ elements of a T.œ’s topology, made of ∆-1 points (atoms, cells, individuals), both in time and space (a population or a distribution).

If analytic geometry resolved geometric spatial problems algebraically, with ‘a dual point of view’ that increases the ease of solutions (Descartes), showing the algebraic laws of solution by rule and compass for geometrical problems, calculus took this approach to S=T symmetries to a much higher level.

– ∆: Scalar view, which defines infinitesimals (Leibniz) as the 1/n cells/parts of the being, given also by a derivative, since the minimal rate of change of any system is one of its ∆-1 units.

In that regard, the earlier astonishment of physicists and mathematicians who found that a derivative (the analysis of a step of motion) was inverse to a volume of space (an infinitesimal of a population) is a deep proof of the essential symmetry between space=still states and temporal motions (stops and steps = stœps).

Of all these Rashomon views, however, the fundamental use of analysis is obviously the study of timespace Ðimotions. It started, as usual with human beings, with the study of the simplest dimotions – locomotions (speed) – and we shall use liberally the notions of these posts in those of mathematical physics.
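
As a sketch of the temporal view mentioned above, the Fourier transform indeed decomposes a ‘whole’ time signal into its cyclical frequency ‘parts’ (a NumPy example with assumed test frequencies of 5 Hz and 40 Hz):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)              # 1 s of samples
whole = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

spectrum = np.fft.rfft(whole)                                 # whole -> parts
freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])

# The two strongest components recover the hidden cyclical frequencies:
top = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(top))                                            # [5.0, 40.0]
```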

Algebraic operands. S≤≥T

As Analysis sprang from Algebra, it is often considered part of it. So we introduce some basic concepts of 5D algebra from the post on Algebraic equations.

We could say that simplex algebraic operands ‘topped’ their evolution with the discovery of the ∫∂ operands:

A SœT – the slightly changed name for a set of T.œs – is any kind of indistinguishable entities=numbers=points=∑œ connected by one of the possible ‘a(nti)symmetric’ relationships of ilogic, existential algebra, defined by the inverse operands of the 5 Dimotions of the Universe: ±, ≤≥, x/, √xª, ∫∂…

As such, operands play a fundamental role in Analysis, which, as all physical equations do, relates complex processes of trans-formations of Sœts through different dimotions of space-time.

Regarding operands, it is not so simple to ascribe each operand to a single dimotion, as they are, once more, entangled operations, which while being preferential to a given Dimotion, do participate in the others – remember that LANGUAGES AS MIRRORS OF REALITY also have the same entangled properties of the pentalogic, ¬∆@ST Universe.

Still, it will be quite often that we establish such direct relationships. So as ‘specialized’ operands, the closest correspondence to each dimotion is as follows – taking into account that for each operand we must distinguish between the classic concepts of ‘space-volumes’ and ‘time-motions’ of science (or else we will go too far); as, generally speaking, a first derivative in space or time defines those dimotions only as S or T, while double derivatives often work for both together.

±: The 2Ð locomotion is best served by the + operand, as we have shown in the analysis of a time tail (motion-memory of a distance), which is a sum of ‘steps’ that can also be calculated with integrals. The sum is also the key operand for the ∆§cales of social groups, in decametric form, which is also served by the logarithm. And so on… The negative operand, however, is profound in many ways scientists do not understand – the reason why so many errors arise, from the negation of faster-than-light speed to the confusion of particles and antiparticles. INVERSE operands are in general misunderstood because, as we have said so often, unlike the paradoxical ¡logic Universe, the humind is @ristotelian, of a single arrow, A->B; so the B->A is quite missed, but generally speaking it is served by the negative function. I.e., a negative spin just has the inverse orientation; a negative coordinate just means to move in the other direction. Negative operands thus are MOST useful for time=motion related systems.

It is important then to understand the existence of one-way dimotions vs. 2-way dimotions, where such operands make sense. Because huminds do not properly distinguish both, they get confused when trying to consider a negative operand for spatial forms (what is a negative apple? nonsense), while there are always negative ‘directions’ for temporal motions (left vs. right motion), which all clearly understand.

3Ð reproduction is mirrored by the product and division operands, as reproduction often requires first a product and then a mitosis or ‘division’ into the whole parts – which again gives division a very precise meaning.

We have already shown that the 4th and 5th dimotions are easily represented in their d=evolutions by the << >> operands. Yet the 4th dimotion of entropic devolution, and the complex integrals of informative perception and social evolution, are also studied by the ∫∂ integral and derivative operands.

Yet when social evolution is not transformative between planes but only a social herd, it emerges through multiple, mostly decametric, 3×3+0 scales, through the √xª operands best suited to that purpose. And here again we find quite difficult the comprehension among huminds, who love to ‘go only the way upwards’, so to speak, of the √ operands, especially in the negative mode, √-x – a completely mysterious element of ‘mathematics’, to the point they call them imaginary numbers (:
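
In code, the ‘mysterious’ negative branch of the square-root operand is just an inverse direction in the complex plane – a two-line illustration with Python’s standard cmath module:

```python
import cmath

print(cmath.sqrt(-1))    # 1j: the root of a negative number changes 'direction'
print((1j) ** 2)         # (-1+0j): squaring it returns to the negative axis
```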

Why those operands do NOT have a one-to-one correspondence to each of the Dimotions of space-time is obvious: each of them has, as everything in reality, a pentalogic, multiple purpose. As we shall not cease to repeat, the basic feature of reality is to be an entangled game of 5 elements which are themselves ‘fractal’ in nature; that is, each of them will be able to perform the other 4 tasks, to become in itself a ‘whole’ being made to the image and likeness of the total fractal, mind of the Universe and its ilogic structure. So while certain operands are clearly more useful for certain dimotions, all of them can be used, to a certain degree of accuracy, to ‘reflect’ a mirror image of the 5 dimotions of existence – themes to be studied in depth in the posts on algebra and ilogic.

It however becomes evident from the beginning that we ascribe the more complex dimotions, which do enclose in their actions the other 4, to the operands of analysis, especially those which ‘transcend’ and ’emerge’ between scales, as they are processes NOT of lineal nature, best served by them.

So basically Analysis is a small part of algebra, but the most important one, as it focuses on a(nti)symmetries between social planes, ∆§, which are either integrated into a higher plane of the 5th dimension, or derived into their parts. But Analysis also goes beyond Algebra, inasmuch as Algebra is a more static, spatial, structural view, whereas analysis considers in depth the ‘motions’ of the set.

In those two definitions we must clarify some terminology:

Sœt or §œT – which the reader should observe is the inverse of T.œS – expresses the modern unit of mathematical thought, constructed, as always by arrogant huminds with a creationist sense of the language, from the roof of the mind to the bottom of reality: the set, in inverse fashion, as a collection of the ‘points of space or numbers of time’ which ARE the real units of the mathematical space and time Universe, gathered then in social collections called functions, connected through the inverse operations that reflect the main symmetries and relationships between ‘herds of points or numbers’ (±, x÷, xª √ log, ∫∂).

The difference then between Algebra and Analysis lies in the different focus on the operations=symmetries between Sœt§, and on their study as steps of timespace motions (analysis of derivatives), then gathered into larger superorganisms (volume integrals) or worldcycles (time integrals), where the existence of limits (singularities and membranes that encircle the system, or set the beginning and end duration of the worldcycle) becomes fundamental to reveal a solution to the equation – the finding of the duration, surface and interaction between the parts of the T.œ, expressed as an event in time or a system in space.

So while the elements of algebra and analysis – equations of SŒTS – are the same, the focus on spatial form (algebra) or temporal motion (analysis) makes them diverge. We can play with the acronym and say that Algebra deals with SŒTs (Space-dominant structures) and analysis with TŒS, time-dominant ones (in motion).

This simple equation of algebra translates most time actions to space, on account of a simple realisation: that space slows down time cycles to accumulate them in still, simultaneous, bigger forms; and as such most spatial dimensions are referred to as Y, a composite of multiple elements of the smaller, much more abundant time cycles that space normally fixes and encircles with its @-membrane.

And ultimately Algebra and Analysis ARE the complex ‘level’ of reality, as reproduction and social evolution are the complex dimotions; including obviously as their ‘background parts’ the theory of numbers, analytic geometry – the study of frames of reference – and topologic analysis, embedded in the secondary operands, numbers and frames in which we ‘cast’ the complex space and time algebraic and analytical analysis of a ‘Domain’ of the fractal Universe.

@nalysis

All languages of perception have a blind spot in their syntax – that is, a relative ignorance of properties and perspectives of reality they are not fit to study.

In analysis it is the linguistic, still, mind-mapping element, since by definition analysis is the study of Dimotions. Still, @nalysis will be the search for the ‘fundamental finitesimal’ part through its derivatives, often dual, descending from the larger world (¡+1) into the being (¡) and its parts.

I.e., y”(e), the double derivative of the Energy (the World parameter), gives us first the ‘existential momentum’ and then the ‘mass’ or active magnitude of the being – its ‘singularity’ in the scale of gravitational forces.

Obviously analysis is NOT as close to the comprehension of the mind as the 3 topologic frames of reference – polar, cylindrical and cartesian-hyperbolic – of analytic geometry are; but inasmuch as the mind is a mental synoptic description, in a particle-point, which holds the will of motion of the system, we can isolate a parameter of a basic dimotion of the particle-point through a derivative, which will OFTEN give us the value of that central POINT of gravity, charge or mind of a physical system, as the balanced or tangential point of a function.

I.e., to obtain the dimotion of speed of a point, we just derive its space of motion with respect to its time duration. Analysis is then important to understand the dimotions of the mind, their differences and the type of trans-form-ations they entail, which is so often forgotten by physicists, who equate all ‘times’ unaware of those differences, as we shall see in the study of the different time arrows in mathematical physics (i.e., when time comes to zero in a black hole, it is NOT the lineal time duration of the system).

Entropy, on the other hand, is the fastest ‘growing’ (or rather diminishing, liberating) process of reality, whose function, as we explain, is the negative exponential, whose derivative – that is, its rate of change – is so huge that it equals the function itself (up to sign); so analysis, and especially all those variations on the theme of exponential growth and decay, fits right in for the study of entropy.
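
A one-line check of that claim (SymPy sketch): the rate of change of the decay function is the function itself with inverted sign, and for pure growth it is exactly itself:

```python
import sympy as sp

t = sp.symbols('t')
print(sp.diff(sp.exp(-t), t))   # -exp(-t): decay changes at its own (negative) rate
print(sp.diff(sp.exp(t), t))    #  exp(t): growth's derivative equals itself
```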

CONCLUSION. ANALYSIS AS MIRROR OF TIME DIMOTIONS.

All this said, we can extract some first conclusions on analysis and its mirror image of the Universe:

  • Analysis, like all elements of reality, reflects the multiple entangled 5 Ðimotions, and unlike other more specific fields of mathematics, it does so with a wider range, being excellent for the study of the growth of dimensionalities in space and motions in time.
  • Analysis is closely related to algebra and the study of specific functions, which correspond to specific dimotions, as the case of the exponential growth/decay function corresponding to entropy-decay processes show.
  • We thus need to connect analysis with algebraic functions, consider which dimotion is best served by which function, and then realize what the specific derivatives and integrals of each function mean in terms of the ways in which dimotions change reality.
  • And so analysis should become the most important ‘language’ of dimotions in the Universe.
  • And it is worth noticing its connection with algebraic functions, to form together the essential mirror of mathematics.

Because we do have 5 Dimotions of existence across 3 different realities – space dimensions, time motions and scales – and the 3 are highly symmetric to each other, it follows that the range of uses of Analysis is truly extensive. And yet, because the Universe is basically ternary (as we perceive either in space or in time, above and below, ±¡), it will be very rare that we find any use for derivatives above the third derivative of a system; or, when the derivative is aptly done on a bidimensional system of spacetime (partial derivatives), beyond its second derivative, as a ‘third derivative’, ST¡<<st¡-2, is the definition of entropic death. And so there is nothing to operate on below that…

Those are therefore the fundamentals of analysis in its deep connection with the fractal, organic Universe.

Thus with this Rashomon-effect analysis of analysis we can now analyse ITS analytical truths, as the humind evolved them through time (:

 

II.

3 AGES OF ANALYSIS

5D Metric analysis is the main experimental tool of modern mathematics, as all Dimotions of physical reality can be enclosed within it; and as such it has grown to be the most important field of Time Algebra and its social, sequential numbers, parallel to Space topology – space with temporal motions. Both form the main space-time symmetry of mathematics.

Algebra, on the other hand, has entered a third ‘formal age’, when systems ‘detach from reality’, looking inwards, due to the axiomatic method and its category, group and set theory foundations, which have lost connection with ‘experimental reality’.

Analysis in that sense departed from Algebra, even if they share the social, scalar, temporal, informative perspective of discrete numbers, as opposed to spatial, single-plane points, whose topological, simultaneous location determines the geometry of space.

But it has evolved faster and better due to its capacity to represent parts and wholes – a tribute to the ∆§calar nature of the Universe.

Birth of Analysis from Algebra.

Departing from algebra as such, analysis came to be the main branch of ‘realist mathematics’ – that is, mathematical systems with applications to describe the real ∆ST-world, its Dimotions and scales; this fundamental element, as ‘finitesimals’ are the spatial parts of a whole and the time actions of a worldcycle, ignored for lack of a 5th Dimension in human science.

So we just need to add a scalar, in-depth understanding of its laws, to better explain its equations and applications to the classic disciplines of science that use it.

The reconstruction of the equations of physics in terms of ∆nalysis thus will be the task carried out in two different posts of this web – generally, and then in the last scales, by me and others, applied to enlighten the details:

  • Mathematical Physics.
  • ∆nalysis.

We will do as USUAL a full diachronic analysis, growing in complexity, using classic texts of mathematics for easier comprehension, enlightened with ∆st insights.

But the focus here will be not so much on the ages of analysis, as it is a modern discipline with few, mostly philosophical insights – on the theme of individuals, infinitesimals and universals – in the first Greek and original classic age (Newton and Leibniz), the introductory themes developed next.

Instead our focus is on the 3 ages of growing complexity and generalisation, as analysis and its 4D ∂ and 5D ∫ operations expand to study MULTIPLE DIMENSIONS of space-time together.

So after making only some basic remarks on the earlier era, we consider 3 scales of growing complexity, which we will term loosely the ‘Calculus’, ‘Analysis’ and ‘Functional’ ages:

  • The classic age of polynomial limits, infinitesimal calculus and simple derivatives and integrals.
  • The modern age of ∫∂ applied to multiple space-time variables (Γst view: ordinary differential equations) with different degrees of depth (∆ view: partial differential equations).
  • And the 3rd age of ∆nalysis, in which Lie groups and/or functionals of functions are the all-extended field of inquiry, causing very profound, all-encompassing attempts to analyse a function or T.œ at all levels.

As we go along, obviously, our purpose is NOT to make a classic text of analysis, but, while considering the main themes, to enlighten them with the insights of ∆st, to resolve the whys of analysis, later applied in detail to the many stiences described today with the formalism of analysis without understanding what those equations truly mean.

The generator equation of Analysis’ ages.

If we were to write a Generator equation in time for the ‘body of analysis’ and its pre- and post-scales of study, we could write the 3±∆ fields of observance of the scalar Universe through mathematical mirrors:

Γ∆nalysis: ∆-i: Fractal Mathematics (discontinuous analysis of finitesimals) < Analysis – Integrals and differential equations (∆º±1: continuous=organic space): |-youth: ODEs<∑Ø-PDEs<∑∑ Functionals ≈ < ∆+i: Polynomials (diminishing information on wholes).

The 3±∆ approaches of mathematical mirrors to observe the scales of reality are thus clear: fractal maths focuses on the point of view of the finitesimals and its growing quantity of information, enlarging the perspective of the @-observer as we probe smaller scales of smaller finitesimals; and at the opposite range, polynomials observe larger scales with a restriction of solutions, as basically the wholes we observe are symmetric within their internal equations, and the easiest solutions are those of a perfect holographic bidimensional structure (where even polynomials can be reduced to products of 2-manifolds).

Now, within analysis proper, we find that the complexity, or rather the ‘range’, of phenomena studied by each age of analysis increases: from single variables (ODEs), to multiple variables (PDEs), to functions of functions (functionals).

So the most balanced, extended field is that of differential equations focused on the ∆±1 organic (hence neither lineal nor vortex-like, but balanced, S=T) SCALES of the being, where we focus on finding the precise finitesimal that we can then integrate properly, guided by the function of growth of the system. And we distinguish then ODEs, where we probe a single ST symmetry, and PDEs, obviously the best mirror, as we extend our analysis to multiple S and T dimensions and multiple S-T-S-T variations of those STep motions; given the fact that a ‘chain of dimensions’ does not fare well beyond the 3 ‘s-s-s’ distance-area-volume dimensions of space and the t-t-t-t deceleration, lineal motion, cyclical motion and acceleration related time motions that can ‘change’ a given event of space-time.
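
A minimal sketch of the ODE case – rebuilding a whole from its finitesimal STeps with Euler integration of a hypothetical growth law dN/dt = kN:

```python
import math

k, dt = 0.5, 0.001          # hypothetical growth rate and finitesimal time step
N = 1.0                     # the initial 'seed'
for _ in range(4000):       # integrate 4 seconds of steps
    N += k * N * dt         # one finitesimal step of change
print(N, math.exp(k * 4.0)) # Euler ~7.385 vs. exact whole e^2 ~7.389
```

Shrinking the finitesimal step dt closes the gap between the summed steps and the continuous whole.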

So further ODE derivatives are only significant to observe the differences between the differential and/or fractal and polynomial approaches – this last comparison, well established as an essential method of mathematics, worth mentioning in this intro.

A space of formal algebra thus is a function of space, which can be displayed as a continuous sum of infinitesimals across a plane of space-time of a higher dimension.

In such a geography of Disomorphic space-time, the number of dimensions matters to obtain different operations, but here we are just gliding over the simpler notions of the duality algebra=polynomials vs. analysis=integrals of infinitesimals.

Yet soon the enormous extension of ‘events’ that happen between the 3 ∆±1 planes of T.œs, as forms of entropic devolution or informative evolution across ∆±i, converted analysis into a bulky stience, much larger than the study of a single ST-plane of geometry, the 2 planes of topology and the polynomials of algebra – which, roughly speaking, are an approximation to the more subtle methods of finding dimensional change proper of analysis; even if huminds found first the unfocused polynomials, so that today we call Taylor’s formulae of multiple derivatives ‘polynomial approximations’.

Since derivatives & integrals often transcend planes, relating wholes and parts, they study the change of complex organic structures through their internal changes in ages and form; polynomials are better suited for simpler systems – scales of social herds and dimensional volumes of space, with a ‘lineal’ social structure of simple growth.

So in principle ∆nalysis was a sub-discipline of algebra. But, as always happens, time increases the informative complexity of systems and refines, toward a better linguistic focus, with finer details, the first steps of the mind. So Algebra became with Analysis more precise, measuring dimensional polynomials and their finite steps.

In any case, the huge size of ∆nalysis is a clear proof that in mathematics and physics the ∆ST elements of reality are also their underlying structure.

As ∆-scales are the least evident components of the Universe, ∆nalysis took long to appear – till humans discovered microscopes to see those planes. But while maths has dealt with the relativism of human individual planes of existence, philosophy has yet to understand Leibniz’s dictum upon discovering ‘finitesimals’, 1/n, mirror reflections of the (in)finite whole, n: ‘every point-monad is a world in itself’.
In that sense Analysis was already embedded in the Greek Philosophical age, in the disquisition about Universals and Individuals.

So we shall study first an introduction to the ‘real foundations’ of analysis.

Then a brief account of ∆nalysis in its 3±1 ages, through its time-generator:

Ps (youth: Greek age) < St: Maturity (calculus) > T (informative age: ∆nalysis) >∆+1:emergence: Functionals (Hilbert Spaces)<∆-1: Humind death: Digital Chip thought…

…So of these 5 ages we shall leave, as usual, for ethic reasons, unresolved the post-human age of computer analysis, now all the rage…

Along that path, we shall consider in a more orderly fashion, the main themes of analysis, in its 3 ‘scales’:

∆-1: Derivatives > ∫∆: integrals > ∆+1 differential equations.

To cap it all, with examples from all sciences in which analysis reveals the fundamental space-time events of the 5th dimension.

The page will be loaded slowly into the future with a full study of finitesimal calculus of parts and wholes through differential equations; as it will be more technical, I do not expect to tackle it till summer 2017, when the simpler upper hierarchies (¬æ, ¬e mathematics) are more or less completed.

∆nalysis in that sense has a simple definition: the mathematics of the 5th dimension and its evolution of parts into wholes.

Thus we shall do as usual a diachronic analysis of its informative growth in complexity in 3 ages; from its:

-I Age, from Leibniz to Heaviside, in which the fundamental applications to physics are found, while the level of complexity of ∆∫∂tudies is maintained on a strict realist basis, as physicists try to correspond those finitesimals and wholes with experimentally sound observations of the real world at the close range of scales in which humans perceive; and while the formalism of its functions is built from Leibniz’s finitesimal 1/n analysis to the work of Heaviside with vectors and ∇ functions. Partial derivatives are then kept at the ‘holographic level’ of 2 dimensions (second derivatives on ∆±2).

∆ will thus be the general symbol of the 5th dimension of mental wholes, or social dimension, and ∫∂ the symbol of the 4th dimension of aggregate finitesimals, or entropic dimension.

-II Age, from Riemann to the present. The extension of analysis to infinite dimensions happens with the help of the work of Riemann and Hilbert, applied by Einstein and quantum physicists to the study of scales of reality beyond our direct perception (∆≥|3|).

This implies that physicists, according to 5D metrics, P$t x Tƒ=K, must describe much larger structures in space extension and time duration (astrophysics) and vice versa, much faster, populous groups of T.œs in the quantum realm; so ‘functionals’ – functions of functions – add new dimensions of time, and Hilbert quasi-infinite spaces and statistical methods of collecting quasi-infinite populations are required in the relentless pursuit by huminds of an all-comprehensive ‘mental metric’ of a block of time-space, where all the potential histories and worldcycles of all the entities they study can be ‘mapped’.

The impressive results obtained with those exhaustive mappings bear witness to the modern civilisation based on the wholesale manipulation of electronic particles; but the extreme ‘compression’ of such huge populations in time and space blurs their ‘comprehension’ in ‘realist’ terms, and so the age of ‘idealist science’, spearheaded by Hilbert’s imagination of points, lines and congruences, detaches mathematical physics – and by extension analysis – from reality.

-III Age, the digital era, is the last age of humind mathematics, where computers will carry ∆nalysis – confusing from the conceptual perspective, detailed from the manipulative point of view – to its quantitative exhaustion. But as usual in this blog, for ethic reasons, as a ‘vital humind’ we shall not comment on or advance the evolution of the future species that is making us obsolete.

Instead, we shall just advance the discipline further along its description with some conceptual philosophical considerations from the third age of the ‘scientific method’, the age of the organic paradigm that this blog represents – though in this case, unlike other posts, Analysis being the most thoroughly researched field of human thought in any science, anywhere, anytime of history, there is little we can contribute to the enlargement of the field.

Indeed, because we live in a mechanical civilisation and analysis is the essential quantitative language of change computed by machines, unlike in all other sciences, notably those that deal with human, historic superorganisms, we have nothing else to say – just clarifications, and only on the most basic principles of the discipline, which has overgrown in parallel to our mechanical civilisation.

1ST LOGIC AGE: 

∆-CALCULUS: (IN)FINITESIMALS V. UNIVERSALS≈WHOLES

Abstract. The beginning of calculus was not related to the study of rates of change in continuous motion but to a ‘static’, spatial, algebraic analysis of calculus as a way to travel from parts into wholes; and so the rate of change studied was that of the growth of a social system of numbers through scales, from its ‘finitesimal’ minimal quanta or part into the whole, through ‘series’ and exhaustion methods.

This age extended from the Greeks to Newton, who was the last of the ancients, and who changed the use of those exhaustion methods from the analysis of spatial series of growth to temporal series of change, but failed to represent them properly through the space=time symmetry of numbers=points in a hyperbolic, cartesian @nalitic frame…

Introduction. Finitesimal Quanta, as the limit of populations in space and the minimal action in time.

The beginning of calculus was purely verbal, logic, in the Greek age, with the discussion of finitesimals (the 5D word for infinitesimals, as they always have a limit of scale) and Universals in Greek Geometry. This philosophical analysis was retaken by Leibniz, who defined the finitesimal as 1/x, where x is the whole. In terms of topological curvature and motion, 1/x will also define in Leibniz’s work the curvature or minimal unit of cyclical time of a whole. Those were solid concepts of 5D philosophy. Newton, on the other hand, a practical Englishman with little interest in the whys, came to the concept through the study of limits, without much interest in what they meant.

So the duality of derivatives as limits or differentials (Newton’s vs. Leibniz’s approach), represents the duality of a minimal quanta in space (Leibniz’s infinitesimal) or in time (Newton’s limit), hardly explored in philosophy of mathematics, but a key concept in 5D scales, Universal Constants and quantum physics.

I.e. a ‘Planckton’ (h-Planck constant) is the quanta of time of the atomic scale, and a cc area its quanta of space.

It is then essential to the workings of the Universe to fully grasp the relationship between scales and analysis, both in the down direction of derivatives and the up direction of integrals, and in its parallelism with polynomials, which raise the dimensional scales of a system in a different, more lineal, social, inter-planar way.

So polynomials are to limits what algebra is to calculus: space to time, and lineal algebra to curved geometries.

The vital interpretation of that amazing growth of polynomials, though, is far scarier.

Power laws, by the very fact of being lineal and maximising the growth of a function, ARE NOT REAL in the positive sense of infinite growth – a fantasy only taken seriously by our economists of greed and infinite usury debt interest… where the eˣ exponential function first appeared.

The fact is that in reality such exponentials only portray the decay and destruction of a mass of cellular/atomic beings ALREADY created by the much smaller processes of ‘re=product-ion’, which is the second dimension, mostly operated with multiplication (of scalars or anti-commutative cross vectors).

So the third dimension of operandi is a backwards motion – a lineal motion into death; only because it reverses the growth of sums and multiplications do the properties of polynomials make sense.

Universal wholes and individual finitesimals.

The first age of analysis had a great deal of philosophical disquisitions on the nature of wholes and parts, connecting directly with the Greek logic arguments on the nature of individuals and universals.

The historical origins of analysis can be found in attempts to calculate spatial quantities such as the length of a curved line or the area enclosed by a curve.

As we know, a curve is always part of a worldcycle, and so the conclusions of those earlier studies can be extended to better understand the space-time worldcycle in a general way.

Numbers and (in)finities.

Mathematics divides phenomena into two broad classes, discrete or temporal and continuous or spatial, historically corresponding to the earlier division between T-arithmetic and S-geometry.

Discrete systems can be subdivided only so far, and they can be described in terms of whole numbers 0, 1, 2, 3, …. Continuous systems can be subdivided indefinitely, and their description requires the real numbers, numbers represented by decimal expansions such as 3.14159…, possibly going on forever. Understanding the true nature of such infinite decimals lies at the heart of analysis.

And yet, lacking the proper ∆ST theory, it is not yet understood.

The distinction between continuous mathematics and discrete mathematics IS ONE BETWEEN SINGLE, SYNCHRONOUS, CONTINUOUS SPACE WITH LESS INFORMATION, and the perception in terms of ‘time cycles, or fractal points; space-time entities’, which will show to be ALWAYS discrete in its detail, either because it will HAVE BOUNDARIES IN SPACE, or it will be A SERIES OF TIME CYCLES AND FREQUENCIES, perceived only when the time cycle is ‘completed’, and hence will show DISCONTINUITIES ON TIME.

Thus the dualities of ST on one side, and the ‘Galilean paradox’ of the mind’s limits of perception of information on the other, lie at the heart of the essential philosophical question: is the Universe discrete or continuous in space and time? Both, but always discrete when seen in detail, due to spatial boundaries and the measure of time cycles at the points of repetition of their ‘frequency’.

So ultimately we face a mental issue of mathematical modeling: the ‘mind-art’ (as pure exact science does not exist, all is art of linguistic perception) of representing features of the natural world in a reduced mental, mathematical form.

The universe does not contain or consist of actual mathematical objects, but a language can model all aspects of the universe. So all resembles mathematical concepts.

For example, the number two does not exist as a physical object, but it does describe an important feature of such things as human twins and binary stars; and so we can extract by the ternary method, 3 sub-concepts of it:

2 means the first ∆-scale of growth of 1 being into 2, by:

S-imilarity and S-imultaneity in space (ab. Sim), ‘i-somorphism in time-information’ (ab. Iso) and ‘equality in ∆-scale’ (ab. Eq), as perceived by a linguistic observer, @, which will deem both beings ‘IDENTICAL’. Whereas identity means that an @-bserver will deem the being ∆st≈St (Sim, Iso and Eq). So identity is the maximal perfection of a number for a perceiver, even if ultimately:

‘No 2 beings are identical for the Universe, but they can be identical for the observer’… an intuitive truth whose pedantic proof is of course of no importance (we do not follow the axiomatic method of absolute minds here :), but it is at the heart of WHY REALITY IS NOT COLLAPSED INTO THE NOTHINGNESS OF A BIG-BANG POINT.

Thus when those 3+0 elements of the ∆•ST coincide, a social number can be used, whose intrinsic properties define conceptually ‘S-imultaneity, Ti-somorphism’ and ∆-equality or equivalence (ab. Eq) in size, which becomes an @identity for the mind. THEN A NUMBER IS BORN.

(In this ‘infinitorum’ of Universal thoughts, which brings always new depths as soon as we observe it with an ∆•st trained mind, there are differences between S-imilarity and S-imultaneity to define in space an ‘identity’, and between ‘equality’ and ‘equivalence’, treated elsewhere.)

It IS THEN CLEAR that a number, being a sum of points, encodes more information in a synoptic way about the T-informative nature of the ‘social group’ than an array of points, which tells us less about the ‘informative identity of the inner parts of the being’, but provides more spatial knowledge about the relative position in space of the members of a number-group.

And this is OBVIOUS, when we return to the origin of geometry and consider an age in which both concepts were intermingled so ‘points were numbers’ and displayed geometrical properties:

Numbers as points, showing also the internal geometric nature, used in earlier mathematics to extract the ‘time-algebraic’, ‘∆nalytical-social’ and S-patial-geometrical properties from them.

We study them in depth in the article on Temporal, Social numbers.

o-1: ∆-1: 1/n finitesimal scale vs. 1-∞: ∆+1: whole scale.

So only one question of that section is worth mentioning here: how to ‘consider scales’, which tend to be decametric – good! One of the few things that works right in the human mind and does not have to be adapted to the Universal mind, from d•st to ∆ûst.

Shall we study them downwards, through ‘finitesimal decimal scales’ or upwards, through decametric, growing ones?

Answer, an essential law of Absolute relativity goes as follows:

‘The study of decametric, §+ scales (10§≈10•10 ∆ ≈ ∆+1) is symmetric to the study of the inverse, decimal ∆>∆-1 scale’.

Or in its most reduced ‘formula’, ∞ = (1) = 0: the interval (∞, 1) is equivalent to the interval (1, 0).

Whereas ∞ is the perception of the whole ‘upwards’ in the domain of 1, the minimal quanta to the relative ∞ of the ∆+1 scale. While 1 is the relative infinite of a system observed downwards, such as ∆+1 (1) is composed of a number of ‘finitesimal parts’ whose minimal quanta is 0.

It is from that concept that we accept as the best definition of an infinitesimal that of Leibniz: for a whole N, its finitesimal is 1/N.

So in absolute relativity the ∆-1 world goes from 1 to 0, and the ∆+1 equivalent concept goes from 1 to ∞. And so now we can also extract from the ‘infinitorum thought receptacle’ (: a key difference between both mathematical techniques:

A conceptual analysis upwards has a defined lower point-quanta, 1, and an undefined upper limit, ∞; while a downwards analysis has an upper defined whole limit, 1, and an undefined ‘finitesimal’ minimum, +0.

Finally, notice that as all ∆-scales have relative finitesimals, +0, and relative infinities, ∞ (see ∞|º to understand the limits and meaning of numbers and its scales), essential to all theory of calculus is the study of the domain in which the system works, and of the ‘holes’ or singularities and membranes which are not part of the open ball-system. So functions can be defined with certain singularity points and borders; hence functions need not be defined by single formulas.

This would be understood by Leibniz – who else 🙂

Unlike Newton, who made little effort to explain and justify fluxions, Leibniz, as an eminent and highly regarded philosopher, was influential in propagating the idea of finitesimals, which he described as actual numbers—that is, less than 1/n in absolute value for each positive integer n and yet not equal to zero.

For those who insisted on infinities, Berkeley revealed those contradictions in the book ‘The Analyst’, where he wrote about fluxions: “They are neither finite quantities, nor quantities infinitely small, nor yet nothing. May we not call them the ghosts of departed quantities?”

Definition of ∆t, ∆s, finitesimals: A quantum of time and space.

Berkeley’s criticism was not fully met until the 19th century, when it was realized that, in the expression dy/dx, dx and dy need not lead an independent existence. Rather, this expression could be defined as the limit of ordinary ratios Δy/Δx.
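A minimal numeric sketch (our own illustration, not part of the original text) of that 19th-century resolution: the ordinary ratios Δy/Δx for y = x² at x = 1, computed with ever smaller but always finite increments, approach the derivative 2 without any term ever being ‘infinitesimal’.

```python
def delta_ratio(f, x, dx):
    """Ordinary finite ratio Δy/Δx for a finite increment dx."""
    return (f(x + dx) - f(x)) / dx

f = lambda x: x * x
for dx in (0.1, 0.01, 0.001, 1e-6):
    # each ratio is an ordinary number; the limit 2.0 = dy/dx is approached,
    # never 'reached' by an infinitesimal step
    print(dx, delta_ratio(f, 1.0, dx))
```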

And here is where we retake it: before the formal age of mathematics made a pretentiously rigorous definition of infinitesimal limits, and the logician Abraham Robinson showed the notion of infinitesimal to be logically consistent – but NOT real.

As we believe mathematics must be real to be ‘consistent’ (Gödel’s theorem), we return to the finitesimal concept, ±∆y, either as a ‘real’ increase or decrease of a quantity, with a variation ±∆x of either the surface of space or the duration in time of the being.

Thus finitesimals depend for each species on the ‘quanta’ of space or ‘minimal cell’, and the quanta of time or minimal moment, which the system can measure.

For man, for example, time actions are measured with a minimal time quanta of a second, below which it is difficult to perceive anything; a nanosecond, in that regard, in the human scale of existence is NOT worth measuring, as nothing happening in a nanosecond will be perceived as motion or change. For an atom, however, a nanosecond is a proper finitesimal to measure changes.

In space, man does not perceive sensations below certain limits, which vary for each sense: a millimetre, 100 hertz of sound, the frequency of infrared waves, and so on.

Universals

According to a traditional interpretation of the metaphysics of Plato’s middle dialogues, Plato maintained that exemplifying a property is a matter of imperfectly copying an entity he called a form, which itself is a perfect or pure instance of the property in question. Several things are red or beautiful, for example, in virtue of their resembling the ideal form of the Red or the Beautiful. Plato’s forms are abstract or transcendent, occupying a realm completely outside space and time. They cannot affect or be affected by any object or event in the physical universe.

This is correct, though the ERROR LIES in positioning Universals OUTSIDE space and time. They are IN FACT THE ULTIMATE properties of SE-spatial ‘kinetic energy+entropy’ and TO-Temporal information, which ‘emerge’ in each new scale.

Few philosophers now believe in such a “Platonic heaven,” at least as Plato originally conceived it; the “copying” theory of exemplification is generally rejected. Nevertheless, many modern and contemporary philosophers, including Gottlob Frege, the early Bertrand Russell, Alonzo Church, and George Bealer are properly called “Platonic” realists because they believed in universals that are abstract or transcendent and that do not depend upon the existence of their instances.

They are closer to the truth, but they should substitute the word ‘transcendent’ with ‘EMERGENT’, in the parlance of general systems.

For that matter General Systems (5D ST) reduces the meaning of ‘transcendence’ to its first semantic meaning:

vb: from Latin transcendere, to climb across, transcend, from trans- + scandere, to climb.

vt: to rise above or go beyond the limits.

Indeed, Universals are found beyond the limits of their finitesimals, in the next ∆+1 scale.

DIMENSIONAL GROWTH AS: REPRODUCTION OF SPATIAL FORM

Next along the simplest ∫∂-steps of motion, S-T-S-T, appeared in history the calculus of S-Steps, that is, the reproduction of form from line to area.

Area Finitesimals.

It must be noticed though that finitesimals were first found in space, as the means to quantify a simultaneous area as the sum of ∆-1 discontinuous, fractal parts. Let us remember this concept, a key philosophical discussion even for the Greeks: is the Universe continuous or discontinuous, made of Universal wholes or of individual parts?

This concept was the earlier idea of Leucippus and Democritus regarding the composition of physical systems, and of Anaximander regarding the composition of life systems, with its ‘homunculus’ concept (we were made of smaller beings).

Anaximenes’ assumption that aer is everlastingly in motion, and his analogy between the divine air that sustains the universe and the human ‘air’, or soul, that animates people, is a clear comparison between a macrocosm and a microcosm.

It also permitted him to maintain a unity behind diversity, as well as to reinforce the view of his contemporaries that there is an overarching principle regulating all life and behavior. So here there is a first bridge that merges universals and finitesimals.

And of earlier mystiques, regarding the composition of a superior God as the subconscious collective of all its believers’ minds, fused in a ‘bosonic’ way into the soul of the whole.

The 3 were right, as finitesimals are clone beings with properties that transcend into the Universal – the homunculus being the ‘future cell’.

Mathematics

At this stage there was only one mathematical approach to the concept, by Archimedes – the method of exhaustion to calculate areas and ratios, notably the pi ratio.

The method of exhaustion was first used by Eudoxus, as a generalization of the theory of proportions.

Eudoxus’ idea was to measure arbitrary objects by defining them as combinations of multiple polygons or polyhedra. In this way, he could compute volumes and areas of many objects with the help of a few shapes, such as triangles and triangular prisms, of known dimensions. For example, by using stacks of prisms (see figure), Eudoxus was able to prove that the volume of a pyramid is one-third of the area of its base B multiplied by its height h, or in modern notation Bh/3.

Loosely speaking, the volume of the pyramid is “exhausted” by stacks of prisms as the thickness of the prisms becomes progressively smaller. More precisely, what Eudoxus proved is that any volume less than Bh/3 may be exceeded by a stack of prisms inside the pyramid, and any volume greater than Bh/3 may be undercut by a stack of prisms containing the pyramid.
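A hedged numeric sketch of Eudoxus’ squeeze (the function name and the numbers below are ours, purely illustrative): an inner and an outer stack of prisms bracket the volume Bh/3 ever more tightly as the prisms get thinner, yet at every finite n both stacks remain a finite quanta away.

```python
def prism_stacks(B, h, n):
    """Sum n inscribed and n circumscribing prisms for a pyramid of base B, height h."""
    dz = h / n  # thickness of each prism
    # the cross-sectional area of a pyramid at height z scales as (1 - z/h)**2
    inner = sum(B * (1 - (k + 1) * dz / h) ** 2 * dz for k in range(n))
    outer = sum(B * (1 - k * dz / h) ** 2 * dz for k in range(n))
    return inner, outer

B, h = 9.0, 3.0                       # exact volume: B*h/3 = 9.0
for n in (10, 100, 1000):
    print(n, prism_stacks(B, h, n))   # both stacks squeeze toward 9.0
```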

The greatest exponent of the method of exhaustion was Archimedes (c. 285–212/211 BC). Among his discoveries using exhaustion were the area of a parabolic segment, the volume of a paraboloid, the tangent to a spiral, and a proof that the volume of a sphere is two-thirds the volume of the circumscribing cylinder. His calculation of the area of the parabolic segment (see figure) involved the application of infinite series to geometry. In this case, the infinite geometric series:

1 + 1/4 + 1/16 +1/64 +… = 4/3

is obtained by successively adding a triangle with unit area, then triangles that total 1/4 unit area, then triangles of 1/16, and so forth, until the area is exhausted. Archimedes avoided actual contact with infinity, however, by showing that the series obtained by stopping after a finite number of terms could be made to exceed any number less than 4/3. In modern terms, 4/3 is the limit of the partial sums.
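A short numeric sketch (our own, not Archimedes’) of those partial sums: stopped after any finite number of stacks, the sum falls short of 4/3 by a finite remainder, never by an ‘infinitesimal’.

```python
total, term = 0.0, 1.0
for n in range(1, 11):
    total += term                    # add the next stack of triangles
    term /= 4                        # each stack is 1/4 of the previous one
    print(n, total, 4 / 3 - total)   # the remainder shrinks but is never zero
```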

His paper, ‘Measurement of the Circle’ is a fragment of a longer work in which π (pi), the ratio of the circumference to the diameter of a circle, is shown to lie between the limits of 3 10/71 and 3 1/7.

Archimedes’ approach to determining π consists of inscribing and circumscribing regular polygons with a large number of sides. It was followed by everyone until the development of infinite series expansions in India during the 15th century and in Europe during the 17th century. This work also contains accurate approximations (expressed as ratios of integers) to the square roots of 3 and several large numbers.
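A minimal sketch of that polygon method in code (assuming the standard side-doubling recurrence on a unit circle; the variable names are ours): starting from the hexagon, four doublings give the 96-gon and reproduce Archimedes’ bounds.

```python
import math  # math.pi is used only as a reference value at the end

n, s = 6, 1.0                 # inscribed hexagon on a unit circle: side = radius
for _ in range(4):            # double the sides: 6 -> 12 -> 24 -> 48 -> 96
    s = math.sqrt(2 - math.sqrt(4 - s * s))
    n *= 2
lower = n * s / 2                                   # inscribed half-perimeter
upper = n * s / (2 * math.sqrt(1 - (s / 2) ** 2))   # circumscribed half-perimeter
print(lower, math.pi, upper)   # 3.1410... < pi < 3.1427..., Archimedes' bounds
```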

It is then interesting to consider Archimedes’ main role in the perception of problems today forgotten after the absurd, dogmatic Germanic ‘foundations under the axiomatic method’ of analysis.

2 problems troubled him, and indeed they were very important problems: the comparison of different pis (is the pi of the square area, with 2 dimensions, the same as the pi of the perimeter?) and its proper calculus by approximation.

Approximations in geometry.

We said that the unit of space is the area and the unit of time the cycle, and so both are bidimensional; hence the transformation of one into the other is not always perfect, as there is no perfect ‘quadrature’. But as this happens constantly, a part is lost as ‘entropy’ in all time-space transformations, or as ‘a bit of a circle’, that is, a motion or particle – as in particle reactions there are always ‘forces’ escaping (neutrinos, gamma rays). So this means that pi is not exact, and neither is √2, the two key constants for the squaring… Yet that doesn’t mean the transformation does not happen – it happens all the time, and it was the way in which the game of analysis started with Archimedes:

The transformation of a circular region into an approximately rectangular region.

In the graph we see how ∆ST theory immediately eliminates all those problems of infinitesimals, as all infinities are limited, and so are the 0s, which must be regarded as the +0 minimal quanta of the domain – the need for further infinities is an error of the mind, of dogmatic truth and the single space-time ‘continuum’. In that regard pi is not INFINITE, but its calculus becomes ‘chaotic’ beyond a limit of ±40 decimals, which is really all the human mind can conceive in its largest finitesimal analysis.

The ‘Greek Age’ then worked with the same concept – as in the Archimedean calculus of pi by exhaustion – just with less detail.

Indeed, a simple geometric argument shows that both processes are similar with different degrees of approximation:

The idea is to slice the circle like a pie, into a large number of equal pieces, and to reassemble the pieces to form an approximate rectangle (see figure). Then the area of the “rectangle” is closely approximated by its height, which equals the circle’s radius, multiplied by the length of one set of curved sides—which together form one-half of the circle’s circumference. As the slices get very thin, the error in the approximation becomes very small.
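A hedged numeric sketch of that slicing (our own illustration, unit radius): treating each of the n pie slices as a flat triangle, the reassembled ‘rectangle’ area tends to π, but at every finite n a finite quanta of error remains.

```python
import math

# the 'rectangle' area n * (1/2) * sin(2*pi/n) tends to pi (unit radius),
# but at every finite n a finite error remains
for n in (6, 24, 96, 384):
    area = n * 0.5 * math.sin(2 * math.pi / n)
    print(n, area, math.pi - area)
```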

The duality of free lines/planes v. closed order.

It is interesting to notice that in general, when we grow in scale, we change from freedom to order or vice versa – that is, the fundamental | vs. O, past vs. future, part vs. whole, form vs. motion dualities of ∆@st changes. So when we integrate open lineal triangles, the vortex that was the @-forward mind-future path becomes an internal, locked, social, circular closed point – the singularity of a cyclical form.

The approximation of square space to cyclical points. Ratios and ir(ratio)nal numbers, its finitesimal limits.

A theme that will soon be cast in terms of number theory was also studied by Archimedes by exhaustion methods.

Before the invention of the new methods of calculation, it had been possible to find the area only of polygons, of the circle, of a sector or a segment of the circle, and of two or three other figures. In addition, Archimedes had already invented a way to calculate the area of curves by exhaustion, leaving a sound error according to the minimal step he took; which raises the question: does a circle have a finitesimal minimum step? Are then pi and all other S>t constant transformations and ‘ir(ratio)nal’ numbers/ratios limited by a finitesimal error?

THE ANSWER IS YES! Normally a decametric limit defines the valid value of an ir(ratio)nal number, which is not a number in the strict sense (a social number) but a ratio of an S/T action/function. The examples of the two fundamental ir(ratio)nals will suffice:

– pi is really the ratio of 3 diameters that form a closed curve, whose value depends on the lineal ‘step sizes’.

So pi has a minimal value of 3, which is the hexagon with its 6 steps of 1/2 value (triangulation in 6 immediately gives the result: as each triangular side equals the radius, the 6 half-sides give 1/2 x 6 = 3); which happens to be the value of pi in extreme gravitational fields in relativity. This brings another insight: black holes decompose the circle into ultimate lineal flows of pure ‘dark energy’ shot through the axis, by converting the curvature of a light circle on the event horizon into a hexagon. But this is well beyond the scope of this intro.

So what is the ‘decimal limit’ of pi, before it breaks into meaningless (non-effective) decimal scales, with little influence on the whole?

While this is hypothetical, I would say – for different reasons explained in the article on number theory – that, as is quite often the case, it responds to the general ∆ ≈ S ≈ T ternary symmetries, so common in the perfect Universe.

So pi responds to the symmetry with its minimal spatial form, the 6 x 1/2 = 3 hexagonal steps, which means it breaks at the 6th ∆-scaling figure: 3.14159…, which rounds to 3.1416. So 3.1416 – which incidentally is basically what everybody uses – is the ‘real value’ of pi; why it is that value is studied elsewhere (deducing from it one of the most beautiful simple results of GST-mathematics, the value of dark energy in any system of the Universe, as the part not perceived through the apertures of a pi cycle: 1 − (π−3)/π ≈ 96% of ‘darkness’, which the singularity of a pi system cannot see, as its apertures are only π−3 ≈ 0.14 of its π perimeter).

Now, the other constant, e, which is the ratio of decay ACTIONS, or death processes (ST<<S), is a longer, two-‘scales’-down process of self-destruction of a system, unlike pi’s single scaling process, S>T. So it breaks at 10 decimals:

2.718281828…459045

Indeed. Now, why 5 and not 10, if the scales are 10¹º? Because the 10 scales are, in terms of space-time actions, the ‘whole’ dual game of two directions of time, up and down, which happens only in reproductive actions. And this connects with the S>T<S rhythms of motion, go/stop, back and forth between two arrows, which happen both in single st-planes and in ∆± motions.

Is there then a limit for existential planes? The ‘meaningless’ breaking down of e, the ‘number of entropic functions’, seems to signal this. But IT WOULD BE AN ERROR TO CONSIDER THE LIMITS of e-regularity as anything but the LIMIT of entropic death. Death happens, and when a system breaks down its natural 10±10 scales to its finitesimal 1/n parts, it stops – the system is dead.

The limit that matters is the limit of the pi-circle as an Archimedean spiral that lets information enter through its never-closing spiral to perceive or feed on the external micro-bits and bites of the Universe. And as we cannot find either a limit or a regularity, we could conclude that the most important dimotions – angular perception, and the creation of inner mirrors of the outer world by a pi-spiral – have no limit.

What about locomotion? Can we exhaust the limit of a series of steps? Again, the answer is evidently no, even though the Greeks thought so, in the so-called…

Achilles Paradox. Birth of the concept of series and limits.

In mathematics, a series is, roughly speaking, a description of the operation of adding infinitely many quantities, one after the other, to a given starting quantity.

The study of series is a major part of calculus and its generalization, mathematical analysis.

The paradox of Achilles: in a discontinuous Universe of fractal parts, Achilles should never reach the turtle. But if motion is reproduction of form, the faster system merely ‘reproduces’ its information faster in adjacent regions of space, and motion becomes ‘rational’ – which further proves the reproductive nature of reality, as even locomotion IS reproduction.

For a long time, the idea that such a potentially infinite summation could produce a finite result was considered paradoxical by mathematicians and philosophers.

This paradox was resolved using the concept of a limit during the 19th century.

Zeno’s paradox of Achilles and the tortoise illustrates this counterintuitive property of infinite sums:

Achilles runs after a tortoise, but when he reaches the position of the tortoise at the beginning of the race, the tortoise has reached a second position; when he reaches this second position, the tortoise is at a third position, and so on.

Zeno concluded that Achilles could never reach the tortoise, and thus that movement does not exist. Zeno divided the race into infinitely many sub-races, each requiring a finite amount of time, so that the total time for Achilles to catch the tortoise is given by a series.

The resolution of the paradox is that, although the series has an infinite number of terms, it has a finite sum, which gives the time necessary for Achilles to catch the tortoise.
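A hedged numeric sketch of that resolution (the speeds and head start below are our own illustrative numbers): each ‘sub-race’ contributes one term of a geometric series with ratio 1/10, and the whole infinite chase adds up to a finite catch-up time.

```python
v_a, v_t, head = 10.0, 1.0, 9.0   # Achilles 10 m/s, tortoise 1 m/s, 9 m ahead
t, gap, terms = 0.0, head, 0
while gap / v_a > 1e-12:          # stop at a finite 'quanta' of time
    dt = gap / v_a                # time to reach the tortoise's last position
    t += dt
    gap = v_t * dt                # the tortoise's new, smaller lead
    terms += 1
print(terms, t, head / (v_a - v_t))   # a finite sum: 1.0 s, after ~13 'sub-races'
```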

The physical explanation of locomotion, though, defines it as a reproduction of form on the lower scale; so it establishes a finitesimal stœp, equivalent to the minimal ∆-¡ quanta of the wave-particle dual motion states:

Locomotion is a series of stœps that imprint a lower plane with the information of the upper plane: in the graph, a quantum motion in wave state and in particle, stop state of reproduction of form (complementarity principle of wave-particle).

In modern terminology, any (ordered) infinite sequence (a₁, a₂, a₃, …) of terms (that is, numbers, functions, or anything that can be added) defines a series, which is the operation of adding the aᵢ one after the other.

To emphasize that there are an infinite number of terms, a series may be called an infinite series. Such a series is represented (or denoted) by an expression like a₁ + a₂ + a₃ + ⋯ or, using the summation sign:

∑ᵢ aᵢ  (i = 1, 2, 3, …)

The infinite sequence of additions implied by a series cannot be effectively carried out (at least not in a finite amount of time).

However, if the set to which the terms and their finite sums belong has a notion of limit, it is sometimes possible to assign a value to a series, called the sum of the series. This value is the limit as n tends to infinity (if the limit exists) of the finite sums of the n first terms of the series, which are called the nth partial sums of the series. That is:

S = lim (n→∞) Sₙ,  where Sₙ = a₁ + a₂ + ⋯ + aₙ.
What this means in 5D, though, is slightly different: because an infinite number of time-steps would make it impossible to do any calculus, all limits must have in ‘reality’, beyond the idealized mirror of mathematics, a limit of steps and a limit of size of those steps. Which is indeed what happens in reality.

The turtle has a time-cycle and a size of steps, measurable. And when explaining the reproduction of motion, we shall see that the limit is the reproduction, on the lowest plane of light and particle forces, of the entire form of the being in discontinuous adjacent spaces.

In other words, the word ‘limit’ in the formulae should not be infinite, but a ‘finite infinite’ – a relative infinity, for which we shall use a different symbol.

Relative infinities and finitesimals

The simplest why of the fractal, scalar structure of the Universe, from the perspective of the mind: as a linguistic mirror image of reality in a smaller space, minds ‘create’ fractal, diminishing, infinite scales.

This new symbol for a ‘relative infinity’, and its inverse, the 1/n ‘finitesimal’, become then essential to 5D Analysis; it gets rid of all infinite paradoxes, from Zeno’s to Cantor’s, further showing the idealized mirror-image nature of mathematics – as a mirror recedes apparently into infinity, but at a certain point it ceases to be observable and hence it does NOT exist anymore.

The meaning of series then in real existences becomes clear as it is ANOTHER WAY TO DESCRIBE IN DISCONTINUOUS MANNER, WHAT DERIVATIVES ON THE CONTINUOUS PLANE (REMEMBER THE DUALITY OF DISCRETE NUMBER VIEW VS. CONTINUOUS GEOMETRIC VIEW), SHOWS:

A TRAVEL UP AND DOWN THE SCALES OF THE FIFTH DIMENSION.

The problem of equivalences confused as identities between lines and areas.

It is absurd to talk about the continuity of a real number – pi, e, or √2 – beyond the 10th decimal. This is easily proved because those ratios are normally obtained by limits in which certain finitesimal terms are neglected, by postulating the falsity that there are infinitely small parts, so that x/∆ can be thrown out when ∆→∞. But since x/∆, the finitesimal, has a limit, the pretended exactitude does not happen.
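A minimal sketch of that ‘neglected term’ (our own illustration): the finite ratio ((x+∆)² − x²)/∆ equals exactly 2x + ∆; classic calculus throws the +∆ away as ∆ tends to zero, while a finitesimal reading keeps it as the minimal quanta of error.

```python
x = 3.0
for d in (0.1, 0.001, 1e-6):
    ratio = ((x + d) ** 2 - x ** 2) / d   # equals exactly 2x + d
    print(d, ratio, ratio - 2 * x)        # the residue is the 'despised' d itself
```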

This in turn leads to questions about the meaning of quantities that become infinitely large or infinitely small – concepts riddled with logical pitfalls in a simplified world of a single space-time continuum, where on top humans LOVE to take ‘identities’ of the mind for absolute identities in the larger information of the detailed Universe; they are never so, as d@st ≈ ∆ûst (the mind’s world view is merely similar to the Universal view).

In our example, a circle of radius r has circumference 2πr and area πr², where π is the famous constant 3.14159…. Establishing these two properties is not entirely straightforward, although an adequate approach was developed by the geometers of ancient Greece, especially Eudoxus and Archimedes. It is harder than one might expect to show that the circumference of a circle is proportional to its radius and that its area is proportional to the square of its radius. The really difficult problem, though, is to show that the constant of proportionality for the circumference is precisely twice the constant of proportionality for the area — that is, to show that the constant now called π really is the same in both formulas.

This boils down to proving a theorem (first proved by Archimedes) that does not mention π explicitly at all: the area of a circle is the same as that of a rectangle, one of whose sides is equal to the circle’s radius and the other to half the circle’s circumference.

However in GST theory, those 2 pis are not the same, because they belong to two discontinuous ‘different species’ of topology: the St-area and the S-MEMBRANE.

An easy, immediate proof: if we make them identical, then we can find a circle where 2πr ≈ πr². So 2r = r². Hence r = 2, and we get to the conclusion that the thin membrane of an open ball is identical in magnitude to the internal ST volume of the being, which is conceptually absurd (the area intuitively has more surface, as it is bidimensional, while the line is infinitely thin).

What’s the problem here? We cannot in truth treat ‘lines as if they were squares’, unless we always deal with less dogmatic concepts of relative similarity. They are different realities. In the first equivalence we compare a line radius with a circle perimeter, in an S>t structure.

In the second, as we compare πr², a cyclical area, with the square of the radius, we are also on good footing. But when we do the S>ST comparison, we are in a dynamic transformation of ∆-scales, from ∆, the world of lines, to ∆+1, the world of squares (as a polynomial square is obviously a growth from a complete ∆-entity, the line, into an ∆²=∆+1 one, the area). It is then that we can do some ‘dynamic equivalence’ analysis, and the equivalence has meaning, stating that for a ‘perfect cycle’ of relative radius 2, the membrane’s absorption of bits and bites of energy and information can fully fill the internal area, making equivalent a ‘line’ and a ‘surface’ integral. And finally we can state that all ‘dynamic vortices of force’ ruled by Newtonian/Coulombian equations on the ∆-1 and ∆+1 scales are relative perfect systems of radius 2.

And here we find the ‘whys’ of the dualities of Maxwell’s laws which can be written both ways:
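The post’s graph of those dual forms is missing here; as a standard stand-in (a textbook example, not the author’s own image), Faraday’s law shows the duality of a line/surface integral equality and its differential rewriting:

```latex
% Faraday's law as a line/surface (integral) equality and, equivalently,
% in differential form:
\oint_{\partial S} \mathbf{E}\cdot d\boldsymbol{\ell}
  = -\frac{d}{dt}\int_{S} \mathbf{B}\cdot d\mathbf{A}
\qquad\Longleftrightarrow\qquad
\nabla\times\mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}
```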

Or in simpler terms, when doing those equalities we are talking of properties that become dynamic and transcend the static mind of mathematics into the reality of physical systems.

Finally, as we defined real numbers as non-existent (see |∞ posts) but approximations to a ±0 finitesimal, in the measure of a square the uncertainty grows further: π² thus carries the square of pi’s ‘error’.

All this of course is important to conceptualize reality; in praxis, as we know, we always work in an uncertain game with errors and deaths. So analysis does work, and all this ‘search for dogmatic proofs’ is just ‘absolute bull$hit’ for absolute, ego-centered scholar huminds.

But on the other hand, the graph also shows that both pis – that of the ‘surface’ and that of the ‘perimeter’ – ARE NOT equal, as there will be a limit on the number of ‘bidimensional triangles’ we can cut.

As a triangle is indeed the bidimensional version of the line – |-$t (one dimension); ∆-$t (two dimensions) – it is not the line itself.

And so, as the approximation will always find a finitesimal quanta or limit of detail before proving the theorem, this error, however tiny, remains an error. THIS MINIMAL QUANTA THUS EXISTS IN ALL RELATIVE ∆>∆+1 measures of scales as the minimal uncertainty of all mathematical calculus, and justifies in physics (∆-1 quantum theory) that there is always an uncertainty of a minimal quanta, which is precisely ħ/2 (where ħ = h/2π), the minimal quanta of our light space-time.

Only in the absolutist imagination of dogmatic, axiomatic mathematicians did it make sense to talk of the slices being infinitesimally thin, so that the error would disappear altogether, or at least become infinitesimal.

As it happens, quantum theory proved experimentally that this is wrong. And as we stress (Lobachevski, Gödel, Einstein), mathematics must be confronted with reality to realise what is ‘real’ in maths.

QUEST FOR FINITESIMALS. FROM ARCHIMEDES TO NEWTON.

Following the transformation of sciences into slightly different stiences, we change the concept of an infinite for a relative infinite, and an infinitesimal for a finitesimal – the first being the whole of an ¡-plane of reality, the second its minimal part.

It is then obvious that the discrete, geometric, spatial, static numerical analysis of calculus is that of power series, which can be taken as discrete stœps (stops + steps) in a motion down the fifth dimension from the whole to the 1/n part, whereby we count the static form (as in a movie we see only the static frame), NOT the step of motion.

This was then the work from Archimedes and the earlier Greeks to Newton, who can in that sense be considered the last of the ancients.

While, as always S=T – that is, there is always a symmetry between discrete numbers and continuous motions – Leibniz, with his geometric interpretation and far more profound understanding of finitesimals, which he rightly defined as 1/n, represents the first step into the future of the discipline, its renovator and deep understander; whereas Newton – who can be considered merely an automaton mathematician, a specialized brain, as most modern scientists are, and who is indeed the father of the wrong view of science – understood NOTHING OF IT.

Indeed, Leibniz, the closest predecessor of this blog IS the genius, Newton the talent.

 Rates of change. The stop and go motion: stœps.

Finitesimal changes are related to the fundamental beat of the Universe – the stop-form-space-perception / go-motion-time beat – which we shall call a stœp, the discrete way of motion of T.œs through space, which often, as in movies, we perceive in continuous mode, eliminating the stop element:

∆S(top)->∆t->∆S-≥∆(S)t(ep).

Moreover, most of those stœps will have, either in a travel through 5D or through a single ST, a unit of ‘expenditure of vital energy’, transformed into the length-motion of the lower scale in which the imprinting of motion as reproduction of form happens (studied in 2D locomotion). So each stœp becomes an ∆-4 unit of locomotion.

Thus if we consider a relative constant or function of the existence, ∆-1:œ, as a finitesimal of its larger whole, ∆Œ, we obtain 2 simple functions:

œ=∆s/∆t and œ=∆t/∆s, as the mathematical measure of a ‘time stœp’ or locomotion, and a ‘volume-density stœp’ or finitesimal quanta.

We shall call the first form a spatial finitesimal or step in space: a quanta of constant speed that moves, reproduces the being in space.

And if we again derive this quanta, we obtain a quanta of constant acceleration.

And we shall call the second function a time finitesimal: a change in the density of information or cyclical speed of the being.

Infinite series

Graphical illustration of an infinite geometric series. Before understanding calculus, mathematicians were concerned with relative infinitesimal series.

Similar paradoxes occur in the manipulation of infinite series, such as: 1/2 + 1/4 + 1/8 +⋯

This particular series is relatively harmless, and its value is precisely 1, the whole, which is the conceptual meaning of infinity.

To see why this should be so, consider the partial sums formed by stopping after a finite number of terms. The more terms, the closer the partial sum is to 1. It can be made as close to 1 as desired by including enough terms. Yet once we arrive at the minimal quanta of the physical reality we describe (cell, atom, individual, etc.), there is NO need to go beyond – except in errors of the mind.

In the graph, 1/±10² is the limit considered the finitesimal of this particular ‘graph perception’ – and also the error of our measure, for if we add another 1/±10², the series becomes a whole.

Thus most paradoxes of mathematics arise from not understanding those simple concepts, as well as the meaning of ‘inverse negative numbers’.

For example, an infinite series which is less well behaved is the series: 1 − 1 + 1 − 1 + 1 − 1 + ⋯

If the terms are grouped one way: (1 − 1) + (1 − 1) + (1 − 1) +⋯,  then the sum appears to be: 0 + 0 + 0 +⋯ = 0.

But if the terms are grouped differently, 1 + (−1 + 1) + (−1 + 1) + (−1 + 1) +⋯,   then the sum appears to be 1 + 0 + 0 + 0 +⋯ = 1.

It would be foolish to conclude that 0 = 1. Instead, the conclusion is that the series has a dual value; it is a creative, oscillatory series with a time dynamic, which cannot be said merely to lack a solution – it has 2.
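A minimal numeric sketch of that dual value (our own illustration): the partial sums of Grandi’s series never settle on one limit – they beat between the two ‘solutions’.

```python
partial, sums = 0, []
for n in range(8):
    partial += (-1) ** n   # +1, -1, +1, -1, ...
    sums.append(partial)
print(sums)                # [1, 0, 1, 0, 1, 0, 1, 0] -- two values, no single limit
```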

It has therefore an internal dual structure, which in modern algebra is the group:

‘a’: 1 − 1 = 0. And so if we accept that internal ∆-1 unit for the series’ grouping, its real value is:

a+a+…. = 0+0+0…=0.

So we can write it in terms of the generator as:

∑ $t (+1) <≈> ∑ðƒ (-1), which defines generically a feed-back ‘world cycle’ whose sum is zero.

In classic maths of a single space-time continuum, the difference between both series is clear from their partial sums. The partial sums of 1/2+1/4… get closer and closer to a single fixed value—namely, 1. The partial sums of 1−1+1−⋯, without its internal ∆-1 (‘a’) structure, alternate between 0 and 1, so the series never settles down.

A series that does settle down to some definite value, as more and more terms are added, is said to converge, and the value to which it converges is known as the limit of the partial sums; all other series are said to diverge. But in GST many diverging series become, when considering also their internal structure, convergent and well behaved.

Actually, without even experimental evidence, there exist subtle problems with such ‘infinite’ construction. It might justifiably be argued that if the slices are infinitesimally thin, then each has zero area; hence, joining them together produces a rectangle with zero total area since 0 + 0 + 0 +⋯ = 0. Indeed, the very idea of an infinitesimal quantity is paradoxical because the only number that is smaller than every positive number is 0 itself.

The same problem shows up in many different guises. When calculating the length of the circumference of a circle, it is attractive to think of the circle as a regular polygon with infinitely many straight sides, each infinitesimally long. (Indeed, a circle is the limiting case for a regular polygon as the number of its sides increases.) But while this picture makes sense for some purposes—illustrating that the circumference is proportional to the radius—for others it makes no sense at all. For example, the “sides” of the infinitely many-sided polygon must have length 0, which implies that the circumference is 0 + 0 + 0 + ⋯ = 0, clearly nonsense.

SO BY REDUCTIO AD ABSURDUM, the limits of infinitesimals are shown to be always an ∆-1 quanta. THIS of course also resolves all of Cantor’s nonsense of different infinities and its paradoxes – just ‘math-fiction’, worthless to study.

The interest of those works for 5D maths lies in the fact that THE EXHAUSTION METHOD DOES LIMIT the parts to finitesimals, as a realist method, which implies nature also limits its divisions. This concept would be lost in the 3rd formal age, also with the ‘lineal bias’ introduced by Dedekind’s concept of a real number NOT as a proportion/ratio between quantitative parameters of the ‘parts’ of a whole, or the ‘actions’ of a system and its SE<STI>TO parameters – which is what IT IS – but as an ‘abstract cut’ in a lineal sequential order of ‘abstract numbers’.

Notice that in the classic STi balanced age, both the limits method and Leibniz’s finitesimal method considered infinitesimals as finitesimals – that is, with a ‘cut-off limit’ and a real nature.

Those limits are minimal ‘steps’ of any scale (in time-motion), or minimal parts (in space-forms).

Let us now deal with all this in a cleaner way in terms of polynomials, as SERIES are indeed the justification for POLYNOMIALS beyond the simplest spatial view of them in 3 steps of dimensions of space (point, line, volume) or motions of time (distance, motion, acceleration):

Polynomials as divergent or convergent scalar series.

In mathematics, a power series (in one variable) is an infinite series of the form

f(x) = ∑ₙ₌₀^∞ aₙ(x − c)ⁿ = a₀ + a₁(x − c) + a₂(x − c)² + a₃(x − c)³ + ⋯

where aₙ represents the coefficient of the nth term and c is a constant; aₙ is independent of x and may be expressed as a function of n (e.g., aₙ = 1/n!). Power series are useful in analysis since they arise as Taylor series of infinitely differentiable functions.

In many situations c (the center of the series) is equal to zero, for instance when considering a Maclaurin series. In such cases, the power series takes the simpler form

f(x) = ∑ₙ₌₀^∞ aₙxⁿ = a₀ + a₁x + a₂x² + ⋯

Any polynomial can be easily expressed as a power series around any center c, although most of the coefficients will be zero since a power series has infinitely many terms by definition. For instance, the polynomial f(x) = x² + 2x + 3 can be written as a power series around the center c = 0 as

f(x) = 3 + 2x + x²

or around the center c = 1 as

f(x) = 6 + 4(x − 1) + (x − 1)²

or indeed around any other center c. One can view power series as being like ‘polynomials of infinite degree’, although power series are not polynomials.
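A quick hedged check of the two expansions just quoted (our own sketch): evaluated at any point, both centers give the same values as the original polynomial.

```python
f  = lambda x: x**2 + 2*x + 3
g0 = lambda x: 3 + 2*x + x**2              # expansion around c = 0
g1 = lambda x: 6 + 4*(x - 1) + (x - 1)**2  # expansion around c = 1
for x in (-2.0, 0.0, 0.5, 3.0):
    assert f(x) == g0(x) == g1(x)          # same polynomial, different centers
print("both centers agree with f")
```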

The geometric series formula

1/(1 − x) = ∑ₙ₌₀^∞ xⁿ = 1 + x + x² + x³ + ⋯,

which is valid for |x| < 1, is one of the most important examples of a power series, as are the exponential function formula

eˣ = ∑ₙ₌₀^∞ xⁿ/n! = 1 + x + x²/2! + x³/3! + ⋯

and the sine formula

sin x = ∑ₙ₌₀^∞ (−1)ⁿ x²ⁿ⁺¹/(2n+1)! = x − x³/3! + x⁵/5! − ⋯,

valid for all real x.
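A minimal sketch (our own, truncating the two series above at a finite ‘quanta’ of terms): a handful of terms already exhausts humind floating-point precision.

```python
import math

def exp_series(x, terms=20):
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)     # builds x**n / n! incrementally
    return total

def sin_series(x, terms=10):
    total, term = 0.0, x
    for n in range(terms):
        total += term
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))   # next odd-power term
    return total

print(exp_series(1.0), math.exp(1.0))   # both ~2.718281828...
print(sin_series(1.0), math.sin(1.0))   # both ~0.841470984...
```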

These power series are also examples of Taylor series.

We shall then in other posts consider their relationship with those functions, which are the key DIMOTIONS of scalar motion (1/(1−x)), entropy (exponential) and the 1st Dimotion (sine).

Geometric series

A series can be considered as a scalar ‘search for its finitesimal part’. So in reality series are always ‘limited’ by the size of the ‘finitesimal’.

A geometric series is a series with a constant ratio between successive terms. For example, the series

1/2 + 1/4 + 1/8 + 1/16 + ⋯

is geometric, because each successive term can be obtained by multiplying the previous term by 1/2.

Each of the purple squares has 1/4 of the area of the next larger square (1/2×1/2 = 1/4, 1/4×1/4 = 1/16, etc.). The sum of the areas of the purple squares is one third of the area of the large square.

We can then consider it to be a series that diminishes till it reaches the ‘finitesimal’ 1/n part of the whole. And it can easily be cast as a polynomial, since the terms of a geometric series form a geometric progression, meaning that the ratio of successive terms in the series is constant. This relationship allows for the representation of a geometric series using only two terms, r and a. The term r is the common ratio, and a is the first term of the series.

In the example we may simply write a = 1/2 and r = 1/2.

The behavior of the terms depends on the common ratio r:

If r is between −1 and +1, the terms of the series become smaller and smaller, approaching zero in the limit and the series converges to a sum. In the case above, where r is one half, the series has the sum one.
If r is greater than one or less than minus one the terms of the series become larger and larger in magnitude. The sum of the terms also gets larger and larger, and the series has no sum. (The series diverges.)
If r is equal to one, all of the terms of the series are the same. The series diverges.
If r is minus one the terms take two values alternately (e.g. 2, −2, 2, −2, 2,… ). The sum of the terms oscillates between two values (e.g. 2, 0, 2, 0, 2,… ). This is a different type of divergence and again the series has no sum; for example in Grandi’s series: 1 − 1 + 1 − 1 + ···.
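A minimal sketch classifying the behaviours just listed (our own illustration, taking a = 1): the partial sums converge, diverge, or oscillate according to r.

```python
def partial_sums(r, n=8, a=1.0):
    total, term, out = 0.0, a, []
    for _ in range(n):
        total += term
        term *= r
        out.append(round(total, 4))
    return out

for r in (0.5, 2.0, 1.0, -1.0):
    print(r, partial_sums(r))
# 0.5 -> converges toward 2; 2.0 and 1.0 -> diverge; -1.0 -> oscillates 1, 0, 1, 0…
```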

Geometric series are among the simplest examples of infinite series with finite sums, although not all infinite series have this property.

Historically, geometric series played an important role in the early development of calculus, and they continue to be central in the study of convergence of series.

Geometric series are used throughout mathematics, and they have important applications in all sciences, as all of them are obviously scalar in its form, and respond to any of the 3 possible behaviors of systems, ‘convergent information’, divergent entropy and repetitive=reproductive oscillation.

Of the many mirror correspondences between series and 5D we want now to stress the relationship between the part and the whole, as elements of the ternary structure of any T.œ: its singularity, which can be considered the initial term a – the FINITESIMAL above all other finitesimals, the king of the hill so to speak – its membrane, and the space between them.

This relationship is truly enlightening of the symmetry between the 3 regions in space of a being and its 3 regions in scale. Whereas the central finitesimal @-mind is the finitesimal of the lower plane, the external membrane is the ‘larger term’, arⁿ, of the series, and the vital energy within them corresponds to the intermediate terms of the series, which are irrelevant. So as the singularity @ = a of the series expands through the vital energy elements in growing ‘circles’ to reach the final ‘membrane’, arⁿ, magically those irrelevant vital space cells will disappear in the final calculus of the value of the series.

Further on, those sums will be limited by n, which IS the number of ‘scales’ within the vital energy (concentric circles) required to arrive at its surface.

So a can also be viewed as the relative ‘radius’ of the singularity mind, which gives conceptual birth to the formula of the angular momentum of the series, rmv, where r = the sum of the singularity radius (imagine the inner region of the system as an Archimedean spiral), m the vital energy mass, and v the membrane.

All this is expressed in terms of discrete numbers – not geometric continuous motion – by the classic formula:

For r ≠ 1, the sum of the first n terms of a geometric series is

s = a(1 − rⁿ)/(1 − r),

where a is the first term of the series, and r is the common ratio. We can derive this formula as follows:

s = a + ar + ar² + ⋯ + arⁿ⁻¹
rs = ar + ar² + ⋯ + arⁿ
As we see, the @-singularity value a and the final term arⁿ are the ONLY values that matter, all the intermediate terms being ‘absorbed’ by the dynamic relationship between membrane and singularity. If s is the value of the series for the singularity, without the membrane, rs is the value of the system for the membrane, without the singularity – as the vital energy within has both the singularity and the membrane as the ‘Klein’ limits of a non-euclidean sphere, which it never reaches. And so we subtract from the ‘Singularity’ view of the vital energy, s, the membrane’s view, rs – its feeding (negative value) on the vital energy, s − rs – to search for the solution of the power series, which is NOT the membrane’s view but the singularity’s view, s:

s − rs = a − arⁿ   ⟹   s(1 − r) = a(1 − rⁿ)   ⟹   s = a · (1 − rⁿ)/(1 − r)

And so the solution, as always, is that of the mind’s view (in any discrete, numerical, self-centered analysis): s = a (value of the singularity) multiplied by the parenthesis.

Then we can easily see the symmetry of that topological explanation of the series with its scalar translation as a travel down a scale, from the whole to the finitesimals. Since those series must converge to make sense – because we are traveling down the scale to the finitesimals – r, as n goes to infinity, must be less than one for the series to converge. The sum then becomes:

s = a/(1 − r).
When a = 1 – that is, the singularity-mind as the view=value of the whole – this can be simplified to:

1 + r + r² + r³ + ⋯ = 1/(1 − r),
the left-hand side being a geometric series with common ratio r.
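A quick hedged check of the two formulas above (our own numbers): the n-term closed form agrees with direct addition, and both approach the limit a/(1 − r).

```python
a, r, n = 1.0, 0.5, 20
direct = sum(a * r**k for k in range(n))   # add the n terms one by one
closed = a * (1 - r**n) / (1 - r)          # the n-term formula above
print(direct, closed, a / (1 - r))         # both ~1.999998; the limit is 2.0
```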

The beauty and simplicity of the formula shows, by Occam’s razor principle, its ‘essential nature’ in terms of time-space laws.

Now the type of series – defined by the common ratio r, as listed above – determines which Dimotion it mirrors.

It is quite interesting then to understand, in terms of the 5 Dimotions and the 0-1 ≈ 1-∞ TIME-SPACE dual sphere (essential for quantum physics), the variations of the power series. They work on the 0-1 sphere, in which the series travels a SCALE OF THE FIFTH DIMENSION, from 1=∆ down to ∆-1; versus its entropic, divergent expansion when r is larger than ±1, as it travels in the 1-∞ sphere – which should HAVE A SOLUTION when we define a relative infinite as the value of the whole perceived from the finitesimal point of view. Then we make a travel upwards, from the ∆-1 finitesimal or ∆-being to the ∆+1 world.

So those series represent the 1st and 4-5th Dimotions, while the 3rd, reproductive dimotion happens when r = 1, as the reproductive sum that creates the terms of a reproductive wave – which in a lineal sum of steps will represent the 2D locomotion of the being. Finally, if r is −1 the series forms a ‘steady state’ zero-sum world cycle, an oscillation of two values.

So the key concept of a proper 5D scalar interpretation of series (this analysis of the simplest of all series for 5D advanced theory would obviously expand to power series, Taylor series, etc., but we leave this work for the future pouring of my notebooks or, if I die earlier, for future researchers) is the concept of finitesimals and relative infinites, ∞.

The limit of a sequence

In that regard we amend the work of the German mathematician Karl Weierstrass and his formal definition of the limit of a sequence, as follows:

Consider a sequence (aₙ) of real numbers, by which is meant an infinite list: a₀, a₁, a₂, ….

It is said that aₙ converges to (or approaches) the limit a as n tends to infinity, if the following mathematical statement holds true: For every ε > 0, there exists a whole number N such that |aₙ − a| < ε for all n > N. Intuitively, this statement says that, for any chosen degree of approximation (ε), there is some point in the sequence (N) such that, from that point onward (n > N), every number in the sequence (aₙ) approximates a within an error less than the chosen amount (|aₙ − a| < ε). Stated less formally, when n becomes large enough, aₙ can be made as close to a as desired.

For example, consider the sequence in which an = 1/(n + 1), that is, the sequence: 1, 1/2, 1/3, 1/4, 1/5, …,  going on forever.

Every number in the sequence is greater than zero, but, the farther along the sequence goes, the closer the numbers get to zero. For example, all terms from the 10th onward are less than or equal to 0.1, all terms from the 100th onward are less than or equal to 0.01, and so on. Terms smaller than 0.000000001, for instance, are found from the 1,000,000,000th term onward. In Weierstrass's terminology, this sequence converges to its limit 0 as n tends to infinity. The difference |an − 0| can be made smaller than any ε by choosing n sufficiently large. In fact, n > 1/ε suffices. So, in Weierstrass's formal definition, N is taken to be the smallest integer > 1/ε.

This example brings out several key features of Weierstrass’s idea. First, it does not involve any mystical notion of infinitesimals; all quantities involved are ordinary real numbers. Second, it is precise; if a sequence possesses a limit, then there is exactly one real number that satisfies the Weierstrass definition. Finally, although the numbers in the sequence tend to the limit 0, they need not actually reach that value.
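
As a numeric companion to Weierstrass's definition, the following Python sketch (the helper name weierstrass_N is ours) computes the N of the text for a given ε and checks that the terms 1/(n + 1) fall within the tolerated error from that point onward:

```python
import math

def weierstrass_N(eps):
    """A whole number N with |a_n - 0| < eps for all n > N, where a_n = 1/(n+1)."""
    # 1/(n+1) < eps  <=>  n > 1/eps - 1, so N = floor(1/eps) already works,
    # thanks to the n -> n+1 shift of this particular sequence.
    return math.floor(1 / eps)

for eps in (0.1, 0.01, 1e-9):
    N = weierstrass_N(eps)
    n = N + 1                      # any n > N
    assert 1 / (n + 1) < eps       # the term is within the tolerated error
    print(f"eps = {eps:<8} N = {N}")
```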

Now this N > 1/ε is exactly what Leibniz, without so much pedantic formalism, considered the finitesimal: what we call the quanta of an ∆-1 scale, and what physicists call, in their study of different scales, the minimal 'error-quanta' – h/2π, k-entropy, or the 'Planck mass' (a black hole of a Compton wavelength volume, the minimal quanta of the gravitational ∆+1 scale).

Continuity of functions

All this understood, we can then return to the inflationary nature of languages, which in the case of mathematics means that, without a mirror reflection in reality, it introduces false concepts of infinity and continuity with pedantic axiomatic methods – the origin of the concept of absolute continuity of a function; when the true concept is the 'stop and step' nature of motions, and the dark, non-perceived regions between contiguous points, or finitesimals. So it is irrelevant to the talk of discontinuity whether the finitesimal is a natural number, as the system will have contiguous finitesimals of 1-number size. We then talk of measure rather than continuity, and of errors of measure, from an upper ∆º mind.

Intuitively, a function f(t) approaches a limit L as t approaches a value p if, whatever size error can be tolerated, f(t) differs from L by less than the tolerable error for all t sufficiently close to p.

Just as for limits of sequences, the formalization of these ideas is achieved by assigning symbols to “tolerable error” (ε) and to “sufficiently close” (δ). Then the definition becomes: A function f(t) approaches a limit L as t approaches a value p if for all ε > 0 there exists δ > 0 such that |f(t) − L| < ε whenever |t − p| < δ. (Note carefully that first the size of the tolerable error must be decided upon; only then can it be determined what it means to be “sufficiently close.”)

But what exactly is meant by phrases such as “error,” “prepared to tolerate,” and “sufficiently close”?

Again, it is the relative ¡-1 quanta of the system studied. The 'error' of measure will then become ESSENTIAL to the explanation of Heisenberg's Uncertainty principle, which indeed can be obtained from the theory of measure and error by pure mathematical methods.

So in ideal mathematics, having defined the notion of limit in this context, with no limit to the infinitesimal size of the error, it is straightforward to define continuity of a function. Continuous functions preserve limits; that is, a function f is continuous at a point p if the limit of f(t) as t approaches p is equal to f(p). And f is continuous if it is continuous at every p for which f(p) is defined. Intuitively, continuity means that small changes in t produce small changes in f(t)—there are no sudden jumps.
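
The ε–δ game can be probed numerically. The sketch below (a crude sampler, not a proof; all names are ours) searches a few candidate δ's for a given ε, succeeding for a continuous function and failing for a jump:

```python
def check_continuity(f, p, eps, candidates=(1.0, 0.1, 0.01, 0.001)):
    """Numeric probe of the eps-delta definition at p: find a delta among the
    candidates such that |f(t) - f(p)| < eps for sampled t with |t - p| < delta."""
    L = f(p)
    for delta in candidates:
        ts = [p + delta * k / 100 for k in range(-99, 100)]  # sample the neighbourhood
        if all(abs(f(t) - L) < eps for t in ts):
            return delta
    return None

print(check_continuity(lambda t: t * t, p=2.0, eps=0.01))   # a small delta works
print(check_continuity(lambda t: 0.0 if t < 2 else 1.0,     # unit jump at t = 2:
                       p=2.0, eps=0.5))                     # no delta works -> None
```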

But as that small change will always be, in detail, an ε-quanta, in great detail THERE ARE QUANTUM JUMPS. In fact, as there is always an ε-quanta in any process in space or time, in form and motion (as we have shown when considering the nature of motion as reproduction of form in adjacent spaces), there will always be a quantum jump in every motion. And motion will be the reproduction of form in quantum jumps of ε nature.

 

∆-1: Leibniz’s definition of Finitesimals: 1/n

So we accept Leibniz’s concept of a finitesimal, as ALL organic systems have a minimal cellular quanta and a maximal enclosure. In mathematics this can be represented by the 0-1 finitesimal circle, closed above, as it becomes the 1-element in ∆-1 of the ∆º whole; which is mirrored by the equivalent 1-∞ graph, open above into the wholeness of a larger Universe (though this too will normally have a limit, in the decametric logarithmic scale of the ∆º whole world embedded in the ∆+1 truly infinite Universe).

Perhaps the most fascinating part of number theory is the finitesimal, as infinitesimals do not exist – space being quantic, there will always be a limit, a micro-cycle of time or a quanta of population in space, to signify the finitesimal point, as Leibniz rightly understood when he defined it with a simple, powerful form: 1/n.

And indeed in the Universe finitesimals tend to be structured as in a Russian doll, such that the biggest wholes, n → ∞, have the smallest finitesimals, 1/n → 0.

The absolute zero size is thus the finitesimal of the largest possible Universe. In praxis, we humans only observe a finitesimal from our ∆º-mind perspective – the Planck scale – and accordingly we see a Universe of inverse relative size, humans standing in the ∆º middle view (at cellular level), as physicists wonder, without realizing this is NOT a coincidence but a natural law of the scalar, fractal, organic structure of the Universe:

1/n: the (in)finitesimal (in)finite

With the convention that ƒ(x) is normally a function of time frequencies, ƒ(t), of motions of time, whose syntonies of synchronicity in space are expressed by an algebraic equation, we bring the following understanding:

A finitesimal quanta in any scale is the departure point to build any function; as such it must have a minimal size, and ƒ′(t) is normally a good measure of it.

The study of the finitesimal as perceived from the finite point of view is the view of fractals: when seen in detail, observing the closed worldcycles that separate each finitesimal and make it a whole.

A derivative is the finitesimal of the function observed; and when we go even further and study it enlarged into our scalar view, in maximal information, we are in the fractal view of reality.

So as we expand our view the fractal view becomes more real, till finally the enclosures observed at ∆-1 become fractal and we recognise their self-similarities: ∆-1 ≤ ∆º.

For each derivative a function thus shows its 1/n finitesimal (not necessarily the function 1/x itself, which is specifically the derivative of the logarithm).

It follows that functions which grow ginormously have a ‘quanta of time’ reproduced at each step, and so their minimal derivative finitesimal is the function itself, eˣ.
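
Both statements can be checked with a finite difference quotient – a 'finitesimal' rate over a small but finite step h, in the spirit of this section. A minimal Python sketch, with an illustrative step size:

```python
import math

def finitesimal_rate(f, x, h=1e-6):
    """Symmetric difference quotient: the rate of change over a small but finite
    step h, a 1/n quanta rather than an ideal limit."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 3.0
print(finitesimal_rate(math.log, x), 1 / x)        # d/dx ln x ~ 1/x  (the 1/n finitesimal)
print(finitesimal_rate(math.exp, x), math.exp(x))  # d/dx e^x ~ e^x  (its own finitesimal)
```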

In the next graph we see inverse equations of exponentials and logarithms.

Exponentials express decay – with a ‘negative’ exponent – better than exponential growth.

Mathematics is a reflection of nature, a small mirror of its ∆º±i structure; so for exponential growth we need Nature to provide unlimited energy, which happens only in the 0-1 generational dimension of the being, or in its inverse decay, in its 4D entropy age of death.

On the other hand, the limit of logarithmic growth maps real growth better in logistic curves, being a good function to express ∆-scales.

So numbers reflect those processes in their inverse exponential/logarithm mathematical graphs and numerical series.

ST: The three coordinate systems, self-centred into an ∆º pov, reflect each of the three ‘topologies of space-time’ (cylindrical: lineal; polar: cyclical; Cartesian: hyperbolic); while the finitesimal 0-1 scale and the infinite 1-∞ scale, divided by the ‘1’ ∆º relative element, represent perfectly the ∆-scalar nature of superorganisms.

∆º±1: Further on, as we can ‘reduce’ each relative infinity to those 3 scales and represent all timespace phenomena with the different families of numbers that close algebra (entropic, positive numbers; informative, negative numbers; present space-time, complex bidimensional numbers; s/t ir-ratio-nal numbers, etc.), mathematics becomes essentially the most realist language to represent the scalar, organic, ternary Universe.

The 0-1 scale is equivalent to the 1-∞ scale for the lower ∆-1 Universe, where 1=∆º, the whole and 1-∞ is the ∆+1 eternal world.

And this is the symmetry to grasp the consequences of the o-1-∞ fundamental graph of the fifth dimension. Let us see how with a simple example:

Now the mirror symmetries between the 0-1 universe and the 1-∞ are interesting as they set two different ‘limits’, an upper uncertain bound for the 1-∞ universe, in which the 1-world, ∆º exists, and a lower uncertain bound for the 0-1 Universe, where the 1 does not see the limit of its lower bound. Are those unbounded limits truly infinite?

This is the next question where we can consider the homology of both the microscopic and macroscopic worlds.

Of course the axiomatic method ‘believes’ in infinity – we deal with the absurdities of Cantorian transinfinities in the articles on numbers. But as we consider maths, after Lobachevski, Gödel and Einstein, an experimental science, we are more interested in the homologies of ∆±1. For one thing, while 0 can be approached through an infinity of infinitesimal ‘decimals’, so it seems it can never be reached, we know since the ‘ultraviolet catastrophe’ that the infinitesimal is a ‘quanta’, a ‘minimum’, a ‘limit’. And so we return to Leibniz’s rightful concept of 1/n, the minimal part of the whole ‘1’.

This implies by symmetry that on the upper bound, the world-universe in which the 1 is inscribed will also have a limit, a discontinuity with ∆+2, which sets up all infinities of the upper bound also as finite quanta, ‘wholes of wholes’.

So the ‘rest’ of infinities must be regarded within the theory of information languages and its inflationary nature, inflationary information. What is then the ‘practical limit’ for most infinities and infinitesimals? In GST, the standard limit is the perfect game of 3 × 3 + 0(±1) elements, where the 0-mind triples, as it is at once the ∆-1 ‘god of the infinitesimals it rules subconsciously, as your brain rules your cells’, the ∆º consciousness of the whole, and an ∆+1 infinitesimal of the larger world.

The 0-1 time-mirrored quantum world of probabilities of existence – indistinguishable infinitesimals seen through the surface limit of their statistical description in the thermodynamic scale of atomic beings – ends in the 1-unit of our human cellular space, where thermodynamic considerations are reduced to a temperature gradient towards the homeostatic, mass-based forces of our human level of existence, ∆º.

So we consider, as usual, the kaleidoscopic, multiple function of analysis and the multiple meanings of its inverse ∆±1 operations, derivatives and integrals; since, as usual, the potency of ∆st lies in the search for whys, not in the discovery of new equations, which humans always exhaust by the monkey method of trial and error – sweat and transpiration more than the inspiration of pure logic thought…

Conclusion. 

The Universe is discontinuous. To differentiate a function we do NOT need absolute continuity but the existence of a finitesimal 1/n, and no jump between ‘neighbourhoods’ further apart than a 1/n distance in either the X or Y coordinates. ‘Adjacency’ of the function is then defined by discrete 1/n intervals, which suffice in Nature=reality, REGARDLESS of the mathematical methods used to define them.

 

2nd AGE: CALCULUS

  OPERATIONS: ∫∂

Its inverse symmetries on the Cartesian plane: merging all the elements of ∆@s=t maths.

Descartes’ idea was to represent the solutions of an equation with a larger dimension – the variable letter that represented all the ‘§ets’ of dual X, Y possible solutions – and to ‘imagine’ them in a graph that plots them, forming a visual ‘in-form-ative’ geometric figure: the new ‘scalar dimension‘ that gathers all the X(S)<≈>Y(t) pairs of possible ‘variations’ of the space-time construct.

Up to the time of Descartes, when an algebraic equation in two unknowns F(x, y) = 0 was given, it was said that the problem was indeterminate, since from the equation it was impossible to determine these unknowns; any value could be assigned to one of them, for example to x, and substituted in the equation; the result was an equation with only one unknown, y, which in general could be solved.

Then this arbitrarily chosen x together with the so-obtained y would satisfy the given equation. Consequently, such an “indeterminate” equation was not considered interesting.
Descartes looked at the matter differently. He proposed that in an equation with two unknowns x be regarded as the abscissa of a point and the corresponding y as its ordinate. Then if we vary the unknown x, to every value of x the corresponding y is computed from the equation, so that we obtain, in general, a set of points which form a curve.

The deepest insight on what Descartes did is then evident:

HE GAVE MOTION=CHANGE TO GEOMETRY, ADDING ITS TIME-DIMENSION; and so his method could be used to study the actions/motions of a ‘fractal point’ whose inner geometry of social numbers was NOW ignored, in the ∆+1 scale of its world. And so the graph became the perfect tool to study all the ACTIONS=MOTIONS external to a given being, becoming for that reason the foundational structure of mathematical physics.

Thus in analysis we will find that the curves DO represent key features of the ‘arrows of change’ of the Universe, especially the ‘standing points’ of change of the parameters of Space=Information, ST=energy and Time=entropy (or any other kaleidoscopic combination of S and T); in essence they represent the world cycle of the action or motion we study, with its 3 phases of starting motion, steady state, and 3rd informative age coming to a halt.

Historic view.

Particularly important here is the theorem of Newton and Leibniz to the effect that the problem of quadratures is the inverse, in a well-known sense, of the problem of tangents.

For solving the problem of tangents, and problems that can be reduced to it, there was worked out a suitable algorithm, a completely general method leading directly to the solution, namely the method of derivatives or of differentiation.

It turned out that if the law for the formation of a given curve is not too complicated, then it is always possible to construct a tangent to it at an arbitrary point; it is only necessary to calculate, with the help of the rules of differential calculus, the so-called derivative, which in most cases requires a very short time. Up till then it had been possible to draw tangents only to the circle and to one or two other curves, and no one had suspected the existence of a general solution of the problem.

If we know the distance traversed by a moving point up to any desired instant of time, then by the same method we can at once find the velocity of the point at a given moment, and also its acceleration. Conversely, from the acceleration it is possible to find the velocity and the distance, by making use of the inverse of differentiation, namely integration. As a result, it was not very difficult, for example, to prove from the Newtonian laws of motion and the law of universal gravitation that the planets must move around the sun in ellipses according to the laws of Kepler.
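
A short numeric sketch of that double inversion, with a hypothetical distance law s(t) = 5t² (so v = 10t, a = 10): differences give velocity and acceleration, and a trapezoid sum recovers the distance from the velocity:

```python
dt = 0.001
ts = [k * dt for k in range(2001)]                 # t in [0, 2]
s = [5 * t * t for t in ts]                        # illustrative distance law

v = [(s[i + 1] - s[i]) / dt for i in range(len(s) - 1)]   # ds/dt ~ 10 t
a = [(v[i + 1] - v[i]) / dt for i in range(len(v) - 1)]   # dv/dt ~ 10

s_back, acc = [0.0], 0.0
for i in range(len(v) - 1):
    acc += 0.5 * (v[i] + v[i + 1]) * dt            # trapezoid rule: inverse operation
    s_back.append(acc)

print(v[1000], a[1000])                            # ~ 10.0 (v at t=1) and ~ 10.0
print(s[1500], s_back[1500])                       # ~ 11.25 vs its reconstruction
```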

Of the greatest importance in practical life is the problem of the greatest and least values of a magnitude, the so-called problem of maxima and minima.

A note of importance, especially for the calculus of variations, is the nature of that minimal fractal step, which is the point of tangency. As a point always has parts (it is a fractal point), the finitesimal is the fractal point: not a single point, but the point and a very ‘small’ surrounding (the previous and next points). So a maximum or minimum is a dual point, so to speak, with a zero tangent (flat line) – in terms of motion, a still moment at the summit of the function, which justifies its 0 value (or else, if it were a single point, upwards before it and downwards after, or vice versa in a minimum, the value of the derivative at that point would be undetermined).
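
The ‘dual point’ reading of a zero tangent can be coded directly: instead of solving ƒ′ = 0, we watch the sign of the slope between neighbouring sample points flip. A sketch (function and sampling density are illustrative):

```python
import math

def extrema(f, a, b, n=10000):
    """Locate max/min candidates of f on [a, b] by the sign change of the slope
    between neighbouring sample points - the 'dual point' reading above."""
    h = (b - a) / n
    xs = [a + k * h for k in range(n + 1)]
    out = []
    for i in range(1, n):
        left = f(xs[i]) - f(xs[i - 1])        # slope sign before the point
        right = f(xs[i + 1]) - f(xs[i])       # slope sign after the point
        if left > 0 >= right:
            out.append(("max", round(xs[i], 4)))
        elif left < 0 <= right:
            out.append(("min", round(xs[i], 4)))
    return out

print(extrema(math.sin, 0, 2 * math.pi))      # ~ ('max', pi/2), ('min', 3*pi/2)
```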

At various points of a curved line, if it is not a straight line or a circle, the curvature is in general different. How can we calculate the radius of a circle with the same curvature as the given line at the given point, the so-called radius of curvature of the curve at the point? It turns out that this is equally simple; it is only necessary to apply the operation of differentiation twice. The radius of curvature plays a great role in many questions of mechanics.
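
The recipe ‘differentiate twice’ is literal. Assuming the standard formula R = (1 + y′²)^(3/2) / |y″|, a Python sketch with numeric first and second differences, sanity-checked on a circle of radius 2:

```python
import math

def radius_of_curvature(f, x, h=1e-5):
    """R = (1 + y'^2)^(3/2) / |y''| via two numeric differentiations."""
    d1 = (f(x + h) - f(x - h)) / (2 * h)               # first derivative
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)    # second derivative
    return (1 + d1 * d1) ** 1.5 / abs(d2)

# A circle of radius 2, y = sqrt(4 - x^2), must give R = 2 at every point.
print(radius_of_curvature(lambda x: math.sqrt(4 - x * x), 0.5))   # ~ 2.0
```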

Now we observe a curious duality between the mathematical mind’s solution and the reality check: while classic science differentiates ‘twice’ to know if the point will ‘fall’ or ‘rise’ after the standing point (the ∆nalytical solution), we obtain the same knowledge by ‘seeing’ in ‘reality’ how the 2 sequential points that surround the flat step behave in spacetime. It is this ‘time interval’ of 4 sequential ‘steps’ that the derivative method – which can be considered a reduction of the curve to the essence of its time sequence – solves. And we shall often see this inverse GST reduction, from reality and its complex actions back to its sequential origin.

Indeed, in our study of sequential actions of world cycles we noticed that the steps of actions are always the same:

1D: ï-perception -> 2D: A: motion towards energy -> E: feeding -> 3D: wide storage of food, or 3D×5D: O: reproduction and U: social evolution.

1D: As we move from the first ‘action’ – to open your ‘eyes’, perceive and be perceived as a function in existence (with a quantitative parameter, which is a scalar ‘point’)…

2D: We then move into a motion (with a more complex quantitative parameter, a bi-vector or 3-vector).

Locomotion in physics is thus a 2-step ST action, which we can measure as momentum with 2 parameters: the ‘tiƒ’ parameter (frequency, mass, temperature) for the 1D point, and the spatial location, which will break into 3 parameters for a vector (x, y, z + t).

It is though still a simple S=T, where the time parameters are reduced to an external measure of space-motion.

We will though depart from this simplest ï->A-nalysis of locomotion to include, not necessarily in quantitative terms, the description of the other ‘motions/actions’ of reality: Energy feeding, O & U.

 

Mathematicians were greatly pleased when it turned out that the theorem of Newton and Leibniz, to the effect that the inversion of the problem of tangents would solve the problem of quadrature, at once provided a method of calculating the areas bounded by curves of widely different kinds. It became clear that a general method exists, which is suitable for an infinite number of the most different figures. The same remark is true for the calculation of volumes, surfaces, the lengths of curves, the mass of inhomogeneous bodies, and so forth.

And this, like most purely spatial questions, is straightforward: you add up finitesimal line-steps or square areas (which would also have the absolute limit of triangular Planck areas, which according to the bidimensional holographic principle are the minimal area of information of a black hole; as indeed the black hole converts the spherical event horizon into ‘static’ hexagonal π=3 shrunk curvatures – incidentally the strongest, most stable ‘Buckminster domes’ and graphenes).

The new method accomplished even more in ‘time’ mechanics because, unlike the easy-to-figure-out approximations of areas – the staple food that started up mathematics in agricultural measure – time was NOT, and still is NOT, understood. So alas, the ‘magic’ method of LN (ab. for Leibnewton, Leibniz first:) solved questions of time-change without knowing much about time.

It seemed there was no problem of loco-motion or ratio of change that the new calculations could not clarify and solve.

Not long before, Pascal had explained the increase in the size of the Torricelli vacuum with increasing altitude as a consequence of the decrease in atmospheric pressure. But exactly what is the law governing this decrease? The question is answered immediately by the investigation of a simple differential equation (its deep philosophical insights on S≈T transformations ignored – ‘who cares’, would say Feynman):
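
That simple differential equation is dp/dh = −k·p: pressure falls in proportion to itself, giving the exponential barometric law p = p₀·e^(−kh). A sketch comparing Euler steps with the exact solution (k and p₀ are illustrative constants, not measured data):

```python
import math

k, p0, dh = 0.00012, 101_325.0, 10.0          # per metre, Pa, step in metres

p, h = p0, 0.0
while h < 5000.0:
    p += -k * p * dh                           # Euler update of dp/dh = -k p
    h += dh

print(p, p0 * math.exp(-k * 5000.0))           # ~ 5.56e4 Pa both ways
```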

It is well known to sailors that they should take two or three turns of the mooring cable around the capstan if one man is to be able to keep a large vessel at its mooring. Why is this? Of course, you need two and better 3 elements for a ‘system’ to become a stable whole – so, as sailors always said, ‘3 Saint Marys… 3 huge waves, and 3 knots are best’… but alas, a differential equation similar to Torricelli’s solves it magically.
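
The equation in question is the capstan (Euler–Eytelwein) relation dT/dθ = μT, whose solution T_load = T_hold·e^(μθ) multiplies the holding advantage exponentially with each turn – the same differential equation as the pressure example above. A sketch with an illustrative friction coefficient:

```python
import math

mu = 0.3                                       # illustrative cable/capstan friction
for turns in (1, 2, 3):
    theta = 2 * math.pi * turns                # wrapped angle in radians
    print(turns, "turns -> advantage x", round(math.exp(mu * theta), 1))
# ~ x6.6, x43.4, x285.8: why two or three turns let one man hold a ship.
```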

Thus, after the creation of analysis, there followed a period of tempestuous development of its applications to the most varied branches of technology and natural science. Since it is founded on abstraction from the special features of particular problems, mathematical analysis reflects the actual deep-lying properties of the material world; and this is the reason why it provides the means for the investigation of such a wide range of practical questions. The mechanical motion of solid bodies, the motion of liquids and gases and of their individual particles, their laws of flow in the mass, the conduction of heat and electricity, the course of chemical reactions – all these phenomena are studied in the corresponding sciences by means of mathematical analysis.

At the same time as its applications were being extended, the subject of analysis itself was being immeasurably enriched by the creation and development of various new branches, such as the theory of series, applications of geometry to analysis, and the theory of differential equations.

So among mathematicians of the 18th century there was a widespread opinion that any problem of the natural sciences, provided only that one could find a correct mathematical description of it, could be solved by means of analytic geometry and the differential and integral calculus. And so the flurry of activity in the next centuries would be to extend its practical uses.

Discovery of the calculus and errors in dogmatic foundations

Two ‘S≈t’ and ∆±1 major steps led to the creation of analysis:

S≈t: The first was the discovery of the surprising relationship, known as the fundamental theorem of calculus, between spatial problems involving the calculation of some total size or value, such as length, area, or volume (integration), and problems involving rates of change in time, such as slopes of tangents and velocities (differentiation). ( Gottfried Wilhelm Leibniz and Isaac Newton.)

• While the utility of calculus in explaining physical phenomena was immediately apparent, its use of infinity in calculations (through the decomposition of curves, geometric bodies, and physical motions into infinitely many small parts) generated widespread unease… as only Leibniz reached the understanding of a ‘fractal point, which is a world in itself’, and of the finitesimal nature of derivatives (1/n).

So a dogmatic zealot, the Anglican bishop George Berkeley, published a famous pamphlet, ‘The Analyst; or A Discourse Addressed to an Infidel Mathematician’ (1734), pointing out that calculus – at least as presented by Newton and Leibniz – possessed serious logical flaws, from the arrogant pov of the human mind as son of god, who must access absolute truths. LOL.

Analysis then grew out of the resulting painstaking, experimental, close examination of concepts such as function and limit, which are still improperly defined by the axiomatic zealots of the humind (ab. human mind) and its claimed rights to more than humind truths: ‘man is a mush over a lost rock of the Universe; departing from this (relative) principle, we can talk about him’ – Schopenhauer.

As all entities have a causal development from a spatial, first entropic age into complex time analysis, ending in the awareness of their ∆±1 dimension, the pioneers’ – Newton’s and Leibniz’s – approach to calculus had been primarily geometric, involving ratios with “almost zero, +0” divisors: Newton’s “fluxions” and Leibniz’s “infinitesimals.”

During the 18th century calculus became increasingly temporal, algebraic, as mathematicians – most notably the Swiss Leonhard Euler and the Italian-French Joseph-Louis Lagrange – began to generalize the concepts of continuity and limits from geometric curves and bodies to more abstract algebraic functions, and began to extend these ideas to complex numbers, which are the ideal elements to study ∆spacetime processes in their more complex interrelationships, often reduced to the 0-1 ‘infinite/simal’ domain.

Then, in a useless attempt to show the humind absolute in its truth, as these developments were not entirely satisfactory from such a deluded foundational standpoint, the so-called ‘rigorous’ (: basis for calculus was ‘invented’ by Augustin-Louis Cauchy, Bernhard Bolzano and, above all, the idealist squared, usual suspect of total false truths – a cultural simpleton, the German Karl Weierstrass – in the 19th century.

In that regard (see the ∞|0 post on the meaning of numbers, infinites and infinitesimals), the logical difficulties involved in setting up calculus on a sound basis are all related to one central problem, the notion of continuity.

Newton and Leibniz.

NOW, there has been much irrelevant argument about who was first, Newton or Leibniz, in the discovery of calculus. To me the enormous superiority of Leibniz over Newton, ethically and intellectually, has always been obvious at all levels.

And it can be summarized in this: Newton is NOT really a modern, 2nd age researcher of calculus but rather the culmination of Archimedes’ method of exhaustion of limits – he didn’t understand anything about the true meaning of calculus, and for that reason his notation is so convoluted (plus his nauseating treatment of Leibniz makes him a complete a$$hole). Leibniz on the other hand understood more than all who would come after him: he said ‘a point is a world in itself’, defined the ‘finitesimal’ as 1/n – which later abstract mathematicians forgot – and built an entire philosophy of the Universe (monads) right on the spot, a clear predecessor of all our work.

And as usual the a$$hole, making military instruments for the Navy, bullying and calumniating Leibniz, carried the day. But we use Leibniz’s notation, the modern view…

So let us first close the ‘Greek era’ with the last of the Greek Alcibiades’, Mr. Newton.

In the second age of mathematics, the question of infinitesimals was resolved, if not accepted, by Leibniz, who used geometrical concepts on the Cartesian plane to understand them, as opposed to Newton, who used algebraic concepts in his study.

INVERSION OF SYMMETRIES.

A key concept, as it belongs to the fundamental structure of the Universe, is that of inverted, entropic numbers.

Entropy, ¬, in mathematical systems is embodied by the inverse operations that eliminate the information of a system. As it happens, entropy can then take the general format of the negative operand of the system, so for each positive operand there is a negative one. And among all the operands there is one which is the most entropic of them all, the exponential – notably e⁻ˣ, whose massive negative growth signifies the growing dissolution of a form into its finitesimal parts. As systems are in general decametric, such exponential entropy also affects the very same number, which loses its ‘meaningful series form’ after 10 decimals.

Entropy in calculus is then represented by the inverse function of a positive social growth – normally a derivative that extracts the finitesimal – and, when we work on entangled series of Dimotions, by a differential equation perpendicular or inverse to another equation.

Geometric Interpretation of the Problem of Integrating Differential Equations; Generalization of the Problem

For simplicity we will consider initially only one differential equation of the first order with one unknown function, dy/dx = ƒ(x, y) (29), where the function ƒ(x, y) is defined on some domain G of the (x, y) plane. This equation determines at each point of the domain the slope of the tangent to the graph of a solution of equation (29) at that point. If at each point (x, y) of the domain G we indicate by means of a line segment the direction of the tangent (either of the two directions may be used) as determined by the value of ƒ(x, y) at this point, we obtain a field of directions. Then the problem of finding a solution of the differential equation (29) for the initial condition y(x₀) = y₀ may be formulated thus: in the domain G we have to find a curve y = ϕ(x), passing through the point M₀(x₀, y₀), which at each of its points has a tangent whose slope is given by equation (29), or briefly, which has at each of its points a preassigned direction.

From the geometric point of view this statement of the problem has two unnatural features:
1.  By requiring that the slope of the tangent at any given point (x, y) of the domain G be equal to f(x, y), we automatically exclude tangents parallel to Oy, since we generally consider only finite magnitudes; in particular, it is assumed that the function f(x, y) on the right side of equation (29) assumes only finite values.
2.  By considering only curves which are graphs of functions of x, we also exclude those curves which are intersected more than once by a line perpendicular to the axis Ox, since we consider only single-valued functions; in particular, every solution of a differential equation is assumed to be a single-valued function of x.
So let us generalize to some extent the preceding statement of the problem of finding a solution to the differential equation (29). Namely, we will now allow the tangent at some points to be parallel to the axis Oy. At these points, where the slope of the tangent with respect to the axis Ox has no meaning, we will take the slope with respect to the axis Oy. In other words, we consider, together with the differential equation (29), the equation: dx/dy = ƒ₁(x, y)   (29′)
where f1(x, y) = 1/f(x, y), if f(x, y) ≠ 0, using the second equation when the first is meaningless. The problem of integrating the differential equations (29) and (29′) then becomes: In the domain G to find all curves having at each point the tangent defined by these equations.

These curves will be called integral curves (integral lines) of the equations (29) and (29′) or of the tangent field given by these equations. In place of the plural “equations (29), (29′)”, we will often use the singular “equation (29), (29′)”. It is clear that the graph of any solution of equation (29) will also be an integral curve of equation (29), (29′). But not every integral curve of equation (29), (29′) will be the graph of a solution of equation (29). This case will occur, for example, if some perpendicular to the axis Ox intersects this curve at more than one point.
In what follows, if it can be clearly shown that ƒ(x, y) = M(x, y)/N(x, y), then we will write only the equation dy/dx = M(x, y)/N(x, y) and omit dx/dy = N(x, y)/M(x, y). Sometimes in place of these equations we introduce a parameter t and write the system of equations: dy/dt = M(x, y), dx/dt = N(x, y), where x and y are considered as functions of t.

Example 1. The equation dy/dx = y/x   (30) defines a tangent field everywhere except at the origin, as sketched in the figure. All the tangents given by equation (30) pass through the origin.

It is clear that for every k the function y = kx   (31) is a solution of equation (30). The collection of all integral curves of this equation is then defined by the relation ax + by = 0   (32), where a and b are arbitrary constants, not both zero. The axis Oy is an integral curve of equation (30), but it is not the graph of a solution of it.

Since equation (30) does not define a tangent field at the origin, the curves (31) and (32) are, strictly speaking, integral curves everywhere except at the origin. Thus it is more correct to say that the integral curves of equation (30) are not straight lines passing through the origin but half lines issuing from it.

Example 2. The equation dy/dx = −x/y   (33) defines a field of tangents everywhere except at the origin, as sketched in the figure. The tangents defined at a given point (x, y) by equations (30) and (33) are perpendicular to each other. It is clear that all circles centered at the origin will be integral curves of equation (33); the solutions of this equation are the functions y = ±√(C² − x²).

Now this duality is ESSENTIAL, as any undergraduate student knows, to the duality of potential fields vs. charge singularity forces; and if he has understood anything he will see they are the 2 views of a T.œ’s control of its vital energy: the perpendicular=predatory view of the singularity (4th non-E postulate) vs. the parallelism of the membrane that encircles it.

So it also ultimately reflects the DEEPEST meaning of the potential, parallel, stable vs. kinetic, perpendicular, unstable duality of the 2 existential states of a vital energy, which will naturally tend to the potential state of minimal kinetic disturbance – to the eternally wished-for state of informative, 1D-curved eternal existence over the lineal, destructive entropic motion.

So deep is the duality of y = kx vs. dy/dx = −x/y (:
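
The two fields can be followed numerically to see the duality: the first preserves the slope y/x (rays through the origin), the second preserves the radius (circles around it). A minimal Euler-step sketch (step sizes are illustrative):

```python
import math

def euler_path(fx, fy, x, y, steps=2000, dt=0.001):
    """Follow a tangent field (dx/dt, dy/dt) = (fx, fy) by Euler steps."""
    for _ in range(steps):
        x, y = x + fx(x, y) * dt, y + fy(x, y) * dt
    return x, y

# Equation (30), dy/dx = y/x, as the system dx/dt = x, dy/dt = y: radial rays.
x, y = euler_path(lambda x, y: x, lambda x, y: y, 1.0, 2.0)
print("ray keeps slope y/x:", y / x)                      # stays ~ 2 = k

# Equation (33), dy/dx = -x/y, as dx/dt = y, dy/dt = -x: circles round the origin.
x, y = euler_path(lambda x, y: y, lambda x, y: -x, 1.0, 2.0)
print("circle keeps radius:", math.hypot(x, y))           # stays ~ sqrt(5)
```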

And so we shall bring it back in many posts.

LEIBNIZ: GEOMETRIC FINITESIMALS

Leibniz, along with Aristotle and Leonardo, forms the triad of great geniuses of the Western, visual-dominant civilization, whose ternary structure in ages and regions we treat in the posts on the European civilization… Like this writer, though, Leibniz and his equals had the problem of wanting to understand it all, and that gets more complex than a simple lineal sword – the way of Newton, who had a very specialized mathematical mind that asked no further whys, filled as it was with the myths of the Bible. By all means, then, Leibniz, along with Descartes and his parallel, artist fellow Basque countryman Fermat, can be considered, for their insights on the whole mirror of mathematics, the fathers of the second age of mathematics – Newton being greatly overrated.

His insights were on the S=T, point=number symmetries of the mathematical mirror; on the fractal point with breadth – a world in itself; on the finitesimal with a limit, 1/n, and its ‘curvature’ with respect to the lineal radius; and hence on the duality between lineal freedom and cyclical order.

Enter Leibniz. Derivatives as spatial tangents: line vs. curve. Its pentalogic.

All this said, the true innovation of calculus was to understand that an infinitesimal ‘h’ can be represented by the lineal tangent to a curve.

This point is NOT an infinitesimal but a finitesimal, since in non-Euclidean maths points have breadth. So the finitesimal, which Leibniz also defined in algebraic terms as 1/n, does have breadth. The pretentious search for ideal exactitude in the axiomatic method is what makes modern mathematics, lacking empirical sense, reject him.

S<=>T. Pentalogic then gives the derivative multiple functions in a single plane, as it can be applied to functions that define entropy (negative exponentials), angles of perception (trigonometric functions) or simple locomotion (lineal and angular momentum, energy functions).

∆±¡: The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value. So it is a finitesimal, and it is lineal.

So in pentalogic we say that the ‘FINITESIMAL’ always feels free, unbounded, ‘lineal’, like your steps on the flat Earth, while the whole is curved, ordered, closed. So we can state that a derivative, by giving us the finitesimal part, misses the order of the whole and becomes a lineal transformation; and this holds also for derivatives of multiple variables.
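
Numerically, the lineal finitesimal view shows up as an error of second order: the tangent prediction ƒ(x₀) + ƒ′(x₀)·h misses the true value by roughly the curvature the straight step ignores. A sketch:

```python
import math

f, df = math.sin, math.cos                     # a curved 'whole' and its exact slope
x0 = 1.0
for h in (0.1, 0.01, 0.001):
    tangent = f(x0) + df(x0) * h               # the free, lineal step
    print(h, abs(f(x0 + h) - tangent))         # error falls like h^2: the curvature
                                               # of the whole that the step misses
```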

Let us then introduce derivatives in simple ‘wiki-terms’ with 5D insights:

The derivative of a function of a real variable measures the sensitivity of the function value (output value) to a change in its argument (input value). The question is then: what type of change takes place? As normally the space function is the one with form, which undergoes change, it should be the input argument, and the y-variable that of change. But when we study locomotion we shall observe a clear variation in 5D: it is NO longer space but time, the frequency of motion, what changes; the speed of the wheel is not a change in ‘position’ – the external change – but a change in the internal frequency of turn of the car’s wheel. Time speed is thus what changes, not only in relativity but also in normal motion.

Can we then assess change in space? Only externally, in the position with respect to a background larger ∆+1 world; not in the internal vital space of the being, which remains stable, changing instead the frequency of its stœps.

So while the derivative of the position of a moving object with respect to time is the object’s velocity – measuring how quickly the position of the object changes as time advances – in 5D it is the inverse: a change in the frequency of motion.

@-Mind. Derivatives may be generalized to functions of several real variables. In this generalization, the derivative is reinterpreted as a linear transformation whose graph is (after an appropriate translation) the best linear approximation to the graph of the original function.

The Jacobian matrix is the matrix that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables.

It can be calculated in terms of the partial derivatives with respect to the independent variables.

For a real-valued function of several variables, the Jacobian matrix reduces to the gradient vector.

Why the Jacobian matrix defines the change of mind perspective is clear when we use it to transform planar (Cartesian) coordinates into polar, spherical ones:

The transformation from polar, particle/head/informative coordinates (r, φ) to Cartesian, body/wave hyperbolic coordinates (x, y) is given by the function F: ℝ⁺ × [0, 2π) → ℝ² with components:

x = r cos φ;   y = r sin φ.

The Jacobian determinant is equal to r. This can be used to transform integrals between the two coordinate systems: ∬ ƒ(x, y) dx dy = ∬ ƒ(r cos φ, r sin φ) r dr dφ.
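
The determinant r can be verified symbolically; a sketch using the sympy library (assuming it is available):

```python
import sympy as sp

r, phi = sp.symbols('r phi', positive=True)
F = sp.Matrix([r * sp.cos(phi), r * sp.sin(phi)])   # polar -> Cartesian components
J = F.jacobian([r, phi])                            # matrix of partial derivatives
print(J)               # [[cos(phi), -r*sin(phi)], [sin(phi), r*cos(phi)]]
print(sp.simplify(J.det()))                         # r : the area quanta scaling integrals
```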

A derivative is thus a measure of a finitesimal change in any of the Dimotions – a lineal, free approximation to a larger order.

Differential of a function.

This deep fact – that small steps are ‘lineal’ while longer ones are curved and ultimately zero-sum closed paths – is the justification for the use of differentials instead of derivatives in most applications of calculus to reality.

Differentials are in essence ‘lineal’ rates of change over small ‘intervals’ of any function that is curved, and whose exact, ideal, non-lineal rate of change over a long stretch is difficult to calculate. In practice the differential is used everywhere instead of the ideal derivative. And its justification in 5D is the concept of a finitesimal minimal quanta, and the fractal nature of points and stœps, the minimal quanta of change. That is, change IS NEVER infinitesimal: a change implies a minimal ¡-1 unit of the being, either its frequency step or its reproductive cell, etc. So that ‘quanta’ of change – better measured by the ‘diameter’, ‘height’ or length of the spherical, tall or flat form (cell, atom, individual) – is a differential.

The maths of it, are well known to any student:

Let us then consider a function  S = ƒ(t) that has a derivative. The increment of this function: ∆s = ƒ (t+∆t) – ƒ(t) corresponding to the increment Δt, has the property that the ratio Δs/Δt, as Δt → 0, approaches a finite limit, equal to the derivative:

∆s/∆t->ƒ'(t)

This fact may be written as an equality:

∆s/∆t = ƒ'(t) + a

where the value of a depends on Δt in such a way that as Δt → 0, a also approaches zero; since in ∆st the minimal step of any entity always has a lineal form.

Thus the increment of a function may be represented in the form:

∆s=ƒ'(t)∆t + a∆t

where a → 0, if Δt → 0.
The first summand on the right side of this equality depends on Δt in a very simple way, namely it is proportional to Δt. It is called the differential of the function at the point t, corresponding to the given increment Δt, and is denoted by:

ds=ƒ'(t)∆t

The second summand has the characteristic property that, as Δt → 0, it approaches zero more rapidly than Δt, as a result of the presence of the factor a.

It is therefore said to be a finitesimal of higher order than Δt and, in case f′(t) ≠ 0, it is also of higher order than the first summand.

By this we mean that for sufficiently small Δt the second summand is small in itself and its ratio to Δt is also arbitrarily small.
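
The decomposition Δs = ƒ′(t)Δt + aΔt can be read off numerically: the coefficient a itself shrinks with Δt, which is exactly the ‘higher order’ claim. A sketch with f(t) = t³, so that a = 6Δt + Δt² at t = 2:

```python
f, fprime = lambda t: t ** 3, lambda t: 3 * t ** 2
t = 2.0
for dt in (0.1, 0.01, 0.001):
    delta_s = f(t + dt) - f(t)                 # true increment
    ds = fprime(t) * dt                        # the differential, its principal part
    a = (delta_s - ds) / dt                    # the coefficient a of the text
    print(dt, a)                               # a -> 0 together with dt
```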

In the graph, practical stience only needs to measure a differential, either in space, Δs = BD + DC, or in time, as a fraction of the unit world cycle, ƒ(x) = cos²x + sin²x = 1, which becomes a minimal lineal st-ep or action, ƒ(t) = S step.

In the graph: decomposition of Δs into two summands; the first (the principal part) depends linearly on Δt, and the second is negligible for small Δt. The segment BC = Δs, where BC = BD + DC, BD = tan β · Δt = ƒ′(t)Δt = ds, and DC is a finitesimal of higher order than Δt.

For symmetry in the notation it is customary to denote the increment of the independent variable by dx, in our case dt, and to call it also a differential. With this notation the differential of the function  is:

ds= ƒ'(t) dt

Then the derivative is the ratio, f′(t) = ds/dt of the differential of the function, normally a ‘whole spatial view’ to the differential of the independent variable, normally a temporal step or minimal change-motion in time.
The differential of a function originated historically in the concept of an “indivisible”, similar to our concept of a finitesimal, and thus much more appropriate for ∆st than the abstraction of an infinitesimal with ∆t → 0; since time is discrete and there is always a minimal step of change, or reproductive step, in a motion of reproduction of information.

Differentials of calculus are practical finitesimals, and their knowledge for any function acts as an ∆st limit.

On the other hand, for any group that we can take as a vital space-time, the differential finds us a middle point.

Rightly then the indivisible, and later the differential of a function, were represented as actual infinitesimals, as something in the nature of an extremely small constant magnitude, which however was not zero.

According to this definition the differential is a finite magnitude, measurable in space, for each increment Δt, and is proportional to Δt. The other fundamental property of the differential is that it can ONLY be recognized in motion, so to speak: if we consider an increment Δt approaching its finitesimal limit, then the difference between ds and Δs will become arbitrarily small even in comparison with Δt – till it becomes zero. The error of interpretation in classic calculus is that it is THE DIFFERENCE that approaches 0, as finally the function will also be lineal – not Δt, which will become a ‘quanta’, as quantum physicists would later discover.

As this is the ‘real’ model, the substitution of the differential in place of small increments of the function forms the basis of most of the REAL applications of the now-called ‘finitesimal analysis’ to the study of nature.

Finitesimals of minimal Dimotions=actions

For starters, the word to use is ‘finitesimals’, not infinitesimals. The infinite does not exist in a single continuum but through multiple discontinuities, as all systems are limited in time and space, both within a single membrane and within the scales of the 5th dimension (as information and energy do not flow between those scales without a loss of entropy).

It is in that sense important to understand the need for a finite limit, solving the paradox of Zeno with the concept of a quanta, or limit, of a finitesimal.

Now, THE HUGE QUESTION, TO CONNECT DERIVATIVES AND DIFFERENTIAL CALCULUS WITH REALITY, IS THIS: WHAT KIND OF FINITESIMAL RATES OF CHANGE ARE WE CALCULATING?

The answer is deeper in 5D, as the finitesimal Dimotion of a being is called an action. A finitesimal is an action of space-time in any of the 5 Dimensions – an action being a space-time cycle, hence a bidimensional, holographic quanta, which can be expressed in any of the graphs: lineal, cyclical, cylindrical or polynomial (complex).

What we will measure then is an action of space-time, which classifies time cycles in 5 subspecies by its complexity:

A-celerations, lineal motions, entropic motions, energy flows, informative vortices and Social evolutions (a,e,i,o,u).

And the sum of actions is what creates a sequential world cycle. So we must conclude that in ‘time’ a finitesimal is a quanta of a Dimotion, and in space it is a quanta of a volume. But in general terms, derivatives and differentials are better suited to calculate temporal quanta, and THE TYPE OF DIMOTION STUDIED WILL DEPEND ON THE ‘FUNCTION’ to which the differential is applied:

The five dimotions of space-time have their minimal unit in the actions of beings, which are expressions of those general dimotions, equivalent to the 5 drives of existence scientists recognize as defining life (gauging information, moving, feeding on energy, reproducing, and organizing a system socially into a larger synchronous whole). Life is everything, as all is a spacetime organism with its 5D actions of survival (motion, feeding on energy, information gauging, social evolution and reproduction).

Why finitesimals and derivatives are so important becomes then evident: they MEASURE, IN THE REAL UNIVERSE, THE ACTIONS, which are the ∆-1 minimal Dimotion units of any system of nature.

Quarks and electrons, the simplest particles, gauge information, absorb energy, reproduce and evolve socially into wholes – bosons, plasma flows and atoms. So the unit of life is the smallest particle; and as fractal systems are self-reproductive, emerging in their fundamental properties in larger scales, all that exists is alive. Only human egos prevent us from understanding that obvious truth, the fundamental principle of all ‘exist¡ences’.

So the full theory of CALCULUS becomes related to the actions of exist¡ence of a supœrganism; and as such, depending on the functions we use (related themselves to entropic, reproductive, locomotive, etc. actions), the derivative will calculate the minimal actions of the being.

Of course this is of no use for complex systems, for which we have languages less synoptic than maths to evaluate actions – we do not, for example, measure a bite of food or an amino acid with a ‘derivative’, but just explain it with the full morphological detail of the being involved.

But in physical systems, where the finitesimal values are quite homogeneous, actions of spacetime can be measured with derivatives. And this is their main application.

Then we can evaluate sums of actions with integrals, and in this manner mathematical physics and 5D connect to each other through the use of calculus, which is widespread in all the analysis of atomic systems, where the regularity and determinism of their Dimotions rest assured.

So what 5D will introduce in mathematical physics is the connection of equations of physics with the vital actions of its physical systems.

And so in this constant merge of ‘vital non-euclidean geometry’, actions of organic space-time beings, we can give also vital meaning to the different use of the operand of calculus in FUNCTIONS THAT REPRESENT the 5 Dimotions of exist¡ence.      

In that sense the MOST important add-on that ∆st will bring to the use of differentials is its temporal use as the ‘minimal action in time’ of a being, a far more expanded notion than the action of physics (which, however, will be related to the lineal actions of motion of 1D).

IN THE GRAPH, the general 5 actions-dimensions of existence of different ∆±i species, from above down: a view of them all; one of the simplest physical, light and electronic eye-minds; and below, the human being.

Mathematically it is quite irrelevant to make derivatives in time of human actions beyond some quantitative results – the quanta of minimal human time, the second, being the minimal informative action for the 3 synchronous t-st-s parts: an eye glimpse of mind-perception, a limb st-ep of motion and a heart-beat of the body.

Nothing to blame on Nature; rather, as Landau put it, a feature of the human ego: ‘what time uncertainty? I don’t see any time uncertainty in quantum physics; I look at my clock and I know what time it is’ (:

Yet mathematical operations are better suited for simpler social numbers forming herds, moving=reproducing simple information across a few scales of existence – topological evolution being, at the height of its capacity, the language to describe complex simultaneous superorganisms in more detail. This means the actions best explained by operators are the simpler ones, and quantitative operations will then give the simplest of the interactions between the elements of those smaller, ‘larger’ ensembles of beings.

So, to escape the limits of huminds and mathematical reductionism, it will be important, even for physical systems today only described quantitatively in abstract mathematical terms, to vitalise and explain the organic whys of their space-time events by adding their existential actions to those ‘analyses’:

The connection on qualitative terms though is self-evident, for all scales, as most actions of any being are extractions of motion, energy and form from lower ∆-i scales.

So we and all other beings perceive from ∆-3 quanta (light in our case), feed on amino acids, (∆-2 quanta for any ∆º system), seed with seminal ∆-1 cellular quanta (electrons also, with ∆-1 photon quanta).

So derivatives are the essential  quantitative action for the workings of any Tœ, space-time organism.

And so we study in depth the connection of the a,e,i,o,u actions between Planes (qualitative understanding) and its mathematical, analytic development (quantitative understanding of 1st, 2nd and 3rd derivatives – the latter extracting ‘1D motion’ from the final, invisible gravitational and light space-time scales).

SO THE FUNDAMENTAL LAW OF OPERANDS TO VITALIZE THEM IS THIS:

In Pentalogic ALL differential OPERANDS CAN BECOME AN ACTION IN ONE OF THE 5D DIMENSIONAL VOWELS (A,E,I,O,U) THAT DEFINE THE FIVE dimensions OF EXISTENCE, AS VITAL QUANTA-ACTIONS OF THE BEING.

THIS IS THE LOGIC CONCEPT THAT TRULY VITALIZES THE OPERANDS OF calculus.

So analysis allows us to extract actions from wholes, among many other uses; reason why THERE IS NOT REALLY a use for third derivatives of a being, as superorganisms co-exist in only 3 scales. So to speak, if you derivate a world, you get its organism, and if you derivate it again you get its cell and then its molecular parts. And if you do that in time, you get its speed, then its acceleration and then its jerk.

Of course, this is NOT how simplifying maths works – but it does work in terms of form: from a volume you get its plane, then its unit-cell and its point…

Derivatives allow us to integrate – a sum of the minimal quanta in space or actions in time of any being in existence – whose sums tend to favor the growth of information in the being and then signal the 3 st-ages and/or st-ates of the being through its world cycle of existence, which in its simplest physical equations is the origin of… ITS space-time beats.

Actions in timespace are the main finitesimal part of reality, its quantity of time or space if we consider tridimensional actions as combinations of S and T states, stt, tst, tss, sss and so on…

So how do differential equations show us the different actions of the Universe?

To fully grasp that essential connection between ∆st and mathematical mirrors, we must first understand how species on one hand, and equations on the other, probe in the scales of reality to obtain its quanta of space-time converted either in motion steps or information pixels, to build up reality. 

So for each action of space-time we shall find a whole, ∆ø, which will enter in contact with another world, ∆±i, from where it will extract finitesimals of space or time, energy or information, entropy or motion; and this will be the finitesimal ∂ƒ(x), which will be absorbed and used by the species to perform a certain action, å.

So the correspondence to establish is between the final result, the åction, and the finitesimal quanta the system has absorbed to perform the action, ∫∂x, such that: å = ∫∂x, where x is a quanta of time or space used by ∆ø, through the action å, to perform an event of a-cceleration, e-nergy feeding, i-nformation, o-ffspring reproduction or u-niversal social evolution.

It is then that we can establish how operations are performed to achieve each type of action.

The first element to notice is the fact that the space between the actor and the observable quanta is relative: even if there are multiple ∆-planes between them, the actor will treat the quanta as a direct finitesimal – pixel, bit or bite – which it will then integrate with a polynomial, derivative or sinusoidal function that reflects the changes produced.

We will consider in this introductory course only a few of the finitesimal ∫∂ actions, where the spatial state is provided by the integral and the ∂ finitesimal action by the derivative.

 

Further on, derivatives will allow us to point out the main consequence of the sum of those actions in any being in existence: namely, that those sums tend to favor the growth of information in the being and then signal the 3 st-ages and/or st-ates of the being through its world cycle of existence, which in its simplest physical equations is the origin of… the maximal and minimal points of a well-behaved function.

 

∫∂ Operands reflect 5 D¡

The purpose of operands is obvious: to explain the most general laws of Ðimotions of spacetime beings. We put them in correspondence with those Dimotions in the posts on Algebra and the introduction to Maths. So here we just plunge into the use analysis makes of these operators when it derivates them to obtain the quanta of a Ðimotion; since depending on which operator it derivates, it will be a quanta of a different ‘function=ƒ(action) of exist¡ence’.

The all-pervading use of ∫∂ is then clearly because it reflects ALL forms of change. And so analysis is the most extended subset of Algebra.

Why there is a negative, inverse operand for each of them is the first question to be considered. Simply stated: the 5 Dimotions of reality all have their destructive, entropic 5th dimotion; so negative/inverse operands balance positive ones.

∫∂: Finally came calculus, with its inverse operands, which represent the next social, scalar gathering of the elements of algebra, as it IS APPLIED TO THE PREVIOUS OPERANDS AS WHOLES – except for the trivial xª, the previous more complex polynomial. And so we must regard calculus not only as the operand of all dimotions of change, but also as the operand between planes of existence; since the logarithmic/power expression just reaches between the two limits of two planes of the fifth dimension, while calculus allows us to ’emerge’ and transcend between planes.

Indeed, consider the ‘derivative’ of the exponential, which is also the exponential, and that of the logarithm, which is 1/x, where 1 is the whole and x its parts. It gives us the rate of change of an individual that forms a whole, or of the radiation of a species, with two clear phases: first, maximal possible growth, as the number e is the base with maximal exponential growth; but then that growth reduces to a ‘cell’ after ‘cell’ of the whole, which means it merely maintains the being, as the system will also keep losing its 1/x units slowly.
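A minimal symbolic check of those two facts (our own illustrative sketch, not part of the classic texts):

```python
# Illustrative sketch: the exponential is its own derivative, and the
# derivative of the logarithm is the 'finitesimal' 1/x, one part of the x-whole.
import sympy as sp

x = sp.symbols('x', positive=True)

print(sp.diff(sp.exp(x), x))   # exp(x): the function equals its own rate of change
print(sp.diff(sp.log(x), x))   # 1/x: the 'finitesimal' part of the whole
```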

Thus while all operands are in correspondence mainly with a Dimotion=action of a being, Analysis IS the operand ∫∂, which applies ‘at a secondary level’ to the previous simplex study of Ðimotions in a single plane. So Analysis studies all Ðimotions of space-time, all dimensional motions, all forms of change.

Let us then see the analysis of the main functions that express the 5 Dimotions of existence and their derivatives:

In the graph, the main functions, their derivatives and integrals. Each of them would yield, under the pentalogic uses of calculus, multiple true insights into the structure of reality, the finitesimals of wholes and their actions. We shall just consider some elements of them.

The simplest ‘mathematical form’ is a ‘social number’, a constant group. But a constant doesn’t change, which also means it has no ‘hair’, that is, no scales of 5D depth. It never emerges or degrades. It is a static form, eternally unchanged, and its derivative is zero.

0 then must be interpreted as an infinitesimal for the whole, x/∞, which is the ultimate meaning of a derivative.

But a zero, WHEN OBSERVED AS A FRACTAL POINT, can have ANY NUMBER OF ‘INTEGRAL’ PARTS. So its integral, which gives us the parts of the system (while the derivative gives us the value of the whole), is any constant, c.

Next, the whole x indeed shows its derivative to be 1, which reveals the real meaning of a derivative in terms of scales – the whole, 1. So if 0 is the infinitesimal, which is never truly zero but the minimal element that can be perceived in a plane of existence, 1 is a relative infinite, the whole of the 0-1=1-∞ dual symmetric scales represented in mathematics.

Yet a whole, again, can give us, besides its number of parts (integral), any relative add-on, C.

Notice at this stage that, as in so many cases, the words are confusing. Derivatives should be called integrals, as they give us the number of parts the whole has, which are always less in quantitative terms (the whole is 1, the parts are x, each part is 1/x in terms of the whole’s total value).

POLYNOMIAL ACTIONS

This understood, a whole expressed as a power of parts, xª, will be reduced, if we want to ‘perceive’ its parts as wholes, into aXª⁻¹, which simply means there will be a number of parts converted into Xª⁻¹ wholes and vice versa.
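The three cases just discussed – the constant, the whole x and the power xª – can be verified in a few lines of sympy (again our own illustrative sketch):

```python
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a', positive=True)

print(sp.diff(5, x))        # 0: a constant 'social number' has no change
print(sp.diff(x, x))        # 1: the derivative of the whole x is the unit-whole
print(sp.diff(x**a, x))     # a*x**(a - 1): the power rule, aX^(a-1)
print(sp.integrate(1, x))   # x: sympy omits the arbitrary constant C of the open whole
```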

Polynomial Algebra as an approximation to Analysis.

It follows obviously from this fact that when using polynomials for calculus over more than one plane of existence, it is better to approach the question through the more sophisticated procedure of the integral and derivative operandi, which first localises the minimal ‘finitesimal’ of change through a derivative and then integrates it along a varying ‘curve’ that better reflects the 3 ‘different’ sections of a flow of space-time evolving through scales: a central lineal region, better suited for multiplications and simpler power laws, which becomes hyperbolic in the decaying and emerging frontiers of the plane.

The concept of the differential and the finitesimal makes it possible to consider a different ‘finite view’ of continuity, as the region in which the function DOES actually have a meaningful differential – meaning the region where ∆t truly comes to zero, instead of provoking a huge gap, making ∆S/∆t very large, towards a hyperbolic form:

In those ‘verges’ of the Plane or the T.œ (the singularity center opening to 5D, the still mind; and the membrane, opening to the 4D entropic world), continuity breaks because the change in ∆S is huge for small increments of ∆t (a time-age discontinuity) in the simplest obvious case of 1D analysis. Or, if we are measuring a different type of dimensional change – for example, that of topological form – we find a ‘change’ of state, or form, or region of the being: a topological tearing and transformation.

What happens at those limits of the ‘entity’, its membrane and singularity? Simply, we change state, topology, region of the being; and as these are the limits where the function loses its meaning, it is no longer of use beyond them.

And indeed, we shall see that in the real use of calculus to resolve problems of mathematical functions, SOLUTIONS TO DIFFERENTIAL EQUATIONS are limited to the regions between the limits of the functions in space-form; and in time-form most SOLUTIONS require KNOWING THE initial and final ‘conditions’, that is, the value of the function at t=0 and t=t, beginning and end of its worldcycle.

Continuity therefore is not always quantitative, but also topological, qualitative:

The key algebraic concept of ∆st systems is the existence of a STable region of balance between planes or topologies, where the asymmetry of the system is fairly lineal, operated in decametric scales of growth and superposition; and the regions of relative past and future, | or O, ∆-1 or ∆+1, where there is a split towards the purity of motion or form, disconnected parts or wholes, accelerated vortices or lineal scattering, which must be operated not with scalar powers but with finitesimal integrals and derivatives, more precise in their measure of the ‘curvature’ of the phase space we study.

In the graph, between planes there are 3 regions: one of entropic dissolution at the border with ∆-1 (left side), as the being finally emerges into a 1-whole susceptible of being operated lineally by additions, multiplications and powers, across the 3 dimensions of a single $<S>§ PLANE; till in the §ð region it also enters a hyperbolic ‘sink’ of collapse in growing density (spatial view) or acceleration (temporal view), as its vortex tries to emerge into the ∆+1 plane.

Power-root systems and integral-derivatives operate fully on the ∆§cales and planes of the system, which require two slightly different operandi. As §¹º ‘social decametric scales’ are lineal and regular, we can operate them with powers, roots and logarithms.

∂∫: But when we change between scales into new wholes and new planes of existence we are dealing with ‘a different species’, and so we need to operate with the magic of finitesimal derivatives and analytical integrals, which keep a better track of the infinitesimal ‘curved’ exponential changes that happen between two planes, where linearity is lost.

In that regard the main difference between polynomials/logarithms vs derivatives and integrals is dual:

Derivatives & integrals often transcend planes, relating wholes and parts, studying the change of complex organic structures through their internal changes in ages and form.
Polynomials are better suited for simpler systems, scales of social herds and dimensional volumes of space, with a ‘lineal’ social structure of simple growth.  
So polynomials work better for single planes, its scales of social growth, or square and cubic surfaces of space.
In the graph, the region in which polynomials are better suited, with powers ≤3, is the central ‘Newtonian region’. But both approaches must be similar in their results, as they essentially observe the same phenomena with a different focus – which is the reason for the existence of the Taylor and Newton approximations of any polynomial through derivatives:
It is for that reason that the proper way to consider algebra is as a first approximation to more complex organic ages and processes of organic growth, and to reverse the concept of the Maclaurin series, where we approach polynomial simple social growth with derivatives – instead of the usual practice of taking as more precise the ‘simpler spatial mind-view’, the polynomial (§@), over the subtle temporal view (∆∂), the derivative.
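A small sympy sketch of that approximation game (our illustration): the Maclaurin polynomials of growing degree close in on the ‘temporal’ function they mirror:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)

# Maclaurin polynomials of increasing degree: the 'polynomial mind-view'
# approximating the 'temporal' derivative view of change.
for degree in (1, 3, 5):
    poly = sp.series(f, x, 0, degree + 1).removeO()
    print(degree, poly, float(poly.subs(x, 0.5)), float(f.subs(x, 0.5)))
```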

Algebra is the analysis of systems which focuses on numerical social quantities and symmetries between dimensions, rather than on the organic ‘fluid’ properties of systems described through analysis. Now, departing from the general rule that ƒ(x) is a function of ‘time motions’ – as all variables are by definition time motions – and the Y function its spatial view as ‘a whole’, we can take ƒ(x)=t=S=Y as a general rule of interpretation; and as we so often have a function of the type ƒ(x)=ƒ(t)=0, we consider the polynomial a representation of a world cycle. And from that we can differentiate factors through ∆ scales, such as… ∆±1 = 0-1 probability sphere (∆<0) and Polynomial (Xª=∆ª).

It is then obvious that one of the key equations of the Universe – the equation that relates polynomials and derivatives, space and time views of complex symmetric bundles – must be reinterpreted in the light of those disomorphisms between the mathematical mirror and the 5D³ Universe.

So polynomials are the rough approximation to the more subtle methods of finding dimensional change proper of analysis – even if huminds found the unfocused polynomials first, and so today we call the Maclaurin & Taylor formulae of multiple derivatives ‘approximations’ to Polynomials.

While derivatives & integrals can transcend planes, relating wholes and parts, studying the change of complex organic structures through their internal changes in form and even ages (variational calculus), polynomials remain better suited for simpler systems: scales of social herds and dimensional volumes of space, with a ‘lineal’ social structure of simple growth.
And that is the ultimate reason why Galois could prove, through permutations of coefficients – which are lineal operators of sums and multiplications – that power-5 polynomials could not be resolved, as they were prying into non-lineal regions of ∆±i space-time.
Higher derivatives are thus just ‘approximations’ to the last of the operands between planes, the power/logarithm dual curve of growth; beyond that they have no physical relevance. The more interesting graph, though, is the one above the polynomial formulae, as only ‘analysis’ emerges from one scale into another, from wholes into parts, without great distortion; and so it becomes truly the most important of all operands and branches of algebra.

Taylor’s formula summarizes the main space-time symmetries and their development – left polynomials, right derivatives – and fills in the content of algebra in the measure of space-time systems.

At a given point it can then be understood as a differential value, so we may compare the ∆c increment of the polynomial vs. the ∫∂ differential. Lineal functions in the short-distance view, which become curved in larger, more accurate spatial views, make us think that the ƒ(t) time function is, step by step, building the ƒ(y) spatial worldcycle, which did all those step curvatures.

And for that reason it was born from arithmetic and its basic social operandi (±, x÷).
But as the Universe in social space has only 3 dimensions before a discontinuity of ∆-planes is reached, ever since the infamous Fermat’s THEOREM we know that beyond quadratic x² equations solutions are not always possible; and the humind wasted (as it does today with string theory) incredible amounts of ink to ‘invent solutions’ to unreal 5D Universes (no solutions of quintics).
In the graph, both in algebra and geometry there are ‘inflationary solutions’ of more than 3 dimensions in space, which are meaningless in nature, due to the discontinuity between planes beyond 3 st dimensions.
Ad maximal we can work with 3 spatial dimensions and put together the 3 dimensions of time symmetric to them, as relativity does in its metric: s² = x² + y² + z² – (ct)².
And that is indeed how far polynomials truly make sense, with specific, extremely symmetric, simpler solutions for certain species of higher-degree polynomials, normally those able to reduce to products of i≤3 equations, as the young geniuses Abel and Galois found.
Hence the importance acquired not so much by what CAN be resolved in Algebra but by what CANNOT – that is, systems which do not reflect nature’s holographic, bidimensional nature, and hence have no automatic ‘radicals’ – which occupied most of the historic development of classic algebra (resolution of polynomials, i>2).
The fact that most methods of resolution in Algebra imply the reduction of a polynomial to products of holographic bidimensional equations becomes then one of the strongest proofs of the Holographic principle of SxT dual dimensions.
And so does the analysis, made partially in number theory and @nalytic geometry, of the meaning of its operands, which should be the true focus of algebra, as the operations allowed between space and time dimensions.
In that sense the main error of Algebra is precisely the obscuring of its meaning, due to the elimination of most of its real content through the abstraction of its ‘letters’, and the method of resolving equations by placing all the elements of the equation on one side, leaving the right side at zero – which ELIMINATES THE SYMMETRY between the two possible sides of the equation.
This explanation of the most frequently found dual operands of scalar operations between planes, parts and wholes will not be clearly understood unless you have read the entire article on ∆nalysis – something impossible to do as I have not written it yet (: sorry for the tease… What I want to convey is that algebra can get ever more complex, as it gets further in depth into the dualities, paradoxes, symmetries, scales, hidden variables, and multiple social groups that interact, interconnect, network, web and finally create reality; even if ultimately, as in a Fourier transform, it can be decomposed into its minimal units of formal motion, the s-t beats of reality, which NEVER cease to move through s-t symmetric steps.

 

The actions it describes.

The minimal units for any T.Œ are its a,e,i,o,u actions of existence: its accelerations, energy feedings, information processing, offspring reproduction and universal evolution. So the immediate question about mathematical mirrors and their operations is what actions they reflect. We have treated the theme extensively in the algebraic post, concluding that, mathematics being a mostly spatial, social more than organic language, its operations are perfect to mirror simple systems of huge social numbers=herds; and as such to describe the simpler accelerations=motions, which are reproductions between two continuous scales of the fifth dimension, and informative processes, where the quanta perceived are truly finitesimal ∆-i elements pegged together into the mirror images of the singularity. And so we talk of motions, simple reproductions and vortices of information, and time>space processes of deceleration of motion into form, as the key actions reflected by mathematical operations.

It also follows that when we study the more complex systems and actions of reality – reproduction and social evolution of networks into organisms – mathematics will provide limited information, and miss properties for which ¡logical, biological and verbal languages are better.

And it follows that physical and chemical systems are the best suited to be described with mathematical equations, either in algebraic or analytic terms, which fuse together when we try to describe the most numerous, simpler systems of particles and atoms (simpler because, by casting upon them only mathematical mirrors, we are limited to obtaining mathematical properties).

1D: PERCEPTIVE ACTIONS

Next come, from the bottom of that list, the functions of perception, the sin and cos angles; and the results have some ‘metaphysical’ meaning. Indeed, the rate of change of our informative angle measure (the sine) becomes the cosine, the rate of change of our motion; in other words, we SWITCH from sin-stop states to cos-moving states, in stœps. We go from stop-sin to step-cos; but the inverse doesn’t hold: if we go from motion-cos to stop-sin, this will be perceived from the perspective of cos-motion as a ‘negative’ reduction of motion, –sine.
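The stœp cycle of sine and cosine is easy to verify symbolically (an illustrative sketch of ours; four derivative stœps close the cycle):

```python
import sympy as sp

x = sp.symbols('x')

print(sp.diff(sp.sin(x), x))     # cos(x): stop-sin steps into step-cos
print(sp.diff(sp.cos(x), x))     # -sin(x): the inverse step is seen as negative
print(sp.diff(sp.sin(x), x, 4))  # sin(x): four stœps return to the start
```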

 

Finally, for the logarithmic and exponential functions, the rate of change (derivative) goes from the absolute maximal, eˣ, which is its own derivative, to the absolute minimal, 1/x, the derivative of the logarithm, which is the definition of an infinitesimal part (Leibniz); till it peaks, converting an ∆-1 first unit into an ∆º whole at the peak of an existential world cycle, which then will start an inverse function of decay with –1/x diminution and a final fast collapse in the 3rd age<<death moment at e⁻ˣ speed.

So the combination of ± exponential and logarithmic curves is also the best way to graph, as a bell curve, the worldcycle of existence in lineal terms.
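As a hedged numerical illustration of that claim (our own sketch, using the logistic curve as the simplest ± exponential combination), the S-curve of growth has a bell-shaped rate of change:

```python
import numpy as np

t = np.linspace(-6, 6, 13)
s = 1.0 / (1.0 + np.exp(-t))   # S-curve: exponential rise, then saturation
bell = s * (1.0 - s)           # its derivative: a bell-shaped rate of change

for ti, si, bi in zip(t, s, bell):
    print(f"t={ti:5.1f}  curve={si:.3f}  rate={bi:.3f}")
```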

 

PRODUCT AND INVERSE DIVISION INTEGRALS

The holographic principle is best served by the operand of multiplication, which combines space and time parameters into a single ‘entity’, the best known being physical momentum.

We defined in Algebra the product as the king of all operations, since it ‘merges’ into ‘a new entity of space-time’ two parameters disjoined previous to the product – proving the very existence of a holographic Universe – often through a merge of the ‘¡-1’ elements or ‘cellular parts’ of the being, which create ‘axons’ of communication with all the other parts of the being, such as:

X(5¡-1) x Y(4¡-1) = {X(5¡-1)·Y(4¡-1)} = 20¡-2

That is, a multiplication that merges two elements gives us the number of ¡-2 axons of communication connecting, at a deeper level, the two parts.
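A trivial computational sketch of that counting (our illustration, with hypothetical part names x1…y4):

```python
from itertools import product

X = ['x1', 'x2', 'x3', 'x4', 'x5']   # 5 ¡-1 parts of the first whole
Y = ['y1', 'y2', 'y3', 'y4']         # 4 ¡-1 parts of the second whole

axons = list(product(X, Y))          # every pairwise 'axon of communication'
print(len(axons))                    # 20 = 5 x 4 connections at the ¡-2 level
```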

In a momentum, the mass stop-state and the wave step-state merge at the potential ¡-2 level that holds them together.

The most abundant of all operands, the merging product, therefore requires a more complex rule than a direct sum, which acts by ‘superposition’ of EQUAL BEINGS.

IT IS ALSO susceptible to being operated by calculus and ‘derivatives’, as now we INVOLVE FOR THE FIRST TIME BOTH a SCALAR LEVEL – since multiplication tends to happen in the lower scale of the being – and different states of time and space. So we no longer operate, as in additions, with the same type of T.œs in the same plane.

The merging product requires its own rule of derivation, which, interestingly enough, shows how the product is indeed a merging operation, as the derivative of a product of functions merges first each function with the change rate of the other, and then, once both are merged, superposes them by addition:

The Product Rule, used to find the derivative of a product of two functions, is thus more complex than the sum rule, even though it also keeps, as in polynomials, the distributive property – which shows once again that the product is a ‘democratic merging’ that can go both ways.

So h'(x) = [ƒ(x) · g(x)]' = ƒ(x) · g'(x) + ƒ'(x) · g(x).
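The rule is easy to verify symbolically for any pair of ‘wholes’ (an illustrative sympy sketch of ours, with ƒ = sin x and g = x² as assumed examples):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)
g = x**2

lhs = sp.diff(f * g, x)
rhs = f * sp.diff(g, x) + sp.diff(f, x) * g
print(sp.simplify(lhs - rhs))   # 0: each whole merges with the other's rate of change
```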

In that sense it keeps with the ‘rule’ of merging at the lower ‘plane level’ of its infinitesimal parts – in this case taking, instead of the spatial elements X and Y, their ‘temporal’ quanta of change, ƒ'(x) and g'(x), MERGING them with the other wholes before a ‘superposition’=addition can be effected.

In the product rule thus Derivatives act in inverse fashion to power laws, searching for the infinitesimal.

While power wholes (integrals) search the wholeness, and as we know the two directions of space-time are different in curvature, quantity of information and entropic motions.

So an external operation that reduces a whole which is NOT integrated as such, but is a lineal product of two wholes, ƒ(x) and g(x) – a COUPLE – mixes the infinitesimals of one with the other whole before herding them, in a process of ‘genetic mixing’: the parts of the first shared with the second whole, and the parts of the second shared with the first whole.

This law of existential algebra, simplified ad maximal as usual in mathematical mirrors, is surprisingly enough also the origin of genetic ‘reproduction’, which occurs at two levels, mixing the ‘parts’ – the genes of the whole – in both directions, to then raise the mixing to the ∆º level of the G and F gender couple.

Then WHAT WILL COME out of that genetic multiplication is its division into two equal parts, showing how the INTERACTION OF INVERSE operandi DOES NOT CANCEL REALITY but MERELY COMPLETES A DIMOTION moving ahead the eternal time space Universe.

So if a power followed by a logarithm brings the infinitesimal seed into a whole herd, the multiplication followed by a division of the reproduced new layer of mixed ‘axons, genes’ or parts, brings the replication of identical forms.

While the simplest definition of a division is, as usual in huminds, an entropic destructive feeding action, the complex view from the perspective of information is a genetic mitosis. And both are reflected in the derivative of a division, which is zero for two equal functions (their quotient being a constant); and vice versa, its integral can give us any constant value – so it does not give us any information.

Yet in most cases it is NOT a positive communicative act but a perpendicular, negative, reducing game, where the DOMINANT element is the ‘predator’, the larger denominator that cuts the function: multiplying the infinitesimal ƒ'(x) parts, deducting from them the lesser parts absorbed by the ƒ(x) function, and then cutting the result at the ‘lower’ level of its potential elements, g(x)²: (ƒ/g)' = [g(x)·ƒ'(x) – ƒ(x)·g'(x)] / g(x)²

So the numerator, the victim, shared by the denominator, the predator, so to speak, is first absorbed in its ƒ'(x) parts, g(x)·ƒ'(x); subtracting the g'(x) parts that the prey has absorbed in the ‘fight’, ƒ(x)·g'(x); and then shared by the parts, g(x)², of the whole, as entropic feeding.

So we can consider the derivative of a divisive function an ‘idealized’ expression of the process of killing and feeding of a system, whereby the predator absorbs the infinitesimal parts of the other being and feeds its cellular, ¡-1 elements with it.
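Both properties – the quotient rule and the zero derivative of two equal functions – can be checked in a few lines (our illustrative sketch, with assumed examples ƒ = sin x and g = x²+1, which never vanishes):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)        # the 'numerator-prey'
g = x**2 + 1         # the 'denominator-predator' (never zero)

lhs = sp.diff(f / g, x)
rhs = (g * sp.diff(f, x) - f * sp.diff(g, x)) / g**2
print(sp.simplify(lhs - rhs))   # 0: the quotient rule holds
print(sp.diff(f / f, x))        # 0: two equal functions yield no change
```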

Which obviously is NOT a commutative process.

Of course we love to bring vital interpretations to abstract math; but as we apply such rules to particular cases the interpretations vary, though in all cases they will be interpretable in terms of sub-equations of the fractal generator.

What might be noticed in any case is that, unlike in our rather ‘abstract’ dimensional explanation of the rules of power laws, here we are able to bring real vital analysis of those roles even in terms of biological processes – showing how much more sophisticated the ∫∂ operandi are, the king of the hill of mathematical mirrors on real st-ep motions and actions, which is why their use is so widespread.

SO THE FUNDAMENTAL LAW OF OPERANDS TO VITALIZE THEM IS THIS:

‘BY THE RASHOMON EFFECT, ALL differential OPERANDS CAN BECOME AN ACTION IN ONE OF THE 5D DIMENSIONAL VOWELS (A,E,I,O,U) THAT DEFINE THE FIVE dimensions OF EXISTENCE, AS VITAL QUANTA-ACTIONS OF THE BEING.’

THIS IS THE LOGIC CONCEPT THAT TRULY VITALIZES THE OPERANDI OF ALGEBRA.

So those properties tell us new things about the meaning of ∫∂.

Finally, the chain rule, WHICH IS TRULY the one that encloses all others, used in the case of a function of a function, or composite function, writes:

h'(x) = ƒ'(g(x)) · g'(x)

And this is truly an organic rule, as we are not deriving ‘parts’ loosely connected by ± and x÷ herds and lineal dimensional growth; rather the ‘function’ is a function of a function – a functional, as every ∆+1 is made of ∆º elements which are themselves functions of xo fractal points.

So this is the most useful of all those rules to better mirror reality. And we see how the derivative, the change process, digs in at the two levels: at the ∆º=g(xo) level, which becomes g'(xo), and at the whole level, which becomes ƒ'[g(xo)] – which tells us we can indeed go deeper with ∫∂ between organic scales, as we shall learn in more depth when considering partial derivatives, second derivatives and multiple integrals.
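A minimal symbolic check of that two-level dig (our sketch, with an assumed inner function g = x²+1):

```python
import sympy as sp

x = sp.symbols('x')
g = x**2 + 1                      # the inner ∆º function
f = sp.sin(g)                     # the outer whole: a function of a function

lhs = sp.diff(f, x)
rhs = sp.cos(g) * sp.diff(g, x)   # ƒ'(g(x)) · g'(x): change at both levels
print(sp.simplify(lhs - rhs))     # 0
```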

We are getting, so to speak, into the infinitesimals of the parts of a whole from its ∆+2 perspective; and this rule encloses all others because it breaks into the multiplication of its parts – TRULY DWINDLING A SCALE DOWN, AND SEPARATING THE WHOLE AND THE PARTS, DERIVED INTO LOOSE PARTS AND FINITESIMALS, NOW MULTIPLIED.

And what will the whole do when it sees its previous finitesimals now camping by themselves, but keep them ‘at sight’ so as to ‘produce’ an operative ‘action’ ON them (a,e,i,o,u actions are ALL subject to the previous operandi)?

AND WHAT WILL COME of that multiplication? Normally it will capture them all again, and then normally it will not re=produce on them (one of the operandi actions possible under the Rashomon effect) but divide and feed on them – the last operation to treat:

And its inverse, which is NOT a positive communicative act but often a perpendicular negative reducing game also consequently differs.

In that sense the MOST important add-on that ∆st will bring to the use of differentials in EXISTENTIAL ALGEBRA is its temporal use as the ‘minimal action in time’ of a being – a far more expanded notion than the action of physics (which, however, will be related to the lineal actions of motion on 1D).

Finally, in the next stage of algebra, @nalytic geometry allowed a clearer representation of those polynomials in more detail – as usual through its 3 AGES of evolution, through its SCALES of complexity and through its ‘Rashomon effect’; that is, how analysis operates independently to extract information from the 5 Dimotions of a being.

Time evolution equations.

Time evolution is the change of state≈age brought about by the passage of time, applicable to systems with internal state≈age distribution (also called ageful systems).

In this formulation, time is not required to be a continuous parameter, but may be discrete or even finite. And so we can use frequencies and densities, fluxes and all the elements required for a real description of the ∆ST universe.

In classical physics, time evolution of a collection of rigid bodies is governed by the principles of classical mechanics. In their most rudimentary form, these principles express the relationship between forces acting on the bodies and their acceleration given by Newton’s laws of motion. These principles can also be equivalently expressed more abstractly by Hamiltonian mechanics or Lagrangian mechanics; which themselves use the ∫∂ jargon.

The concept of time evolution may be applicable to other state systems as well. For instance, the operation of a Turing machine can be regarded as the time evolution of the machine’s control state together with the state of the tape (or possibly multiple tapes) including the position of the machine’s read-write head (or heads). In this case, time is discrete.

State systems often have dual descriptions in terms of states or in terms of observable values. In such systems, time evolution can also refer to the change in observable values. This is particularly relevant in quantum mechanics where the Schrödinger picture and Heisenberg picture are (mostly) equivalent descriptions of time evolution.

The calculus of finitesimals, ∂, and its integrals, ∫, is a dual dimotion: back into ∆-1 to extract a part, and forwards through its integral ∫ over a spatial or temporal finite domain. So the inverse operations of analysis have multiple functionality in terms of the actions performed through them, because of their perfect mirroring of the action itself, which consists in using ∆-i finitesimals to absorb energy, motion or information for the 3 simpler actions of motion, informative perception and energy feeding – for which, paradoxically, the more complex ‘organic’ operations are the most useful.

This paradox, though, has a less ‘motivating’ cause for those involved with the mirror of mathematics – essentially that mathematics is NOT the best language to describe the complex actions and relationships that appear out of the reproductive, biological and social engaging processes of organisation; at least not through algebraic operations.

NEXT OPERAND IN COMPLEXITY IMPROVES UPON THE PREVIOUS ONE (POWERS)

It is a rule of the scalar Universe that all actions are chained, such that to effect a more complex action we need the previous ones, as the basic chain of 1D perception->2D locomotion->4D entropic feeding->3D reproduction->5D social evolution into a larger whole shows.

This chain can then be expressed with the classic operands, from angle to sum, to division and product, to power law. But once the carrying capacity of the system is completed at 5D – which is a larger 1D – we emerge into a larger world, and here we get results only with the ∫∂ operands; which therefore must be related, to a degree of accuracy in their simplest forms, to the more complex forms of the polynomial. And this is the 5D ‘why’ explanation of a well-known rule of approximating functions by higher derivatives.

Indeed, in Algebra, the third and higher derivatives are used to improve the accuracy of an approximation to the function:

f(xo+h)=f(xo)+f′(xo)h+f″(xo)h²/2!+f‴(ξ)h³/3!

Thus Taylor’s expansion of a function around a point involves higher order derivatives, and the more derivatives you consider, the higher the accuracy. This also translates to higher order finite difference methods when considering numerical approximations.
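A numerical sketch of that gain in accuracy (our own illustration with ƒ = sin, x₀ = 1, h = 0.3): the error of the truncated Taylor polynomial shrinks as more derivative terms enter:

```python
import math

def taylor(x0, h, n):
    # Taylor polynomial of sin around x0, truncated after the n-th derivative term;
    # the derivatives of sin cycle through sin, cos, -sin, -cos.
    derivs = [math.sin, math.cos,
              lambda t: -math.sin(t), lambda t: -math.cos(t)]
    return sum(derivs[k % 4](x0) * h**k / math.factorial(k) for k in range(n + 1))

x0, h = 1.0, 0.3
exact = math.sin(x0 + h)
for n in (1, 2, 3, 5):
    print(n, abs(taylor(x0, h, n) - exact))   # error shrinks with each extra derivative
```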

Now what this means is obvious: beyond the accuracy of the three derivatives canonical to an ∆º±1 supœrganism, information, as it passes the potential barrier between scales of the 5th≈∆-dimension, suffers a loss of precision; so beyond the third derivative we can only obtain approximations, by using higher derivatives or, in a likely less focused=exact procedure, the equivalent polynomials, clearer expressions of ‘dimensional growth’.

So their similitude first of all proves that both high derivatives and polynomials are representations of growth across planes and scales, albeit losing accuracy.

Let us then briefly deal with the operands of the 5th dimension treated extensively in ∆nalysis. In the correct fifth-dimensional perspective, the derivative-integral game is more accurate, as it ‘looks at the infinitesimal’ to then integrate the proper quanta.

As we did with the other operands, we need to consider the properties of calculus and its two operandi. This poses a problem, as there is no ‘bottom operation’, such as ±, x÷, as directly related to calculus as powers are related to products as their third dimension. Since calculus is a refined analysis of power laws, the direct connection is not exact.

Hence a certain discontinuity is established, which implies that ∫∂ equations have been solved by the obvious method of applying the definition ƒ'(x) = lim h→0 [ƒ(x+h) – ƒ(x)]/h. We are not going to repeat that procedure here to get the results, but merely analyse from the ∆st perspective, as we did with power laws and x, the properties of derivatives, to see what they tell us in the higher T.œ language; and then consider some specific functions and their integrals and derivatives to learn more from them.
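The procedure can be sketched numerically (our illustration with ƒ(x) = x³ at x = 2, whose derivative is 12):

```python
def derivative_estimate(f, x, h):
    # the defining difference quotient, before taking the limit h -> 0
    return (f(x + h) - f(x)) / h

f = lambda x: x**3
for h in (0.1, 0.01, 0.001, 1e-6):
    print(h, derivative_estimate(f, 2.0, h))   # approaches 3 * 2**2 = 12
```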

Those key properties are expressed in its rules of calculus, starting from the ‘derivative’ of a polynomial:

(Xª)' = a Xª⁻¹

So we are NOT fully, lineally diminishing a polynomial dimension, even though derivatives are a reduction of dimensionality – the search for the finitesimal 1/n quanta. Why? Obviously because in the rough view from a quanta, xª, into its whole, xª⁺¹, we grow lineally (polynomially); but as we repeat ad nauseam, the lineal steps curve into geodesic closed wholes in the ∆+1 scale (Non-E geometry), from the lineal spatial mind to the wholeness cycle of the closed being. And so, as the ‘curve of a parabola’ diminishes the distance of a cannonball, growth is NEVER lineal but falls down as we approach the ‘(in)finite limit’.

IN THE GRAPH, the wholeness is curved upwards; the parts spread, scattering entropically. The whole is a mind circle, @. So it curves/diminishes the quantity of energy available for the whole, as it really must be an addition of all the planes that share that vital energy to build ever slower, curved, larger wholes.

Or in terms of the integral function: ∫ xª dx = xª⁺¹/(a+1) + C

And here we find the second surprise: there are ∞ integrals, differing in the addition of a constant – as a constant, by definition, does not change.

Let us express this in terms of past (∆-1: derivative )< Present (Function)  > future (∆+1: Integral)

The past is fixed, the infinitesimal enclosed, only one type of species, ‘happening already’, as the parts must exist before the wholes to sustain them.  But from the pov of the present function, the future integral into wholes is open, with ∞ variations on the same theme; unless we have already enclosed that whole, limiting its variations, which happens with the definite Integral.

So if the function f(x) is given on the interval [a, b], F(x) is a primitive for f(x), and x is a point in the interval [a, b], then by the formula of Newton and Leibniz we may write: F(x) = ∫ₐˣ f(t) dt + F(a)

Here the integral on the right side differs from the primitive F(x) only by the constant F(a). In such a case this integral, if we consider it as a function of its upper limit x (for variable x), is a completely determined primitive of f(x). That is the importance of the enclosure membrane to define a single organism, and establish its order, as opposed to the entropic, multiple open future of a non-enclosed vital function which will scatter away.

Consequently, an indefinite integral of f(x) may also be written as follows: ∫ f(x) dx = ∫ₐˣ f(t) dt + C

where C is an arbitrary constant, which the enclosure will eliminate.
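A sympy sketch of the Newton–Leibniz formula (our illustration with the assumed example f = 2t, whose primitive is x²): the definite integral fixes the arbitrary constant through F(a):

```python
import sympy as sp

t, x, a = sp.symbols('t x a')
f = 2 * t
F = sp.integrate(f, t).subs(t, x)       # a primitive F(x) = x**2 (sympy omits the +C)

definite = sp.integrate(f, (t, a, x))   # the enclosed whole: integral from a to x
print(sp.expand(definite))              # x**2 - a**2 = F(x) - F(a)
```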

 

Linearity: Yet – and this seems to contradict the previous finding – when we operate derivatives with the ‘basic dimension’ of social herding, the ± operators, linearity comes back; and so the minimal Rashomon effect gives us two explanations:

Γ(st):  We are INDEED herding in the base dimension of a single plane, where each derivative will now be considered a fractal point of its own:

∆+1 perspective: Suppose f(x) and g(x) are differentiable functions and a and b are real numbers.

Then the function h(x) = aƒ(x) + bg(x) is differentiable and h'(x) = aƒ'(x) + bg'(x), which is really the distributive law already studied in algebra’s post for x and the power law. So the interpretation of the sum rule from ∆+1 is one of ‘control’:

WHEN operating from a whole perspective, the whole breaks the ‘smaller’ parts and their simpler dimensional operandi, +, to treat each part with its ‘whole action’ (in this case ∂). In brief, the whole totally controls the parts.

5D FINITESIMALS IN SPACE, TIME AND SCALE

TIME FINITESIMALS: ACTIONS

Let us recall then how actions are added through frequency integrals of time instead of area integrals of space, through a mathematical method called…

 

2nd dimension line=sum of points:

Why do both definitions work? In pure equations, T=1/ƒ=1/ð.

In depth, because both are isomorphic definitions, albeit in ‘different scale’:

The continuous definition focuses on the ∆-1 ‘potential field of forces’ over which the system reproduces its wave of form. So the ‘frequency steps’ are substituted by the external ‘nanoscopic’, continuous (indistinguishable) gravitational and electromagnetic fields over which the ‘being’ slides, unaware of the invisible=indistinguishable field over which it glides. In the previous equation we adopt the ∆º point of view, internal to the being, where its quanta are much larger and not subject to derivatives.

It is then important to notice that the need for a ‘function to be continuous’ implies studying S-Teps in which the actions happen in a lower ‘scale’ of being; hence we talk of the primary actions of motion (Max. S) and perception (Max. I), of minimal forms (Max. ∆-i), in relationship to the actor and/or death processes of entropy. We can hardly establish ∫∂ operations for the complex social actions of the 5th dimension, and for many of the 3D reproductive actions of seminal ∆-1 seeds, for which a qualitative analysis of evolutionary topology is more proper.

This is the reason why the operation of ∫∂ is more proper to the 1D, 2D and 4D Dimotions of existence.

Speed is important, on the other hand, because as the ratio s/t (continuous speed) or the product s x t (discontinuous: step x frequency), it is one of the 3 fundamental ratios – s/t-speed, t/s-density and t×s-momentum – that define, in their simplest form, the singularity, vital energy and angular momentum of the 3 parts of an organism; which for the perfect being, s=t, are 1, 1 and 1.

 

S=T: ANALYSIS ON SPACE

The second consideration on the rashomon effect should be on analysis of SPACE and trans-form-ations between space=form and motion=time states. SO FIRST we shall remember what space is made of – namely Non-Euclidean points:

Dimensions and analysis are possible because points have volume.

In the graph: in a Universe of fractal spaces, point-particles have an inner volume of information as Non-Euclidean points, which gauge information in the stillness of a mind syntax, a language mirror of the Universe. As points have volume, lines are waves and planes topological networks, which ensemble in ternary a(nti)symmetries to form the topological super organisms of reality across 3 time ages, 3 topological forms and 3 scales. It is this physical T.œ which we shall study in mathematical physics, explaining the meaning in 5D of the main mathematical laws of physics, which are enhanced by the understanding of an enhanced geometry and logic of time, born of the fractal, cyclical structure of both a priori elements of reality that the language of mathematics and its operands so accurately mirror.

The fundamental truth derived from this simple analysis of derivatives is profound. First it connects them immediately with the pure geometric nature of dimensions, which in non-Euclidean geometry (graph) are relevant in as much as they represent motions in time but also dimensions in space.

In that regard, it is important to understand that in the fractal Universe a dimension always has ‘inner breath’, as the points grow when we see them closer. So it is very simple to consider a single-dimensional being simply as one whose preferential X-dimension is much larger than the others – but still the others exist, as the particle-point in detail is big:

1D being: X>>Y ≈ Z, for example a string, a lineal momentum…

And then a two dimensional being one whose two D are larger than Z:

2D Being: X ≈ Y >> Z; for example a graphene sheet; a plane wave.

Whereas a 3-Dimotion being has volume – motion on all 3 – for example a spherical being, an entropic explosion.

A derivative then merely ‘annihilates’ one dimension or one motion in space or time – we have here to split dimotions, as humans do, even if it is not the proper unit of the Universe, which is always bidimensional. I.e. even in a motion there is a particle that moves, so you have a point-dimension for the particle and one for the motion in time…

So indeed analysis IS the main mathematical instrument to study the 5D Universe and its ternary mirror symmetries between scales, topologies and modes of time-change. And we can consider a general formulae for analysis, as a specific version of the fractal generator:

∂(Bodywave of vital energy) = Membrane; ∂(Membrane) = Singularity path; and their inverses, better known as line integrals, surface integrals and volume integrals.
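The simplest geometric instance of that generator (our illustrative sketch): deriving the ‘bodywave’ volume of a ball with respect to its radius yields its enclosing membrane, and the same pattern holds one dimension down:

```python
import sympy as sp

r = sp.symbols('r', positive=True)

volume = sp.Rational(4, 3) * sp.pi * r**3   # the 'bodywave' of vital energy
surface = sp.diff(volume, r)                # 4*pi*r**2: its enclosing membrane
print(surface)
print(sp.diff(sp.pi * r**2, r))             # 2*pi*r: disk -> perimeter, same pattern
```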

Because analysis is mainly used in mathematical physics, in praxis, the previous relationship is connected to the 3 elements of a physical system:

Field (entropic, locomotion source) < wave (reproductive body) > Particle.

So we make double derivatives to obtain the field (Laplacians), and single derivatives to relate particles and waves – ‘one-dimensional species.’ (Fourier series). And those are the all pervading analytical functions of the 3 parts of the being:

Spherical harmonics and electron orbitals are the same, because our light space-time in particle state is made of photons that form the electronic nebulae. So both are homologous.

The result is the spherical harmonics, a set of functions used to represent functions on bidimensional membranes – the surface of the sphere – the higher-dimensional homology of Fourier series, the periodic, single-variable functions on the circle.

Spherical harmonics are thus the eigenfunctions of the angular part of the Laplacian, representing solutions to partial differential equations in which the Laplacian appears. And since the Laplacian appears frequently in physical equations (e.g. the heat equation, Schrödinger equation, wave equation, Poisson equation and Laplace equation), they are ubiquitous in gravity, electromagnetism/radiation and quantum mechanics.

The orbitals of the hydrogen atom in quantum mechanics are in fact totally indistinguishable from spherical harmonics, showing indeed that we are all topologic beings, and that mathematical functions fit the simplest forms of spacetime, as the electron is – a dense function of ‘light spacetime’ particle-points.
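An illustrative sketch of ours evaluating one such harmonic with scipy – note that scipy.special.sph_harm uses the (m, n, theta, phi) argument order, with theta the azimuthal angle, and is superseded by sph_harm_y in recent scipy versions:

```python
import numpy as np
from scipy.special import sph_harm   # scipy convention: sph_harm(m, n, theta, phi)

# Y(l=1, m=0) on a coarse grid of the sphere's surface (theta: azimuth, phi: polar)
theta = np.linspace(0, 2 * np.pi, 4)
phi = np.linspace(0, np.pi, 3)
T, P = np.meshgrid(theta, phi)

Y10 = sph_harm(0, 1, T, P)           # proportional to cos(phi): the p_z orbital shape
print(np.round(Y10.real, 3))
```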

The intimate connection between the 3 elements of the being thus is perfectly explained by the dual ∫∂ functions.

 

In that regard, variations over the same theme respond to the ternary structure of all T.œs:

In the graph: when deriving and integrating, most operations refer to a ‘limited’ system, in which first we extract the finitesimal part-element and then we integrate it to obtain a whole; so most likely the system described will depart from a time-changing variable quanta, and integrate it to obtain a ‘static whole-spatial view’.

But variations on the same theme happen by the natural symmetry of space and time states.

So we can also start with a quanta of space integrated over time to get a spatial area or volume.

What we shall always need to find ‘single solutions’ are the parameters that describe, in time or space, the 3 elements of the T.œ. So we shall start with initial or final conditions (definite integrals), and define, mostly in space, as a whole, the enclosure or membrane that limits the domain of the function (which might include, as a different limit, the singularity).

All in all, the analytical approach will try to achieve a quantitative description of the unit/variable of ‘change’ – the ‘finitesimal quanta of space’ (interval, area, volume) or the ‘steps of time’ (frequency) – and then integrate it over a super organism of space or an interval of time we wish to study, often because it forms a whole or a zero-sum world cycle.

Galilean Paradox. LINEAL vs. Cyclical view.

In that regard, the S=T symmetry will once more become essential to the technical apparatus of analysis as it has done in all other sub disciplines.

Of them, the key duality between lineal perception in the short range and cyclical perception in the large range is the key to obtaining solutions, as the mind of measure is lineal, made of small steps that approximate larger cyclical wholes. It is, in essence, the method of differential equations, where the differential dy = ƒ'(x)∆x + α∆x approaches the lineal derivative term, ƒ'(x)∆x, for short increases; and so we can get away with the smaller lineal element that curves, over longer distances, into the solution.

Finally, the third Galilean paradox, between continuity and discontinuity, is also at the heart of analysis (and of most forms of dual knowledge). Analysis has accepted as a dogma the continuity of the real number, and so it considers continuity a necessary condition for differentiability. But we disagree: in a discontinuous Universe, continuity has a loose definition (as the axiomatic method is not the proof of mathematical statements; experience also matters). So continuity is defined by a simpler rule: that the term α∆x – the discontinuity between the lineal and cyclical views of an infinitesimal derivative – does indeed diminish faster the closer we are to the point ‘a’ at which the differential equation is defined. In brief, continuity means no big jumps and no big changes in the direction of a function and the T.œ it reflects.
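A numerical sketch of that rule (our illustration with ƒ = sin at the point a = 1): the α term itself vanishes as ∆x shrinks, which is exactly the ‘no big jumps’ condition:

```python
import math

f = math.sin
a = 1.0
fp = math.cos(a)   # ƒ'(a): the lineal mind-step

for dx in (0.5, 0.1, 0.01):
    dy = f(a + dx) - f(a)
    alpha = (dy - fp * dx) / dx   # the α of dy = ƒ'(x)∆x + α∆x
    print(dx, alpha)              # α -> 0: the lineal step is 'continuous'
```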

 

∆±¡: SCALES, PLANES

Following the Rashomon effect we shall thus consider now some basic themes of analysis to calculate scales and planes.

Galilean px in analysis: finitesimal steps (derivatives) integrated to calculate a cyclic whole.

Further on, analysis has, over all other branches of mathematics, a special quality for studying ‘changes’ between planes of the fifth dimension, as multiple derivatives ‘jump’ between planes (albeit with different degrees of ‘focus’) better than the mostly ‘lineal’ polynomials; and the ‘curvilineal, Lorentzian’ variations, slow-downs and accelerations of the S x T = K parameters happen between scales:

The formal stience of the 1st and 5th Disomorphisms in the mathematical mirror is analysis, which deals directly with the relationships between ∆-1 ‘finitesimal parts’ and (in)’finities’ – two new terms we shall still accompany with the lost inflationary term ‘in’; since infinitesimals and infinities are a Kantian paralogism, as all planes have a limit in their quantic units, and all wholes a finite circle that encloses them into a relative 0-1 ‘circle unit’.

Besides the duality of the 0-1 probabilistic mind unit, which reflects the external 1-∞ universe, a second duality that weighs heavily in analysis is that of the perception of lineal vs. cyclical form: we are minds of space that measure time cycles: ∫@≈∆ð.

Hence the equation of mind-measure defines the understanding of differential calculus. As always in praxis, the concept is based on the duality between huminds, which measure with fixed rulers, lineal steps, and the cyclical, moving Universe. So minds measure Aristotelian short lines in a long, curved Universe.

So the question becomes: which minimalist lineal step of a mind is worthy to make accurate calculus of those long, curved Universal paths?

The general rule to identify both polynomials and analysis, is this:

Y=S= ƒ (x=t)

The difference between lineal polynomials and non-lineal analysis

In the graph, we explain the difference between a polynomial ‘regular’ description of a system as it changes in the “Newtonian’, social scale in which changes are not of quality as much as of quantity and the analytical region in which there is a change of structure and hence of quality with irregularities better shown by analytic operations.

It follows that ‘operands’ are more important to algebra than ‘variables’: their encoded meaning and ‘magic’ way of relating systems to get a ‘future or present’ outcome, by merging them according to certain rules of ‘creative engagement’, truly gives algebra its power to mirror the a(nti)symmetries of the Universe.

The key connector of T.Œ with classic science is the full understanding of the dual algebra operands, ±, x/, ∂∫, √xª as part of the ¡logic, pentagonal game of reality in all its mirror symmetries; that is, as dimotions≈actions and structural elements, whereas an immediate correspondence between those operands and the ternary elements ∆@st can be established as follows:

  • The sum-rest are the inverse arrows of the simplest superpositions of dimensions between species which are identical in motion and form.
  • The product/division raises the complexity of operands a first layer, and serves the purpose, besides the obvious sum of sums, of calculating the merging of dimensions, as combinations which are not purely parallel between clone beings, most likely through the recombination of their ∆-1 elements, as the product of 2 Sœts’ inner elements gives us all possible combinations. I.e. 5 x 4 = 20 IS also the number of connections between all the 5 elements and 4 elements of both sets. So multiplication adds either a dimension of multiple sums in the same plane, or probes for the first time into an inner scalar dimension.
  • Then we arrive finally to the powers-root systems and integral-derivatives, which operate fully on the ∆§cales and planes of the system, which require two slightly different operands. As §¹º ‘social decametric scales’ are lineal, regular, so we can operate them with powers, roots and logarithms.
  • ∂∫: But when we change between scales into new wholes and new planes of existence, we are dealing with ‘a different species’, and so we need to operate with the magic of finitesimal derivatives and analytical integrals, which keep a better track of the infinitesimal ‘curved’ exponential changes that happen between two planes, where linearity is lost. The integral/derivative will thus be related to the closely connected ‘mind integration’ of information.

Ultimately, a derivative is a measure of a larger world taken in a still time point (of zero latitude), and the processes of integration of parts into wholes always discharge part of the being – which is why a derivative is essentially smaller than the power operand, as those processes eliminate part of the whole. This is a key technical element of analysis (which is often approached by binomial series – Maclaurin, Taylor, etc. – connecting both operands, but reducing the power series to that ‘a’ constant timespace point in which the mental or whole integration takes place).

 

SPACE FINITESIMALS VS. TIME FINITESIMALS

We must ‘differentiate’ when differentiating (:

-Space finitesimals, which are the minimal quantity of a closed energy cycle or simultaneous form of space. They are easier to understand, as they are ‘quanta’ with an apparent ‘static form’, which can be ‘added’ if they are a lineal wave of motion-reproduction along the path, or can be integrated (added through different areas and volumes) to give us a 3D reality.

-Time finitesimals, which are the minimal period of any action of the being, and will trace a history of synchronicities, as the actions find regular clocks, which interact between them to allow the being to perform ALL the 5D actions needed to survive. So we walk (A(a)), but then eat energy (Å(e)), and we do not often do them together. Actions have different periodicities for EACH species that performs the 5 actions. So to ‘calculate’ all those periodicities in a single all-encompassing function we have to develop a 5D-variable system of equations.

– Spacetime finitesimals. But more interesting is the fact that Nature works simultaneously integrating populations in space and synchronising their actions in time. So we observe also space-time finitesimals where the synchronicity consists in summoning the actions of multiple quanta that perform in the same moment the same ‘D-motion’, which is ‘reinforced’ becoming a resonant action. 

And for the calculus of those space-time finitesimals the best way to go around is by ‘gathering the sum of ∆-1 quanta’ into a ‘larger ∆º quanta’ treated as a new ‘1’ adding up its force. EVEN IF most of them are just complex ensembles of the simplest actions of  many cellular parts – steady state motions, reproduction of new dimensions and vortex of curvature and information absorption.

All functions of analysis thus can be considered operations on actions of space-time.

Groups of finitesimals and their synchronous actions thus meet at ∆º in the mirror of mathematical operations, through the localisation of a ‘theoretical’ tangent ≈ infinitesimal of the nano-scale (∂s/∂t proper) or an ‘observable’ differential, a larger finitesimal – which is the real element, as any finitesimal is a fractal micro-point with a fractal volume, expressed in the differential.

Then we gather them, in time or space and study their ‘inverse’ action in space or time.

So the first distinction we must do is between finitesimals expressed as functions of time frequencies and finitesimals expressed as areas of space. And the actions described on them. In practice though most finitesimals are spatial parts whose frequency of action is described by the ƒ(x)=t function.

 

The 3 parts of T.œ.

Every event and form must be analysed in ternary fashion; and so it happens with integrals and derivatives, which often represent integrals of space-time quanta belonging to the vital energy of the system, constrained in time or space by the singularity and the outer membrane.

If we call energy, e, then:

∑$p x ðƒ = ∆e then becomes the integral of the inner spatial quanta of the open ball, surrounded by the membrane of temporal cycles, which conserves its Energy – and, by the sum of all T.œs, that of the Universe. Its calculus, after finding a ‘continuous derivative’ surrounded by the membrane, is then an integral: ∫ Sp x ðƒ = Ke.

And inversely. If we consider a single quanta of space or a single frequency of time, a moment of lineal or angular momentum, the result is a derivative.

So Analysis becomes the fundamental method to study travels upwards and downwards of the 5th dimension.

In general if we call a spatial quanta a unit of lineal momentum of each scale and a time cycle a unit of angular momentum, the metric merely means the principle of conservation of lineal and angular momentum.

Thus analysis studies the process which allows, by multiplication of ‘social numbers’ – either populations in space or frequencies of time – a system to ‘grow in size’; which is the ultimate meaning of travelling through the 5th dimension. For example, when a wave increases its frequency, it increases the quantity of time cycles of the system. When a wheel speeds up, it increases the speed of its clocks. And vice versa, when a system increases its quanta, growing in mass, or increases its entropy (degrees of motion in molecular space), it also grows through the 5th dimension.

And the integration, along space and time, of those growths is what we call the total Energy and information of the system – what physicists call the integral of momentum, or the total energy of the system.

So we shall only bring here some examples of analysis concerned with the definition of the fundamental parameters of the fractal Universe, that is, the conservation principles and balances of systems, which can be summarized in 2 fundamental laws:

Points of constrain, balance and limits of integrals.

Any equation with a real, determined solution must describe a complete T.œ. Hence it will have limits either in space (the membrane and singularity of the open ball) or in time (initial and final conditions), bridged by an action in the ‘least time’ possible.

This is the key ∆st law that applies to the search for solutions in both ODE and PDEs.

Maximise its ðƒ/Sp, density of information/mass, and its Sp/ðƒ, density of energy; and hence reach a balance at ðƒ=Sp.

This simple set of equations – max. ðƒ x Sp -> ðƒ=Sp; max. ðƒ/Sp and max. Sp/ðƒ – therefore gives the fluctuation points of systems that constantly move between the two extremes of informative and spatial states, across a preferred point of balance, Sp=ðƒ, as this is the max. Sp x ðƒ place.

Thus integrals, Lagrangians and Hamiltonians are variations on those themes. The motion of springs, the law of least time, etc. – all are vibrations around a point of balance, ðƒ=Sp, between 2 maximal inverse limits.

Dimensional integration. Dimensions of form that become motions and vice versa.

Now, the key to fully grasping the enormous variety of integral and derivative results obtained in all sciences is to understand that all space forms can be treated as instants in time, or events of motion; and all motions in time can be seen as fixed present moments in space.

This series of combinations of time and space, S>T>S>T, which leaves a trace of steps and frequencies, and its whole integration, which emerges as an ∆+1 new scale of reality, is at the core of all fractal, reproductive processes of reality.

For example, the S-T duality is at the core of the Galilean paradox of relativity (se mueve y no se mueve: it moves and it does not move), of Einstein’s relativity, and of Zeno’s paradox.

So we can consider motion in time as reproduction of form in adjacent topologies of discontinuous space.

We can consider the stop and go motions of films, picture by picture, integrating those ‘spatial pictures’ into a time ‘motion picture’.

We can consider the wave-particle paradox, as waves move by reproduction of form, and particles collapse by integration of that form in space into a time-particle.

In those cases integration happens because a system that moves in time, reproduces in space. And vice versa, steps in space become a memory of time. 

It is also important to study case by case and distinguish properly what we are truly seeing – populations in space or events in time – as humans often confuse them in quantum physics, where motion is so fast that time cycles appear as forms of space. We shall then unveil many errors where a particle event in time is seen as a force in space (the confusion of the electroweak, transformative force as a spatial force, and so on).

All systems can be integrated, as populations in space, to create synchronous super organisms, and as world cycles in time, creating existential cycles of life and death. The population integral will however be positive, and the integral in time will be zero.

Since systems of populations in space do have volume, while the whole motion in time can be integrated as closed paths of time, or conservative motions that are zero sums; and this allows us to resolve what is time integration and what is space integration.

Consider to fully grasp this, the reproduction of a wave, which constantly reproduces its form as it advances in space, and cannot be localised (Heisenberg uncertainty) because it is a present wave of time, as light moves NOT in the least space but the least time. Now, consider a seminal wave – you, which reproduces in time, but becomes a herd of cells that integrated emerges into a larger scale. In both cases the final result is in space and so it is positive.

So, as said, for each case the process must be studied, but the results will tell us whether we are observing a time event or a spatial organism.

In that regard the most important and hence first view of the Rashomon Effect on ∫∂ is:

∫≈∂ ARE TIME=SPACE BEATS/STEPS IN ANY D²

We have further defined the Disomorphism between the 5 Dimensions of space-time and the 5 actions=motion=operators of mathematical space.

An operation or action is thus a Disomorphism of a language or form, which enacts through the operator-mirrors of the language or form, which in mathematics are the operandi: ±, ×, ÷, xª, log, ∫∂… But in other species, as all of them encode the social evolution, darwinian fights, decay and growth of a super organism, those functions might be coded as 5D functions by genes, or words, or any other syntactic form.

It is the most fruitful ∫∂ symmetry, soon used by Leibniz and Newton to develop laws of lineal time motion in space, in which the full realisation of all other views became the biggest miracle of ‘magic’ mathematics.

Alas, the entire planet was astonished when, in a not yet fully understood scalar duality, derivatives in time turned out to be inverse to volumes of an integral of space?! This was the biggest surprise of mathematics since the Pythagoreans’ finding of the irrational √2. Why had God made coincide two different operations till then seemingly unrelated to each other – derivatives of time motions and volumes of spatial form?

Answer: because according to Galileo’s paradox – the first insight that prompted me 30 years ago to discover 5D² – time and space are indeed the two sides of a holographic 2-manifold dimension.

So a motion in time decelerates by becoming a new dimension of space; and indeed a moving curve is equivalent to the surface it generates when we measure it as space. So lineal time-motions produce space surfaces:

In the graph, a dimension of space volume transforms into a dimension of angular time motion, and so we can apply a derivative and integral duality as there has been an S>T step-motion of space-time.

4D: S∂

ENTROPIC MOTIONS: DERIVATIVES

ðIME VIEW

Next to this realisation, we must ponder the question of causality, which is expressed in terms of independence.

It seems then that most spatial functions are dependent on time, the independent factor:   $=ƒ(t)

Yet as we recall that time motions ‘stop’ into space, we can interpret this independence in terms of order: functions are first motions in time that stop and become ‘forms’ of space, leaving a past-memorial trace, which, if NOT erased, becomes a population of space, which moves again and stops again; and in this manner reproduction of dimensions takes place, building a being of growing ∆Dst.

How this is expressed in ∫∂ terms becomes then clear: since time is discrete, discontinuous, made of a T.œ moving, stopping (often perceiving), moving and stopping, we must first ‘encounter’ the minimal step of the time motion – what we shall call dt – and then move, stop, move and stop a number of steps, which we integrate, building in this manner a new dimension of space-time.

So the combination of ∫∂ is in fact a process of creation of an ST dual dimension of space-time.  And that is the ultimate meaning of it.

So when we study the simplest equations of physics, we shall consider those in which we apply a ‘ceteris paribus’ rhythm: considering them first from the point of view of ‘time’ steps, and then from the point of view of ‘space’, integrating them as a simultaneous space once we have ‘traced’ enough steps to make that simultaneity meaningful.

And this is the meaning of a definite integral.

It follows then that we can skip the memorial, step-by-step creation of the spatial form, as something no longer needed when we are interested only in integrating the space; and for that reason the integral works merely as the integral of a volume or a surface – whose creation in time has already happened.

But we still have to find a quanta of that ‘creation’ now a mere ‘population in space’.

The different time-space beats.

This of course must be done because reality is bidimensional and a dimension of space goes accompanied by a dimension of time, generating as in the previous graphs, the motions=changes, S≈T≈S≈T that shape reality.

And it is the justification of why differential equations that make systems dependent on such pairs of variables happen.

BUT then it follows we shall be able to APPLY THE RASHOMON EFFECT and find a use for the pair ∫∂ as expression of an inverse beating for each pair of dimensions of space-time. 

And decompose both space-time forms and time-space events in S>T<S beats.

And in the process of doing so, learn further insights about the symmetries between space and time.

The algebraic/graphic duality.

In view of our deeper departure from the ultimate essence of Analysis, which is to study steps of space-time – that is, to put algebraic S=T symmetries in motion – the algebraic vs. graphic interpretations of calculus respond to yet another symmetry of spatial vs. temporal methods, considered in our posts on @nalytic geometry and Algebra.

It does show more clearly what we mean by those ‘steps’ as basically the ‘tangent’ of the curve is in most cases a space-time step expressed by the general function: X(s) = ƒ(t)

Obviously as s and t are ill defined, it was only understood for lineal space-distance and time-motion.  And so the ‘geometrical’ abstract concept remains, void of all experimental meaning… as a… Tangent… it was…

SPATIAL:GEOMETRIC VIEW.

We are led to investigate a precisely analogous limit by another problem, this time a geometric one, namely the problem of drawing a tangent to an arbitrary plane curve.
Let the curve C be the graph of a function y = f(x), and let A be the point on the curve C with abscissa x0 (figure 10). Which straight line shall we call the tangent to C at the point A? In elementary geometry this question does not arise. The only curve studied there, namely the circumference of a circle, allows us to define the tangent as a straight line which has only one point in common with the curve.

To define the tangent, let us consider on the curve C (figure up) another point A′, distinct from A, with abscissa x0 + h. Let us draw the secant AA′ and denote the angle which it forms with the x-axis by β. We now allow the point A′ to approach A along the curve C. If the secant AA′ correspondingly approaches a limiting position, then the straight line T which has this limiting position is called the tangent at the point A. Evidently the angle α formed by the straight line T with the x-axis must be equal to the limiting value of the variable angle β.
The value of tan β is easily determined from the triangle ABA′ (figure up): tan β = (ƒ(x0 + h) − ƒ(x0))/h.

It is then clear that h is the frequency quanta of time – or, if we are inversely using the ∫∂ method to measure space populations, the minimal unit. And so the ultimate concept here is that h NEVER goes to 0. And the clear proof is that if it actually reached zero, x/h = ∞.

So infinitesimals do NOT exist, and it only bears witness to the intuitive intelligence of Leibniz that he so much insisted on a quantity for h=1/n… (and to the lack of it of the 7.5 billion infinitesimals of Humanity, our collective organism, who memorise this h→0 that gave me so much abstract pain as a kid – one of those errors I annotated mentally, along with the absurd concept of a non-E point with no breadth, or else how do you fit many parallels; or the limit of c-speed – how did Einstein prove that experimentally? – and other ‘errors’ that ∆st does solve in all sciences).
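To make this point tangible, here is a minimal numerical sketch (in Python; the choice f(x) = x² at x₀ = 3 is purely illustrative, not from the text): the secant quotient approaches the tangent slope while the step h remains a finite ‘quanta’ at every stage:

```python
# Sketch: secant slopes (f(x0+h) - f(x0))/h for a shrinking but always
# finite step h. The quotient approaches the tangent slope 2*x0 = 6,
# yet h itself never reaches 0; if it did, the quotient would blow up.
def f(x):
    return x ** 2

x0 = 3.0
for h in (1.0, 0.1, 0.01, 1e-4, 1e-6):
    slope = (f(x0 + h) - f(x0)) / h  # tan(beta) of the secant
    print(f"h = {h:<8g} slope = {slope:.7f}")
```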

But for other curves such a definition will clearly not correspond to our intuitive picture of “tangency.”

Thus, of the two straight lines L and M in figure below, the first is obviously not tangent to the curve drawn there (a sinusoidal curve), although it has only one point in common with it; while the second straight line has many points in common with the curve, and yet it is tangent to the curve at each of these points.

And yet such a curve is ultimately the curve of a wave, and we know waves are differentiable. So the tangent IS NOT the ultimate meaning of the ∫∂ functions – time/space beats are. The question then is what kind of st beat shall we differentiate in such a transversal wave?

A DIFFERENT DIMENSION, normally: as waves are the 2nd dimension of energy, as in the intensity of an electric flow… a mixture of a population and a motion; or ‘momentum’ (the derivative of energy)…

And so the next stage into the proper understanding of ∫∂ operations is what ‘kind of dimensional space-time change-steps’ we are measuring.

∆ VIEW

The inversion of the finitesimal calculus of ∆-1 is the integral calculus of 5D.

The transition to ∆nalysis: new operations

The mathematical method of limits was evolved as the result of the persistent labor of many generations on problems that could not be solved by the simple methods of arithmetic, algebra, and elementary geometry.

The inverse properties of ∫pace problems and ∂temporal problems

What were the problems whose solution led to the fundamental concepts of analysis, and what were the methods of solution that were set up for these problems? Let us examine some of them.

The mathematicians of the 17th century gradually discovered that a large number of problems arising from various kinds of motion with consequent dependence of certain variables on others, and also from geometric problems which had not yielded to former methods, could be reduced to two ST types:

Temporal examples of problems of the first type are: find the velocity at any time of a given nonuniform motion (or more generally, find the rate of change of a given magnitude), and draw a tangent to a given curve. These problems (our first example is one of them) led to a branch of analysis that received the name “differential calculus.”

Spatial examples: The simplest examples of the second type of problem are: find the area of a curvilinear figure (the problem of quadrature), or the distance traversed in a nonuniform motion, or more generally the total effect of the action of a continuously changing magnitude (compare the second of our two examples). This group of problems led to another branch of analysis, the “integral calculus.”

Thus two fundamental problems were singled out: the temporal problem of tangents and the spatial problem of quadratures.

Now the reader will observe that, unlike in the age of Arithmetics and Algebra, which stays in the same ‘locus/form’, here we observe a key property of analysis: the transformation of a temporal, cyclical question into a lineal, spatial solution.

I.e. the solution of acceleration/speed by a lineal tangent, through an approximation; and the calculus of a cyclical, spatial area by the addition of squares. And the deep philosophical truth behind it, which only Kepler seemed to have realized at the time:

‘All lines are approximations or parts of a larger worldcycle.’

And so we can consider in terms of modern fractal mathematics, that ‘the infinitesimal is the fractal unit, quanta or step’ of the larger world cycle, and as a general rule:

‘All physical processes are part of a conservative 0-sum world cycle’.

Which explains ultimately the conservation of energy and motion, as motions become ultimately world cycles, either closed paths in a single plane, or world cycles balanced through ∆±1 planes.

Such is the simple dual GST justification of Analysis, as always based in ∆…  finitesimals and St… the inverse properties of ∫∂.

 THE MAIN FUNCTIONS OF NATURE UNDER THE ∫∂ OPERATIONS.

Functions.

In simple terms, a function f is a mathematical rule that assigns to a number x (in some number system and possibly with certain limitations on its value) another number f(x). For example, the function “square” assigns to each number x its square x².

The common functions  are thus definable by formulas, which are related to the ∆s ≈ ∆T duality, such as:

∆§: Polynomials of the type f(x) = x²; the logarithmic function log(x); the exponential function exp(x) or eˣ (where e = 2.71828…); and the square root function √x.

∆T: Trigonometric functions, sin (x), cos (x), tan (x), and so on.

Then there is the question of transformations between space and time and 5D a(nti)symmetries, which is an essential part of classic algebra and we resume in those terms:

  • Integral transforms make it possible to convert a differential equation of 5D space-time within certain boundary values (a time membrane, which limits the equation as a ‘real system’, not an infinity) into an algebraic equation that can be easily solved (a polynomial, which is a result in a single space-time plane). And this transformation obviously should be of two canonical forms. And as it happens there are 2 canonical transforms:
  • – A spatial, lineal transformation, and this is the Laplace transform: f(p), defined by the integral:

F(p) = ∫₀^∞ e⁻ᵖᵗ F(t) dt

The linear Laplace operator L thus transforms each function F(t) of a certain set of functions into some function f(p) and it is used most frequently by electrical engineers in the solution of various electronic circuit problems.
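As a hedged numerical sketch of that definition (Python with scipy; the test function F(t) = e⁻ᵗ and its known transform 1/(p+1) are illustrative assumptions, not from the text):

```python
# Numerical Laplace transform f(p) = integral of e^(-pt) F(t) dt from 0 to inf,
# checked against the closed form for F(t) = exp(-t), namely 1/(p+1).
import numpy as np
from scipy.integrate import quad

def laplace(F, p):
    value, _err = quad(lambda t: np.exp(-p * t) * F(t), 0, np.inf)
    return value

F = lambda t: np.exp(-t)
for p in (0.5, 1.0, 2.0):
    print(p, laplace(F, p), 1 / (p + 1))  # numeric vs. exact
```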

  • A temporal transformation and this is the Fourier analysis, which proved that a function y = f(x) could be expressed between the limits x = 0 and x = 2π by an infinite series of waves:

ƒ(x) = a₀/2 + Σₖ (aₖ cos kx + bₖ sin kx)

That is an equation could become a cyclical time dependent equation developed as a sum of harmonic waves.
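A minimal sketch of that decomposition into harmonic waves (Python; the square wave on (0, 2π) is an illustrative choice): computing the first coefficients aₖ, bₖ numerically:

```python
# Fourier coefficients a_k, b_k of a square wave on (0, 2*pi), computed
# by numerical integration; odd sine harmonics dominate (b_k ~ 4/(pi*k)).
import numpy as np

x = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
f = np.where(x < np.pi, 1.0, -1.0)  # square wave: +1 then -1
dx = x[1] - x[0]

def coeff(k):
    a_k = np.sum(f * np.cos(k * x)) * dx / np.pi
    b_k = np.sum(f * np.sin(k * x)) * dx / np.pi
    return a_k, b_k

for k in (1, 2, 3):
    print(k, coeff(k))  # b_1 ~ 1.273 = 4/pi, even harmonics ~ 0
```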

And finally the inverse, the fact that a function could be converted into a 5D analytical equation between scales of the 5th dimension is proved by the third most used approximation of functions, the Taylor series, which expresses a function f—for which the derivatives of all orders exist—at a point a in the domain of f in the form of the power series:

Σ(∆=0 → ∞) ƒ⁽∆⁾(a) (z − a)^∆ / ∆!

In which Σ denotes the addition of each element in the series as ∆ ranges from zero (0) to infinity (∞), ƒ⁽∆⁾ denotes the ∆th derivative of f, and ∆! is the standard factorial function.
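For instance, a short sketch of the truncated power series (Python; the expansion of exp about a = 0 is an illustrative case, where every derivative ƒ⁽∆⁾(0) = 1):

```python
# Partial sums of the Taylor series of exp(z) about a = 0; a finite
# number of terms N replaces the formal infinity of the series.
from math import exp, factorial

def taylor_exp(z, N):
    # each term is z**n / n! since the nth derivative of exp at 0 is 1
    return sum(z ** n / factorial(n) for n in range(N + 1))

for N in (2, 5, 10, 20):
    print(N, taylor_exp(1.0, N), exp(1.0))  # converges to e
```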

So these 3 transformations are a means – and their applications enlighten an infinite number of real equations – by which the different 5D scales of reality can transfer energy or information.

That is, a 5D flow of energy and information can travel within a single membrane with absolute accuracy (no loss of entropy, no need of transforms or groups to resolve them).

But there is a minimal loss of entropy when we transform between planes back and forth (of information or energy), as the transform is NOT absolutely exact – for it to be exact, the number of terms would normally tend to infinity, which is not possible in the finite duration of any flow between ∆±1 scales of the 5th dimension.

Further on, it is important to understand the meaning of the operandi and the laws of relative equality and dynamic transformations of ¬Æ, where equality never fully exists; rather we transform, F(t) <=> F(s), as in E <=> Mc², or we approximate values through an evident property, ≈.

We are not extensive here, just showing some ∆st insights into those inversions.

Let us then comment on the main functions with fundamental roles in ∆st and its derivatives, by dividing them in 3 great ∆st ‘groups’:

@: ∫∂ of IDENTITY ELEMENTS – FORMS THAT DO NOT CHANGE

The interest of those results refers to the concept of an identity number, as 0 is the identity of the sum and 1 of the product. But they also have a clear meaning as the interval 0-1 of the generation ‘seed’ dimension from ∆-1 to ∆º.

And indeed, the surprising result that ∫ 0 dx = C does suggest that the 0-point is a fractal point that ‘has volume’ – or else, how by integrating the nothingness of existence shall we get a ‘constant’, which is a social number? But if we do start from a 0-1 unit, its ‘integral’ sum will give us a reproductive group, or ‘social number’.

And if we integrate the full ‘1 being’, we shall get a new dimension, the variable plus the constant; which suggests also a little understood process related to the operations of derivatives and integrals: the switching CAUSED by OPERATIONS on motions of sets (our definition of analysis), which CHANGE a spatial state into a time state and vice versa. So the spatial 1-form-whole becomes a time-variable X, while the variable X becomes a spatial derivative constant.

Since a constant number does NOT change, a time variable gives us the spatial identity number.

Finally, the deepest thought on those seemingly well known operations regards the subtle difference between both operations: the derivative localises a single ‘finitesimal solution’, or minimal ∆-1 past part of the system…

But the inverse, ‘integral’ or ‘future 5th Dimensional arrow’ of social wholes opens up the possibility of multiple constant solutions to add to the variables, as the future is open to subtle variations (∫) but the past is fixed by the infinitesimal identity number (∂).

Of course if we instead consider the integral not in time but as a fixed spatial path, this concept of future vanishes and we get a determined single solution to the integral where the constant is just the starting point.

Another way of seeing it, though, is to consider the identity element @, the constant mind that does NOT change.

∫∂ of POLYNOMIAL GROWTH: 

The first results, already considered, are the polynomials’ ‘reduced’ dimension obtained by searching for their finitesimal – which for simple polynomials is however quite larger, compared to a direct xⁿ⁻¹ reduction.

Further on, the logarithm IS clearly the 5D social scaling operation and its derivative is indeed the absolute finitesimal, 1/n.

And inversely the maximal growth is its inverse, the absolute decay of e¯ª.

It is worth talking of those 3 correlated results from the philosophical pov: the maximal expansion of an event is an absolute future-to-past, ∆+1 << ∆-1 entropic death expressed by the exponential:

The minimal process of growth (log) is a finitesimal; the maximal process of decay (e¯ª) is equivalent to the whole, in a single quanta of time. We state in the general law that death happens in a single quanta of time, in which the entire network that pegged together the being disappears.

Γst functions.

The third type of functions is concerned not with ∆±1 past-to-future-to-past d=evolutions but with present sinusoidal wave repetitions of the same time-cycle; hence change is cyclical, repetitive, and so those functions are very useful for the 3rd reproductive dimotion in space, but also for a time dimotion or cycle:

Both functions thus are clearly inverse not only in Γst but also in the ∆±1 scales – the negative sign being a convention regarding the chosen ± direction of the cyclical, sinusoidal motion.

Here though the interest resides in comparing both types of present vs. ∆ past-future functions: the present derivative is self-repetitive, as we return to the sine after 4 quadrant derivatives; and indeed we return to the present, considering also the generational cycle, after 4 ages of life. So we can model a sinusoidal function as a world cycle of existence in its 4 quadrants.

 PDEs and ODEs

A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two. Because such relations are extremely common, differential equations play a prominent role in many disciplines including 5D stience, concerned with the Dimotions of spacetime.

Differential equations can be divided into several types. Apart from describing the properties of the equation itself, these classes of differential equations can help inform the choice of approach to a solution.

Experimental justification vs. Axiomatic method.

The axiomatic method, which is valid as all mirror languages have a similar consistency to the reality they mirror, justifies and classifies them with Group theory – an instantaneous picture of all their varieties put in relationship with somewhat confused concepts of symmetry.

We prefer as said ad nauseam the experimental method to limit the inflationary mirror to what is useful as reflection of ‘real space-time properties’.

So the commonly used distinctions of O/PDEs include 3 dualities, which we put in correspondence with the 3 elements, ∆ST, according to pentalogic. So if the equation studies:

T: by its number of Dimotions it can be Ordinary (1 Dimotion)/Partial (multiple dimotions): An ordinary differential equation (ODE) is an equation containing an unknown function of one real or complex variable x, its derivatives, and some given functions of x. The unknown function is generally represented by a variable (often denoted y), which, therefore, depends on x. Thus x is often called the independent variable of the equation. The term “ordinary” is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable.

PDEs, in contrast, are equations that involve rates of change with respect to several continuous variables. The position of a rigid body is specified by six parameters, but the configuration of a fluid is given by the continuous distribution of several parameters, such as the temperature, pressure, and so forth. The dynamics for the rigid body take place in a finite-dimensional configuration space; the dynamics for the fluid occur in an infinite-dimensional configuration space. This distinction usually makes PDEs much harder to solve than ordinary differential equations (ODEs), but here again, there will be simple solutions for linear problems. Classic domains where PDEs are used include acoustics, fluid dynamics, electrodynamics, and heat transfer.

S-Topology, according to its form can be Linear/Non-linear=cyclical (entangled by product).

It follows from what we have said of ®lgebra that ODEs and lineal PDEs are those in which the rate of change of the being adds to the function but does NOT entangle through multiplication with it.

This is really what makes non-lineal PDEs so difficult to solve, as the entanglement, which will happen across other scales of reality, makes it almost impossible to get all the information needed, and multiplies its solutions – themes of 5D analysis.

Only the simplest differential equations are solvable by explicit formulas; and most have multiple solutions, implying the future is pentalogic – it can go different ways. Which ones are solvable then helps to understand the philosophy of time:

T=S symmetries. 

A Cauchy problem in mathematics asks for the solution of a partial differential equation that satisfies certain conditions that are given on a hypersurface in the domain. A Cauchy problem can be an initial value problem (Time symmetry) or a boundary value problem (space-symmetry or Cauchy boundary condition) or it can be neither of them.

The Cauchy problem consists of finding the unknown functions, and solutions will only exist if there is an initial FINITE TIME (singularity-related, as the will of the system and its dimotions) OR FINITE SPACE (membrane-related) – hence a formed T.œ structure for the space-time event/being studied.

 

∆±¡: Equation order. Differential equations are described by their order, determined by the term with the highest derivatives. An equation containing only first derivatives is a first-order differential equation; an equation containing the second derivative is a second-order differential equation, and so on. Each order represents a scale of reality. And since most systems just extend through 3±¡ planes, differential equations that describe natural phenomena almost always have only first and second order derivatives in them.

Also a scalar division is that between Inhomogeneous/Homogeneous equations, which considers those in which scaling by multiplication is conserved.

Since a homogeneous function is one with multiplicative scaling behaviour: if all its arguments are multiplied by a factor, then its value is multiplied by some power of this factor, ƒ(αx) = αᵏ ƒ(x), for some constant k and all real numbers α. The constant k is called the degree of homogeneity.

Lineal, affine functions of the type y = Ax + c are not HOMOGENEOUS, which again brings us the duality of ± in the same plane and x/ in different planes of existence.

We can then with those simple concepts understand intuitively many properties of physical equations and parameters by the type of ‘rates of change that take place’.

I.e. products are NOT reproductions but entanglements in a lower plane. So lineal equations will study NON-entangled additions in a single plane, and follow the superposition principle. They are the only solvable ones, as we have all the parameters.

Most ODEs that are encountered in physics are linear, as they deal with the 2nd Dimotion, lineal locomotion, and, therefore, most special functions may be defined as solutions of linear differential equations.

 Partial differential equations 

A partial differential equation is a differential equation that contains unknown multivariable functions and their partial derivatives, used to describe a wide variety of phenomena in nature such as sound, heat, electrostatics, electrodynamics, fluid flow or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs, which in general will correspond to systems that are NOT particle/head controlled, hence not hierarchical with a definitive ‘stillness’ in position and a single @ristotelian logic. This basically leaves two types of PDEs: those related to entropic, memoriless states, which will tend to be ‘lineal’ as a superposition of non-entangled elements; and those related to complex fluids that interact among their particles and have a complex, variable internal structure, which tend to be non-lineal and hence irresolvable. For example:

Lineal ODE: the position of a rigid body (ð§) is specified by a few parameters, and its motion is described by a lineal ODE.

But the configuration of a fluid is given by several continuously distributed parameters, such as the temperature, pressure, and so forth. Classic domains where such PDEs are used include acoustics, fluid dynamics, electrodynamics, and heat transfer: the heat equation, the wave equation, Laplace’s equation, the Helmholtz equation, the Klein–Gordon equation, and Poisson’s equation.
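As a minimal sketch of the simplest of those linear PDEs (Python; the grid sizes and the initial heat spike are illustrative assumptions), an explicit finite-difference scheme for the 1D heat equation ∂u/∂t = α ∂²u/∂x²:

```python
# Explicit finite-difference scheme for the 1D heat equation.
import numpy as np

alpha, L, nx, nt = 1.0, 1.0, 51, 500
dx = L / (nx - 1)
dt = 0.4 * dx ** 2 / alpha      # stability requires dt <= dx^2 / (2*alpha)
u = np.zeros(nx)
u[nx // 2] = 1.0                # initial spike of heat in the middle

for _ in range(nt):
    # discrete Laplacian on interior points; boundaries held at 0
    u[1:-1] += alpha * dt / dx ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print(u.max())                  # the spike flattens and spreads out
```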

Non-linear differential equations, finally, are those in which products of the unknown function and its derivatives are allowed, and their degree is > 1. Nonlinear differential equations can exhibit very complicated behavior over extended time intervals, characteristic of chaos, as they are BOTH co-existing in several scales and interacting in their parts on a single scale.

So even the fundamental questions of existence, uniqueness, and extendability of solutions for nonlinear differential equations are hard problems, as the Navier–Stokes differential equations of fluids show.

Linear differential equations frequently appear as approximations to nonlinear equations. These approximations are only valid under restricted conditions. For example, the harmonic oscillator equation is an approximation to the nonlinear pendulum equation that is valid for small amplitude oscillations.
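A hedged numerical check of that last statement (Python with scipy; the small amplitude is an illustrative choice): integrating the nonlinear pendulum θ″ = −sin θ against its linear approximation θ″ = −θ:

```python
# Nonlinear pendulum vs. its harmonic (linear) approximation.
import numpy as np
from scipy.integrate import solve_ivp

def pendulum(t, y): return [y[1], -np.sin(y[0])]
def harmonic(t, y): return [y[1], -y[0]]

theta0 = 0.1                     # small initial amplitude in radians
t_eval = np.linspace(0, 10, 200)
p = solve_ivp(pendulum, (0, 10), [theta0, 0.0], t_eval=t_eval)
h = solve_ivp(harmonic, (0, 10), [theta0, 0.0], t_eval=t_eval)
print(np.max(np.abs(p.y[0] - h.y[0])))  # tiny difference for small theta0
```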

So the conclusion is obvious: Nature with its infinite monads and scales IS NOT ALWAYS reflected in a mathematical mirror, which cannot be the origin of Nature (false creationist theories).

 

Generally speaking, the techniques of differentiation distinguish between ODEs, ordinary equations with a single ST variable, which probe in depth either space or time through consecutive derivatives, but have a limited use, as reality only allows 3 multiple derivatives into the single time or space dimension (beyond 3, the results are essentially not related to the direct experience of how space-time systems evolve through scales). Multiple derivatives though are the tool to approximate two of the 3 great fields of observance of the scalar Universe through mathematical mirrors, which we can write as a generator equation:

∆-i: Fractal Mathematics (discontinuous analysis of finitesimals) < Analysis – Integrals and differential equations (∆º±1: continuous=organic space) < ∆+i: Polynomials (diminishing information on wholes).

It is important in that sense to understand the different focus of the 3 approaches of mathematical mirrors to observe reality. We shall study them in the usual order in which they were born: first ODEs, then PDEs and finally fractals.

Now the mathematical elements of analysis are all well known and standard. Leibniz started them with the symbol ∫ that means summations.

Ordinary differential equations

An ordinary differential equation or ODE is an equation containing a function of one independent variable and its derivatives. The term “ordinary” is used in contrast with the term partial differential equation which may be with respect to more than one independent variable.

They are basically analyses of single ‘steps’/symmetries of S≈T systems, but ODEs can go ‘deeper’ into the spatial or temporal structure of the system by establishing multiple derivatives on the original parameter; thus they are perfect systems to ‘study’ the ternary dimensions of ‘integral’ space (1D distance, 2D area and 3D volume) and ‘derivative’ time (steady motion, acceleration and deceleration).

And as the symmetries between those 3D of space and time are not clearly understood, ∆st can bring some insights in its analysis.

Notice finally that the best use of mathematical equations and their operations is for the simplest actions of motion as reproductions of information in its 3 states/varieties (potentials, waves and particles); but for complex social and reproductive processes very few internal characteristics can be extracted with mathematical tools.

And yet even in those simple cases, exact solutions are not always possible, regardless of the dogmatic myths of mathematical accuracy. This happens, as usual, because humans measure ‘lineal distances’ and reality is curved; so we approximate lineal quanta/finitesimals and then add them to find the whole curved state, making use of one of the 3 ‘primary Galilean dualities’ between continuity and discontinuity, linearity and cyclicality, large and small.

So what are the key elements for finding ‘solutions’, that is, descriptions of the full T.œ, its state and simpler actions of 1D-motion/reproduction in space, and topological ≤≥ change from lineal to cyclical form? Basically, to have enough data about the ‘boundary conditions’ of the vital energy open ball (that is, a parameter for the singularity, if it exists, and for the membrane that encloses the system). As both are 1D, 2D, hence lineal forms of the type A+Bx, it is then possible to measure and find determined solutions.

Linear differential equations, which have solutions that can be added and multiplied by coefficients, are well-defined and understood, and exact closed-form solutions are obtained. By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more intricate, as one can rarely represent them by elementary functions in closed form: Instead, exact and analytic solutions of ODEs are in series or integral form. Graphical and numerical methods, applied by hand or by computer, may approximate solutions of ODEs and perhaps yield useful information, often sufficing in the absence of exact, analytic solutions.
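A minimal sketch of such a numerical method (Python; Euler’s method on the textbook case dy/dx = y is an illustrative choice): each finite step adds a ‘quanta’ of change along the tangent, in place of the closed form eˣ:

```python
# Euler's method for dy/dx = y with y(0) = 1; the exact solution is e^x.
import math

def euler(f, y0, x_end, n):
    x, y, h = 0.0, y0, x_end / n
    for _ in range(n):
        y += h * f(x, y)  # one finite tangent step
        x += h
    return y

for n in (10, 100, 1000):
    print(n, euler(lambda x, y: y, 1.0, 1.0, n), math.exp(1.0))
```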

ODEs are thus symmetric to simple Space-time steps, which correspond themselves to the simplex actions of 1D, 2D, some possible 4D simple entropy deaths and some simple 3D reproductive steps (3D however, when combining space and time parameters, and most combined steps of several dimensions and 5D worlds, will require PDEs).

Let us illustrate by a simple example. Consider a material particle of mass m moving along an axis Ox, and let x denote its coordinate at the instant of time t. The coordinate x will vary with the time, and knowledge of the entire motion of the particle is equivalent to knowledge of the functional dependence of x on the time t. Let us assume that the motion is caused by some force F, the value of which depends on the position of the particle (as defined by the coordinate x), on the velocity of motion ν = dx/dt and on the time t, i.e., F = F(x, dx/dt, t).

According to the laws of mechanics, the action of the force F on the particle necessarily produces an acceleration w = d²x/dt² such that the product of w and the mass m of the particle is equal to the force, and so at every instant of the motion we have the equation:

2. m d²x/dt² = F (x, dx/dt,t) 

Where we find the first key ‘second derivative’ for the dimension of time acceleration, which requires a first insight on the Nature of physical systems and its dimensions in space vs. time.

In space, the dimensions appear to us as an easy hierarchical system of growth: 1D lines (2D if we consider them waves of Non-E fractal points with a 0-1 unit circle dimension for each point), 2D areas and 3D volumes. But in time, the 3 arrows depart from 1D steady-state motion and can be considered as opposite directions: volumes of space grow through the scattering arrow of entropy, diminishing speed, vs. the acceleration of speed that diminishes space as the system collapses into a singularity:

So the 3D of classic space, ‘volume’, actually belongs to the entropic arrow of decelerating time that creates space-volume, vs. the opposite arrow of imploding time vortices that diminish space volume and increase speed, Vo × Ro = k.

So what seems in space a natural growth of volume in space, in time has a different order:

Entropic ≈decelerating volume < steady state ≈ distance-motion > Informative, cyclical area ≈ accelerated motion.

This different ‘order’ of dimensions when perceived in simultaneous space and cyclical time is the main dislocation in the way the mind perceives both (which is sorely painful when we consider the order of a world cycle, always starting in the ∆-1 scale of maximal information to decline as it grows and reproduces into less perfect, more entropic volumes of iterative forms that finally decline and die in the arrow of entropy; which the mind that has a SPATIAL-VOLUME INCLINED nature of ever-growth, does not understand).

This is the differential equation that must be satisfied by the function x(t) describing the behavior of the moving particle. It is simply a representation of laws of mechanics. Its significance lies in the fact that it enables us to reduce the mechanical problem of determining the motion of a particle to the mathematical problem of the solution of a differential equation.

Later the reader will find other examples showing how the study of various physical processes can be reduced to the investigation of differential equations.

The theory of differential equations began to develop at the end of the 17th century, almost simultaneously with the appearance of the differential and integral calculus. At the present time, differential equations have become a powerful tool in the investigation of natural phenomena. In mechanics, astronomy, physics, and technology they have been the means of immense progress. From his study of the differential equations of the motion of heavenly bodies, Newton deduced the laws of planetary motion discovered empirically by Kepler. In 1846 Leverrier predicted the existence of the planet Neptune and determined its position in the sky on the basis of a numerical analysis of the same equations.

To describe in general terms the problems in the theory of differential equations, we first remark that every differential equation has in general not one but infinitely many solutions; that is, there exists an infinite set of functions that satisfy it. For example, the equation of motion for a particle must be satisfied by any motion induced by the given force F(x, dx/dt, t), independently of the starting point or the initial velocity. To each separate motion of the particle there will correspond a particular dependence of x on time t. Since under a given force F there may be infinitely many motions, the differential equation (2) will have an infinite set of solutions.

Every differential equation defines, in general, a whole class of functions that satisfy it. The basic problem of the theory is to investigate the functions that satisfy the differential equation. The theory of these equations must enable us to form a sufficiently broad notion of the properties of all functions satisfying the equation, a requirement which is particularly important in applying these equations to the natural sciences. Moreover, our theory must guarantee the means of finding numerical values of the functions, if these are needed in the course of a computation. We will speak later about how these numerical values may be found.

If the unknown function depends on a single argument, the differential equation is called an ordinary differential equation. If the unknown function depends on several arguments and the equation contains derivatives with respect to some or all of these arguments, the differential equation is called a partial differential equation. The first three of the equations in (1) are ordinary and the last three are partial.

The theory of partial differential equations has many peculiar features which make them essentially different from ordinary differential equations.

Let us now consider the ∫∂ operations for the different dimensions of reality, starting in this case with the simplest cyclical clock-motions, which as they do NOT move in space, and repeat its form in time, are in fact not operated by ∫∂ measures of change:

1D: cyclical clocks, angular momentum

In the graph, in the simplest physical systems 1D is merely the angular momentum of its cyclical clocks of time, maximised in the membrane that encloses the system. Strictly speaking it does not change but becomes the ‘present function’ of a repetitive frequency clock without a derivative of change, as the time-space steps seem not to vary. When we introduce a torque, change happens, called ‘acceleration’, the second dimension of time motion in physics, which we shall later study when analysing Newton’s laws in 5D with the Galilean Px. Here we just briefly explain why, in lineal time, as humans only use t to measure change, 1D is the invariant one and its derivative is zero.

What about ‘higher’, more complex, cyclical, and scalar Dimensions? The answer is that as we change the form of the dimensions, we have to change the operandi we use; and specifically when we study the Dimensions of change, which are the ones differential/integral equations quantify, those equations MUST ADAPT, as mirrors of reality, to the FORM of the dimensions of space-time they describe – not the other way around.

So as 1D is  A STEADY STATE ROTARY MOTION, strictly speaking it does NOT change in space-time locomotion (which is what humans with its lineal single time express in derivatives). Hence basically the derivative of those angular momentums is zero. It is conserved. 

Let us recall briefly those classic definitions and maths:

Angular momentum is a vector that represents the product of a body’s rotational inertia and rotational velocity about a particular axis. In the simple case of revolution of a particle in a circle about a center of rotation, the particle remaining always in the same plane and having always the same distance from the center, we discard the vector nature of angular momentum, and treat it as a scalar proportional to moment of inertia, I and angular speed, ω:

L= Iω:   Angular momentum = moment of inertia × angular velocity, and its time derivative is

dL/dt = (dI/dt)ω + I(dω/dt); since the moment of inertia I is constant, dI/dt = 0, so dL/dt = 0 + I(dω/dt), which reduces to dL/dt = Iα.

Therefore, angular momentum is constant,  dL/dt=0 when no torque is applied. And this is the essence of its conservation law, a specific case of the conservation of the 5Dimensions of space-time of the Universe:

‘In a closed system, no torque can be exerted on any matter without the exertion on some other matter of an equal and opposite torque. Hence, angular momentum can be exchanged between objects in a closed system, but total angular momentum before and after an exchange remains constant’.
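A symbolic sketch of that derivative (Python with sympy; the symbol names are illustrative):

```python
# dL/dt for L = I*omega(t) with constant moment of inertia I.
import sympy as sp

t = sp.Symbol('t')
I = sp.Symbol('I', positive=True)
omega = sp.Function('omega')(t)

print(sp.diff(I * omega, t))                # I*d(omega)/dt, i.e. I*alpha
# With no torque, omega is constant and the derivative collapses to zero:
print(sp.diff(I * sp.Symbol('omega0'), t))  # 0
```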

But when a torque is applied in a single present plane – or, much more relevant to our inquiry, when a system is submitted to the organising or disorganising entropic force of a higher or lower plane of existence – and acceleration exists, a vortex of time-space happens and we enter into the social dimensions of evolution: the 5th Dimension of the mind.

Social Number = first dimension that defines regular ‘points’ which are indistinguishable, as societies in regular polygons, where prime polygons have the property of ‘increasing inwards’ their numbers through reproduction of vortex-points (n-grams), as the graph shows, studied in Theory of Numbers. So a number in its geometric interpretation is a ‘cyclical point’ of regular ‘unit-points’ of growing ‘inner dimensional density’: a point with a volume of vital energy and information, a fractal point.

The bottom line though of this brief analysis of a system with a single Dimension of time-space – a fractal point emerging through a parameter visible to the ∆+1 observer, notably angular momentum, as in quantum physics (h) – is that it IS A CONSTANT, not a differentiable parameter, for which a first step in space-time, SS, ST, TS or TT, is needed. Then we are in a 2D system, normally a TS motion, or ‘speed’, in which the quanta of space ‘moves≈reproduces’ in a time trajectory, which allows us to measure the change of one parameter, normally the spatial location, and accordingly ‘derive’ the ‘ratio’ or ‘inverse product’ between them.

2D: LINEAL SPACE-TIME

Let us consider one example of each dual dimension, $t – the two samples mentioned, speed and area, which were the first 2 themes solved historically – with the classic notation, to keep the historical approach, and to see how the methods can be used equally for quanta=frequency=steps of time, or quanta=populations=finitesimals of space:

Speed and acceleration: 2D TT

The next possible steps or motions in space-time are given by a dual time-time motion, which is acceleration, or a similar dual motion in space, which is volume. As such, those 2D motions have diametrally opposite consequences: shrinking a system in time, towards a mind zero point (TT->5D), or expanding it in space towards an extension of free, entropic ∆-1 elements (SS->4D).

But they can be used in combined forms to extract the same equations of speed, density and momentum.  Let us put the TT example:

The velocity of a point for which the distance s is a given function of the time s = f(t) is equal to the derivative of this function: v = s’ = ƒ ‘ (t).

So as it was established experimentally by Galileo, the distance s covered in the time t by a body falling freely in a vacuum is expressed in terms of TT-acceleration by the formula:   s=gt²/2

Where g is a constant that measures the acceleration of free fall on Earth, equal to 9.81 m/sec².

What is the velocity of the falling body at each point in its path?

Here, as we are already in a double TT variation, we must do exactly the inverse operation to that of searching for speed departing from space:

Let the body be passing through the point A at the time t and consider what happens in the short interval of time of length Δt; that is, in the time from t to t + Δt. The distance covered will be increased by a certain increment Δs. The original distance is s1 = gt²/2.

From the increased distance s₁ + Δs = g(t + Δt)²/2 we find the increment: Δs = g(t + Δt)²/2 − gt²/2 = gtΔt + gΔt²/2.

This represents the distance covered in the time from t to t + Δt. To find the average velocity over the section of the path Δs, we divide Δs by Δt: Δs/Δt = gt + gΔt/2.

Letting Δt approach zero, we obtain an average velocity which approaches as close as we like to the true velocity at the point A. On the other hand, we see that the second summand on the right-hand side of the equation becomes vanishingly small with decreasing Δt, so that the average υav approaches the value gt, a fact which it is convenient to write as follows: v = lim (Δt→0) Δs/Δt = gt.

Consequently, gt is the true velocity at the time t, and so we can consider gt as yet another expression of the Sð equation of speed in cyclical time, where now t is a ‘step’ and g the measure of its ‘feeding’ on gravitational space.
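A numeric sketch of that limit (Python; the instant t = 2 s is an illustrative choice): the average velocity Δs/Δt over shrinking but finite intervals approaches v = g·t:

```python
# Average velocity of free fall s = g*t**2/2 over finite intervals dt.
g, t = 9.81, 2.0

def s(t):
    return g * t ** 2 / 2

for dt in (1.0, 0.1, 0.01, 0.001):
    v_avg = (s(t + dt) - s(t)) / dt  # equals g*t + g*dt/2
    print(dt, v_avg)

print("g*t =", g * t)                # true velocity: 19.62 m/s
```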

Let us make the following remark. The velocity of a nonuniform motion at a given time is a purely physical concept, arising from practical experience. Mankind arrived at it as the result of numerous observations on different concrete motions.

The study of nonuniform motion of a body on different parts of its path, the comparison of different motions of this sort taking place simultaneously, and in particular the study of the phenomena of collisions of bodies, all represented an accumulation of practical experience that led to the setting up of the physical concept of the velocity of a nonuniform motion at a given time. But the exact definition of velocity necessarily depended upon the method of defining its numerical value, and to define this value was possible only with the concept of the derivative.

In mechanics the velocity of a body moving according to the rule s = f(t) at the time t is defined as the derivative of the function f(t) for this value of t.

But now, as a result of our analysis, we have reached an exact definition of the value of the velocity at a given moment, namely the finitesimal, minimal action of a given time motion. This result is extremely important from a practical point of view, since our empirical knowledge of the velocity has been greatly enriched by the fact that we can now make an exact definition for the 5 different motions of time, greatly expanding our understanding in terms of analysis of  those motions and its variations.

And we have used the method departing from time-quanta, frequency of steps, ‘speed motion’…

And that’s good enough, keeping ∆t→0 without reaching 0, as we shall always find a limiting ‘time unit’…

In the extreme of those limits (c-speed) the limit will be found on the gravitational field from where light extracts motion.

As it happens, that limit can be in ∆-4, according to the decametric 10¹º⁻¹¹ scales between ∆-planes and the 5D metric, which accelerates clocks in smaller quanta: 4 planes down × 10¹º⁻¹¹ time units between scales… faster in frequency, hence around ±10⁴⁴ times faster clocks/smaller bits of time.

So the gravitational finitesimal is truly ∆t→0 and hence irrelevant. (Incidentally, physics discovered this value for the minimal clock of Nature, without knowing its scalar planes, social quanta, and 5D metric, by sheer chance. It was Planck, and he called it the time of ‘God’ (:

How did he do it? With the Universal constants – a fascinating theme treated further down these texts. In any case he was close to the truth, as I always considered those numbers a solid quantitative proof, along with many other elements of the theory of grand numbers of ∆ST theory: tP = √(ħG/c⁵) ≈ 5.39 × 10⁻⁴⁴ s.

The theoretical importance of Tp in our argument over which type of speed we treat – continuous, S/t, or discontinuous, λ(s) × ƒ(t) – becomes now clear. It is the absolute finitesimal of all the planes and scales in which time happens among human observers (regardless of possible ∆±|≥4| planes beyond human observance); inasmuch as those are the minimal scales in which gravitational fields might exercise their forces over our atomic substance, continuity happens and makes finitesimal/integral calculus possible, because by all means this is indistinguishable from our pov/scale.

3D: POPULATIONS

Now when we get into 3D, which are combinations of 1D + 2D, the vibrations of different S-T combinations multiply our possibilities.

If there are only two types of 1D – the fractal point, or the invisible distance with no form – 2D gives us 4 fundamental variations: SS, TT and ST, TS. Now with 3D we can combine the 2D and 1D varieties to give us the following variations:

1D + the 4 2D varieties, 2D + the 1D varieties and, if they are not commutative, as seems the case, the inverse cases – for 20 combinations of 3D populations.

It is then a whole encyclopaedia that one would need to explain all the practical cases in which variations of 3D integrals/derivatives can be used to explain different vibrations of STs motions of all kinds.

ADDING A NEW DIMENSION OF ‘WIDTH-ENERGY’ INTENSITY

Once this concept is fully understood we then need to deal with ‘finitesimal quantities’, either in time or in space, as the previous argument on ‘changes of speeds and frequencies of time motion steps’, ∆s/∆t , can be reversed to study changes of volumes of space and populations of simultaneous space-beings.  And so we apply all those concepts to the analysis of 2D populations. Let us put an example and resolve it in terms of space-quanta (method of limits) and in terms of its change with differentials.

Quanta of space.

Now a spatial use of the limit concept to calculate not a time but a space volume, forebear of differential calculus: 


Example 2. A reservoir with a square base of side a and vertical walls of height h is full to the top with water (figure 1). With what force is the water acting on one of the walls of the reservoir?

We divide the surface of the wall into n horizontal strips of height h/n. The pressure exerted at each point of the vessel is equal, by a well-known law, to the weight of the column of water lying above it. So at the lower edges of the strips the pressure, expressed in suitable units, will be equal respectively to: h/n, 2h/n, 3h/n, …, nh/n = h.

We obtain an approximate expression for the desired force P if we assume that the pressure is constant over each strip. Thus the approximate value of P is equal to: P ≈ (a·h/n)(h/n + 2h/n + … + nh/n) = (ah²/n²)(1 + 2 + … + n) = (ah²/2)(1 + 1/n).

To find the true value of the force, we divide the side into narrower and narrower strips, increasing n without limit. With increasing n the magnitude 1/n in the above formula will become smaller and smaller and in the limit we obtain the exact formula:

P = ah²/2

Leibniz rightly considered 1/n the ‘finitesimal unit’, whereas we consider 1 the whole and 1/n its minimal fraction, usually 1 of its 10¹º elements (1/10¹º): the standard value of finitesimal units.

In the example again the finitesimal limit is extremely small. How much? We should consider statistical mechanics to find it is the size of molecules of water, which form bidimensional layers of liquid to shape the 3D volume, and are indeed about 10¹º times smaller than the whole.

Now the error ε is as small as the factor (1 + 1/n): for n ≈ 10¹º, P = (ah²/2)(1 + 1/n) ≈ (ah²/2) · 1.0000000001.

And this is a general rule in most cases: the finitesimal error is as small as 1/n, where n is the quanta of the scale. So when we do ∆+1 calculations as in most cases it is irrelevant. But theoretically it is important and in fact it will give us a ‘realist’ concept for the uncertainty principle of Heisenberg.
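A short sketch of that strip sum (Python; the values of a and h are illustrative): the finite-n sum reproduces (ah²/2)(1 + 1/n) and closes on the exact P = ah²/2 as n grows:

```python
# Riemann-style strip sum for the force on the reservoir wall.
a, h = 2.0, 3.0
exact = a * h ** 2 / 2

for n in (10, 100, 1000):
    strip = h / n
    # pressure k*h/n at each strip bottom times strip area a*h/n
    P_n = sum((k * strip) * (a * strip) for k in range(1, n + 1))
    print(n, P_n, "closed form:", exact * (1 + 1 / n))

print("exact:", exact)
```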

Hence it is indeed unnoticeable, truly finitesimal, but not an absolute infinitesimal – and certainly never proved by the axiomatic method, as maths must be experimentally proved to avoid inflationary errors; and as always in Γst (I should write GST as ∆st or Γst, the proper acronym, but I am lazy with wordpress 🙂 we do not accept a mathematical result without experimental proof (for me the fundamental use of mathematical physics), following Lobachevski, Gödel and Einstein’s dictums.

The idea of the method of limits is thus simple and accurate, and amounts to the following. In order to determine the exact value of a certain magnitude, we first determine not the magnitude itself but some approximation to it. However, we make not one approximation but a whole series of them, each more accurate than the last. Then, from examination of this chain of approximations – that is, from examination of the process of approximation itself – we uniquely determine the exact value of the magnitude, by ignoring the finitesimal error.

The same practical problem can be resolved with the differential used as an approximate value for the increment in the function. For example, suppose we have the problem of determining the volume of the walls of a similar closed cubical box whose interior dimensions are 10 × 10 × 10 cm and the thickness of whose walls is 0.05 cm. If great accuracy is not required, we may argue as follows. The volume of all the walls of the box represents the increment Δy of the function y = x³ for x = 10 and Δx = 0.1. So we find approximately: Δy ≈ dy = 3x²Δx = 3 · 10² · 0.1 = 30 cm³.
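The same comparison in a few-line sketch (Python; dimensions as in the example):

```python
# Exact increment dy_exact = (x+dx)**3 - x**3 vs. differential dy = 3*x**2*dx.
x, dx = 10.0, 0.1   # interior side 10 cm; walls of 0.05 cm on each side

exact = (x + dx) ** 3 - x ** 3  # 30.301 cm^3
approx = 3 * x ** 2 * dx        # 30.0 cm^3
print(exact, approx, exact - approx)
```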

DIFFERENTIALS – ANY ST-DIMENSIONAL STEPS

The disquisition about which ‘minimalist finitesimal’ allows us to differentiate an S≈T algebraic symmetry brings us to the ‘praxis’ of calculus techniques that overcome by ‘approximations’ the quest for the finitesimal quanta in space or time susceptible of calculus manipulation; this gave birth to the praxis of finding differentials, which are the minimal F(Y) quanta to work with to obtain accurate results (hence normally a spatial finitesimal of change under a time-dependent function). This was the origin of the calculus of differentials.

As always in praxis, the concept is based in the duality between huminds that measure with fixed rulers, lineal steps, over a cyclical, moving Universe. So Minds measure Aristotelian, short lines, in a long, curved Universe.

So the question comes down to which minimalist lineal step of a mind is worthy of making accurate calculations of those long, curved Universal paths.

It is then obvious that the derivative of a lineal motion has more subtle elements than its simplest algebraic form, the ×÷ lineal operation of ‘reproductive speed’; and so the concept of a differential, to measure the difference between steady-state lineal reproduction and the variations observed in a curve, appeared as the strongest tool of approximation for both types of functions.

As we have considered that most differential equations will be of the form: F(s) ≈ g(t), where s and t are any of the 5 Dimensions of Space ($, S, §, ∫, •) or 5 Dimensions of time (t, T, ð, ∂, O), whose change respect to each other, we are bound to study…  showing how a spatial whole is dependent on the change and form of a world cycle, we shall consider generally that y->s and x->t…

The result of this change will be a much more GENERIC CONCEPT OF SPEED OF CHANGE in any OF THE DIMENSIONS OF ENTROPY, MOTION, ITERATION, INFORMATION OR FORM that defines the Universe, letting us introduce its 3 fundamental parameters, S/t=speed, t/s=density and s x t = momentum/force/energy… in a natural way with its multiple different meanings, Ðisomorphic to each other – as we repeat the s and t of the general

TIME DIMENSIONS BECOME SPACE DIMENSIONS BECOME TIME…

Physical equations in differential form, a general overview of its main species. History

Differential equations first came into existence with the invention of calculus by Newton and Leibniz. Newton listed three kinds of differential equations: those involving two derivatives one of space and time (or fluxions) and only one undifferentiated quantity (space or time parameter); those involving 2 derivatives and two quantities of space and time; and those involving more than two derivatives.

His analysis was thus right on the spot, as he referred changes to changes in space or time, hence ∫∂ with ST-eps – a fact later forgotten and today thoroughly missed with the 'view' of time as a single dimension of space (1D lineal motion confused with 4D entropy in philosophy of science).

It is still a good classification of partial differential equations as 'time-like' (∂x, ∂²x, ∂³x), 'space-like' (∂y, ∂²y, ∂³y) or 'space-time-like' (∂x∂y, ∂y∂x) – the main variations that represent T, TT, TTT; S, SS, SSS; ST, TS steps, which are the main 5D, 4D and 1, 2, 3D changes of the Universe.

And it speaks of the enormous range of real phenomena that ∫∂ functions can describe as the essential operandi of mathematical physics and of any ∆st phenomena.

What allowed all those ∆st phenomena to enter the world of quantitative mathematics was the invention of the pendulum clock to measure time in lineal fashion and of the telescope to measure space. Both gave birth to the 2nd age of science, the mathematical/scientific method, added to the experimental Aristotelian method, which now the isomorphic GST age of stience completes.

In 1609 appeared the “New astronomy” of Kepler, containing his first and second laws for the motion of the planets around the sun.

In 1609 too Galileo directed his recently constructed telescope, though still small and imperfect, toward the night sky; the first glance in a telescope was enough to destroy the ideal celestial spheres of Aristotle and the dogma of the perfect form of celestial bodies. The surface of the moon was seen to be covered with mountains and pitted with craters. Venus displayed phases like the Moon, Jupiter was surrounded by four satellites and provided a miniature visual model of the solar system. The Milky Way fell apart into separate stars, and for the first time men felt the staggeringly immense distance of the stars. No other scientific discovery has ever made such an impression on the civilised world.

It also killed an equally valid method of thought represented by the Greeks and Leonardo: the idealised understanding of the canonical perfect GST game of existence, of which we were all impure platonic forms, bound to dissolve, unlike the perfect game of the ∞ Universe, which is immortal.

Man never went back because alas! what really mattered was ballistics, mechanics and power. Idealism died away:

The further development of navigation, and consequently of astronomy, and also the new development of technology and mechanics necessitated the study of many new mathematical problems. The novelty of these problems consisted chiefly in the fact that they required mathematical study of the laws of motion in a broad sense of the word. And now we had machines to measure it better than the artistic Sp-eye-T=words of the human space-time mind.

 

The mean value theorem.

The differential expresses the approximate value of the increment of the function in terms of the increment of the independent variable and of the derivative at the initial point. So for the increment from x = a to x = b, we have:

ƒ(b) – ƒ(a)≈ ƒ'(a) (b-a).

It is possible to obtain an exact equation of this sort if we replace the derivative f′(a) at the initial point by the derivative at some intermediate point, suitably chosen in the interval (a, b). More precisely: If y = f(x) is a function which is differentiable on the interval [a, b], then there exists a point ξ, strictly within this interval, such that the following exact equality holds:

ƒ(b)-ƒ(a)=ƒ'(ξ)(b-a)

The geometric interpretation of this "mean-value theorem" (also called Lagrange's formula or the finite-difference formula) is extraordinarily simple. Let A, B be the points on the graph of the function f(x) which correspond to x = a and x = b, and let us join A and B by the chord AB.

Now let us move the straight line AB, keeping it constantly parallel to itself, up or down. At the moment when this straight line cuts the graph for the last time, it will be tangent to the graph at a certain point C. At this point (let the corresponding abscissa be x = ξ), the tangent line will form the same angle of inclination α as the chord AB. But for the chord we have:

tan α = (ƒ(b) − ƒ(a)) / (b − a).       On the other hand, at the point C: tan α = ƒ′(ξ).

This equation is the mean-value theorem, which has the peculiar feature that the point ξ appearing in it is unknown to us; we know only that it lies somewhere in the interval (a, b).
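Though ξ is unknown in general, it can be located numerically in any concrete case. A minimal sketch in Python, for the illustrative (assumed) choice f(x) = x³ on [0, 2], where the chord slope is 4 and therefore ξ = √(4/3):

```python
# Bisection on g(x) = f'(x) - slope to find the mean-value point ξ.
f = lambda x: x ** 3
df = lambda x: 3 * x ** 2
a, b = 0.0, 2.0
slope = (f(b) - f(a)) / (b - a)          # = 4

lo, hi = a, b
for _ in range(60):
    mid = (lo + hi) / 2
    # keep the half-interval where g = df - slope changes sign
    if (df(lo) - slope) * (df(mid) - slope) <= 0:
        hi = mid
    else:
        lo = mid

print((lo + hi) / 2, (4 / 3) ** 0.5)     # both ≈ 1.1547
```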

Its interpretation in ∆st is that ƒ'(ξ) corresponds to the value of a finitesimal lying between both.

FIRST, the fact that membranes must determine the beginning and end point of any function for it to be meaningful and solvable. And indeed, only because we know where the domain starts and ends are we sure to find a mean point.

If we consider then a T.œ mean value theorem, where ƒ(b) > ƒ(a) if we are deriving in space, where ƒ(b) = Max. S represents the parameter of the membrane, ƒ(a) will represent the singularity, and so we shall find in between a finitesimal part of the vital energy of the T.œ with a mean value within that of Max. S (membrane) and Min. S (singularity). And vice versa, if we are deriving in search of the minimal quanta of time, ƒ(a) > ƒ(b), where ƒ(a) represents the time speed of the singularity and ƒ(b) the time speed of the membrane. And the mean value will be that of the finitesimal.

But in spite of this indeterminacy, the formula has great theoretical significance and is part of the proof of many theorems in analysis.

The immediate practical importance of this formula is also very great, since it enables us to estimate the increase in a function when we know the limits between which its derivative can vary. For example:

|sin b – sin a| = |cos  ξ| (b-a) ≤ b-a.

Here a, b and ξ are angles, expressed in radian measure; ξ is some value between a and b; ξ itself is unknown, but we know that |cos  ξ |≤1

Another immediate expression of the theorem, which allows us to derive a general method for calculating the limits and approximations of polynomials with derivatives, is:

For arbitrary functions ϕ(x) and ψ(x) differentiable in the interval [a, b], provided only that ψ′(x) ≠ 0 in (a, b), the equation (ϕ(b) − ϕ(a)) / (ψ(b) − ψ(a)) = ϕ′(ξ) / ψ′(ξ) holds, where ξ is some point in the interval (a, b).

From the mean value theorem it is also clear then that a function whose derivative is everywhere equal to zero must be a constant; at no part of the interval can it receive an increment different from zero. Analogously, it is easy to prove that a function whose derivative is everywhere positive must everywhere increase, and if its derivative is negative, the function must decrease.

And so the classic mean-value theorem allows us to introduce an essential element of ∫∂ which will open up the ∆st calculus of worldcycles of existence: the standing points of a function.

The mean value, set for the region between the limiting points of the curve – which must be taken in a higher step-timespace as two sections of a bipodal spherical line, part of the membrane of a 3D form – gives us then a value for the vital energy to be expressed with a scalar. And the initial and final points of the segment become the maximal and minimal of the function in its F(x) values.

It is then, between those two limits, a question of finding the points of the vital energy – among them the singularity, Max. S × t – to have a well-defined T.œ in its membrane (maximal and minimal values), volume of energy (mean value) and maximal point of the singularity.

Maxima and minima. The 3 standing points of a world cycle.

The minimal reality is a 3D² form seen in a single plane, with a singularity @-mind, a membrane and a vital energy within. When we make a holographic broken image of this reality, the simplest way to do it is in four cartesian regions, TT, ST, ts and ss, which correspond to the +1 +1, +1 −1, −1 +1 and −1 −1 quadrants of the plane.

We can then dissect the sphere in antipodal points related to the identity neutral number of the 0-1 sphere of time probabilities, which the largest whole maximises in its antipodal points. If we consider as the antipodal points the emergent and final death points – whose imperfect motions still close into near zero-sums – the maximal middle point will be the singularity, Max. S × Max. t.

One of the simplest and most important applications of the derivative in that sense is in the theory of maxima and minima.

Let us suppose that on a certain interval a≤t≤b we are given a function S = f(t) which is not only continuous but also has a derivative at every point. Our ability to calculate the derivative enables us to form a clear picture of the graph of the function. On an interval on which the derivative is always positive the tangent to the graph will be directed upward. On such an interval the function will increase; that is, to a greater value of t will correspond a greater value of f(t). On the other hand, on an interval where the derivative is always negative, the function will decrease; the graph will run downward.

We have drawn the graph of an ∆st function of the general form S = f(T), where S is any dimension of a whole world cycle or T.Œ, and T any time motion or action.

It is defined on the interval between a minimal quanta in space or time (t1) and its limit as a function (d).

And it can represent any S=T duality, or more complex 5Ds=5Dt forms or simpler ones. We can also change the s and t coordinates according to the Galilean paradox, etc. Hence the ginormous numbers of applications, but essentially it will define a process of change in space-time between the emergence of the phenomena at ST1 AND ITS DEATH mostly by scattering and entropic dissolution of form at d.

And in most cases it will have a bell-curved form of fast growth after emergence in its first age of maximal motion (youth, 1D), till a maximal point where it often will reproduce into a discontinuous parallel form (not shown in the graph) at Max. S × Max. T, which will provoke its loss of energy and start its diminution till its extinction at point d.

Thus the best way to express quantitatively, in terms of S-T parameters (mostly information and energy), any world cycle of any time-space superorganism is a curve where we can find those key standing points at which a change of age, st-ate or motion happens.

Of special interest thus are the points of this graph whose abscissas are t0, t1, t2, t3, t4.
At the point t0 the function f(t) is said to have a local maximum; by this we mean that at this point f(t) is greater than at neighboring points; more precisely, ƒ(t0) ≥ ƒ(t) for every t in a certain interval around the point t0.
A local minimum is defined analogously. For our function a local maximum occurs at the points t0 and t3, and a local minimum at the point t1.
At every maximum or minimum point, if it is inside the interval [a, b], i.e., if it does not coincide with one of the end points a or b, the derivative must be equal to zero.
This last statement, a very important one, follows immediately from the definition of the derivative as the limit of the ratio ΔS/ΔT. In fact, if we move a short distance from the maximum point, then ∆S≤0.

Thus for positive ΔT the ratio ΔS/ΔT is nonpositive, and for negative ΔT the ratio ΔS/ΔT is nonnegative. The limit of this ratio, which exists by hypothesis, can therefore be neither positive nor negative and there remains only the possibility that it is zero.

By inspection of the diagram it is seen that this means that at maximum or minimum points (it is customary to leave out the word “local,” although it is understood) the tangent to the graph is horizontal.

In the figure we should remark that at the points t2 and t4 the tangent is also horizontal, just as it is at the points t0, t1, t3, although at t2 and t4 the function has neither maximum nor minimum. In general, there may be more points at which the derivative of the function is equal to zero (stationary points) than there are maximum or minimum points.
Determination of the greatest and least values of a function.

In numerous technical questions it is necessary to find the point t at which a given function f(t) attains its greatest or its least value on a given interval.
In case we are interested in the greatest value, we must find t0 on the interval [a, b] for which, among all t on [a, b], the inequality ƒ(t0) ≥ ƒ(t) is fulfilled.
But now the fundamental question arises, whether in general there exists such a point. By the methods of modern analysis it is possible to prove the following existence theorem:

If the function f(t) is continuous on a finite interval, then there exists at least one point on the interval for which the function attains its maximum (minimum) value on the interval [a, b].
From what has been said already, it follows that these maximum or minimum points must be sought among the “stationary” points. This fact is the basis for the following well-known method for finding maxima and minima.
First we find the derivative of f(t), and then solve the equation obtained by setting it equal to zero.

If t1, t2, ···, tn are the roots of this equation, we then compare the numbers f(t1), f(t2), ···, f(tn) with one another. Of course, it is necessary to take into account that the maximum or minimum of the function may be found not within the interval but at an end (as is the case with the minimum in the figure) or at a point where the function has no derivative.

Thus to the points t1, t2, ···, tn, we must add the ends a and b of the interval and also those points, if they exist, at which there is no derivative. It only remains to compare the values of the function at all these points and to choose among them the greatest or the least.
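A minimal sketch of this well-known method (the function f(t) = t³ − 3t and the interval [−2, 3] are illustrative assumptions):

```python
import numpy as np

f = lambda t: t ** 3 - 3 * t
a, b = -2.0, 3.0

# stationary points: roots of f'(t) = 3t² - 3 that lie inside [a, b]
stationary = [t for t in np.roots([3, 0, -3]) if a <= t <= b]

# compare f at the stationary points and at the ends of the interval
candidates = stationary + [a, b]
values = [f(t) for t in candidates]
print("greatest:", max(values))   # 18, attained at the end point t = 3
print("least:   ", min(values))   # -2, attained at t = 1 (and at t = -2)
```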

With respect to the stated existence theorem, it is important to add that this theorem ceases, in general, to hold in the case that the function f(t) is continuous only on the open interval (a, b); that is, on the set of points t satisfying the inequalities a < t < b.

It is then necessary to consider an initial time point and a final time point, birth and death, emergence and extinction to have a determined solution.

 Derivatives of higher orders.

We have just seen how, for closer study of the graph of a function, we must examine the changes in its derivative f′(x). This derivative is a function of x, so that we may in turn find its derivative.
The derivative of the derivative is called the second derivative and is denoted by y″ = ƒ″(x).

Analogously, we may calculate the third derivative y‴ = ƒ‴(x) and more generally the nth derivative or, as it is also called, the derivative of nth order. But as there are no more than 3 'similar derivatives with meaning' in time (speed, acceleration, jerk) or space (distance, area and volume), beyond the 3rd derivative the use of derivatives is mainly as an approximation to polynomial equations – whose own solvability by radicals is not possible beyond the 4th power.

So it must be kept in mind that, for a certain value of x (or even for all values of x) this sequence may break off at the derivative of some order, say the kth; it may happen that f(k)(x) exists but not f(k + 1)(x). Derivatives of arbitrary order are therefore connected to the symmetry between power laws and ∫∂ operations in the 4th and inverse 5th Dimension, through the Taylor formula. For the moment we confine ourselves to the second and third derivatives for ‘real parameters’ of the 3 space volumes and time accelerations.

The second derivative has then, as we have seen, a simple significance in mechanics. Let s = f(t) be a law of motion along a straight line; then s′ is the velocity and s″ is the "velocity of the change in the velocity" or, more simply, the "acceleration" of the point at time t. For example, for a body falling under the force of gravity, s = gt²/2, so that s′ = gt and s″ = g. That is, the acceleration of falling bodies is constant.
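A numerical check of this statement (g = 9.8 m/s² and the probe time t = 2 s are illustrative values; central differences stand in for the exact derivatives):

```python
g, h = 9.8, 1e-3
s = lambda t: 0.5 * g * t ** 2      # law of fall

t = 2.0
v = (s(t + h) - s(t - h)) / (2 * h)              # s'(t)  -> g·t
a = (s(t + h) - 2 * s(t) + s(t - h)) / h ** 2    # s''(t) -> g

print(v, a)    # ≈ 19.6 (velocity) and ≈ 9.8 (constant acceleration)
```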

Significance of the second derivative; convexity and concavity.

The second derivative also has a simple geometric meaning. Just as the sign of the first derivative determines whether the function is increasing or decreasing, so the sign of the second derivative determines the side toward which the graph of the function will be curved.

Suppose, for example, that on a given interval the second derivative is everywhere positive. Then the first derivative increases, and therefore f′(x) = tan α increases, and the angle α of inclination of the tangent line itself increases (figure 17). Thus as we move along the curve it keeps turning constantly to the same side, namely upward, and is thus, as they say, "convex downward."

On the other hand, in a part of a curve where the second derivative is constantly negative, the graph of the function is "convex upward."

Criteria for maxima and minima; study of the graphs of curves.

If throughout the whole interval over which x varies the curve is convex upward and if at a certain point x0 of this interval the derivative is equal to zero, then at this point the function necessarily attains its maximum; and its minimum in the case of convexity downward. This simple consideration often allows us, after finding a point at which the derivative is equal to zero, to decide thereupon whether at this point the function has a local maximum or minimum.
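A minimal sketch of this second-derivative criterion (reusing the illustrative f(x) = x³ − 3x, whose stationary points are x = ±1):

```python
f = lambda x: x ** 3 - 3 * x

def d2(f, x, h=1e-4):
    # numerical second derivative by central differences
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

for x0 in (-1.0, 1.0):
    kind = "maximum" if d2(f, x0) < 0 else "minimum"
    print(x0, kind)    # -1.0 -> maximum, 1.0 -> minimum
```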

Now, the apparently equal nature at the first derivative of the minimal and maximal points of a being has also deep philosophical implications, as it makes indistinguishable at 'first sight' the processes of 'reproductive expansion' towards a maximum and of explosive decay into death – the 'two reversal' points of the 5D (maximal) and 4D (minimal) states of a cycle of existence – for which we have to make a second assessment (second derivative) to know if we are at the point of maximal life (5D) or maximal death (4D) of a world cycle.

And further on to know if the cycle will cease in a continuous flat encephalogram or will restart a new upwards trend.

Or in other words is any scalar, e>cc>m big-bang both the death and the birth of matter?

Finitesimal Quanta, as the limit of populations in space and the minimal action in time.

So there is behind the duality between the concept of limits and differentials (Newton’s vs. Leibniz’s approach), the concept of a minimal quanta in space or in time, which has been hardly explored by classic mathematics in its experimental meaning but will be the key to understand ‘Planckton’ (H-planck constants) and its role in the vital physics of atomic scales.

It is then essential, to grasp fully the workings of the Universe, to understand the relationship between scales and analysis – both in the down direction of derivatives and the up direction of integrals – and its parallelism with polynomials, which raise the dimensional scales of a system in a different, 'more lineal, social, inter-planar way'.

So polynomials are to limits what algebra is to calculus: space to time, and lineal algebra to curved geometry.

The vital interpretation though of that amazing growth of polynomials is far more scary.

Power laws, by the very fact of being lineal and maximising the growth of a function, ARE NOT REAL in the positive sense of infinite growth – a fantasy only taken seriously by our economists of greed and infinite usury debt interest… where the exponential function eˣ first appeared.

The fact is that in reality such exponentials only portray the decay and destruction of a mass of cellular/atomic beings ALREADY created by the much smaller processes of 're=product-ion', which is the second dimension, mostly operated with multiplication (of scalars or anticommutative cross vectors).

So the third dimension of operandi is a backwards motion – a lineal motion into death: only as the reversal of the growth of sums and multiplications do polynomials make sense of its properties.

Let us then see how the operations mimic the five dimensions, beyond the simplest ST, SS and TT steps, namely reproductive and 4D-5D inverted arrows.

In general we can establish as the main parameter of the singularity, its time frequency, which will be synchronised to the rotary motion or angular momentum of the cyclical membrane. They will appear as the initial conditions and boundary conditions of a derivative/integral function, which often will be able to define the values of the vital energy within, as the law of superposition should work between the 3 elements, such as:

1D (singularity) + 2D (Holographic principle) = 3D (vital energy).

In practice this means the ‘synchronicity in time of the clocks of the 3 parts of the being’ and the superposition of the solutions that belong to each of the 3 elements of any T.œ

4th dimension: Entropy: S∂ polynomial death dimension of decay.

POLYNOMIALS DO NOT EVOLVE REALITY towards an impossible infinite growth. THEY ARE the inverse decay process: the exponential extinction e⁻ˣ.

5th dimension: ∫T…

This is better understood by observing that the inverse function does in fact model growth in the different models of biology and physics, limited by the straight flat line of a carrying capacity.


The logarithmic function has as derivative a finitesimal, 1/x, which makes it interesting, as it models better the curve of growth from 0 to 1 in the emergent, fast, explosive ∆-1 seed state, while the inverse e⁻ˣ models the decay and death process.
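A small sketch contrasting the two curves (the carrying capacity K, rate r and emergence point t0 are illustrative parameters of the standard logistic function, not values given in the text):

```python
import math

K, r, t0 = 1.0, 1.0, 5.0
logistic = lambda t: K / (1 + math.exp(-r * (t - t0)))  # growth capped by K
decay = lambda t: math.exp(-t)                          # e^(-x) death/decay

for t in range(0, 11, 2):
    print(f"t={t:2d}  growth={logistic(t):.3f}  decay={decay(t):.5f}")
```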

Integrals and derivatives, which have a much slower growth than polynomials, on the other hand model much better the organic growth and 'wholeness' of a system integrated in space, as they integrate its 'indivisible' finitesimal quanta.

Thus integrals do move a social growth up into new ∆+1 5D planes. And their graphs are a curved geometry, which takes each lineal step (differential) upwards, but as it creates a new whole, part of its energy growth sinks and curves to give birth to the mind-singularity @, the wholeness that warps the whole and converts that energy into still, shrunk mind-mappings of information, often within the 3D particle-head.

We will retake the analysis of the more complex st-eps on 3, 4 and 5D, since most of the complex processes related to the 3rd dimension, as a mixture of S and T inner scales, will require more complex double or triple derivatives and integrals – only the 4D decay entropic explosion can be satisfied as the decay of the single ∆-1 finitesimal with a single variable.

Let us now move to the inverse 5D function:

5D ∫∆-1 INTEGRALS

5D: VORTEX OF INFORMATION: ∫T@. The culmination of the process of dimensional growth so far is the state of absolute stillness of the mind. It is the integral function of wholes, made of finitesimal 4D parts. And so as we integrate them we reconstruct the being in i=ts-elf:

All we have said changes though in 5D, where a force is exerted by a 5D SINGULARITY at the center of the vortex. How does the Galilean paradox observe this 'change'? By establishing a second dimension of 'dynamic time motion', which we call acceleration, or inversely by establishing a new dimension of space as the angular momentum decelerates, creating a volume of space from a present flat space-time sheet. And as both imply a change in ST volume, we find again the relevance of derivatives (finitesimal steps either in time frequency or space volume) and integrals (bringing in the whole spatial static view of the phenomenon) to calculate that change.

Galileo discovered something essential to ∆ST: Relativity of motion, which is also a distance:

The state of rest and motionlessness is unknown in nature, but a construct of the mind (@-view). The whole of nature, from the smallest particles up to the most massive bodies, is in a state of eternal creation and annihilation, in a perpetual flux, in unceasing motion and change. In the final analysis, every natural science studies some aspect of this temporal motion vs. spatial form duality. This is the ST question of analysis.

Next Newton and Leibniz studied the ∆-question of analysis: how small instants of time or pieces of space gather into larger wholes and vice versa, how to extract the finitesimal quanta from the larger whole.

Both questions put together gave birth to analysis. Hence the classic textbook definition:

“∆st Mathematical analysis is that branch of mathematics that provides methods for the quantitative investigation of various processes of change, motion, and dependence of one magnitude on another. ”

The name “infinitesimal analysis” says nothing about the subject matter under discussion but emphasizes the ∆-method.

We are dealing here with the special mathematical method of infinitesimals, or in its modern form, of limits.

The error of CLASSIC science is to consider that there are NO LIMITS to infinitesimals. Yet ∆-scales introduce a limit in the minimal quanta, or single frequency in space or time, of the parameter under study. So we shall talk of finitesimals and quanta, frequencies and bits ≈ minimal time cycles.

This was far more evident in the beginning, through the calculus of limits and Leibniz's concept of an infinitesimal as the inverse, 1/n, of a quantity, before the axiomatic method stubbornly decided to go 'inflationary' with the language of information (as all languages do in their 3rd age) and talk about bizarre infinities (Cantor).

In fact, analysis is just in its theory an inflationary extension (classic error of all languages from money to fiction words) of the method of limits.

Integrals

Thus integrals tend to represent the growth of a space population, till it reaches a wholeness in a closed domain. So we can do 'line integrals', 'surface integrals' and 'volume integrals' in simultaneous space.

Integrals though are also related to a world cycle, especially the motion of time closer to the action of reproduction in space, as nature is constantly building integrated wholes by the accumulation of single time actions of reproduction that become 'clone' cells-atoms-citizens of an integrated supœrganism.

It is precisely in those more complex games of integration of a 'flow' of time actions of reproduction, and of 'constraints' on time actions by an integral line membrane, where we find the more subtle use of both functions. Let us consider it in more detail.

But they can also portray the growth or diminution of populations of space, and then as space is symmetric, we can use inverse functions, normally related to ‘e’.

Growth and decay are unlike: when a system decreases, its space is dying; when it grows, it does so more slowly. So we find also different speeds on the two time arrows of space through the 5th dimension.

Space is symmetric; in its directions and they co-exist together. Time is not symmetric and it is experienced as a sequential pattern of single Time cycles. So Time parameters are shorter in form, space is a more extended system. Of time we see only an instant, of space we integrate instants/cycles of time and sum them as frequencies which all play the same world cycle.

Time though often is just the reproduction of a new unit of space. Thus, time cycles become populations of a spatial herd due to its reproduction of a ‘seed’ form.

Space thus is the ‘mirror reproductive symmetry’ of ‘frequencies in time’, its tail of memories, by reproduction, expansion, and radiation along the path of the singular timeline of the wave.

So in broad strokes derivatives and integrals cover a wide range of 5D themes: the infinitesimal units of time frequencies and the complex herds of space populations.

Whereas given the simultaneity properties of space, integrals tend to be used to calculate space populations, and given the individual sequential structure of time frequencies, derivatives are most often used to calculate time motions.

HOLOGRAPHIC INTEGRALS: Area.

The area of a function has different meanings, but generally speaking is a measure of its vital energy between the singularity at 0 point or initial condition and its membrane, the function itself, between the limits of the domain. So it is an operation constantly performed by membrane and singularity as the initial point and boundary condition, by ‘ex-foliating’ in layers the vital space and adding it up piece by piece, finitesimal by finitesimal.

The way mathematics treats integrals deals mostly with the obsession for perfect measure, achieved by reducing the sections of the being to minimal infinitesimals. We have discussed the limits and irrelevance of such an approach. It is more interesting to discuss the different types of time-like, space-like and combined space-time dimensional functions integrated through this procedure.

And to consider the question of the 'boundary conditions', in which the membrane determines the volume which is integrated as the space-time area surrounded by the being.

Let us suppose that a curve above the x-axis forms the graph of the function y = f(x). We attempt to find the area S of the segment bounded by the line y = f(x), by the x-axis and by the straight lines drawn through the points x = a and x = b parallel to the y-axis.

To solve this problem we proceed as follows. We divide the interval [a, b] into n parts, not necessarily equal. We denote the length of the first part by Δx1, of the second by Δx2, and so forth up to the final part Δxn. In each segment we choose points ξ1, ξ2, ···, ξn and set up the sum:

S_n = ƒ(ξ_1)Δx_1 + ƒ(ξ_2)Δx_2 + ··· + ƒ(ξ_n)Δx_n

The magnitude Sn is obviously equal to the sum of the areas of the rectangles shaded in the figure.

The finer we make the subdivision of the segment [a, b], the closer Sn will be to the area S. If we carry out a sequence of such constructions, dividing the interval [a, b] into successively smaller and smaller parts, then the sums Sn will approach S.

The possibility of dividing [a, b] into unequal parts makes it necessary for us to define what we mean by "successively smaller" subdivisions. We assume not only that n increases beyond all bounds but also that the length of the greatest Δxi in the nth subdivision approaches zero. Thus the calculation of the desired area has in this way been reduced to finding the limit:

S = lim_{max Δx_i → 0} Σ ƒ(ξ_i) Δx_i

We note that when we first set up the problem, we had only an empirical idea of what we mean by the area of our curvilinear figure, but we had no precise definition. But now we have obtained an exact definition of the concept of area: it is the limit:

S = lim_{max Δx_i → 0} Σ ƒ(ξ_i) Δx_i

We now have not only an intuitive notion of area but also a mathematical definition, on the basis of which we can calculate the area numerically.
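For instance, a minimal sketch of that numerical calculation (midpoint choice of the ξi; the function f(x) = x² on [0, 1], with exact area 1/3, is an illustrative assumption):

```python
f = lambda x: x ** 2
a, b = 0.0, 1.0

for n in (10, 100, 1000, 10000):
    dx = (b - a) / n
    # ξi taken as the midpoint of each of the n equal parts
    Sn = sum(f(a + (i + 0.5) * dx) * dx for i in range(n))
    print(n, Sn)    # the sums Sn approach 1/3 as max Δxi -> 0
```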

 

We have assumed that ƒ(x) ≥ 0. If f(x) changes sign, then, as in the figure, the limit will give us the algebraic sum of the areas of the segments lying between the curve y = f(x) and the x-axis, where the segments above the x-axis are taken with a plus sign and those below with a minus sign.

Definite integral.

The need to calculate the integral Sum limit arises in many other problems in which a new dimension is reached by the sum of finitesimal paths. For example, suppose that a point is moving along a straight line with variable velocity v = f(t). How are we to determine the distance s covered by the point in the time from t = a to t = b?

Let us assume that the function f(t) is continuous; that is, in small intervals of time the velocity changes only slightly. We divide the interval [a, b] into n parts, of length Δt1, Δt2, ···, Δtn. To calculate an approximate value for the distance covered in each interval Δti, we will suppose that the velocity in this period of time is constant, equal throughout to its actual value at some intermediate point ξi. The whole distance covered will then be expressed approximately by the sum:

s_n = Σ ƒ(ξ_i) Δt_i

and the exact value of the distance s covered in the time from a to b will be the limit of such sums for finer and finer subdivisions; that is, it will be the limit:

s = lim_{max Δt_i → 0} Σ ƒ(ξ_i) Δt_i

It would be easy to give many examples of practical problems leading to the calculation of such a limit. We will discuss some of them later, but for the moment the examples already given will sufficiently indicate the importance of this idea. The limit is called the definite integral of the function f(x) taken over the interval [a, b], and it is denoted by:

∫_a^b ƒ(x) dx

The expression f(x)dx is called the integrand, a and b are the limits of integration; a is the lower limit, b is the upper limit.

The connection between differential and integral calculus.

The problem considered then reduces to calculation of the definite integral:

∫_0^t gt dt

(the distance covered by a falling body with velocity v = gt).

Another example is the problem of finding the area bounded by the parabola y = x².

Here the problem reduces to calculation of the integral:

∫_0^a x² dx

We were able to calculate both these integrals directly, because we have simple formulas for the sum of the first n natural numbers and for the sum of their squares. But for an arbitrary function f(x), we are far from being able to add up the sum  (that is, to express the result in a simple formula) if the points ξi, and the increments Δxi are given to suit some particular problem. Moreover, even when such a summation is possible, there is no general method for carrying it out; various methods, each of a quite special character, must be used in the various cases.

So we are confronted by the problem of finding a general method for the calculation of definite integrals. Historically this question interested mathematicians for a long period of time, since there were many practical aspects involved in a general method for finding the area of curvilinear figures, the volume of bodies bounded by a curved surface, and so forth.

We have already noted that Archimedes was able to calculate the area of a segment and of certain other figures. The number of special problems that could be solved, involving areas, volumes, centers of gravity of solids, and so forth, gradually increased, but progress in finding a general method was at first extremely slow. The general method could not be discovered until sufficient theoretical and computational material had been accumulated through the demands of practical life.

The work of gathering and generalizing this material proceeded very gradually until the end of the Middle Ages; and its subsequent energetic development was a direct consequence of the rapid growth in the productive powers of Europe resulting from the breakup of the former (feudal) methods of manufacturing and the creation of new ones (capitalistic).

The accumulation of facts connected with definite integrals proceeded alongside the corresponding investigations of problems related to the derivative of a function. The reader already knows that this immense preparatory labor was crowned with success in the 17th century by the work of Newton and Leibnitz. It is in this sense that Newton and Leibnitz are the creators of the differential and integral calculus.

One of the fundamental contributions of Newton and Leibnitz consists of the fact that they finally cleared up the profound connection between differential and integral calculus, which provides us, in particular, with a general method of calculating definite integrals for an extremely wide class of functions.

To explain this connection, we turn to an example from mechanics.

We suppose that a material point is moving along a straight line with velocity v = f(t), where t is the time. We already know that the distance σ covered by our point in the time between t = t1 and t = t2 is given by the definite integral:

∫_{t1}^{t2} ƒ(t) dt

Now let us assume that the law of motion of the point is known to us; that is, we know the function s = F(t) expressing the dependence on the time t of the distance s calculated from some initial point A on the straight line. The distance σ covered in the interval of time [t1, t2] is obviously equal to the difference: σ = F(t2) − F(t1)

In this way we are led by physical considerations to the equality:

∫_{t1}^{t2} ƒ(t) dt = F(t2) − F(t1)

which expresses the connection between the law of motion of our point and its velocity.

From a mathematical point of view the function F(t) may be defined as a function whose derivative for all values of t in the given interval is equal to f(t), that is:

F'(t)= ƒ(t).    Such a function is called a primitive for f(t).

We must keep in mind that if the function f(t) has at least one primitive, then along with this one it will have an infinite number of others; for if F(t) is a primitive for f(t), then F(t) + C, where C is an arbitrary constant, is also a primitive. Moreover, in this way we exhaust the whole set of primitives for f(t), since if F1(t) and F2(t) are primitives for the same function f(t), then their difference ϕ(t) = F1(t) − F2(t) has a derivative ϕ′(t) that is equal to zero at every point in a given interval, so that ϕ(t) is a constant.

From a physical point of view the various values of the constant C determine laws of motion which differ from one another only in the fact that they correspond to all possible choices for the initial point of the motion.

We are thus led to the result that for an extremely wide class of functions f(x), including all cases where the function f(x) may be considered as the velocity of a point at the time x, we have the following equality:

∫_a^b ƒ(x) dx = F(b) − F(a)

where F(x) is an arbitrary primitive for f(x).

This equality is the famous formula of Newton and Leibnitz, which reduces the problem of calculating the definite integral of a function to finding a primitive for the function and in this way forms a link between the differential and the integral calculus.

Many particular problems that were studied by the greatest mathematicians are automatically solved by this formula, stating that the definite integral of the function f(x) on the interval [a, b] is equal to the difference between the values of any primitive at the left and right ends of the interval. It is customary to write this difference thus:

F(b) − F(a) = F(x) |_a^b

Example 1. The equality (x³/3)′ = x² shows that the function x³/3 is a primitive for the function x². Thus, by the formula of Newton and Leibnitz:

∫_a^b x² dx = b³/3 − a³/3
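The formula is easy to verify numerically for other primitives as well; a sketch with the assumed pair f(x) = cos x, F(x) = sin x:

```python
import math

f, F = math.cos, math.sin
a, b = 0.0, 1.0
n = 100000
dx = (b - a) / n

# midpoint Riemann sum for the definite integral of f over [a, b]
riemann = sum(f(a + (i + 0.5) * dx) * dx for i in range(n))
print(riemann, F(b) - F(a))    # both ≈ 0.8414709...
```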

Example 2. Let c and c′ be two electric charges, on a straight line at distance r from each other. The attraction F between them is directed along this straight line and is equal to:

F = a/r²   (a = kcc′, where k is a constant). The work W done by this force, when the charge c remains fixed but c′ moves along the interval [R1, R2], may be calculated by dividing the interval [R1, R2] into parts Δri.

On each of these parts we may consider the force to be approximately constant, so that the work done on each part is equal to (a/ri²)Δri. Making the parts smaller and smaller, we see that the work W is equal to the integral:

W = ∫_{R1}^{R2} (a/r²) dr

The value of this integral can be calculated at once, if we recall that:

(−a/r)′ = a/r²

So that:

W = −(a/r) |_{R1}^{R2} = a (1/R1 − 1/R2)

In particular, the work done by the force F as the charge c′, initially at a distance R1 from c, moves out to infinity, is equal to:

W = a/R1
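A numerical sketch of this Example 2 (the values a = 1, R1 = 1, R2 = 10 are illustrative):

```python
a_const, R1, R2 = 1.0, 1.0, 10.0
n = 100000
dr = (R2 - R1) / n

# sum the work (a/r²)·Δr over small parts of [R1, R2]
W = sum(a_const / (R1 + (i + 0.5) * dr) ** 2 * dr for i in range(n))

print(W)                            # ≈ 0.9
print(a_const * (1 / R1 - 1 / R2))  # exact value from the primitive -a/r
print(a_const / R1)                 # work out to infinity: a/R1 = 1
```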

From the arguments given above for the formula of Newton and Leibnitz, it is clear that this formula gives mathematical expression to an actual tie existing in the objective world. It is a beautiful and important example of how mathematics gives expression to objective laws.

We should remark that in his mathematical investigations, Newton always took a physical point of view. His work on the foundations of differential and integral calculus cannot be separated from his work on the foundations of mechanics.

The concepts of mathematical analysis, such as the derivative or the integral, as they presented themselves to Newton and his contemporaries, had not yet completely “broken away” from their physical and geometric origins, such as velocity and area. In fact, they were half mathematical in character and half physical. The conditions existing at that time were not yet suitable for producing a purely mathematical definition of these concepts. Consequently, the investigator could handle them correctly in complicated situations only if he remained in close contact with the practical aspects of his problem even during the intermediate (mathematical) stages of his argument.

From this point of view the creative work of Newton was different in character from that of Leibnitz. Newton was guided at all stages by a physical way of looking at the problem. But the investigations of Leibnitz do not have such an immediate connection with physics, a fact that in the absence of clear-cut mathematical definitions sometimes led him to mistaken conclusions. On the other hand, the most characteristic feature of the creative activity of Leibnitz was his striving for generality, his efforts to find the most general methods for the problems of mathematical analysis.

The greatest merit of Leibnitz was his creation of a mathematical symbolism expressing the essence of the matter. The notations for such fundamental concepts of mathematical analysis as the differential dx, the second differential d²x, the integral ∫y dx, and the derivative d/dx were proposed by Leibnitz. The fact that these notations are still used shows how well they were chosen.

One advantage of a well-chosen symbolism is that it makes our proofs and calculations shorter and easier; also, it sometimes protects us against mistaken conclusions. Leibnitz, who was well aware of this, paid especial attention in all his work to the choice of notation.

The evolution of the concepts of mathematical analysis (derivative, integral, and so forth) continued, of course, after Newton and Leibnitz and is still continuing in our day; but there is one stage in this evolution that should be mentioned especially. It took place at the beginning of the last century and is related particularly to the work of Cauchy.

Cauchy gave a clear-cut formal definition of the concept of a limit and used it as the basis for his definitions of continuity, derivative, differential, and integral. These definitions have been introduced at the corresponding places in the present chapter. They are widely used in present-day analysis.

The great importance of these achievements lies in the fact that it is now possible to operate in a purely formal way not only in arithmetic, algebra, and elementary geometry, but also in this new and very extensive branch of mathematics, in mathematical analysis, and to obtain correct results in so doing.

Regarding practical application of the results of mathematical analysis, it is now possible to say: If the original data are verified in the actual world, then the results of our mathematical arguments will also be verified there. If we are properly assured of the accuracy of the original data, then there is no need to make a practical check of the correctness of the mathematical results; it is sufficient to check only the correctness of the formal arguments.

This statement naturally requires the following limitation. In mathematical arguments the original data, which we take from the actual world, are true only up to a certain accuracy. This means that at every step of our mathematical argument the results obtained will contain certain errors, which may accumulate as the number of steps in the argument increases.

Returning now to the definite integral, let us consider a question of fundamental importance. For what functions f(x), defined on the interval [a, b], is it possible to guarantee the existence of the definite integral:

∫_a^b ƒ(x) dx

namely a number to which the sum:

s_n = Σ ƒ(ξ_i) Δx_i

tends as a limit as max Δx_i → 0? It must be kept in view that this number is to be the same for all subdivisions of the interval [a, b] and all choices of the points ξ_i.

Functions for which the definite integral, namely the limit above, exists are said to be integrable on the interval [a, b]. Investigations carried out in the last century show that all continuous functions are integrable.

But there are also discontinuous functions which are integrable. Among them, for example, are those functions which are bounded and either increasing or decreasing on the interval [a, b].

The function that is equal to zero at the rational points in [a, b] and equal to unity at the irrational points may serve as an example of a nonintegrable function, since for an arbitrary subdivision the integral sum s_n will be equal to zero or unity, depending on whether we choose the points ξ_i as rational numbers or irrational.

Let us note that in many cases the formula of Newton and Leibnitz provides an answer to the practical question of calculating a definite integral. But here arises the problem of finding a primitive for a given function; that is, of finding a function that has the given function for its derivative. We now proceed to discuss this problem. Let us note by the way that the problem of finding a primitive has great importance in other branches of mathematics also, particularly in the solution of differential equations.

As we stated before, integrals are mostly useful when we are studying a 'defined' full Spe<ST>Tiƒ system with a membrane or contour closing the surface – as integrals are more concerned with 'space' and derivatives with 'time' – and, further on, those which integrate space-time systems, the double and triple integrals.

Calculus thus deals with all types of vital spaces enclosed by time functions, with a 'scalar' point of view, the parameter that measures what the point of view extracts, in symbiosis with the membrane, from the vital space it encloses. Alas, this quantity absorbed and ab=used by the point of view on the vital space would be called 'Energy', the vital space 'field', the membrane 'frequency', the finitesimal 'quanta or Universal constant', and the scalar point of view 'active magnitude'.

The fundamental language of physics is that of differential equations, which allow us to measure the content of vital space of a system. The richness and variety of 'world species' will define many variations on the theme. Sometimes there will not be a central point of view, and we talk of a 'liquid state', where volumes will not have a 'gradient', but 'Pressure', the controlling parameter of the time membrane, will be equal or related to the gradient of the external world p.o.v. of the Earth (gravitational field).

Then we shall integrate along 3 parameters: the density that defines the liquid, the height that defines the gradient and the volume enclosed. Liquids, due to the simplicity of lacking an internal p.o.v., would be the first physical application of Leibniz's findings, by his students, the Bernoulli family. Next a violin player would find the differential equation of waves – the essential equation of the membranes of present time of all systems. The 3rd type of equations, those of the central point of view, would have to wait for a mathematician, Poisson – later refined by Einstein in his General Relativity.

This is the error of Newton. All cycles are finite, as they close into themselves. All worldcycles of life and death are finite as they end as they begun in the dissolution of death. All entropic motions stop. All time vortices once they have absorbed all the entropy of their territory become wrinkled, and die. Newton died, his ‘time duration’ did not extend to infinity.

But those minds measure, from their self-centered point of view, only a part of the Universe, and the rest remains obscure. So all of them display the paradox of the ego, as they confuse the whole Universe with their world, and see themselves larger than all that they don't perceive. Hence, as Descartes wittily warned the reader in his first sentences, 'every human being thinks he is gifted with intelligence'.

The ternary parts of a T.œ: its calculus.

We have already studied the process of integration for functions of one variable defined on a one-dimensional region, namely an interval. But the analogous process may be extended to functions of two, three, or more variables, defined on corresponding regions.

For example, let us consider a surface z = ƒ(x, y)

defined in a rectangular system of coordinates, and on the plane Oxy let there be given a region G bounded by a closed curve Γ. It is required to find the volume bounded by the surface, by the plane Oxy and by the cylindrical surface passing through the curve Γ with generators parallel to the Oz axis (figure 33). To solve this problem we divide the plane region G into subregions by a network of straight lines parallel to the axes Ox and Oy and denote by: G1, G2… Gn.


those subregions which consist of complete rectangles. If the net is sufficiently fine, then practically the whole of the region G will be covered by the enumerated rectangles. In each of them we choose at will a point:

(ξ_i, η_i)

and, assuming for simplicity that Gi denotes not only the rectangle but also its area, we set up the sum:

V_n = Σ ƒ(ξ_i, η_i) G_i

It is clear that, if the surface is continuous and the net is sufficiently fine, this sum may be brought as near as we like to the desired volume V. We will obtain the desired volume exactly if we take the limit of this sum for finer and finer subdivisions (that is, for subdivisions such that the greatest of the diagonals of our rectangles approaches zero):

V = lim Σ ƒ(ξ_i, η_i) G_i

From the point of view of analysis it is therefore necessary, in order to determine the volume V, to carry out a certain mathematical operation on the function f(x, y) and its domain of definition G, an operation indicated by the left side of the equality above. This operation is called the integration of the function f over the region G, and its result is the integral of f over G. It is customary to denote this result in the following way:

∬_G ƒ(x, y) dx dy
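A minimal sketch of this double integral as a limit of sums (the surface z = x² + y² over the unit square, with exact value 2/3, is an illustrative assumption):

```python
f = lambda x, y: x ** 2 + y ** 2

for n in (10, 100, 500):
    d = 1.0 / n    # square subregions Gi of side d and area d²
    V = sum(f((i + 0.5) * d, (j + 0.5) * d) * d * d
            for i in range(n) for j in range(n))
    print(n, V)    # approaches 2/3 for finer and finer subdivisions
```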

Similarly, we may define the integral of a function of three variables over a three-dimensional region G, representing a certain body in space. Again we divide the region G into parts, this time by planes parallel to the coordinate planes. Among these parts we choose the ones which represent complete parallelepipeds and enumerate them: G1, G2, …, Gn.

In each of these we choose an arbitrary point:

(ξ_i, η_i, ζ_i)

and set up the sum:

Σ ƒ(ξ_i, η_i, ζ_i) G_i

where Gi denotes the volume of the parallelepiped Gi. Finally we define the integral of f(x, y, z) over the region G as the limit:

∭_G ƒ(x, y, z) dx dy dz = lim Σ ƒ(ξ_i, η_i, ζ_i) G_i

to which the sum (50) tends when the greatest diagonal d(Gi) approaches zero.

Let us consider an example. We imagine the region G is filled with a nonhomogeneous mass whose density at each point in G is given by a known function ρ(x, y, z). The density ρ(x, y, z) of the mass at the point (x, y, z) is defined as the limit approached by the ratio of the mass of an arbitrary small region containing the point (x, y, z) to the volume of this region as its diameter approaches zero. To determine the mass of the body G it is natural to proceed as follows. We divide the region G into parts by planes parallel to the coordinate planes and enumerate the complete parallelepipeds formed in this way: G1, G2, …, Gn

Assuming that the dividing planes are sufficiently close to one another, we will make only a small error if we neglect the irregular regions of the body and define the mass of each of the regular regions Gi (the complete parallelepipeds) as the product:

ρ(ξ_i, η_i, ζ_i) G_i

where (ξi, ηi, ζi) is an arbitrary point of Gi. As a result the approximate value of the mass M will be expressed by the sum:

M ≈ Σ ρ(ξ_i, η_i, ζ_i) G_i

and its exact value will clearly be the limit of this sum as the greatest diagonal of Gi approaches zero; that is:

M = lim Σ ρ(ξ_i, η_i, ζ_i) G_i = ∭_G ρ(x, y, z) dx dy dz

Such integrals are called double and triple integrals respectively.
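A sketch of the mass computation just described (the density ρ = x + y + z over the unit cube, with exact mass 3/2, is an illustrative assumption):

```python
rho = lambda x, y, z: x + y + z

n = 40
d = 1.0 / n    # small parallelepipeds (here cubes) of volume d³
M = sum(rho((i + 0.5) * d, (j + 0.5) * d, (k + 0.5) * d) * d ** 3
        for i in range(n) for j in range(n) for k in range(n))
print(M)       # ≈ 1.5
```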


Let us examine a problem which leads to a double integral. We imagine that water is flowing over a plane surface. Also, on this surface the underground water is seeping through (or soaking back into the ground) with an intensity f(x, y) which is different at different points. We consider a region G bounded by a closed contour (figure 34) and assume that at every point of G we know the intensity f(x, y), namely the amount of underground water seeping through per minute per cm² of surface; we will have f(x, y) > 0 where the water is seeping through and f(x, y) < 0 where it is soaking into the ground. How much water will accumulate on the surface G per minute?

If we divide G into small parts, consider the rate of seepage as approximately constant in each part and then pass to the limit for finer and finer subdivisions, we will obtain an expression for the whole amount of accumulated water in the form of an integral:

∬_G ƒ(x, y) dx dy

Double (two-fold) integrals were first introduced by Euler. Multiple integrals form an instrument which is used every day in calculations and investigations of the most varied kind.

It would also be possible to show, though we will not do it here, that calculation of multiple integrals may be reduced, as a rule, to iterated calculation of ordinary one-dimensional integrals.

Contour and surface integrals. Finally, we must mention that still other generalizations of the integral are possible. For example, the problem of defining the work done by a variable force applied to a material point, as the latter moves along a given curve, naturally leads to a so-called curvilinear integral, and the problem of finding the total charge on a surface on which electricity is continuously distributed with a given surface density leads to another new concept, an integral over a curved surface:

∬_G ƒ(M) dσ

For example, suppose that a liquid is flowing through space and that the velocity of a particle of the liquid at the point (x, y) is given by a function P(x, y), not depending on z. If we wish to determine the amount of liquid flowing per minute through the contour Γ, we may reason in the following way. Let us divide Γ up into segments Δsi. The amount of water flowing through one segment Δsi is approximately equal to the column of liquid shaded in figure 35; this column may be considered as the amount of liquid forcing its way per minute through that segment of the contour. But the area of the shaded parallelogram is equal to:

P_i(x, y) · Δs_i · cos α_i

where αi is the angle between the direction of the x-axis and the outward normal of the surface bounded by the contour Γ; this normal is the perpendicular n̄ to the tangent, which we may consider as defining the direction of the segment Δsi. By summing up the areas of such parallelograms and passing to the limit for finer and finer subdivisions of the contour Γ, we determine the amount of water flowing per minute through the contour Γ; it is denoted thus:

∫_Γ P(x, y) cos α ds

and is called a curvilinear integral. If the flow is not everywhere parallel, then its velocity at each point (x, y) will have a component P(x, y) along the x-axis and a component Q(x, y) along the y-axis. In this case we can show by an analogous argument that the quantity of water flowing through the contour will be equal to:

∫_Γ (P cos α + Q sin α) ds

When we speak of an integral over a curved surface G for a function f(M) of its points M(x, y, z), we mean the limit of sums of the form:

Σ ƒ(M_i) Δσ_i

for finer and finer subdivisions of the region G into segments whose areas are equal to Δσi.

General methods exist for transforming multiple, curvilinear, and surface integrals into other forms and for calculating their values, either exactly or approximately.

Ostrogradskiĭ Formula.

Several important and very general formulas relating an integral over a volume to an integral over its surface (and also an integral over a surface, curved or plane, to an integral around its boundary) have a very wide application, and are yet another striking proof of the constant trans-form-ations of S≈T DIMENSIONS and of the interaction of the parts of the system – in this case between the membrane that encircles the vital space and that space itself, whose parameters ARE ALWAYS CLOSELY RELATED, as we can consider the membrane just the last 'cover' of maximal size of that inner 3D vital energy (unlike the quite distinct singularity, which 'moves' across ∆±i scales and tends to be quite different in form, parameters and substance).

Let us put an example: imagine, as we did before, that over a plane surface there is a horizontal flow of water that is also soaking into the ground or seeping out again from it. We mark off a region G, bounded by a curve Γ, and assume that for each point of the region we know the components P(x, y) and Q(x, y) of the velocity of the water in the direction of the x-axis and of the y-axis respectively.

Let us calculate the rate at which the water is seeping from the ground at a point with coordinates (x, y). For this purpose we consider a small rectangle with sides Δx and Δy situated at the point (x, y).

As a result of the velocity P(x, y), through the left vertical edge of this rectangle there will flow approximately P(x, y)Δy units of water per minute into the rectangle, and through the right side in the same time will flow out approximately P(x + Δx, y)Δy units. In general, the net amount of water leaving a square unit of surface as a result of the flow through its left and right vertical sides will be approximately:

(P(x + Δx, y) − P(x, y)) / Δx

If we let Δx approach zero, we obtain in the limit: ∂P/∂x.

Correspondingly, the net rate of flow of water per unit area in the direction of the y-axis will be given by: ∂Q/∂y.

This means that the intensity of the seepage of ground water at the point with coordinates (x, y) will be equal to: ∂P/∂x + ∂Q/∂y

But in general, as we saw earlier, the quantity of water coming out from the ground will be given by the double integral of the function expressing the intensity of the seepage of ground water at each point, namely:

∫∫G (∂P/∂x + ∂Q/∂y) dx dy     (52)

But, since the water is incompressible, this entire quantity must flow out during the same time through the boundaries of the contour Γ. The quantity of water flowing out through the contour Γ is expressed, as we saw earlier, by the curvilinear integral over Γ:

∫Γ (P cos α + Q cos β) ds     (53)

The equality of the magnitudes (52) and (53) gives, in its simplest two-dimensional case, the formula:

∫∫G (∂P/∂x + ∂Q/∂y) dx dy = ∫Γ (P cos α + Q cos β) ds

A key formula to mirror a widespread phenomenon in the external world, which in our example we interpreted in a readily visualized way as preservation of the volume of an incompressible fluid.
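A quick numeric check of the last formula, on the hedged assumption of the simple example P(x, y) = x, Q(x, y) = y over the unit disk G (where ∂P/∂x + ∂Q/∂y = 2), shows both sides agreeing at 2π:

import math

n = 2000
h = 2.0 / n
# left side: double integral of ∂P/∂x + ∂Q/∂y = 2 over the disk (midpoint rule)
double_integral = sum(
    2 * h * h
    for i in range(n) for j in range(n)
    if (-1 + (i + 0.5) * h) ** 2 + (-1 + (j + 0.5) * h) ** 2 <= 1.0
)
# right side: curvilinear integral of P cos α + Q cos β around the unit circle Γ
m = 100000
ds = 2 * math.pi / m
flux = sum(
    (math.cos(t) * math.cos(t) + math.sin(t) * math.sin(t)) * ds
    for t in (2 * math.pi * (k + 0.5) / m for k in range(m))
)
print(double_integral, flux)   # both ≈ 2π ≈ 6.2832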

This can be generalised to express the connection between an integral over a multidimensional volume and an integral over its surface. In particular, for a three-dimensional body G, bounded by the surface Γ:

∫∫∫G (∂P/∂x + ∂Q/∂y + ∂R/∂z) dx dy dz = ∫∫Γ (P cos α + Q cos β + R cos γ) dσ

where dσ is the element of surface.

It is interesting to note that the fundamental formula of the integral calculus:

∫ab F′(x) dx = F(b) − F(a)     (54)

may be considered as a one-dimensional case. The equation (54) connects the integral over an interval with the “integral” over its “null-dimensional” boundary, consisting of the two end points.

Formula (54) may be illustrated by the following analogy. Let us imagine that in a straight pipe with constant cross section s = 1 water is flowing with velocity F(x), which is different for different cross sections (figure 36). Through the porous walls of the pipe, water is seeping into it (or out of it) at a rate which is also different for different cross sections:

[figure 36: flow with velocity F(x) in a pipe with porous walls]

If we consider a segment of the pipe from x to x + Δx, the quantity of water seeping into it in unit time must be compensated by the difference F(x + Δx) – F(x) between the quantity flowing out of this segment and the quantity flowing into it along the pipe.

So the quantity seeping into the segment is equal to the difference F(x + Δx) – F(x), and consequently the rate of seepage per unit length of pipe (the ratio of the seepage over an infinitesimal segment to the length of the segment) will be equal to:

lim(Δx→0) [F(x + Δx) − F(x)] / Δx = F′(x)

More generally, the quantity of water seeping into the pipe over the whole section [a, b] must be equal to the amount lost by flow through the ends of the pipe. But the amount seeping through the walls is equal to ∫ab F′(x) dx, and the amount lost by flow through the ends is F(b) − F(a). The equality of these two magnitudes produces formula (54).
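The one-dimensional balance can be checked the same way; a minimal sketch assuming a hypothetical velocity profile F(x) = sin x on [0, 2]:

import math

a, b, n = 0.0, 2.0, 100000
h = (b - a) / n
# seepage through the walls: midpoint rule for ∫ab F′(x) dx, with F′(x) = cos x
seepage = sum(math.cos(a + (i + 0.5) * h) * h for i in range(n))
print(seepage, math.sin(b) - math.sin(a))   # both ≈ 0.9093 = F(b) − F(a)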

GREEN’S THEOREM

Then there is of course the fact that a system in space-time in which there is a displacement in time will be equivalent to a system in which this time motion is seen as fixed space. Such cases mean that we can integrate lines with motion into planes, and surfaces with motion into volumes.

The result is:

-Green’s theorem, which gives the relationship between a line integral around a simple closed curve C and a double integral over the plane region D bounded by C. It is named after George Green and is the two-dimensional special case of the more general Kelvin–Stokes theorem.

-Stokes’ theorem, which says that the integral of a differential form ω over the boundary ∂Ω of some orientable manifold Ω is equal to the integral of its exterior derivative dω over the whole of Ω:   ∫∂Ω ω = ∫Ω dω

In general such integrals also follow the geometrical structure of a system built with an external membrane, which absorbs the information of the system, an internal 0-point or singularity that focuses it, and a vital space between them. These relationships allow us to define the basic laws of integrals in time and space that relate line integrals to surface integrals and to volume integrals, of which the best known example are the laws of electromagnetism, written not as Maxwell did, in terms of derivatives (curls and gradients), but in terms of integrals.

So we can in this manner represent the laws of electromagnetism and fully grasp the meaning of magnetism, the external membrane, and charge, the central point of view, with the interactions between both: the electromagnetic waves and induced currents and magnetic fluxes.

Thus the best examples in physics of this relationship are the 4 equations of Maxwell:

∮ E · dl = − d/dt ∫∫ B · dS     (Faraday’s law)

∮ B · dl = μ₀ ∫∫ J · dS + μ₀ε₀ d/dt ∫∫ E · dS     (Ampère–Maxwell law)

While the other 2 define the membrane of an electromagnetic field, the magnetic field and the central point of view or charge:

∯ B · dS = 0     (Gauss’s law of magnetism)

∯ E · dS = Q/ε₀     (Gauss’s law of the charge)

So we can consider that the Tƒ element of the electromagnetic field, the charge or 0-point, and the membrane, or closed outer path, either in its integral or inverse differential equations, and the wave interaction between them, easily deduced from the Stokes theorem or expressed inversely in differential form, give us the full description of an electromagnetic system in terms of the generator:

Sp-membrane (magnetic field – Gauss’s law of magnetism) < ST (Faraday/Ampère’s laws of interaction between Sp and Tƒ) > Tƒ (Gauss’s law of the central point).

And those interactions are integrals of the quanta of the ∆-1 field in which the electric charge and the magnetic field that integrate them arise.

Density integrals. The meaning of Tƒ/Sp and Sp/Tƒ: information and energy densities.

Some General Remarks on the Formation and Solution of Differential Equations

As we have expressed many times, equations do NOT have solutions till the whole information on its ternary T.œ is given. Which in time means to know an initial condition and an end, through which the function will run under the principle of completing its action in the least possible time: Max. SxT (min. t); which for any combination of Space and Time dimensions implies completing the action in the minimal possible time.

Conditions of this type arise in all equations of all stiences.

In the symmetry of space, though, the boundary of the T.œ must be expressed as lineal conditions of the 2D membrane and 1D singularity, which can be superposed, 1D + 2D, to give the 3D solution of the vital space both enclose, normally through a product operator, 1D x 2D = 3D.

In any case for each equation, once determined by its space or time constrictions, certain solutions can be found, which form a sœT of possible frequencies or areas that are efficient parameters for the MIND equation to describe real T.œs (expressed here in the semantic inversion of sets and toes).

The key then to understand those solutions and its approximations is the fact that singularity and membrane conditions are expressed as scalars and lineal functions, while ternary vital energy solutions have cyclical form. 

 

MORE COMPLEX DERIVATIVES: CURVATURE, TENSORS – ITS LIMIT OF UTILITY

Physical quantities may be of 3 kinds: s, t, st.

NOW, BEYOND 2 planes of existence, the utility of derivatives diminishes, as organisms become invisible and do not organise further; and so, because in the same plane we use a first derivative, and in relationships between any two planes we use derivatives of the 2nd degree, the maximal use possible for derivatives comes from third degree derivatives, which give us the limit of information, in the form of a single parameter, 1/r², curvature.

Beyond that, planes of pure space and pure time are not perceivable; so departing from the fact that all is S with some t (energy) or T with some S (information), we can still broadly talk of dominant space-like parameters and time-like parameters, and use the Spacetime parameters only for those in which S≈t (e=i) holds quite exactly.

Pure space and pure time.

Now the closest thing to the description of pure space is a system as it emerges from ∆-1 and influences a higher ∆ scale as a ‘field of forces’.

And the closest thing to pure time is a process of ‘implosion’ that ‘forces down’ or ‘depresses’ (inverse of emergence) a system from an ∆+1, time-like implosive force. And that is the meaning of mass: the force down, the in/form/ative force coming from the ∆+1 scale.

Since pure, implosive time and pure expansive, entropic space are not observable, the best way to ‘get it’ is when the implosive time process is felt by something which is smaller inside the vortex. So we feel mass from the ∆+3 galactic scale, as Mach and Einstein mused, because inward implosive in-formative forces affect mostly the internal parts, NOT the external ones. And we feel inversely a field of expansive entropy from smaller parts, exploding us from inside out.

Then we come to energy-like (max. Se x min Ti) and informative like (max. Ti x min. Se) parameters.

Some are completely characterized by their numerical values, e.g., temperature, density, and the like, and are called scalars. Scalars are then to be considered parameters of closed informative functions. In the case of density this is evident.

Temperature is not so clear a time parameter. But temperature, when we properly constrain it to what and where we truly measure as temperature (not the frequency of a wave), that is, the vibrating motions of molecules of the ∆-1 scale in a closed space, is indeed a time parameter. So goes for mass-energy, as energy becomes added to mass whenever we can measure it in an enclosed region of space, belonging therefore to a time-closed world. So gluons of motion-energy enclosed in a nucleus do store most of the mass of atoms, as they are to be understood in terms of closed time-parameters from a potential point of view.

So goes for potential energy, which is stored in time cycles. So as a rule, regardless of how conceptually ‘distorted’ current science is and how unlikely a change of paradigm will be for centuries to come (we still drag, for example, the – sign in electrons since Franklin chose it), the non-distorted truth can classify all parameters and physical quantities in terms of time and space.

On the other hand, energy-like parameters will have direction as vector quantities: velocity, acceleration, the strength of an electric field, etc. The simpler those parameters, with fewer ‘scalar’ elements, the more space-like, entropy-like, field-like they will be.  So again as a rule, the fewer dimensions we need to define a system the more space-entropy-field like it will be.

Thus space-like Vector quantities may be expressed just by the length of the vector and its direction or its space-dimensional “components” if we decompose it into the sum of three mutually perpendicular vectors parallel to the coordinate axes.

While a space-time balanced process will have more ‘parameters’ to study than a simple vector, growing in dimensions till becoming 4-vectors and finally a ‘tensor’, which will be the final growth into a 5D ∆-event.

So it is easy, just looking at an equation, to know what kind of s, t, or st (e, i, exi) process we study.

For example, a classic st process, which is, let us remember, an S≈T process, is one which tends to a dynamic balance between both parameters.

So it is an oscillatory system in any ∆-scale or ‘space-time medium’. In such oscillations every point of the medium, occupying in equilibrium the position (x, y, z), will at time t be displaced along a vector u(x, y, z, t), depending on the initial position of the point (x, y, z) and on the time t.

In this case the process in question will be described by a vector field. But it is easy to see that knowledge of this vector field, namely the field of displacements of points of the medium, is not sufficient in itself for a full description of the oscillation.

It is also necessary to know, for example, the density ρ(x, y, z, t) at each point of the medium, the temperature T(x, y, z, t).

So we add to the Spe-vector field some T-PARAMETERS (closed T-vibration, density, and stress, which connects the system in a time-network-like fashion to all other points of the whole).

∆-events. Finally in processes which require the study of interactions between ∆-scales, hence 5D processes, we need even more complex elements.

For example a classic ∆-event is the internal stress, i.e., the forces exerted on an arbitrarily chosen volume of the body by the entire remaining part of it.

[Figure: components of internal stress on a volume element]

And so we arrive at systems defined by tensors, often with 6 dimensions (we forget the final r=evolution of thought that would make it all less distorted, of working on bidimensional space and time, so as to simplify understanding). So the idea of a tensor is that the whole works dynamically into the ‘point’, described as a ‘cube’, with 6 faces, or ± elements on the point-particle-being from the 3 classic space-dimensions:

Examples of it are the mentioned stress, shown in the graph, or, in the relativity description, how the ∆+3 scale of gravitational space-time influences the lower scales with the effect of implosive mass.

Thus in addition to S-vector and T-scalar quantities, more complicated entities occur in SPACE-time events, often characterized everywhere by a set of functions of the same four independent variables; where each function is a description of the ∆-1 scale, and they come together into the upper new ‘function’ of functions, or ‘functional equation’.

And so we can classify in this manner according to the ∆•ST ternary method all the parameters of mathematical physics.

And they will reveal to us the real structure and ∆ST symmetries they analyse, according to the number of dimensions of complexity they use.

Yet beyond the range of ‘tensors’, which study relationships between 2 or at best a maximum of 3 scales of reality, there is nothing to be found. The same happens when we consider the number of differential equations we want to study: nothing really goes on beyond the 3rd derivative of a system that scales down 2 scales (entropy-death events, dual emergence upwards from seed to world).

So one of the most fascinating events of the relationship of i-logic and the real world is to properly interpret what Einstein said he could not resolve:

‘I know when mathematics is truth but not when it is real’

Because, as the Soviet school of maths from Lobachevski to Aleksandrov rightly explained, we NEED an experimental method inserted in mathematics to know when mathematics is both LOGICALLY CONSISTENT as a language in its inner syntax and NOT a fiction of beauty but REAL.

A bit on the ‘numbers’

Now the fundamental concept behind analysis is the ∂∫ duality of ‘derivatives’ and ‘integrals’, related at first sight to the concepts of ‘time’ and ‘space’ (you derivate in time, you integrate in space), and to the concept of scalar ‘evolution’ from parts into wholes (you derivate to obtain a higher scalar wholeness).

i.e. you derivate a past space into present speed, adding a time-motion, and then derivate into acceleration – future time – to obtain the ‘most thorough single parameter of the being in time’: its acceleration that encodes also its speed and distance.

On the other hand you integrate in space, and so it is also customary to consider the first and second integral, which will bring also the ternary scale of volume of a system.

And it is a tribute to the simplicity of the Universe that further ‘derivatives’ are not really needed to perceive the system in its full time and space parameters; further derivations and integrations happen only in the search for curvature, tensions and jerks (rates of change of acceleration), which are really menial details, and in some combined space-time multiple systems.

Approximations.

We have already commented in Algebra, though, that the third or higher derivatives are used to improve the accuracy of an approximation to the function:

f(x₀ + h) = f(x₀) + f′(x₀)h + f″(x₀)h²/2! + f‴(ξ)h³/3!

Thus Taylor’s expansion of a function around a point involves higher order derivatives, and the more derivatives you consider, the higher the accuracy. This also translates to higher order finite difference methods when considering numerical approximations.
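A small sketch of that accuracy gain, assuming the simplest test function f(x) = eˣ expanded around x₀ = 0 (where every derivative equals 1):

import math

h = 0.5
exact = math.exp(h)
approx = 0.0
for n in range(6):
    approx += h ** n / math.factorial(n)    # add the term f⁽ⁿ⁾(0)·hⁿ/n!
    print(n, approx, abs(exact - approx))   # the error shrinks with each extra derivative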

Now what this means is obvious: beyond the accuracy of the three derivatives canonical to an ∆º±1 supœrganism, information, as it passes the potential barrier between scales of the 5th≈∆ dimension, suffers a loss of precision; so beyond the third derivative we can only obtain approximations, by using higher derivatives or, in a likely less focused=exact procedure, the equivalent polynomials, clearer expressions of ‘dimensional growth’.

So their similitude first of all proves that both high derivatives and polynomials are representations of growth across planes and scales, albeit losing accuracy.

However, in the correct fifth-dimensional perspective, the derivative-integral game is more accurate, as it ‘looks at the infinitesimal’ to then integrate the proper quanta.

Taylor’s Formula

The function:

y = a₀ + a₁x + a₂x² + ··· + aₙxⁿ

where the coefficients aₖ are constants, is a polynomial of degree=dimension n. In particular, y = ax + b is a polynomial of the first degree and y = ax² + bx + c is a polynomial of the second dimension. Dimensional polynomials have the particularity that they are mostly 2-manifolds, symmetric in that x=y; that is, both dimensions square, D1 x D2.

To achieve this feat, S=t, tt or ss STEPS MUST BE CONSIDERED.

Polynomials may be considered IN THAT SENSE as the simplest of all poli-DIMENSIONAL functions. In order to calculate their value for a given x, we require only the operations of addition, subtraction, and multiplication; not even division is needed. Polynomials are continuous for all x and have derivatives of arbitrary order. Also, the derivative of a polynomial is again a polynomial, of degree lower by one, and the derivatives of order n + 1 and higher of a polynomial of degree n are equal to zero. Yet the derivative diminishes more slowly than the simple square of the function; so if we consider the derivative to give the parts of the polynomial, the product of those parts would be more than the whole.
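That addition-and-multiplication-only character is what Horner’s scheme exploits when evaluating a polynomial; a minimal sketch (the polynomial chosen is an arbitrary illustration):

def horner(coeffs, x):
    """coeffs = [a0, a1, ..., an] for a0 + a1·x + ... + an·xⁿ."""
    result = 0.0
    for a in reversed(coeffs):
        result = result * x + a   # one multiplication and one addition per coefficient
    return result

print(horner([1, -3, 0, 2], 2.0))   # 2·2³ − 3·2 + 1 = 11.0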

It is then that we can increase the complexity by establishing, for example, ratios of polynomials. If to the polynomials we adjoin functions of the form:

R(x) = (a₀ + a₁x + ··· + aₙxⁿ) / (b₀ + b₁x + ··· + bₘxᵐ)

for the calculation of which we also need division, and also the functions √x and ∛x and, finally, arithmetical combinations of these functions, we obtain essentially all the functions whose values can be calculated by such methods.

But what does a polynomial describe? All other functions are easier to get through approximations:

On an interval containing the point a, let there be given a function f(x) with derivatives of every order. The polynomial of first degree:

p₁(x) = ƒ(a) + ƒ′(a)(x − a) has the same value as f(x) at the point x = a and also, as is easily verified, has the same derivative as f(x) at this point. Its graph is a straight line, which is tangent to the graph of f(x) at the point a. It is possible to choose a polynomial of the second degree, namely:

p₂(x) = ƒ(a) + ƒ′(a)(x − a) + (ƒ″(a)/2!)(x − a)²

which at the point x = a has with f(x) a common value and a common first and second derivative. Its graph at the point a will follow that of f(x) even more closely. It is natural to expect that if we construct a polynomial which at x = a has the same first n derivatives as f(x) at the same point, then this polynomial will be a still better approximation to f(x) at points x near a. Thus we obtain the following approximate equality, which is Taylor’s formula:

f(x) ≈ ƒ(a) + (ƒ′(a)/1!)(x − a) + (ƒ″(a)/2!)(x − a)² + ··· + (ƒ⁽ⁿ⁾(a)/n!)(x − a)ⁿ     (25)

The right side of this formula is a polynomial of degree n in (x − a). For each x the value of this polynomial can be calculated if we know the values of f(a), f′(a), ···, f⁽ⁿ⁾(a).

For functions which have an (n + 1)th derivative, the right side of this formula, as is easy to show, differs from the left side by a small quantity which approaches zero more rapidly than (x − a)n. Moreover, it is the only possible polynomial of degree n that differs from f(x), for x close to a, by a quantity that approaches zero, as x → a, more rapidly than (x − a)n. If f(x) itself is an algebraic polynomial of degree n, then the approximate equality (25) becomes an exact one.
Finally, and this is particularly important, we can give a simple expression for the difference between the right side of formula (25) and the actual value of f(x). To make the approximate equality (25) exact, we must add to the right side a further term, called the “remainder term”:

f(x) = ƒ(a) + (ƒ′(a)/1!)(x − a) + ··· + (ƒ⁽ⁿ⁾(a)/n!)(x − a)ⁿ + (ƒ⁽ⁿ⁺¹⁾(ξ)/(n + 1)!)(x − a)ⁿ⁺¹     (26)

The remainder term (ƒ⁽ⁿ⁺¹⁾(ξ)/(n + 1)!)(x − a)ⁿ⁺¹ has the peculiarity that the derivative appearing in it is to be calculated in each case not at the point a but at a suitably chosen point ξ, which is unknown but lies somewhere in the interval between a and x.
So we can make use of the generalized mean-value theorem quoted earlier: differentiating the functions ϕ(u) and ψ(u) with respect to u (it must be recalled that the value of x has been fixed), we find an expression whose equality with the original quantity (27) gives Taylor’s formula in the form (26).
In the form (26) Taylor’s formula not only provides a means of approximate calculation of f(x) but also allows us to estimate the error.

And so with Taylor we close this introduction to derivatives and differentials, enlightened with the basic elements that relate them to the 5 dimensions of space-time, specifically to the ∆-1 finitesimals.

DETERMINISM IN SOLUTION TO ODEs.

Lineal vs cyclic; dis≈continuity; 1st vs 2nd order; partial vs ordinary; ∂ vs ∫; 3 states of matter & its freedoms.

In the philosophy of science of mathematical physics some concepts come back once and again, based on dualities.

And so the qualitative description of all those entropic, reproductive/moving and informative/Tiƒ time-like vortices became ‘only’ mathematical.

It is interesting at this stage to consider that the whole world of ∫∂ mathematics has two approaches, which humans, as always being one-dimensional, did not find complementary but argued about: the method of Newton, based on infinite series (arithmetic, temporal pov), and that of Leibniz, using spatial, geometric concepts (tangents), which is the one, being more evident and simpler, that stood.

First, trivially speaking the existence of such 2 canonical, time-space ways to come to ∫∂ is a clear proof that both researchers found ∫∂ independently. Next, their argument about who was right and better shows how one-minded is humanity, and third, the dominance of Leibniz’s methods for being visual, geometrical, spatial tells a lot about the difficulty humans have to understand time, causality and the concepts of infinity, limit, discontinuity, continuity, and other key elements of ∆nalysis, which we shall argue in our mathematical posts on… ∆nalysis.

All this said, mathematical physics moved to the geometric, continental school with the help of Leibniz’s disciples, the Bernoullis.

And it is interesting to consider a diachronic concept to analyse the enormous flourishing of those equations…

 

PDE

INTRODUCTION: PHILOSOPHICAL CONSIDERATIONS

Physical equations are also equations related to the 3 elements of all the existential entities of the Universe, which we will develop in detail in a future post on physics accompanying this one. It must then be understood that within the general ƒ(x)≈f(t) and y=S isomorphism between mathematical equations and ST-eps (not always the case, as symmetric steps can repeat themselves with the same parameters in SSS and TTT derivatives, as we have seen in our intro to ODEs), partial differential equations will be combinations of analysis of systems in its ‘primary’ differential finitesimals of space and time, then aggregated in more complex St SYSTEMS, giving us an enormous range of possible PDE studies, which we shall strive to order according to the concept that there is a geometric symmetry between algebra (s≈t symmetries), geometry (S-wholes, sum of t-dynamic points) and analysis (st-eps).

So it is good guidance for all algebraic, analytic equations to make a comment on their significance in the vital ternary geometry of a T.œ, or complex event between T.œs ACROSS different planes, ∆§, studied with those equations.

Partial Differential equations as ƥst-equations.

Physical events and processes occurring in a space-time system always consist of the changes, during the passage of its finite time, of certain physical magnitudes related to its points of vital space.

This simple definition of space-time processes is at the heart of the whole differential calculus, which with slight changes of interpretation apply to all GST.

Any of those ST processes can be described by functions with four ST independent variables, S(x, y) and (z, ƒ), where x, y are the coordinates of a point of the space, and z and ƒ those of time.

So ideally, in a world in which humans had not distorted bidimensional time cycles, the way we work around mathematical equations would be slightly changed. As we are not reinventing the human mind of 7 billion people – we are not that arrogant – we will just feel happy trying to explain a few of those processes of bidimensional space and time here.

In the study of the phenomena of nature, partial differential equations are encountered just as often as ordinary ones. As a rule this happens in cases where an event is described by a function of several variables. From the study of nature there arose that class of partial differential equations that is at the present time the most thoroughly investigated and probably the most important in the general structure of human knowledge, namely the equations of mathematical physics.

Let us first consider oscillations in any kind of medium. In such oscillations every point of the medium, occupying in equilibrium the position (x, y, z), will at time t be displaced along a vector u(x, y, z, t), depending on the initial position of the point (x, y, z) and on the time t. In this case the process in question will be described by a vector field. But it is easy to see that knowledge of this vector field, namely the field of displacements of points of the medium, is not sufficient in itself for a full description of the oscillation. It is also necessary to know, for example, the density ρ(x, y, z, t) at each point of the medium, the temperature T(x, y, z, t), and the internal stress, i.e., the forces exerted on an arbitrarily chosen volume of the body by the entire remaining part of it.

Physical events and processes occurring in space and time always consist of the changes, during the passage of time, of certain physical magnitudes related to the points of the space. As we saw in Chapter II these quantities can be described by functions with four independent variables, x, y, z, and t, where x, y, and z are the coordinates of a point of the space, and t is the time.

Physical quantities may be of different kinds. Some are completely characterized by their numerical values, e.g., temperature, density, and the like, and are called scalars. Others have direction and are therefore vector quantities: velocity, acceleration, the strength of an electric field, etc. Vector quantities may be expressed not only by the length of the vector and its direction but also by its “components” if we decompose it into the sum of three mutually perpendicular vectors, for example parallel to the coordinate axes.

In mathematical physics a scalar quantity or a scalar field is presented by one function of four independent variables, whereas a vector quantity defined on the whole space or, as it is called, a vector field is described by three functions of these variables. We can write such a quantity either in the form:

u(x, y, z, t), where the boldface type indicates that u is a vector, or in the form of three functions: ux(x, y, z, t), uy(x, y, z, t), uz(x, y, z, t)

where ux, uy, and uz denote the projections of the vector on the coordinate axes.

In addition to vector and scalar quantities, still more complicated entities occur in physics, for example the state of stress of a body at a given point. Such quantities are called tensors; after a fixed choice of coordinate axes, they may be characterized everywhere by a set of functions of the same four independent variables.

In this manner, the description of widely different kinds of physical phenomena is usually given by means of several functions of several variables. Of course, such a description cannot be absolutely exact.

For example, when we describe the density of a medium by means of one function of our independent variables, we ignore the fact that at a given point we cannot have any density whatsoever. The bodies we are investigating have a molecular structure, and the molecules are not contiguous but occur at finite distances from one another. The distances between molecules are for the most part considerably larger than the dimensions of the molecules themselves. Thus the density in question is the ratio of the mass contained in some small, but not extremely small, volume to this volume itself. The density at a point we usually think of as the limit of such ratios for decreasing volumes. A still greater simplification and idealization is introduced in the concept of the temperature of a medium. The heat in a body is due to the random motion of its molecules. The energy of the molecules differs, but if we consider a volume containing a large collection of molecules, then the average energy of their random motions will define what is called temperature.

Similarly, when we speak of the pressure of a gas or a liquid on the wall of a container, we should not think of the pressure as though a particle of the liquid or gas were actually pressing against the wall of the container. In fact, these particles, in their random motion, hit the wall of the container and bounce off it. So what we describe as pressure against the wall is actually made up of a very large number of impulses received by a section of the wall that is small from an everyday point of view but extremely large in comparison with the distances between the molecules of the liquid or gas. It would be easy to give dozens of examples of a similar nature. The majority of the quantities studied in physics have exactly the same character. Mathematical physics deals with idealized quantities, abstracting them from the concrete properties of the corresponding physical entities and considering only the average values of these quantities.

Such an idealization may appear somewhat coarse but, as we will see, it is very useful, since it enables us to make an excellent analysis of many complicated matters, in which we consider only the essential elements and omit those features which are secondary from our point of view.

The object of mathematical physics is to study the relations existing among these idealized elements, these relations being described by sets of functions of several independent variables.

The Simplest Equations of Mathematical Physics

The elementary connections and relations among physical quantities are expressed by the laws of mechanics and physics. Although these relations are extremely varied in character, they give rise to more complicated ones, which are derived from them by mathematical argument and are even more varied. The laws of mechanics and physics may be written in mathematical language in the form of partial differential equations, or perhaps integral equations, relating unknown functions to one another. To understand what is meant here, let us consider some examples of the equations of mathematical physics.

A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved by hand, or used to create a relevant computer model.

PDEs can be used to describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations.

Both ordinary and partial differential equations are broadly classified as linear and nonlinear.

  • A differential equation is linear if the unknown function and its derivatives appear to the power 1 (products of the unknown function and its derivatives are not allowed); otherwise it is nonlinear. The characteristic property of linear equations is that their solutions form an affine subspace of an appropriate function space, which results in a much more developed theory of linear differential equations. Homogeneous linear differential equations are a further subclass for which the space of solutions is a linear subspace, i.e. the sum of any set of solutions or multiples of solutions is also a solution. The coefficients of the unknown function and its derivatives in a linear differential equation are allowed to be (known) functions of the independent variable or variables; if these coefficients are constants then one speaks of a constant coefficient linear differential equation.
  • There are very few methods of solving nonlinear differential equations exactly; those that are known typically depend on the equation having particular symmetries. Nonlinear differential equations can exhibit very complicated behavior over extended time intervals, characteristic of chaos. Even the fundamental questions of existence, uniqueness, and extendability of solutions for nonlinear differential equations, and well-posedness of initial and boundary value problems for nonlinear PDEs are hard problems and their resolution in special cases is considered to be a significant advance in the mathematical theory (cf. Navier–Stokes existence and smoothness). However, if the differential equation is a correctly formulated representation of a meaningful physical process, then one expects it to have a solution.

Linear differential equations frequently appear as approximations to nonlinear equations. These approximations are only valid under restricted conditions. For example, the harmonic oscillator equation is an approximation to the nonlinear pendulum equation that is valid for small amplitude oscillations (see below).
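A sketch of that restricted validity, comparing the pendulum and its harmonic approximation numerically for a small initial amplitude (the constants and the crude Euler–Cromer stepper are illustrative assumptions, not part of the source):

import math

g, L, dt = 9.8, 1.0, 0.0001
u1, v1 = 0.1, 0.0       # nonlinear pendulum: angle and angular velocity
u2, v2 = 0.1, 0.0       # harmonic oscillator approximation
for _ in range(20000):  # integrate 2 seconds
    v1 += -(g / L) * math.sin(u1) * dt; u1 += v1 * dt
    v2 += -(g / L) * u2 * dt;           u2 += v2 * dt
print(u1, u2)           # nearly equal while the amplitude stays small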

Examples

In the first group of examples, let u be an unknown function of x, and c and ω are known constants.

  • Inhomogeneous first-order linear constant coefficient ordinary differential equation: du/dx = cu + x²
  • Homogeneous second-order linear ordinary differential equation: d²u/dx² − x·(du/dx) + u = 0
  • Homogeneous second-order linear constant coefficient ordinary differential equation describing the harmonic oscillator: d²u/dx² + ω²u = 0
  • Inhomogeneous first-order nonlinear ordinary differential equation: du/dx = u² + 4
  • Second-order nonlinear (due to sine function) ordinary differential equation describing the motion of a pendulum of length L: L·(d²u/dx²) + g sin u = 0

In the next group of examples, the unknown function u depends on two variables x and t or x and y.

  • Homogeneous first-order linear partial differential equation: ∂u/∂t + t·(∂u/∂x) = 0
  • Homogeneous second-order linear constant coefficient partial differential equation of elliptic type, the Laplace equation: ∂²u/∂x² + ∂²u/∂y² = 0
  • Third-order nonlinear partial differential equation, the Korteweg–de Vries equation: ∂u/∂t = 6u·(∂u/∂x) − ∂³u/∂x³

Existence of solutions

Solving differential equations is not like solving algebraic equations. Not only are their solutions oftentimes unclear, but whether solutions are unique or exist at all are also notable subjects of interest.

For first order initial value problems, it is easy to tell whether a unique solution exists. Given any point (a, b) in the xy-plane, define some rectangular region Z, such that Z = [l, m] × [n, p] and (a, b) is in the interior of Z. If we are given a differential equation dy/dx = g(x, y) and the initial condition y(a) = b, then there is a unique solution to this initial value problem if g(x, y) and ∂g/∂y are both continuous on Z. This unique solution exists on some interval with its center at a.

However, this only helps us with first order initial value problems. Suppose we had a linear initial value problem of the nth order:

fₙ(x)·(dⁿy/dxⁿ) + ··· + f₁(x)·(dy/dx) + f₀(x)·y = g(x)

with y(x₀) = y₀, y′(x₀) = y′₀, and so on. For any nonzero fₙ(x), if {f₀, f₁, ···, fₙ} and g are continuous on some interval containing x₀, then y exists and is unique.
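The constructive idea behind such existence statements is Picard iteration; a minimal sketch for the hypothetical problem y′ = y, y(0) = 1, whose unique solution is eˣ:

import math

n = 1000
y = [1.0] * (n + 1)                  # y0(x) ≡ 1 on a grid over [0, 1]
for _ in range(20):                  # iterate y_{k+1}(x) = 1 + ∫0..x y_k(s) ds
    integral, new_y = 0.0, [1.0]
    for i in range(n):
        integral += 0.5 * (y[i] + y[i + 1]) / n   # trapezoidal rule step
        new_y.append(1.0 + integral)
    y = new_y
print(y[-1], math.e)   # ≈ 2.71828: the iterates converge to the unique solution at x = 1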

  • A delay differential equation (DDE) is an equation for a function of a single variable, usually called time, in which the derivative of the function at a certain time is given in terms of the values of the function at earlier times.
  • A stochastic differential equation (SDE) is an equation in which the unknown quantity is a stochastic process and the equation involves some known stochastic processes, for example, the Wiener process in the case of diffusion equations.
  • A differential algebraic equation (DAE) is a differential equation comprising differential and algebraic terms, given in implicit form.

The theory of differential equations is closely related to the theory of difference equations, in which the coordinates assume only discrete values, and the relationship involves values of the unknown function or functions and values at nearby coordinates. Many methods to compute numerical solutions of differential equations or study the properties of differential equations involve approximation of the solution of a differential equation by the solution of a corresponding difference equation.

The study of differential equations is a wide field in pure and applied mathematics, physics, and engineering. All of these disciplines are concerned with the properties of differential equations of various types. Pure mathematics focuses on the existence and uniqueness of solutions, while applied mathematics emphasizes the rigorous justification of the methods for approximating solutions. Differential equations play an important role in modelling virtually every physical, technical, or biological process, from celestial motion, to bridge design, to interactions between neurons. Differential equations such as those used to solve real-life problems may not necessarily be directly solvable, i.e. do not have closed form solutions. Instead, solutions can be approximated using numerical methods.
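The simplest such numerical method is Euler’s; a minimal sketch for the hypothetical equation y′ = −k·y, y(0) = 1:

k, y, dt = 2.0, 1.0, 0.001
for _ in range(1000):      # step from t = 0 to t = 1
    y += -k * y * dt       # Euler update: y(t + dt) ≈ y(t) + y′(t)·dt
print(y)                   # ≈ e⁻² ≈ 0.1353, the exact value of the solution at t = 1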

Many fundamental laws of physics and chemistry can be formulated as differential equations. In biology and economics, differential equations are used to model the behavior of complex systems. The mathematical theory of differential equations first developed together with the sciences where the equations had originated and where the results found application. However, diverse problems, sometimes originating in quite distinct scientific fields, may give rise to identical differential equations. Whenever this happens, mathematical theory behind the equations can be viewed as a unifying principle behind diverse phenomena.

As an example, consider propagation of light and sound in the atmosphere, and of waves on the surface of a pond. All of them may be described by the same second-order partial differential equation, the wave equation, which allows us to think of light and sound as forms of waves, much like familiar waves in the water. Conduction of heat, the theory of which was developed by Joseph Fourier, is governed by another second-order partial differential equation, the heat equation. It turns out that many diffusion processes, while seemingly different, are described by the same equation; the Black–Scholes equation in finance is, for instance, related to the heat equation.

In physics:

Classical mechanics:

So long as the force acting on a particle is known, Newton’s second law is sufficient to describe the motion of a particle. Once independent relations for each force acting on a particle are available, they can be substituted into Newton’s second law to obtain an ordinary differential equation, which is called the equation of motion.

Electrodynamics:

Maxwell’s equations are a set of partial differential equations that, together with the Lorentz force law, form the foundation of classical electrodynamics, classical optics, and electric circuits. These fields in turn underlie modern electrical and communications technologies. Maxwell’s equations describe how electric and magnetic fields are generated and altered by each other and by charges and currents. They are named after the Scottish physicist and mathematician James Clerk Maxwell, who published an early form of those equations between 1861 and 1862.

General relativity:

The Einstein field equations (EFE; also known as “Einstein’s equations”) are a set of ten partial differential equations in Albert Einstein’s general theory of relativity which describe the fundamental interaction of gravitation as a result of spacetime being curved by matter and energy. First published by Einstein in 1915 as a tensor equation, the EFE equate local spacetime curvature (expressed by the Einstein tensor) with the local energy and momentum within that spacetime (expressed by the stress–energy tensor).

Quantum mechanics:

In quantum mechanics, the analogue of Newton’s law is Schrödinger’s equation (a partial differential equation) for a quantum system (usually atoms, molecules, and subatomic particles whether free, bound, or localized). It is not a simple algebraic equation, but in general a linear partial differential equation, describing the time-evolution of the system’s wave function (also called a “state function”).

Other important equations:

  • Euler–Lagrange equation in classical mechanics
  • Hamilton’s equations in classical mechanics
  • Radioactive decay in nuclear physics
  • Newton’s law of cooling in thermodynamics
  • The wave equation
  • The heat equation in thermodynamics
  • Laplace’s equation, which defines harmonic functions
  • Poisson’s equation
  • The geodesic equation
  • The Navier–Stokes equations in fluid dynamics
  • The Diffusion equation in stochastic processes
  • The Convection–diffusion equation in fluid dynamics
  • The Cauchy–Riemann equations in complex analysis
  • The Poisson–Boltzmann equation in molecular dynamics
  • The shallow water equations
  • Universal differential equation

  • The Lorenz equations, whose solutions exhibit chaotic flow.

Simple examples.

Differential equations in ∆st are therefore mathematical statements containing one or more derivatives—that is, terms representing the rates of change of continuously varying quantities. Differential equations are very common in science and engineering, as well as in many other fields of quantitative study, because what can be directly observed and measured for systems undergoing changes are their rates of change. The solution of a differential equation is, in general, an equation expressing the functional dependence of one variable upon one or more others; it ordinarily contains constant terms that are not present in the original differential equation. Another way of saying this is that the solution of a differential equation produces a function that can be used to predict the behaviour of the original system, at least within certain constraints.

Differential equations are classified into several broad categories, and these are in turn further divided into many subcategories. The most important categories are ordinary differential equations and partial differential equations. When the function involved in the equation depends on only a single variable, its derivatives are ordinary derivatives and the differential equation is classed as an ordinary differential equation. On the other hand, if the function depends on several independent variables, so that its derivatives are partial derivatives, the differential equation is classed as a partial differential equation. The following are examples of ordinary differential equations:

dy/dt = ky,     d²y/dx² + m²y = 0

In these, y stands for the function, and either t or x is the independent variable. The symbols k and m are used here to stand for specific constants.

Whichever the type may be, a differential equation is said to be of the nth order if it involves a derivative of the nth order but no derivative of an order higher than this.

The equation:

∂²u/∂x² + ∂²u/∂y² = 0

is an example of a partial differential equation of the second order. The theories of ordinary and partial differential equations are markedly different, and for this reason the two categories are treated separately.

Instead of a single differential equation, the object of study may be a simultaneous system of such equations. The formulation of the laws of dynamics frequently leads to such systems. In many cases, a single differential equation of the nth order is advantageously replaceable by a system of n simultaneous equations, each of which is of the first order, so that techniques from linear algebra can be applied.
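For instance, the second-order oscillator equation d²y/dt² + ω²y = 0 becomes, with the substitutions u₁ = y and u₂ = dy/dt, the first-order system du₁/dt = u₂, du₂/dt = −ω²u₁, whose matrix form is directly amenable to the eigenvalue methods of linear algebra.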

An ordinary differential equation in which, for example, the function and the independent variable are denoted by y and x is in effect an implicit summary of the essential characteristics of y as a function of x.

These characteristics would presumably be more accessible to analysis if an explicit formula for y could be produced. Such a formula, or at least an equation in x and y (involving no derivatives) that is deducible from the differential equation, is called a solution of the differential equation. The process of deducing a solution from the equation by the applications of algebra and calculus is called solving or integrating the equation.

It should be noted, however, that the differential equations that can be explicitly solved form but a small minority. Thus, most functions must be studied by indirect methods. Even its existence must be proved when there is no possibility of producing it for inspection. In practice, methods from numerical analysis, involving computers, are employed to obtain useful approximate solutions.

Problems in the theory of differential equations.

We now give exact definitions. An ordinary differential equation of order n in one unknown function y is a relation of the form

F(x, y, y′, y″, ···, y⁽ⁿ⁾) = 0     (17)

between the independent variable x and the quantities

y, y′, y″, ···, y⁽ⁿ⁾, that is, the unknown function y and its derivatives up to order n.

The order of a differential equation is the order of the highest derivative of the unknown function appearing in the differential equation. Thus the equation in example 1 is of the first order, and those in examples 2, 3, 4, 5, and 6 are of the second order.

A function ϕ(x) is called a solution of the differential equation (17) if substitution of ϕ(x) for y, ϕ′(x) for y′, ···, ϕ⁽ⁿ⁾(x) for y⁽ⁿ⁾ produces an identity.

Problems in physics and technology often lead to a system of ordinary differential equations with several unknown functions, all depending on the same argument and on their derivatives with respect to that argument.

For greater concreteness, the explanations that follow will deal chiefly with one ordinary differential equation of order not higher than the second and with one unknown function. With this example one may explain the essential properties of all ordinary differential equations and of systems of such equations in which the number of unknown functions is equal to the number of equations.

We have spoken earlier of the fact that, as a rule, every differential equation has not one but an infinite set of solutions. Let us illustrate this first of all by intuitive considerations based on the examples given in equations (2-6). In each of these, the corresponding differential equation is already fully defined by the physical arrangement of the system. But in each of these systems there can be many different motions. For example, it is perfectly clear that the pendulum described by equation (8) may oscillate with many different amplitudes. To each of these different oscillations of the pendulum there corresponds a different solution of equation (8), so that infinitely many such solutions must exist. It may be shown that equation (8) is satisfied by any function of the form

x = C₁ cos(√(g/l)·t) + C₂ sin(√(g/l)·t)     (18)

where C1, and C2, are arbitrary constants.

It is also physically clear that the motion of the pendulum will be completely determined only in case we are given, at some instant t0, the (initial) value x0 of x (the initial displacement of the material point from the equilibrium position) and the initial rate of motion:

x′₀ = (dx/dt)|t=t₀. These initial conditions determine the constants C₁ and C₂ in formula (18).
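For instance, taking t₀ = 0 in (18) as reconstructed above gives x(0) = C₁ = x₀, and differentiating, x′(0) = √(g/l)·C₂ = x′₀; so C₁ = x₀ and C₂ = x′₀ / √(g/l).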

In exactly the same way, the differential equations we have found in other examples will have infinitely many solutions.

In general, it can be proved, under very broad assumptions concerning the given differential equation (17) of order n in one unknown function that it has infinitely many solutions. More precisely: If for some “initial value” of the argument, we assign an “initial value” to the unknown function and to all of its derivatives through order n – 1, then one can find a solution of equation (17) which takes on these preassigned initial values. It may also be shown that such initial conditions completely determine the solution, so that there exists only one solution satisfying the initial conditions given earlier. We will discuss this question later in more detail. For our present aims, it is essential to note that the initial values of the function and the first n – 1 derivatives may be given arbitrarily. We have the right to make any choice of n values which define an “initial state” for the desired solution.

If we wish to construct a formula that will if possible include all solutions of a differential equation of order n, then such a formula must contain n independent arbitrary constants, which will allow us to impose n initial conditions. Such solutions of a differential equation of order n, containing n independent arbitrary constants, are usually called general solutions of the equation. For example, a general solution of (8) is given by formula (18), containing two arbitrary constants; a general solution of equation (3) is given by formula (5).

We will now try to formulate in very general outline the problems confronting the theory of differential equations. These are many and varied, and we will indicate only the most important ones.

If the differential equation is given together with its initial conditions, then its solution is completely determined. The construction of formulas giving the solution in explicit form is one of the first problems of the theory. Such formulas may be constructed only in simple cases, but if they are found, they are of great help in the computation and investigation of the solution.

The theory should provide a way to obtain some notion of the behavior of a solution: whether it is monotonic or oscillatory, whether it is periodic or approaches a periodic function, and so forth.

Suppose we change the initial values for the unknown function and its derivatives; that is, we change the initial state of the physical system. Then we will also change the solution, since the whole physical process will now run differently. The theory should provide the possibility of judging what this change will be. In particular, for small changes in the initial values will the solution also change by a small amount and will it therefore be stable in this respect, or may it be that small changes in the initial conditions will give rise to large changes in the solution, so that the latter will be unstable?

We must also be able to set up a qualitative, and where possible, quantitative picture of the behavior not only of the separate solutions of an equation, but also of all of the solutions taken together.

In machine construction there often arises the question of making a choice of parameters characterizing an apparatus or machine that will guarantee satisfactory operation. The parameters of an apparatus appear in the form of certain magnitudes in the corresponding differential equation. The theory must help us make clear what will happen to the solutions of the equation (to the working of the apparatus) if we change the differential equation (change the parameters of the apparatus).

Finally, when it is necessary to carry out a computation, we will need to find the solution of an equation numerically, and here the theory will be obliged to provide the engineer and the physicist with the most rapid and economical methods for calculating the solutions.

Partial differential equations

In mathematics, an equation relating a function of several variables to its partial derivatives. A partial derivative of a function of several variables expresses how fast the function changes when one of its variables is changed, the others being held constant (compare ordinary differential equation). The partial derivative of a function is again a function, and, if f(x, y) denotes the original function of the variables x and y, the partial derivative with respect to x—i.e., when only x is allowed to vary—is typically written as fx(x, y) or ∂f/∂x. The operation of finding a partial derivative can be applied to a function that is itself a partial derivative of another function to get what is called a second-order partial derivative. For example, taking the partial derivative of fx(x, y) with respect to y produces a new function fxy(x, y), or ∂²f/∂y∂x. The order and degree of partial differential equations are defined the same as for ordinary differential equations.

In general, partial differential equations are difficult to solve, but techniques have been developed for simpler classes of equations called linear, and for classes known loosely as “almost” linear, in which all derivatives of an order higher than one occur to the first power and their coefficients involve only the independent variables.

 

Functions of Several Variables. Geometrical view.

Up to now we have spoken only of functions of one variable, but in practice it is often necessary to deal also with functions depending on two, three, or in general many variables. For example, the area of a rectangle is a function S = xy of its base x and its height y. The volume of a rectangular parallelepiped is a function V = xyz of its three dimensions. The distance between two points A and B is a function:

r = √[(x₂ − x₁)² + (y₂ − y₁)² + (z₂ − z₁)²]

of the six coordinates of these points. The well-known formula:  pv = nRT expresses the dependence of the volume v of a definite amount of gas on the pressure p and absolute temperature T.
Functions of several variables, like functions of one variable, are in many cases defined only on a certain region of values of the variables themselves. For example, the function

U = ln(1 − x² − y² − z²)     (34) is defined only for values of x, y and z that satisfy the condition x² + y² + z² < 1     (35)

(For other x, y, z its values are not real numbers.) The set of points of space whose coordinates satisfy the inequality (35) obviously fills up a sphere of unit radius with its center at the origin of coordinates. The points on the boundary are not included in this sphere; the surface of the sphere has been so to speak “peeled off.” Such a sphere is said to be open. The function (34) is defined only for such sets of three numbers (x, y, z) as are coordinates of points in the open sphere G. It is customary to state this fact concisely by saying that the function (34) is defined on the sphere G.
Let us give another example. The temperature of a nonuniformly heated body V is a function of the coordinates x, y, z of the points of the body. This function is not defined for all sets of three numbers x, y, z but only for such sets as are coordinates of points of the body V.
Finally, as a third example, let us consider a function u built from a function ϕ of one variable defined on the interval [0, 1], say u = ϕ(x) + ϕ(y) + ϕ(z). Obviously the function u is defined only for sets of three numbers (x, y, z) which are coordinates of points in the cube:  0 ≤ x ≤ 1, 0 ≤ y ≤ 1, 0 ≤ z ≤ 1.
We now give a formal definition of a function of three variables. Suppose that we are given a set E of triples of numbers (x, y, z) (points of space). If to each of these triples of numbers (points) of E there corresponds a definite number u in accordance with some law, then u is said to be a function of x, y, z (of the point), defined on the set of triples of numbers (on the points) E, a fact which is written thus: u= F(x,y,z)

In place of F we may also write other letters: f, ϕ, ψ.
In practice the set E will usually be a set of points, filling out some geometrical body or surface: sphere, cube, annulus, and so forth, and then we simply say that the function is defined on this body or surface. Functions of two, four, and so forth, variables are defined analogously.

Implicit definition of a function.

Let us note that functions of two variables are a useful means for the definition of functions of one variable. Given a function F(s, t) of two variables, let us set up the equation: F(s, t) = 0     (36)

In general, this equation will define a certain set of points (s, t) of the surface on which our function is equal to zero. Such sets of points usually represent curves that may be considered as the graphs of one or several one-valued functions s = ϕ(t) or t = ψ(s) of one variable. In such a case these one-valued functions are said to be defined implicitly by the equation (36). For example, the equation:

s²+t²=-r²=0   gives an implicit definition of two functions of one variable:

s=+√r²-t² and s= – √r²-t²

But it is necessary to keep in mind that an equation of the form (36) may fail to define any function at all. For example, the equation: t²+s²+1=0  obviously does not define any real function, since no pair of real numbers satisfies it.
Geometric representation. Functions of two variables may always be visualized as surfaces by means of a system of space coordinates. Thus the function z = ƒ(s,t)
is represented in a three-dimensional rectangular coordinate system by a surface, which is the geometric locus of points M whose coordinates s, t, z satisfy the equation z = ƒ(s, t).

There is another, extremely useful method of representing the function (37), which has found wide application in practice. Let us choose a sequence of numbers z₁, z₂, ···, and then draw on one and the same plane Ost the curves: ƒ(s,t) = z₁; ƒ(s,t) = z₂; ···

which are the so-called level lines of the function f(s, t). From a set of level lines, if they correspond to values of z that are sufficiently close to one another, it is possible to form a very good image of the variation of the function f(s,t), just as from the level lines of a topographical map one may judge the variation in altitude of the locality.

Figure shows a map of the level lines of the function z = s² + t², the diagram at the right indicating how the function is built up from its level lines. In Chapter III, figure 50, a similar map is drawn for the level lines of the function z = st.
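A computational sketch of such a map (assuming numpy and matplotlib are available; the chosen level values are ours) for z = s² + t²:

```python
import numpy as np
import matplotlib.pyplot as plt

s, t = np.meshgrid(np.linspace(-2, 2, 400), np.linspace(-2, 2, 400))
z = s**2 + t**2                            # the function whose level lines we map

levels = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]    # closely spaced z-values
cs = plt.contour(s, t, z, levels=levels)
plt.clabel(cs)                             # label each level line with its z-value
plt.gca().set_aspect("equal")
plt.title("Level lines of z = s^2 + t^2: circles of radius sqrt(z)")
plt.show()
```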

Partial derivatives and differential.

Let us make some remarks about the differentiation of functions of several variables. As an example we take an arbitrary function of two variables: z = ƒ(x, y)

If we fix the value of y, that is if we consider it as not varying, then our function of two variables becomes a function of the one variable x. The derivative of this function with respect to x, if it exists, is called the partial derivative with respect to x and is denoted thus: ∂z/∂x or ∂ƒ/∂x or ƒ′ₓ(x, y)

The last of these three notations indicates clearly that the partial derivative with respect to x is in general a function of x and y. The partial derivative with respect to y is defined similarly.
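Numerically the definition translates directly into differencing in one variable while the other is held fixed; a minimal sketch (the sample function, the point (2, 3) and the step h are our choices):

```python
def f(x, y):
    return x**2 * y + y**3          # any smooth function of two variables

def partial_x(f, x, y, h=1e-6):
    """Approximate df/dx: difference in x while y is held fixed."""
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def partial_y(f, x, y, h=1e-6):
    """Approximate df/dy: difference in y while x is held fixed."""
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

# exact values at (2, 3): df/dx = 2xy = 12, df/dy = x^2 + 3y^2 = 31
print(partial_x(f, 2.0, 3.0), partial_y(f, 2.0, 3.0))
```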

The general case for space change through any volume.

When we generalise the case to any combination of space or time dimensions, the same method can be used to obtain the ginormous quantity of possible changes in multiple Dimensional analysis.

Thus, in order to determine the function that represents a given physical process, we try first of all to set up an equation that connects this function in some definite way with its derivatives of change of various orders and dimensions.

The method of obtaining such an equation, which is called a differential equation, often amounts to replacing increments of the desired functions by their corresponding differentials.
As an example let us solve a classic problem of change in 3 pure dimensions of Euclidean space, which by convention we shall call Sxyz.

In a rectangular system of coordinates Oxyz, we consider the surface obtained by rotation of the parabola whose equation (in the Oyz plane) is z = y² (this surface is called a paraboloid of revolution). Let v denote the volume of the body bounded by the paraboloid and the plane parallel to the Oxy plane at a distance z from it. It is evident that v is a function of z (z > 0).

To determine the function v, we attempt to find its differential dv. The increment Δv of the function v at the point z is equal to the volume bounded by the paraboloid and by two planes parallel to the Oxy plane at distances z and z + Δz from it.
It is easy to see that the magnitude of Δv is greater than the volume of the circular cylinder of radius √z and height Δz but less than that of the circular cylinder with radius √(z + Δz) and height Δz. Thus:

πz Δz < Δv ≤ π (z + Δz) Δz.        And so:

Δv = πz Δz + θπ (Δz)²

where θ is some number depending on Δz and satisfying the inequality 0 < θ < 1.
So we have succeeded in representing the increment Δv in the form of a sum, the first summand of which is proportional to Δz, while the second is an infinitesimal of higher order than Δz (as Δz → 0). It follows that the first summand is the differential of the function v:

dv=πz ∆z    or dv=πz dz

since Δz = dz for the independent variable z.  The equation so obtained relates the differentials dv and dz (of the variables v and z) to each other and thus is called a differential equation.  If we take into account that:

dv/dz =v’     where v′ is the derivative of v with respect to the variable z, our differential equation may also be written in the form: v’=π z

To solve this very simple differential equation we must find a function of z whose derivative is equal to πz.

A solution of our equation is given by v = πz²/2 + C, where for C we may choose an arbitrary number. In our case the volume of the body is obviously zero for z = 0 (see figure 22), so that C = 0. Thus our function is given by v = πz²/2.
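A quick numerical cross-check of this result (a sketch; the height z and the number of disks n are arbitrary choices) sums thin cylindrical slices, each of radius √u and thickness du:

```python
import math

def volume_by_disks(z, n=100_000):
    """Sum thin disks: the slice at height u has radius sqrt(u), area pi*u."""
    du = z / n
    return sum(math.pi * (k * du) * du for k in range(n))

z = 2.0
print(volume_by_disks(z))    # ~6.2831
print(math.pi * z**2 / 2)    # exact: v = pi z^2 / 2
```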

Geometrically the function f(x, y) represents a surface in a rectangular three-dimensional system of coordinates. The corresponding function of x for fixed y represents a plane curve (figure) obtained from the intersection of the surface with a plane parallel to the plane Oxz and at a distance y from it. The partial derivative ∂z/∂x is obviously equal to the trigonometric tangent of the angle between the tangent to the curve at the point (x, y) and the positive direction of the x-axis.

More generally, if we consider a function z = f(x₁, x₂, …, xₙ) of the n variables x₁, x₂, …, xₙ, the partial derivative ∂z/∂xᵢ is defined as the derivative of this function with respect to xᵢ, calculated for fixed values of the other variables.

We may say that the partial derivative of a function with respect to the variable xᵢ is the rate of change of this function in the direction of the change in xᵢ. It would also be possible to define a derivative in an arbitrary assigned direction, not necessarily coinciding with any of the coordinate axes, but we will not take the time to do this.

It is sometimes necessary to form the partial derivatives of these partial derivatives, that is, the so-called partial derivatives of second order. For functions of two variables there are four of them:

∂²z/∂x², ∂²z/∂x∂y, ∂²z/∂y∂x, ∂²z/∂y²

However, if these derivatives are continuous, then it is not hard to prove that the second and third of these four (the so-called mixed derivatives) coincide:

∂²z/∂x∂y = ∂²z/∂y∂x

For example, in the case of the functions considered earlier, the two mixed derivatives are seen to coincide.
For functions of several variables, just as was done for functions of one variable, we may introduce the concept of a differential.
For definiteness let us consider a function:

z = ƒ(x, y) of two variables. If it has continuous partial derivatives, we can prove that its increment corresponding to the increments Δx and Δy of its arguments may be put in the form:

Δz = (∂ƒ/∂x) Δx + (∂ƒ/∂y) Δy + αρ

where ∂f/∂x and ∂f/∂y are the partial derivatives of the function at the point (x, y) and the magnitude α depends on Δx and Δy in such a way that α → 0 as Δx → 0 and Δy → 0.
The sum of the first two summands:

dz = (∂ƒ/∂x) Δx + (∂ƒ/∂y) Δy

is linearly dependent on Δx and Δy and is called the differential of the function. The third summand, because of the presence of the factor α, tending to zero with Δx and Δy, is an infinitesimal of higher order than the magnitude ρ = √((Δx)² + (Δy)²) describing the change in x and y.

Let us give an application of the concept of differential. The period of oscillation of a pendulum is calculated from the formula:

T = 2π √(l/g)

where l is its length and g is the acceleration of gravity. Let us suppose that l and g are known with errors respectively equal to Δl and Δg. Then the error in the calculation of T will be equal to the increment ΔT corresponding to the increments of the arguments Δl and Δg. Replacing ΔT approximately by dT, we will have:

dT = (π/√(lg)) Δl − (π√l/g√g) Δg = (T/2)(Δl/l − Δg/g)

The signs of Δl and Δg are unknown, but we may obviously estimate ΔT by the inequality:

|ΔT| ≤ (T/2)(|Δl|/l + |Δg|/g)

Thus we may consider in practice that the relative error for T is equal to half the sum of the relative errors for l and g.
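In code (a sketch; the measured values and error bounds below are invented purely for illustration):

```python
import math

l, g = 1.000, 9.81        # measured length (m) and gravity (m/s^2)
dl, dg = 0.005, 0.02      # assumed measurement errors

T = 2 * math.pi * math.sqrt(l / g)

# differential dT = (dT/dl)*dl + (dT/dg)*dg, with both errors taken adverse
dT = math.pi / math.sqrt(l * g) * dl + math.pi * math.sqrt(l) / g**1.5 * dg

rel_T = dT / T
rel_half_sum = 0.5 * (dl / l + dg / g)   # half the sum of the relative errors
print(T, dT, rel_T, rel_half_sum)        # rel_T equals rel_half_sum
```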
For symmetry of notation, the increments of the independent variables Δx and Δy are usually denoted by the symbols dx and dy and are also called differentials. With this notation the differential of the function u = f(x, y, z) may be written thus:

du = (∂ƒ/∂x) dx + (∂ƒ/∂y) dy + (∂ƒ/∂z) dz

Partial derivatives play a large role whenever we have to do with functions of several variables, as happens in many of the applications of analysis to technology and physics.

 

THE GEOMETRIC VIEW: Multiple Integrals.

Integrals in topology.

As the Universe is a kaleidoscopic mirror of symmetries between all its elements, this dominance of analysis on ∆-scaling must also add the use of analysis on a single plane, in fact the most common, whereas the essential consideration is the ∆§ocial decametric and e-π ternary scaling, with minimal distortion (which happens in the Lorentzian limits between scales).

This key distinction of GST (∆§ well-behaved scaling versus ∆±i distorted emergence and dissolution, which does change the form of the system) has special relevance in analysis, since for a very long time the 'continuity' of the function studied, free of such distortions, was required; so analysis was restricted to ∆(±1 – 0) intervals and 'broke' when jumping two scales, as in processes of entropy (death-feeding). But with improved approximation techniques, functionals and operators (which assume a whole scale of ∞ parts as a function of functions in the operator of the larger scale) and renormalisation in double and triple integrals and derivatives, by praxis, without understanding the scalar theory behind it, this hurdle is today largely overcome…

And it has always amused me that humans can get so far in all disciplines by trial and error, when a 'little bit of thought on first principles' could make things much easier. It seems though that thought-beings are scarce in our species and highly disregarded, as the site'§ight§how (allow me, a bit of cacophony and repetition, the trademark of this blog and the Universe 🙂 As usual I shall also repeat, I welcome comments and offers of serious help from specialists and Universities, since nothing would make me happier than unloading tons of now-confusing analysis, before I get another health crisis and all goes to waste in the eternal entropic arrow of two derivatives, aka death.

 

Existence and Uniqueness of the Solution of a Differential Equation; Approximate Solution of Equations

The question of existence and uniqueness of the solution of a differential equation. We return to the differential equation (17) of arbitrary order n. Generally, it has infinitely many solutions and in order that we may pick from all the possible solutions some one specific one, it is necessary to attach to the equation some supplementary conditions, the number of which should be equal to the order n of the equation. Such conditions may be of extremely varied character, depending on the physical, mechanical, or other significance of the original problem.

For example, if we have to investigate the motion of a mechanical system beginning with some specific initial state, the supplementary conditions will refer to a specific (initial) value of the independent variable and will be called initial conditions of the problem. But if we want to define the curve of a cable in a suspension bridge, or of a loaded beam resting on supports at each end, we encounter conditions corresponding to different values of the independent variable, at the ends of the cable or at the points of support of the beam. We could give many other examples showing the variety of conditions to be fulfilled in connection with differential equations.
We will assume that the supplementary conditions have been defined and that we are required to find a solution of equation:

that satisfies them.

 

The first question we must consider is whether any such solution exists at all. It often happens that we cannot be sure of this in advance. Assume, say, that equation (17) is a description of the operation of some physical apparatus and suppose we want to determine whether periodic motion occurs in this apparatus. The supplementary conditions will then be conditions for the periodic repetition of the initial state in the apparatus, and we cannot say ahead of time whether or not there will exist a solution which satisfies them.
In any case the investigation of problems of existence and uniqueness of a solution makes clear just which conditions can be fulfilled for a given differential equation and which of these conditions will define the solution in a unique manner.

But the determination of such conditions and the proof of existence and uniqueness of the solution for a differential equation corresponding to some physical problem also has great value for the physical theory itself. It shows that the assumptions adopted in setting up the mathematical description of the physical event are on the one hand mutually consistent and on the other constitute a complete description of the event.
The methods of investigating the existence problem are manifold, but among them an especially important role is played by what are called direct methods. The proof of the existence of the required solution is provided by the construction of approximate solutions, which are proved to converge to the exact solution of the problem. These methods not only establish the existence of an exact solution, but also provide a way, in fact the principal one, of approximating it to any desired degree of accuracy.
For the rest of this section we will consider, for the sake of definiteness, a problem with initial data, for which we will illustrate the ideas of Euler’s method and the method of successive approximations.

Euler’s method of broken lines.

Consider in some domain G of the (x, y) plane the differential equation: dy/dx = ƒ (x,y)

As we have already noted, equation (34) defines in G a field of tangents. We choose any point (x0, y0) of G. Through it there will pass a straight line L0 with slope f(x0, y0). On the straight line L0 we choose a point (x1, y1), sufficiently close to (x0, y0); in figure 9 this point is indicated by the number 1.

We draw the straight line L1, through the point (x1, y1) with slope f(x1, y1) and on it mark the point (x2, y2); in the figure this point is denoted by the number 2. Then on the straight line L2, corresponding to the point (x2, y2) we mark the point (x3, y3), and continue in the same manner with x₀ < x₁ < x₂ < x₃ < ···. It is assumed, of course, that all the points (x0, y0), (x1, y1), (x2, y2), · · · are in the domain G. The broken line joining these points is called an Euler broken line.

One may also construct an Euler broken line in the direction of decreasing x; the corresponding vertices on our figure are denoted by –1, –2, –3.

It is reasonable to expect that every Euler broken line through the point (x0, y0) with sufficiently short segments gives a representation of an integral curve l passing through the point (x0, y0), and that with decrease in the length of the links, i.e., when the length of the longest link tends to zero, the Euler broken line will approximate this integral curve.

Here, of course, it is assumed that the integral curve exists. In fact it is not hard to prove that if the function f(x, y) is continuous in the domain G, one may find an infinite sequence of Euler broken lines, the length of the largest links tending to zero, which converges to an integral curve l. However, one usually cannot prove uniqueness: there may exist different sequences of Euler broken lines that converge to different integral curves passing through one and the same point (x0, y0). M. A. Lavrent’ev has constructed an example of a differential equation of the form (29) with a continuous function, f(x, y), such that in any neighborhood of any point P of the domain G there passes not one but at least two integral curves.

In order that through every point of the domain G there pass only one integral curve, it is necessary to impose on the function f(x, y) certain conditions beyond that of continuity. It is sufficient, for example, to assume that the function f(x, y) is continuous and has a bounded derivative with respect to y on the whole domain G. In this case it may be proved that through each point of G there passes one and only one integral curve and that every sequence of Euler broken lines passing through the point (x0, y0) converges uniformly to this unique integral curve, as the length of the longest link of the broken lines tends to zero. Thus for sufficiently small links the Euler broken line may be taken as an approximation to the integral curve of equation (34).
From the preceding it can be seen that the Euler broken lines are so constituted that small pieces of the integral curves are replaced by line segments tangent to these integral curves. In practice, many approximations to integral curves of the differential equation (34) consist not of straight-line segments tangent to the integral curves, but of parabolic segments that have a higher order of tangency with the integral curve. In this way it is possible to find an approximate solution with the same degree of accuracy in a smaller number of steps (with a smaller number of links in the approximating curve).
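The broken-line construction is a few lines of code; a minimal sketch (we test it on dy/dx = y, whose integral curve through (0, 1) is y = eˣ):

```python
import math

def euler_broken_line(f, x0, y0, x_end, n):
    """Follow the field of tangents: each link has slope f(x_k, y_k)."""
    h = (x_end - x0) / n
    xs, ys = [x0], [y0]
    for _ in range(n):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return xs, ys

for n in (10, 100, 1000):
    xs, ys = euler_broken_line(lambda x, y: y, 0.0, 1.0, 1.0, n)
    print(n, ys[-1], math.e)   # shorter links bring us closer to e = 2.71828...
```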

The method of successive approximations.

We now describe another method of successive approximation, which is as widely used as the method of the Euler broken lines. We assume again that we are required to find a solution y(x) of the differential equation (34) satisfying the initial condition:  y (xo) = yo

For the initial approximation to the function y(x), we take an arbitrary function y₀(x). For simplicity we will assume that it also satisfies the initial condition, although this is not necessary. We substitute it into the right side f(x, y) of the equation for the unknown function y and construct a first approximation y₁ to the solution y from the following requirements:

dy₁/dx = ƒ(x, y₀(x)),   y₁(x₀) = y₀

Since there is a known function on the right side of the first of these equations, the function y₁(x) may be found by integration:

y₁(x) = y₀ + ∫ ƒ(t, y₀(t)) dt, the integral taken from x₀ to x

It may be expected that y₁(x) will differ from the solution y(x) by less than y₀(x) does, since in the construction of y₁(x) we made use of the differential equation itself, which should probably introduce a correction into the original approximation. One would also think that if we improve the first approximation y₁(x) in the same way, then the second approximation:

y₂(x) = y₀ + ∫ ƒ(t, y₁(t)) dt, again taken from x₀ to x,
will be still closer to the desired solution.
Let us assume that this process of improvement has been continued indefinitely and that we have constructed the sequence of approximations: y₀(x), y₁(x), …, yₙ(x), …
Will this sequence converge to the solution y(x)?
More detailed investigations show that if f(x, y) is continuous and ƒ’y is bounded in the domain G, the functions yn(x) will in fact converge to the exact solution y(x) at least for all x sufficiently close to x0 and that if we break off the computation after a sufficient number of steps, we will be able to find the solution y(x) to any desired degree of accuracy.
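The successive approximations can be carried out numerically on a grid; a sketch (trapezoidal quadrature replaces the exact integration, and dy/dx = y is again our test equation):

```python
import numpy as np

def picard(f, x0, y0, x_grid, iterations):
    """Successive approximations: y_{n+1}(x) = y0 + integral of f(t, y_n(t))."""
    y = np.full_like(x_grid, y0)           # initial approximation y_0(x) = y0
    for _ in range(iterations):
        integrand = f(x_grid, y)
        # cumulative trapezoid rule from x0 to each grid point
        increments = np.concatenate(([0.0], np.cumsum(
            0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x_grid))))
        y = y0 + increments
    return y

x = np.linspace(0.0, 1.0, 201)
for k in (1, 2, 5, 10):
    y = picard(lambda t, u: u, 0.0, 1.0, x, k)
    print(k, y[-1], np.e)   # converges to the exact solution y = e^x
```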
Exactly in the same way as for the integral curves of equation (34), we may also find approximations to integral curves of a system of two or more differential equations of the first order. Essentially the necessary condition here is to be able to solve these equations for the derivatives of the unknown functions. For example, suppose we are given the system:

dy/dx = ƒ₁(x, y, z),   dz/dx = ƒ₂(x, y, z)

Assuming that the right sides of these equations are continuous and have bounded derivatives with respect to y and z in some domain G in space, it may be shown under these conditions that through each point (x₀, y₀, z₀) of the domain G, in which the right sides of the equations in (37) are defined, there passes one and only one integral curve:

y = Φ (x), z = ψ (x)  of the system (37). The functions f1(x, y, z) and f2(x, y, z) give the direction numbers at the point (x, y, z), of the tangent to the integral curve passing through this point. To find the functions ϕ(x) and ψ(x) approximately, we may apply the Euler broken line method or other methods similar to the ones applied to the equation (34).
The process of approximate computation of the solution of ordinary differential equations with initial conditions may be carried out on computing machines. There are electronic machines that work so rapidly that if, for example, the machine is programmed to compute the trajectory of a projectile, this trajectory can be found in a shorter time than it takes for the projectile to hit its target (cf. Chapter XIV).
The connection between differential equations of various orders and a system of a large number of equations of first order. A system of ordinary differential equations, when solved for the derivative of highest order of each of the unknown functions, may in general be reduced, by the introduction of new unknown functions, to a system of equations of the first order, which is solved for all the derivatives. For example, consider the differential equation (38): d²y/dx² = ƒ(x, y, dy/dx). We set dy/dx = z (39). Then equation (38) may be written in the form (40): dz/dx = ƒ(x, y, z)

Hence, to every solution of equation (38) there corresponds a solution of the system consisting of equations (39) and (40). It is easy to show that to every solution of the system of equations (39) and (40) there corresponds a solution of equation (38).
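In modern numerical practice this reduction is exactly how higher-order equations are fed to standard solvers; a sketch using scipy (the example d²y/dx² = −y, with exact solution cos x, is our choice):

```python
import numpy as np
from scipy.integrate import solve_ivp

# d2y/dx2 = f(x, y, dy/dx) rewritten as the system dy/dx = z, dz/dx = f(x, y, z).
def rhs(x, state):
    y, z = state
    return [z, -y]            # here f(x, y, z) = -y

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], dense_output=True)
xs = np.linspace(0.0, 10.0, 5)
print(sol.sol(xs)[0])         # first row: y(x), close to cos(x)
print(np.cos(xs))
```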
Equations not explicitly containing the independent variable. The problems of the pendulum, of the Helmholtz acoustic resonator, of a simple electric circuit, or of an electron-tube generator considered in §1 lead to differential equations in which the independent variable (time) does not explicitly appear. We mention equations of this type here, because the corresponding differential equations of the second order may be reduced in each case to a single differential equation of the first order rather than to a system of first-order equations as in the paragraph above for the general equation of the second order. This reduction greatly simplifies their study.
Let us then consider a differential equation of the second order, not containing the argument t in explicit form: F(x, dx/dt, d²x/dt²) = 0. We set dx/dt = y and consider y as a function of x, so that:

d²x/dt² = dy/dt = (dy/dx)(dx/dt) = y dy/dx

Then equation (41) may be rewritten in the form: F(x, y, y dy/dx) = 0

In this manner, to every solution of equation (41) there corresponds a unique solution of equation (43). Also to each of the solutions y = ϕ(x) of equation (43) there correspond infinitely many solutions of equation (41). These solutions may be found by integrating the equation dx/dt = ϕ(x), where x is considered as a function of t.
It is clear that if this equation is satisfied by a function x = x(t), then it will also be satisfied by any function of the form x(t + t0), where t0 is an arbitrary constant.
It may happen that not every integral curve of equation (43) is the graph of a single function of x. This will happen, for example, if the curve is closed. In this case the integral curve of equation (43) must be split up into a number of pieces, each of which is the graph of a function of x. For every one of these pieces, we have to find an integral of equation (44).
The values of x and dx/dt which at each instant characterize the state of the physical system corresponding to equation (41) are called the phases of the system, and the (x, y) plane is correspondingly called the phase plane for equation (41). To every solution x = x(t) of this equation there corresponds the curve: x = x(t), y = x′(t)
in the (x, y) plane; t here is considered as a parameter. Conversely, to every integral curve y = ϕ(x) of equation (43) in the (x, y) plane there corresponds an infinite set of solutions of the form x = x(t + t0) for equation (41); here t0 is an arbitrary constant. Information about the behavior of the integral curves of equation (43) in the plane is easily transformed into information about the character of the possible solutions of equation (41). Every closed integral curve of equation (43) corresponds, for example, to a periodic solution of equation (41).
If we subject equation (6) to the transformation (42), we obtain: dy/dx = −(ay + bx)/my.
Setting ν = x and dv/dt = y in equation (16), in like manner we get:

Just as the state at every instant of the physical system corresponding to the second-order equation (41) is characterized by the two magnitudes (phases) x and y = dx/dt, the state of a physical system described by equations of higher order or by a system of differential equations is characterized by a larger number of magnitudes (phases). Instead of a phase plane, we then speak of a phase space.

DUALITIES: The behavior of integral curves in the large DOMAIN self-centred in the small singularity.

We now consider the behavior of the integral curves “in the large”; that is, in the entire domain of definition of the given system of differential equations, without attempting to preserve the scale. We will consider a space in which this system defines a field of directions as the phase space of some physical process. Then the general scheme of the integral curves, corresponding to the system of differential equations, will give us an idea of the character of all processes (motions) which can possibly occur in this system:


In figures we have constructed approximate schematized representations of the behavior of the integral curves in the neighborhood of an isolated singular point.

Why do those matter? Obviously because singularities @ matter. We can divide those curves into canonical representatives of extensive families that exhaust the 3 possibilities:

∑=∏: 3D communication. What first calls attention is the symmetry of the upper fig. 12, when the singularity merely acts as in a tetraktys configuration as the identity neutral element that communicates all the flows that touch the T.œ system, entering and leaving symmetrically the o-point (having hence a 0 line of symmetry diagonal to the point).

It is also noticeable that the paths are ‘fast’, as the points of those paths know they will not be changed by the identity element.

ð•: 1D predation. But in the case the 0-point acts as a predator that won’t let the point-prey go, the form is a spiralled, slow motion.

$: 2D flows. Finally, as usual, we have a ternary case in which the curves do NOT touch the singularity, and which curiously start with the points going straight, perpendicular to it; hence this case tends to apply to spatial points of vital energy with a certain 'discerning' view, which makes them feed first on the field established by the singularity and escape it when becoming aware of what lies ahead. The 2 last cases can be compared in vital terms (not trajectories) to the behaviour of smallish 'blind comets' spiralling into stars that will feed on them, as opposed to symbiotic planets that herd gravitational quanta together with the star but will NOT fall into the gravitational trap.

Mathematically the drawing of those curves, is one of the most fundamental problems in the theory of differential equations:  finding as simple a method as possible for constructing such a scheme for the behavior of the family of integral curves of a given system of differential equations in the entire domain of definition, in order to study the behavior of the integral curves of this system of differential equations “in the large.”

And since we exist in a bidimensional Universe, this problem remains  almost untouched for spaces of dimension higher than 2 (a recurrent fact of all mathematical mirrors from Fermat’s last theorem to the proof of almost all geometrical theorems in a plane).

But the problem is still very far from being solved for the single equation of the form dy/dx = M(x, y)/N(x, y), even when M(x, y) and N(x, y) are polynomials; which shows how so many times the whys of ∆st are truly synoptic and simple, even if the detailed paths of 1D motions, the obsession of one-dimensional humans, are ignored.

In fact the only case quite resolved is… yes, you guessed it, that in which the particle has no 'freedom of thought' so to speak and falls down the spiral path of entropic death and informative capture by the singularity.

THIS WILL again be a rule of ∆st, the simplest solutions are those related with death, dissolution, entropy and one-dimensional ‘fall’.
In what follows, we will assume that the functions M(x, y) and N(x, y) have continuous partial derivatives of the first order.
If all the points of a simply connected domain G, in which the right side of the differential equation is defined, are ordinary points, then the family of integral curves may be represented schematically as a family of segments of parallel straight lines; since in this case one integral curve will pass through each point, and no two integral curves can intersect. For an equation of more general form, which may have singular points, the structure of the integral curves may be much more complicated. The case in which the previous equation has an infinite set of singular points (i.e., points where the numerator and the denominator both vanish) may be excluded, at least when M(x, y) and N(x, y) are polynomials.

Thus we restrict our consideration to those cases in which the previous equation has a finite number of isolated singular points. The behavior of the integral curves that are near to one of these singular points forms the essential element  in setting up a schematized representation of the behavior of all the integral curves of the equation.

A very typical element in such a scheme for the behavior of all the integral curves of the previous equation is formed by the so-called limit cycles. Let us consider the equation (64): dρ/dϕ = ρ − 1, where ρ and ϕ are polar coordinates in the (x, y) plane.
The collection of all integral curves of the equation is given by the formula (65):

ρ = 1 + C e^ϕ

where C is an arbitrary constant, different for different integral curves. In order that ρ be nonnegative, it is necessary that ϕ have values no larger than −ln |C| when C < 0. The family of integral curves will consist of
1. the circle ρ = 1 (C = 0);
2. the spirals issuing from the origin, which approach this circle from the inside as ϕ → – ∞(C < 0);
3. the spirals, which approach the circle ρ = 1 from the outside as ϕ → – ∞ (C > 0)

The circle ρ = 1 is called a limit cycle for equation (64). In general a closed integral curve l is called a limit cycle if it can be enclosed in a disc, all points of which are ordinary for equation (64), and which is entirely filled by nonclosed integral curves.
From equation (65) it can be seen that all points of the circle are ordinary. This means that a small piece of a limit cycle is not different from a small piece of any other integral curve.
Every closed integral curve in the (x, y) plane gives a periodic solution [x(t), y(t)] of the system:

dx/dt = N(x, y),  dy/dt = M(x, y)  describing the law of change of some physical system. Those integral curves in the phase plane that as t → +∞ approximate a limit cycle are motions that as t → ∞ approximate periodic motions.
Let us suppose that for every point (x0, y0) sufficiently close to a limit cycle l, we have the following situation: If (x0, y0) is taken as initial point (i.e., for t = t0) for the solution of the system (67), then the corresponding integral curve traced out by the point [x(t), y(t)] as t → +∞ approximates the limit cycle l in the (x, y) plane. (This means that the motion in question is approximately periodic.) In this case the corresponding limit cycle is called stable. Oscillations that act in this way with respect to a limit cycle correspond physically to self-oscillations. In some self-oscillatory systems, there may exist several stable oscillatory processes with different amplitudes, one or another of which will be established by the initial conditions. In the phase plane for such “self-oscillatory systems,” there will exist corresponding limit cycles if the processes occurring in these systems are described by an equation of the form (67).
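Since equation (64) has the explicit solution family (65), the winding of the spirals onto the cycle ρ = 1 can be checked directly; a sketch (the values of C and ϕ are arbitrary choices):

```python
import math

# Equation (64), dρ/dϕ = ρ - 1, has the solution family (65): ρ = 1 + C e^ϕ.
# Following the spirals toward ϕ -> -∞ shows them winding onto the circle
# ρ = 1 from inside (C < 0) and from outside (C > 0).
for C in (-0.5, 0.5):
    for phi in (0.0, -2.0, -4.0, -8.0):
        rho = 1.0 + C * math.exp(phi)
        print(f"C = {C:+.1f}  phi = {phi:5.1f}  rho = {rho:.6f}")
# In both cases rho tends to 1: the circle rho = 1 is the limit cycle.
```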

The problem of finding, even if only approximately, the limit cycles of a given differential equation has not yet been satisfactorily solved. The most widely used method for solving this problem is the one suggested by Poincaré of constructing “cycles without contact.” It is based on the following theorem. We assume that on the (x, y) plane we can find two closed curves L1 and L2 (cycles) which have the following properties:

1. The curve L2 lies in the region enclosed by L1.

2. In the annulus Ω, between L1 and L2, there are no singular points of equation (64).
3. L1 and L2 have tangents everywhere, and the directions of these tangents are nowhere identical with the direction of the field of directions for the given equation (64).
4. For all points of L1 and L2 the cosine of the angle between the interior normals to the boundary of the domain Ω and the vector with components [N(x, y), M(x, y)] never changes sign.
Then between L1 and L2 there is at least one limit cycle of equation (64).
Poincaré called the curves L1 and L2 cycles without contact.
The proof of this theorem is based on the following rather obvious fact.

We assume that for decreasing t (or for increasing t) all the integral curves: x = x(t), y = y (t),  of equation (64) (or, what amounts to the same thing, of equations (67), where t is a parameter), which intersect L1 or L2 enter the annulus Ω between L1 and L2. Then they must necessarily tend to some closed curve l lying between L1 and L2, since none of the integral curves lying in the annulus can leave it, and there are no singular points there.

Singular Points.

Now when considering the singular points in relationship to the vital energy mapped out in its cyclical trajectories by those curves, we observe there are 3 cases, the absorption, the crossing and the isolated point, which in abstract math are studied as follows.

Let the point P(x, y) be in the interior of the domain G in which we consider the differential equation: dy/dx = M(x, y)/N(x, y).

If there exists a neighborhood R of the point P through each point of which passes one and only one integral curve (47), then the point P is called an ordinary point of equation (47). But if such a neighborhood does not exist, then the point P is called a singular point of this equation. The study of singular points is very important in the qualitative theory of differential equations, which we will consider in the next section.
Particularly important are the so-called isolated singular points, i.e., singular points in some neighborhood of each of which there are no other singular points. In applications one often encounters them in investigating equations of the form(47), where M(x, y) and N(x, y) are functions with continuous derivatives of high orders with respect to x and y. For such equations, all the interior points of the domain at which M(x, y) ≠ 0 or N(x, y) ≠ 0 are ordinary points.

Let us now consider any interior point (x₀, y₀) where M(x, y) = N(x, y) = 0. To simplify the notation we will assume that x₀ = 0 and y₀ = 0. This can always be arranged by translating the original origin of coordinates to the point (x₀, y₀). Expanding M(x, y) and N(x, y) by Taylor's formula into powers of x and y and restricting ourselves to terms of the first order, we have, in a neighborhood of the point (0, 0):

dy/dx = (cx + dy + ϕ₂(x, y)) / (ax + by + ϕ₁(x, y))

where ϕ₁ and ϕ₂ collect the terms of higher order. Equations (45) and (46) are of this form. Equation (45) does not define either dy/dx or dx/dy for x = 0 and y = 0. If the determinant ad − bc ≠ 0, then, whatever value we assign to dy/dx at the origin, the origin will be a point of discontinuity for the values dy/dx and dx/dy, since they tend to different limits depending on the manner of approach to the origin. The origin is a singular point for our differential equation.
It has been shown that the character of the behavior of the integral curves near an isolated singular point (here the origin) is not influenced by the behavior of the terms ϕ₁(x, y) and ϕ₂(x, y) in the numerator and denominator, provided only that the real part of both roots of the characteristic equation

λ² − (a + d)λ + (ad − bc) = 0

is different from zero. Thus, in order to form some idea of this behavior, we study the behavior near the origin of the integral curves of the equation (50): dy/dx = (cx + dy)/(ax + by). We note that the arrangement of the integral curves in the neighborhood of a singular point of a differential equation has great interest for many problems of mechanics, for example in the investigation of the trajectories of motions near the equilibrium position.
It has been shown that everywhere in the plane it is possible to choose coordinates ξ, η, connected with x, y by the equations:

ξ = k₁₁x + k₁₂y,   η = k₂₁x + k₂₂y     (51)

where the k_ij are real numbers such that equation (50) is transformed into one of the following three types:

dη/dξ = kη/ξ     (52)

dη/dξ = (ξ + η)/ξ     (53)

dη/dξ = (ξ + kη)/(kξ − η)     (54)

If these roots are real and different, then equation (50) is transformed into the form (52). If these roots are equal, then equation (50) is transformed either into the form (52) or into the form (53), depending on whether b² + c² = 0 or b² + c² ≠ 0. If the roots of equation (55) are complex, λ = α ± βi, then equation (50) is transformed into the form (54).
We will consider each of the equations (52), (53), (54). To begin with, we note the following.
Even though the axes Ox and Oy were mutually perpendicular, the axes Oξ and Oη need not, in general, be so. But to simplify the diagrams, we will assume they are perpendicular. Further, in the transformation (51) the scales on the Oξ and Oη axes may be changed; they may not be the same as the ones originally chosen on the axes Ox and Oy. But again, for the sake of simplicity, we assume that the scales are not changed. Thus, for example, in place of the concentric circles, as in figure 8, there could in general occur a family of similar and similarly placed ellipses with common center at the origin.
All integral curves of equation (52) are given, in parametric form, by:

ξ = a e^τ,   η = b e^{kτ}

where a and b are arbitrary constants.
The integral curves of equation (52) are graphed in figure 10; here we have assumed that k > 1. In this case all integral curves except one, the axis Oη, are tangent at the origin to the axis Oξ. The case 0 < k < 1 is the same as the case k > 1 with interchange of ξ and η, i.e., we have only to interchange the roles of the axes ξ and η. For k = 1, equation (52) becomes equation (30), whose integral curves were illustrated in figure 7.
An illustration of the integral curves of equation (52) for k < 0 is given in figure 11. In this case we have only two integral curves that pass through the point O: these are the axis Oξ and the axis Oη. All other integral  curves, after approaching the origin no closer than to some minimal distance, recede again from the origin. In this case we say that the point O is a saddle point because the integral curves are similar to the contours on a map representing the summit of a mountain pass (saddle).
All integral curves of equation (53) are given by the equation η = ξ (ln |ξ| + C), where C is an arbitrary constant, together with the axis Oη. These are illustrated schematically in figure 12; all of them are tangent to the axis Oη at the origin.
If every integral curve entering some neighborhood of the singular point O passes through this point and has a definite direction there, i.e., has a definite tangent at the origin, as is illustrated in figures 10 and 12, then we say that the point O is a node.
Equation (54) is most easily integrated if we change to polar coordinates ρ and ϕ, putting:

ξ = ρ cos ϕ,   η = ρ sin ϕ

which transforms it into dρ/dϕ = kρ, with the integral curves ρ = Ce^{kϕ} (56). If k > 0 then all the integral curves approach the point O, winding infinitely often around this point as ϕ → −∞ (figure 13). If k < 0,
then this happens for ϕ → +∞. In these cases, the point O is called a focus. If, however, k = 0, then the collection of integral curves of (56) consists of circles with center at the point O. Generally, if some neighborhood of the point O is completely filled by closed integral curves, surrounding the point O itself, then such a point is called a center.
A center may easily be transformed into a focus, if in the numerator and the denominator of the right side of equation (54) we add a term of arbitrarily high order; consequently, in this case the behavior of integral curves near a singular point is not given by terms of the first order.
Equation (55), corresponding to equation (45), is identical with the characteristic equation (19). Thus figures 10 and 12 schematically represent the behavior in the phase plane (x, y) of the curves:

x = x(t), y = x′(t)   corresponding to the solutions of equation (6) for real λ₁ and λ₂ of the same sign; figure 11 corresponds to real λ₁ and λ₂ of opposite signs, and figures 13 and 8 (the case of a center) correspond to complex λ₁ and λ₂. If the real parts of λ₁ and λ₂ are negative, then the point (x(t), y(t)) approaches 0 for t → +∞; in this case the point x = 0, y = 0 corresponds to stable equilibrium. If, however, the real part of either of the numbers λ₁ and λ₂ is positive, then at the point x = 0, y = 0, there is no stable equilibrium.

There are not many differential equations with the property that all their solutions can be expressed explicitly in terms of simple functions, as is the case for linear equations with constant coefficients. It is possible to give simple examples of differential equations whose general solution cannot be expressed by a finite number of integral of known functions, or as one says, in quadratures.

For example, the general solution of an equation of the form dy/dx + ay² = x², for a > 0, cannot be expressed as a finite combination of integrals of elementary functions.

So it becomes important to develop methods of approximation to the solutions of differential equations, which will be applicable to wide classes of equations.
The fact that in such cases we find not exact solutions but only approximations should not bother us. First of all, these approximate solutions may be calculated, at least in principle, to any desired degree of accuracy. Second, it must be emphasized that in most cases the differential equations describing a physical process are themselves not altogether exact, as can be seen in all the examples discussed in §1.
An especially good example is provided by the equation (12) for the acoustic resonator. In deriving this equation, we ignored the compressibility of the air in the neck of the container and the motion of the air in the container itself. As a matter of fact, the motion of the air in the neck sets into motion the mass of the air in the vessel, but these two motions have different velocities and displacements. In the neck the displacement of the particles of air is considerably greater than in the container. Thus we ignored the motion of the air in the container, and took account only of its compression. For the air in the neck, however, we ignored the energy of its compression and took account only of the kinetic energy of its motion.
To derive the differential equation for a physical pendulum, we ignored the mass of the string on which it hangs. To derive equation (14) for electric oscillations in a circuit, we ignored the self-inductance of the wiring and the resistance of the coils. In general, to obtain a differential equation for any physical process, we must always ignore certain factors and idealize others.

For physical investigations we are especially interested in those differential equations whose solutions do not change much for arbitrary small changes, in some sense or another, in the equations themselves. Such differential equations are called “insensitive.” These equations deserve particularly complete study.
It should be stated that in physical investigations not only  are the differential equations that describe the laws of change of the physical quantities themselves inexactly defined but even the number of these quantities is defined only approximately. Strictly speaking, there are no such things as rigid bodies. So to study the oscillations of a pendulum, we ought to take into account the deformation of the string from which it hangs and the deformation of the rigid body itself, which we approximated by taking it as a material point. In exactly the same way, to study the oscillations of a load attached to springs, we ought to consider the masses of the separate coils of the springs.

But in these examples it is easy to show that the character of the motion of the different particles, which make up the pendulum and its load together with the springs, has little influence on the character of the oscillation. If we wished to take this influence into account, the problem would become so complicated that we would be unable to solve it to any suitable approximation. Our solution would then bear no closer relation to physical reality than the solution given in §1 without consideration of these influences. Intelligent idealisation of  a problem is always unavoidable.

To describe a process, it is necessary to take into account the essential features of the process but by no means to consider every feature without exception. This would not only complicate the problem a great deal but in most cases would result in the impossibility of calculating a solution.

The fundamental problem of physics or mechanics, in the investigation of any phenomenon, is to find the smallest number of quantities, which with sufficient exactness describe the state of the phenomenon at any given moment, and then to set up the simplest differential equations that are good descriptions of the laws governing the changes in these quantities. This problem is often very difficult. Which features are the essential ones and which are non-essential is a question that in the final analysis can be decided only by long experience. Only by comparing the answers provided by an idealized argument with the results of experiment can we judge whether the idealization was a valid one.

The mathematical problem of the possibility of decreasing the number of quantities may be formulated in one of the simplest and most characteristic cases, as follows.
Suppose that to begin with we characterize the state of a physical system at time t by the two magnitudes x₁(t) and x₂(t). Let the differential equations expressing their rates of change have the form:

dx₁/dt = ƒ₁(t, x₁, x₂),   ε dx₂/dt = ƒ₂(t, x₁, x₂)

In the second equation the coefficient of the derivative is a small constant parameter ε. If we put ε = 0, the second equation will cease to be a differential equation. It then takes the form: ƒ₂(t, x₁, x₂) = 0

From this equation, we define x₂ as a function of t and x₁, and we substitute it into the first equation. We then have the differential equation dx₁/dt = F(t, x₁) for the single variable x₁. In this way the number of parameters entering into the situation is reduced to one. We now ask: under what conditions will the error introduced by taking ε = 0 be small? Of course, it may happen that as ε → 0 the value dx₂/dt grows beyond all bounds, so that the right side of the second of equations (28) does not tend to zero as ε → 0.
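A numerical sketch of this reduction (the system below, with ƒ₁ = −x₁ + x₂ and ε dx₂/dt = −x₂ + cos t, is a made-up example; scipy is assumed):

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 1e-3   # small parameter multiplying the derivative of x2

# Full (stiff) system: dx1/dt = -x1 + x2,  eps * dx2/dt = -x2 + cos t
def full(t, s):
    x1, x2 = s
    return [-x1 + x2, (-x2 + np.cos(t)) / eps]

# Putting eps = 0 forces x2 = cos t, leaving one equation for x1 alone
def reduced(t, s):
    return [-s[0] + np.cos(t)]

ts = np.linspace(0.0, 5.0, 6)
f = solve_ivp(full, (0.0, 5.0), [0.0, 0.0], method="Radau", dense_output=True)
r = solve_ivp(reduced, (0.0, 5.0), [0.0], dense_output=True)
print(f.sol(ts)[0])   # x1 from the full system
print(r.sol(ts)[0])   # x1 from the reduced system: nearly the same values
```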

Generalized Solutions

The range of problems in which a physical process is described by continuous, differentiable functions satisfying differential equations may be extended in an essential way by introducing into the discussion discontinuous solutions of these equations.

In a number of cases it is clear from the beginning that the problem under consideration cannot have solutions that are twice continuously differentiable; in other words, from the point of view of the classical statement of the problem given in the preceding section, such a problem has no solution. Nevertheless the corresponding physical process does occur, although we cannot find functions describing it in the preassigned class of twice-differentiable functions. Let us consider some simple examples.

  1. If a string consists of two pieces of different density, then in the equation:

∂²u/∂t² = a² ∂²u/∂x²     (24)

the coefficient a will be equal to a different constant on each of the corresponding pieces, and so equation (24) will not, in general, have classical (twice continuously differentiable) solutions.

  2. Let the coefficient a be a constant, but in the initial position let the string have the form of a broken line given by the equation u|ₜ₌₀ = ϕ(x). At the vertex of the broken line, the function ϕ(x) obviously cannot have a first derivative. It may be shown that there exists no classical solution of equation (24) satisfying the initial conditions:

u|ₜ₌₀ = ϕ(x),   uₜ|ₜ₌₀ = 0

(here and in what follows uₜ denotes ∂u/∂t).

  3. If a sharp blow is given to any small piece of the string, the resulting oscillations are described by the equation:

∂²u/∂t² = a² ∂²u/∂x² + ƒ(x, t)

where f(x, t) corresponds to the effect produced and is a discontinuous function, differing from zero only on the small piece of the string and during a short interval of time. Such an equation also, as can be easily established, cannot have classical solutions.

These examples show that requiring continuous derivatives for the desired solution strongly restricts the range of the problems we can solve. The search for a wider range of solvable problems proceeded first of all in the direction of allowing discontinuities of the first kind in the derivatives of highest order, for the functions serving as solutions to the problems, where these functions must satisfy the equations except at the points of discontinuity. It turns out that the solutions of an equation of the type Δu = 0 or ∂u/∂t − Δu = 0 cannot have such (so-called weak) discontinuities inside the domain of definition.

Solutions of the wave equation can have weak discontinuities in the space variables x, y, z, and in t only on surfaces of a special form, which are called characteristic surfaces. If a solution u(x, y, z, t) of the wave equation is considered as a function defining, for t = t1, a scalar field in the x, y, z space at the instant t1, then the surfaces of discontinuity for the second derivatives of u(x, y, z, t) will travel through the (x, y, z) space with a velocity equal to the square root of the coefficient of the Laplacian in the wave equation.

The second example for the string shows that it is also necessary to consider solutions in which there may be discontinuous first derivatives; and in the case of sound and light waves, we must even consider solutions that themselves have discontinuities.

The first question that comes up in investigating the introduction of discontinuous solutions consists in making clear exactly which discontinuous functions can be considered as physically admissible solutions of an equation or of the corresponding physical problem. We might, for example, be tempted to assume that an arbitrary piecewise constant function is a solution of the Laplace equation or the wave equation, since it satisfies the equation outside of the lines of discontinuity.

In order to clarify this question, the first thing that must be guaranteed is that in the wider class of functions, to which the admissible solutions must belong, we must have a uniqueness theorem. It is perfectly clear that if, for example, we allow arbitrary piecewise smooth functions, then this requirement will not be satisfied.

Historically, the first principle for selection of admissible functions was that they should be the limits (in some sense or other) of classical solutions of the same equation. Thus, in example 2, a solution of equation (24) corresponding to the function ϕ(x), which does not have a derivative at an angular point, may be found as the uniform limit of classical solutions uₙ(x, t) of the same equation corresponding to the initial conditions uₙ|ₜ₌₀ = ϕₙ(x), ∂uₙ/∂t|ₜ₌₀ = 0, where the ϕₙ(x) are twice continuously differentiable functions converging uniformly to ϕ(x) for n → ∞.

In what follows, instead of this principle we will adopt the following: An admissible solution u must satisfy, instead of the equation Lu = f, an integral identity containing an arbitrary function Ф.

This identity is found as follows: We multiply both sides of the equation Lu = f by an arbitrary function Ф, which has continuous derivatives with respect to all its arguments of orders up through the order of the equation and vanishes outside of the finite domain D in which the equation is defined. The equation thus found is integrated over D and then transformed by integration by parts so that it does not contain any derivatives of u. As a result we get the identity desired. For equation (24), for example, it has the form:

image2879

For equations with constant coefficients these two principles for the selection of admissible (or as they are now usually called, generalized) solutions, are equivalent to each other. But for equations with variable coefficients, the first principle may turn out to be inapplicable, since these equations may in general have no classical solutions (cf. example 1). The second of these principles provides the possibility of selecting generalized solutions with very broad assumptions on the differentiability properties of the coefficients of the equations. It is true that this principle seems at first sight to be overly formal and to have a purely mathematical character, which does not directly indicate how the problems ought to be formulated in a manner similar to the classical problems.

In order that a larger number of problems may be solvable, we must seek the solutions among functions belonging to the widest possible class of functions for which uniqueness theorems still hold. Frequently such a class is dictated by the physical nature of the problem. Thus, in quantum mechanics it is not the state function ψ(x), defined as a solution of the Schrödinger equation, that has physical meaning but rather the integral aᵥ = ∫E ψ(x) ψᵥ(x) dx, where the ψᵥ are certain functions for which ∫E |ψᵥ(x)|² dx is finite. Thus the solution ψ is to be sought not among the twice continuously differentiable functions but among the ones with integrable square. In the problems of quantum electrodynamics, it is still an open question which classes of functions are the ones in which we ought to seek solutions for the equations considered in that theory.

Progress in mathematical physics during the last thirty years has been closely connected with this new formulation of the problems and with the creation of the mathematical apparatus necessary for their solution.

Particularly convenient methods of finding generalized solutions in one or another of these classes of functions are: the method of finite differences, the direct methods in the calculus of variations and functional-operator methods. These latter methods basically depend on a study of transformations generated by these problems. Here we will explain the basic ideas of the direct methods of the calculus of variations.

Let us consider the problem of defining the position of a uniformly stretched membrane with fixed boundary. From the principle of minimum potential energy, in a state of stable equilibrium the function u(x, y) must give the least value of the integral:

J(υ) = ∫∫ [(∂υ/∂x)² + (∂υ/∂y)²] dx dy

in comparison with all other continuously differentiable functions υ(x, y) satisfying the same condition on the boundary, υ|S = ϕ, as the function u does. With some restrictions on ϕ and on the boundary S it can be shown that such a minimum exists and is attained by a harmonic function, so that the desired function u is a solution of the Dirichlet problem Δu = 0, u|S = ϕ. The converse is also true: The solution of the Dirichlet problem gives a minimum to the integral J with respect to all υ satisfying the boundary condition.

The proof of the existence of the function u, for which J attains its minimum, and its computation to any desired degree of accuracy may be carried out, for example, in the following manner (Ritz method). We choose an infinite family of twice continuously differentiable functions {υₙ(x, y)}, n = 0, 1, 2, …, equal to zero on the boundary for n > 0 and equal to ϕ for n = 0. We consider J for functions of the form:

υ = υ₀ + C₁υ₁ + C₂υ₂ + ··· + Cₙυₙ

where n is fixed and the Cₖ are arbitrary numbers. Then J(υ) will be a polynomial of second degree in the n independent variables C₁, C₂, …, Cₙ. We determine the Cₖ from the condition that this polynomial should assume its minimum. This leads to a system of n linear algebraic equations in n unknowns, the determinant of which is different from zero. Thus the numbers Cₖ are uniquely defined. We denote the corresponding υ by υₙ(x, y). It can be shown that if the system {υₙ} satisfies a certain condition of “completeness,” the functions υₙ will converge, as n → ∞, to a function which will be the desired solution of the problem.
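A one-dimensional analogue of the Ritz computation fits in a few lines (a sketch, not the membrane problem itself: we minimize the quadratic functional whose minimizer solves −u″ = ƒ with υ(0) = υ(1) = 0; the sine basis and the load ƒ = 1 are our choices):

```python
import numpy as np

def integral(y, x):
    """Trapezoidal rule, used for all the quadratures below."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

x = np.linspace(0.0, 1.0, 2001)
f = np.ones_like(x)                      # load f = 1; exact solution u = x(1-x)/2

n = 5                                    # number of basis functions
phi  = [np.sin((k + 1) * np.pi * x) for k in range(n)]           # zero on boundary
dphi = [(k + 1) * np.pi * np.cos((k + 1) * np.pi * x) for k in range(n)]

# J(sum C_k phi_k) is a quadratic polynomial in the C_k; its minimum solves A C = b
A = np.array([[integral(dphi[i] * dphi[j], x) for j in range(n)] for i in range(n)])
b = np.array([integral(f * phi[i], x) for i in range(n)])
C = np.linalg.solve(A, b)

u = sum(ck * pk for ck, pk in zip(C, phi))
print(u[len(x) // 2], 0.125)             # Ritz value at x = 0.5 vs exact 1/8
```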

In conclusion, we note that in this chapter we have given a description of only the simplest linear problem of mechanics and have ignored many further questions, still far from completely worked out, which are connected with more general partial differential equations.

 Methods of Constructing Solutions

On the possibility of decomposing any solution into simpler solutions. Solutions of the problems of mathematical physics formulated previously may be derived by various devices, which differ from one specific problem to another. But at the basis of these methods there is one general idea. As we have seen, all the equations of mathematical physics are, for small values of the unknown functions, linear with respect to the functions and their derivatives. The boundary conditions and initial conditions are also linear.

If we form the difference between any two solutions of the same equation, this difference will also be a solution of the equation with the right-hand terms equal to zero. Such an equation is called the corresponding homogeneous equation. For example, for the Poisson equation Δu = − 4πρ, the corresponding homogeneous equation is the Laplace equation Δu = 0.

If two solutions of the same equation also satisfy the same boundary conditions, then their difference will satisfy the corresponding homogeneous condition: The values of the corresponding expression on the boundary will be equal to zero.

Hence the entire manifold of the solutions of such an equation, for given boundary conditions, may be found by taking any particular solution that satisfies the given nonhomogeneous condition together with all possible solutions of the homogeneous equation satisfying homogeneous boundary conditions (but not, in general, satisfying the initial conditions).

Solutions of homogeneous equations, satisfying homogeneous boundary conditions may be added, or multiplied by constants, without ceasing to be solutions.

If a solution of a homogeneous equation with homogeneous conditions is a function of some parameter, then integrating with respect to this parameter will also give us such a solution. These facts form the basis of the most important method of solving linear problems of all kinds for the equations of mathematical physics, the method of superposition.

The solution of the problem is sought in the form:

u = u0 + Σk uk,

where u0 is a particular solution of the equation satisfying the boundary conditions but not satisfying the initial conditions, and the uk are solutions of the corresponding homogeneous equation satisfying the corresponding homogeneous boundary conditions. If the equation and the boundary conditions were originally homogeneous, then the solution of the problem may be sought in the form: u = Σ uk.

In order to be able to satisfy arbitrary initial conditions by the choice of particular solutions uk of the homogeneous equation, we must have available a sufficiently large arsenal of such solutions.

The method of separation of variables.

For the construction of the necessary arsenal of solutions there exists a method called separation of variables or Fourier’s method.

Let us examine this method, for example, for solving the problem:

∆u = ∂²u/∂t²,    (18)

with the boundary condition u|S = 0 and given initial values u|t=0 = ϕ0(x, y, z), ∂u/∂t|t=0 = ϕ1(x, y, z).

In looking for any particular solution of the equation, we first of all assume that the desired function u satisfies the boundary condition u|S = 0 and can be expressed as the product of two functions, one of which depends only on the time t and the other only on the space variables: u(x, y, z, t) = U(x, y, z) T(t). Substituting this assumed solution into our equation, we have: T(t) ∆U = T″(t) U.

Dividing both sides by TU gives: T″/T = ∆U/U.

The right side of this equation is a function of the space variables only and the left is independent of the space coordinates. Hence it follows that the given equation can be true only if the left and right sides have the same constant value. We are led to a system of two equations:

T″/T = −λk²,  ∆U/U = −λk².

The constant quantity on the right is denoted here by −λk² in order to emphasize that it is negative (as may be rigorously proved). The subscript k is used to note that there exist infinitely many possible values λk, where the solutions corresponding to them form a system of functions complete in a well-known sense.

Cross-multiplying in both equations, we get:

T″ + λk²T = 0,  ∆U + λk²U = 0.

The first of these equations has, as we know, the simple solution:

Tk = Ak cos λkt + Bk sin λkt,

where Ak and Bk are arbitrary constants. This solution may be further simplified by introducing the auxiliary angle ϕk. We have:

Ak = Ck sin ϕk,  Bk = Ck cos ϕk,  Ck = √(Ak² + Bk²).

Then:

Tk = Ck sin(λkt + ϕk).

The function Tk represents a harmonic oscillation with frequency λk, shifted in phase by the angle ϕk.

More difficult and more interesting is the problem of finding a solution of the equation:

∆U + λk²U = 0    (19)

for given homogeneous boundary conditions; for example, for the condition U|S = 0

(where S is the boundary of the volume Ω under consideration), or for any other homogeneous condition. The solution of this problem is not always easy to construct as a finite combination of known functions, although it always exists and can be found to any desired degree of accuracy.

The equation ∆U + λk²U = 0 with the condition U|S = 0 has first of all the obvious solution U ≡ 0. This solution is trivial and completely useless for our purposes. If the λk are any randomly chosen numbers, then in general there will not be any other solution to our problem. However, there do exist values of λk for which the equation has a nontrivial solution.

All possible values of the constant −λk² are determined by the requirement that equation (19) have a nontrivial solution, i.e., one distinct from the identically vanishing function, which satisfies the condition U|S = 0. From this it also follows that the numbers −λk² must be negative.

For each of the possible values of λk in equation (19), we can find at least one function Uk. This allows us to construct a particular solution of the wave equation (18) in the form:

uk = Uk(x, y, z)(Ak cos λkt + Bk sin λkt).

Such a solution is called a characteristic oscillation (or eigenvibration) of the volume under consideration. The constant λk is the frequency of the characteristic oscillation, and the function Uk(x, y, z) gives us its form. This function is usually called an eigenfunction (characteristic function). For all instants of time, the function uk, considered as a function of the variables x, y, and z, will differ from the function Uk(x, y, z) only in scale.

We do not have space here for a detailed proof of the many remarkable properties of characteristic oscillations and of eigenfunctions; therefore we will restrict ourselves merely to listing some of them.

The first property of the characteristic oscillations consists of the fact that for any given volume there exists a countable set of characteristic frequencies. These frequencies tend to infinity with increasing k.

Another property of the characteristic oscillations is called orthogonality. It consists of the fact that the integral over the domain Ω of the product of eigenfunctions corresponding to different values of λk is equal to zero:

∫Ω Uj Uk dΩ = 0  (j ≠ k).

For j = k we will assume:

∫Ω Uk² dΩ = 1.

This can always be arranged by multiplying the functions Uk(x, y, z) by an appropriate constant, the choice of which does not change the fact that the function satisfies equation (19) and the condition U | S = 0.

Finally, a third property of the characteristic oscillations consists of the fact that, if we do not omit any value of λk, then by means of the eigenfunctions Uk(x, y, z) we can represent, with any desired degree of exactness, a completely arbitrary function f(x, y, z), provided only that it satisfies the boundary condition f|S = 0 and has continuous first and second derivatives. Any such function f(x, y, z) may be represented by the convergent series:

f(x, y, z) = Σk Ck Uk(x, y, z).    (20)

The third property of the eigenfunctions provides us in principle with the possibility of representing any function f(x, y, z) in a series of eigenfunctions of our problem, and from the second property we can find all the coefficients of this series. In fact, if we multiply both sides of equation (20) by Uj(x, y, z) and integrate over the domain Ω, we get:

∫Ω f Uj dΩ = Σk Ck ∫Ω Uk Uj dΩ.

In the sum on the right, all the terms with k ≠ j disappear because of the orthogonality, and the coefficient of Cj is equal to one. Consequently we have:

Cj = ∫Ω f Uj dΩ.
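As a quick numerical illustration of how orthogonality yields the coefficients (a sketch under assumed, simplified conditions: the interval [0, 1], the orthonormal eigenfunctions Uk = √2 sin kπx, and an arbitrary test function):

```python
import numpy as np

# Coefficients of an eigenfunction expansion via orthogonality, on
# [0, 1] with U_k = √2 sin(kπx) (an assumed 1-D example).
x = np.linspace(0.0, 1.0, 4001)

def integrate(g):                        # trapezoid rule
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(x)))

def U(k):
    return np.sqrt(2.0) * np.sin(k * np.pi * x)

f = x * (1 - x)                          # satisfies f(0) = f(1) = 0
C = [integrate(f * U(k)) for k in range(1, 30)]   # C_k = ∫ f U_k dx
approx = sum(c * U(k) for k, c in enumerate(C, start=1))
print("max reconstruction error:", np.abs(approx - f).max())
```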

These properties of the characteristic oscillations now allow us to solve the general problem of oscillation for any initial conditions.

For this we assume that we have a solution of the problem in the form:

u = Σk Uk(x, y, z)(Ak cos λkt + Bk sin λkt),    (21)

and try to choose the constants Ak and Bk so that we have: u|t=0 = ϕ0, ∂u/∂t|t=0 = ϕ1.

Putting t = 0 in the right side of (21), we see that the sine terms disappear and cos λkt becomes equal to one, so that we will have:

ϕ0 = Σk Ak Uk(x, y, z).

From the third property, the characteristic oscillations can be used for such a representation, and from the second property, we have:

Ak = ∫Ω ϕ0 Uk dΩ.

In the same way, differentiating formula (21) with respect to t and putting t = 0, we will have:

ϕ1 = Σk λkBk Uk(x, y, z).

Hence, as before, we obtain the values of Bk as:

Bk = (1/λk) ∫Ω ϕ1 Uk dΩ.

Knowing Ak and Bk, we in fact know both the phases and the amplitudes of all the characteristic oscillations.

In this way we have shown that by addition of characteristic oscillations it is possible to obtain the most general solution of the problem with homogeneous boundary conditions.

Every solution thus consists of characteristic oscillations, whose amplitude and phase we can calculate if we know the initial conditions.

In exactly the same way, we may study oscillations with a smaller number of independent variables. As an example let us consider the vibrating string, fixed at both ends. The equation of the vibrating string has the form:

a² ∂²u/∂x² = ∂²u/∂t².

Let us suppose that we are looking for a solution of the problem for a string of length l, fixed at the ends: u|x=0 = 0, u|x=l = 0.

We will look for a collection of particular solutions of the form: uk = Uk(x) Tk(t).

We obviously obtain, just as before:

Tk″/Tk = a² Uk″/Uk = −λk²,

or:

Uk″ + (λk/a)² Uk = 0.

Hence:

Uk = Mk cos (λk/a)x + Nk sin (λk/a)x.

We use the boundary conditions in order to find the values of λk. For general λk it is not possible to satisfy both the boundary conditions. From the condition Uk|x=0 = 0 we get Mk = 0, and this means that Uk = Nk sin (λk/a)x. Putting x = l, we get sin(λkl/a) = 0. This can only happen if λkl/a = kπ, where k is an integer. This means that:

λk = kπa/l.

The condition ∫₀ˡ Uk² dx = 1 shows that Nk = √(2/l). Finally:

Uk = √(2/l) sin(kπx/l).

In this manner the characteristic oscillations of the string, as we see, have sinusoidal form with an integral number of half waves on the entire string. Every oscillation has its own frequency, and the frequencies may be arranged in increasing order:

λ1 = πa/l < λ2 = 2πa/l < ⋯ < λk = kπa/l < ⋯.

It is well known that these frequencies are exactly those that we hear in the vibrations of a sounding string. The lowest frequency λ1 is called the fundamental frequency, and the remaining frequencies are overtones. The eigenfunction Uk = √(2/l) sin(kπx/l) on the interval 0 ≤ x ≤ l changes sign k − 1 times, since kπx/l runs through values from 0 to kπ, which means that its sine changes sign k − 1 times. The points where the eigenfunctions Uk vanish are called nodes of the oscillations.

If we arrange in some way that the string does not move at a point corresponding to a node, for example of the first overtone, then the fundamental tone will be suppressed, and we will hear only the sound of the first overtone, which is an octave higher. Such a device, called stopping, is made use of on instruments played with a bow: the violin, viola, and violoncello.

We have analyzed the method of separating variables as applied to the problem of finding characteristic oscillations. But the method can be applied much more widely, to problems of heat flow and to a whole series of other problems.

For the equation of heat flow:

∂u/∂t = a²∆u,

with the conditions u|S = 0 and u|t=0 = ϕ0(x, y, z),

we will have, as before:

uk = Uk(x, y, z) Tk(t),  ∆Uk + (λk²/a²)Uk = 0.

Here:

Tk = Ck e^(−λk²t).

The solution is obtained in the form:

u = Σk Ck Uk(x, y, z) e^(−λk²t),  Ck = ∫Ω ϕ0 Uk dΩ.

This method has also been used with great success to solve some other equations. Consider, for example, the Laplace equation: ∆u=0

in the circle x² + y² ≤ 1, and assume that we have to construct a solution satisfying the condition:

u|r=1 = f(ϑ),

where r and ϑ denote the polar coordinates of a point in the plane.

The Laplace equation may be easily transformed into polar coordinates. It then has the form:

∂²u/∂r² + (1/r) ∂u/∂r + (1/r²) ∂²u/∂ϑ² = 0.

We want to find a solution of this equation in the form:

u = Σk Rk(r) Θk(ϑ).

If we require that every term of the series individually satisfy the equation, we have:

Rk″Θk + (1/r) Rk′Θk + (1/r²) RkΘk″ = 0.

Dividing the equation by Rk(r)Θk(ϑ)/r², we get:

(r²Rk″ + rRk′)/Rk = −Θk″/Θk.

Again setting both sides equal to a constant λk², we have:

Θk″ + λk²Θk = 0,  r²Rk″ + rRk′ − λk²Rk = 0.

It is easy to see that the function Θk(ϑ) must be a periodic function of ϑ with period 2π. Integrating the equation Θk″ + λk²Θk = 0, we get:

Θk = Ak cos λkϑ + Bk sin λkϑ.

This function will be periodic with the required period only if λk is an integer. Putting λk = k, we have:

Θk = Ak cos kϑ + Bk sin kϑ.

The equation for Rk has a general solution of the form:

Rk = Ck rᵏ + Dk r⁻ᵏ.

Retaining only the term that is bounded for r → 0, we get the general solution of the Laplace equation in the form:

u = Σk rᵏ (Ak cos kϑ + Bk sin kϑ).
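A short numerical sketch of this recipe (the boundary data and truncation order are arbitrary choices for illustration): compute the Fourier coefficients of f(ϑ) and sum the series; the value at the center then equals the mean of the boundary values, as it must for a harmonic function.

```python
import numpy as np

# Dirichlet problem in the unit disk via the series
# u = a₀/2 + Σ rᵏ (a_k cos kϑ + b_k sin kϑ); boundary data assumed.
theta = np.linspace(0.0, 2 * np.pi, 2048, endpoint=False)
f = np.exp(np.cos(theta))                # sample boundary values u|_{r=1}

K = 30
a = [2 * np.mean(f * np.cos(k * theta)) for k in range(K)]
b = [2 * np.mean(f * np.sin(k * theta)) for k in range(K)]

def u(r, th):
    s = a[0] / 2
    for k in range(1, K):
        s += r**k * (a[k] * np.cos(k * th) + b[k] * np.sin(k * th))
    return s

# Mean-value property of harmonic functions: the value at the center
# equals the average of the boundary values.
print(u(0.0, 0.0), f.mean())
```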

This method may often be used to find nontrivial solutions of the equation ∆U + λ²U = 0 that satisfy homogeneous boundary conditions. In case the problem can be reduced to problems of solving ordinary differential equations, we say that it allows a complete separation of variables. This complete separation of variables by the Fourier method can be carried out, as was shown by the Soviet mathematician V. V. Stepanov, only in certain special cases. The method of separation of variables was known to mathematicians a long time ago. It was used essentially by Euler, Bernoulli, and d’Alembert. Fourier used it systematically for the solution of problems of mathematical physics, particularly in heat conduction. However, as we have mentioned, this method is often inapplicable; we must use other methods, which we will now discuss.

The method of potentials.

The essential feature of this method is, as before, the superposition of particular solutions for the construction of a solution in general form. But this time for the particular fundamental solutions, we use functions that become infinite at one point. Let us illustrate with the Laplace and Poisson equations.

Let M0 be a point of our space. We denote by r(M, M0) the distance from the point M0 to a variable point M. The function 1/r(M, M0) for a fixed M0 is a function of the variable point M. It is easy to establish the fact that this function is a harmonic function of the point M in the entire space, except of course at the point M0, where the function becomes infinite, together with its derivatives.

The sum of several functions of this form:

Σi Ai/r(M, Mi),

where the points M1, M2, …, MN are any points in the space, is again a harmonic function of the point M. This function will have singularities at all the points Mi. If we choose the points M1, M2, …, MN as densely distributed as we please in some volume Ω, and at the same time multiply by coefficients Ai, we may pass to the limit in this expression and get a new function:

U(M) = ∫Ω A(M′)/r(M, M′) dΩ,

where the points M′ range over all of the volume Ω. The integral in this form is called a Newtonian potential. It may be shown, although we will not do it here, that the function U thus constructed satisfies the equation ΔU = − 4πA.

The Newtonian potential has a simple physical meaning. To understand it, we will begin with the function Ai/r(M, Mi).

The partial derivatives of this function with respect to the coordinates are:

∂/∂x [Ai/r(M, Mi)] = Ai(xi − x)/r³,

and similarly for y and z.

At the point Mi we place a mass Ai, which will attract all bodies with a force directed toward the point Mi and inversely proportional to the square of the distance from Mi. We decompose this force into its components along the coordinate axes. If the magnitude of the force acting on a material point of unit mass is Ai/r², the cosines of the angles between the direction of this force and the coordinate axes will be (xi − x)/r, (yi − y)/r, (zi − z)/r.

Thus the components of the force exerted on a unit mass at the point M by an attracting center Mi will be equal to X, Y, and Z, the partial derivatives of the function Ai/r with respect to the coordinates. If we place attracting masses at points M1, M2, …, MN, then every material point with unit mass placed at a point M will be acted on by a force equal to the resultant of all the forces acting on it from the given points Mi. In other words:

X = ∂/∂x Σi Ai/r(M, Mi),  Y = ∂/∂y Σi Ai/r(M, Mi),  Z = ∂/∂z Σi Ai/r(M, Mi).

Passing to the limit and replacing the sum by an integral, we get:

X = ∂U/∂x,  Y = ∂U/∂y,  Z = ∂U/∂z,  where U(M) = ∫Ω A(M′)/r(M, M′) dΩ.

The function U, with partial derivatives equal to the components of the force acting on a point, is called the potential of the force. Thus the function Ai/r(M, Mi) is the potential of the attraction exerted by the point Mi, the function Σi [Ai/r(M, Mi)] is the potential of the attraction exerted by the group of points M1, M2, …, MN, and the function:

U(M) = ∫Ω A(M′)/r(M, M′) dΩ

is the potential of the attraction exerted by the masses continuously distributed in the volume Ω.

Instead of distributing the masses in a volume, we may place the points M1, M2, …, MN on a surface S. Again increasing the number of these points, we get in the limit the integral:

V(M) = ∫S ρ(Q)/r(M, Q) dS,    (22)

where Q is a point on the surface S.

It is not difficult to see that this function will be harmonic everywhere inside and outside the surface S. On the surface itself the function is continuous, as can be proved, although its partial derivatives of the first order have finite discontinuities.

The functions ∂(1/r)/∂xi, ∂(1/r)/∂yi, and ∂(1/r)/∂zi are also harmonic functions of the point M for fixed Mi. From these functions in turn, we may form the sums:

Σi Ai ∂/∂ni [1/r(M, Mi)],

where ni is a fixed direction at the point Mi,

which will be harmonic functions everywhere except perhaps at the points M1, M2, …, MN

Of particular importance is the integral:

W(M) = ∫S μ(Q) ∂/∂n [1/r(M, Q)] dS,    (23)

in which x′, y′, and z′ are the coordinates of a variable point Q on the surface S, n is the direction of the normal to the surface S at the point Q while x, y, and z are the directions of the coordinate axes, and r is the distance from Q to the point M at which the value of the function W is defined.

The integral (22) is called the potential of a simple layer, and the integral (23) the potential of a double layer. The potential of a double layer and the potential of a simple layer represent a function harmonic inside and outside of the surface S.

Many problems in the theory of harmonic functions may be solved by using potentials. By using the potential of a double layer, we may solve the problem of constructing, in a given domain, a harmonic function u, having given values 2πϕ(Q) on the boundary S of the domain. In order to construct such a function, we only need to choose the function μ(Q) in a suitable way.

This problem is somewhat reminiscent of the similar problem of finding the coefficients in the series:

f(x, y, z) = Σk Ck Uk(x, y, z),

so that it may represent the function on the left side.

so that it may represent the function on the left side.

A remarkable property of the integral W consists of the fact that its limiting value, as the point M approaches Q0 from the inner side of the surface, has the form:

2πμ(Q0) + ∫S μ(Q) ∂/∂n [1/r(Q0, Q)] dS.

Equating this expression to the given function 2πϕ(Q0), we get the equation:

2πμ(Q0) + ∫S μ(Q) ∂/∂n [1/r(Q0, Q)] dS = 2πϕ(Q0).

This equation is called an integral equation of the second kind. The theory of such equations has been developed by many mathematicians. If we can solve this equation by any method, we obtain a solution of our original problem.
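One standard way to solve such an equation numerically (the Nyström method, which is not discussed in the text; the kernel and right-hand side below are made-up smooth examples) is to replace the integral by a quadrature sum, which turns the integral equation of the second kind into a linear algebraic system:

```python
import numpy as np

# Nyström sketch: solve μ(s) + ∫₀¹ K(s,t) μ(t) dt = g(s) by replacing
# the integral with a midpoint-rule sum (kernel and g are made up).
n = 200
t = (np.arange(n) + 0.5) / n                   # quadrature nodes
w = np.full(n, 1.0 / n)                        # quadrature weights

K = np.exp(-np.abs(t[:, None] - t[None, :]))   # smooth test kernel
g = np.sin(np.pi * t)

A = np.eye(n) + K * w[None, :]                 # discretized (I + K)μ = g
mu = np.linalg.solve(A, g)

residual = mu + K @ (w * mu) - g               # check the equation at the nodes
print("max residual:", np.abs(residual).max())
```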

In exactly the same way, we may find a solution of other problems in the theory of harmonic functions. After choice of a suitable potential, the density, i.e., the value of an arbitrary function appearing in it, is defined in such a way that all the prescribed conditions are fulfilled.

From a physical point of view, this means that every harmonic function may be represented as the potential of a double electric layer, if we distribute this layer over a surface S with appropriate density.

Approximate construction of solutions.

We have discussed two methods for solving equations of mathematical physics: the method of complete separation of variables and the method of potentials. These methods were developed by scientists of the 18th and 19th centuries, Fourier, Poisson, Ostrogradskii, Ljapunov, and others. In the 20th century they were augmented by a series of other methods. We will examine two of them, Galerkin’s method and the method of finite differences, or the method of nets.

Galerkin’s method applies to boundary-value problems for equations of the form:

Σijkl ∂²/∂xi∂xj [Aijkl ∂²U/∂xk∂xl] + λU = 0,

containing an unknown parameter λ, where the indices i, j, k, and l independently take on the values 1, 2, and 3. These equations are derived from equations containing an independent variable t, by using the method of separation of variables in the same way as the wave equation:

∆u=∂²u/∂t²

leads to the equation ΔU + λ2U = 0.

The problem consists of finding those values of λ for which the homogeneous boundary-value problem has a nonzero solution and then constructing that solution.

The essence of Galerkin’s method is as follows. The unknown function is sought in the approximate form:

U ≈ Σm am ωm(x1, x2, x3),  m = 1, 2, …, N,

where the ωm(x1, x2, x3) are arbitrary functions satisfying the boundary conditions.

The assumed solution is substituted in the left side of the equation, resulting in the approximate equation:

Σm am {Σijkl ∂²/∂xi∂xj [Aijkl ∂²ωm/∂xk∂xl]} + λ Σm am ωm ≈ 0.

For brevity we denote the expression inside the brackets by Lωm, and write the equation in the form:

Σm am Lωm + λ Σm am ωm ≈ 0.

Now we multiply both sides of our approximate equation by ωn and integrate over the domain Ω in which the solution is sought. We get:

Σm am ∫Ω ωn Lωm dΩ + λ Σm am ∫Ω ωn ωm dΩ = 0,

which may be rewritten in the form:

Σm am [∫Ω ωn Lωm dΩ + λ ∫Ω ωn ωm dΩ] = 0,  n = 1, 2, …, N.

If we set ourselves the aim of satisfying these equations exactly, we will have a system of algebraic equations of the first degree for the unknown coefficients am. The number of equations in the system will be equal to the number of unknowns, so that this system will have a nonvanishing solution only if its determinant is zero. If this determinant is expanded, we get an equation of the Nth degree for the unknown number λ.

After finding the value of λ and substituting it in the system, we solve this system to obtain approximate expressions of the function U.

Galerkin’s method is not only suitable for equations of the fourth order, but may be applied to equations of different orders and different types.
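A minimal sketch of the Galerkin recipe on a one-dimensional analogue (the equation, basis and normalization are illustrative assumptions, not from the text): for U″ + λU = 0 with U(0) = U(1) = 0, whose exact eigenvalues are λk = k²π², the vanishing-determinant condition becomes a small generalized eigenvalue problem.

```python
import numpy as np
import sympy as sp

# Galerkin sketch for U'' + λU = 0, U(0) = U(1) = 0 (exact eigenvalues
# λ_k = k²π²), with the assumed polynomial basis ω_m = x^m (1 - x).
x = sp.symbols('x')
N = 6
omega = [x**m * (1 - x) for m in range(1, N + 1)]

# Stiffness integrals ∫ ω_n (-ω_m'') dx and mass integrals ∫ ω_n ω_m dx.
A = np.array([[float(sp.integrate(-wn * sp.diff(wm, x, 2), (x, 0, 1)))
               for wm in omega] for wn in omega])
B = np.array([[float(sp.integrate(wn * wm, (x, 0, 1)))
               for wm in omega] for wn in omega])

# The vanishing determinant det(A - λB) = 0 is a generalized
# eigenvalue problem; its lowest roots approximate π², 4π², 9π².
lam = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)
print(lam[:3])
```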

The last of the methods that we will examine is called the method of finite differences, or the method of nets.

The derivative of the function u with respect to the variable x is defined as the limit of the quotient:

[u(x + Δx) − u(x)]/Δx.

This quotient in its turn may be represented, from the well-known mean-value theorem, in the form:

[u(x + Δx) − u(x)]/Δx = u′(ξ),

where ξ is a point in the interval x ≤ ξ ≤ x + Δx.

All the second derivatives of u, both the mixed derivatives and the derivatives with respect to one variable, may also be approximately represented in the form of difference quotients. Thus the second difference quotient:

[u(x + Δx) − 2u(x) + u(x − Δx)]/Δx²

is represented in the form:

[φ(x) − φ(x − Δx)]/Δx,  where φ(x) = [u(x + Δx) − u(x)]/Δx.

From the mean-value theorem the difference quotient of the function φ may be replaced by the value of the derivative. Consequently:

[φ(x) − φ(x − Δx)]/Δx = φ′(ξ),

where ξ is some intermediate value in the interval x − Δx ≤ ξ ≤ x.

Thus:

φ′(ξ) = [u′(ξ + Δx) − u′(ξ)]/Δx.

Once more using the formula for finite increments, we see that:

u′(ξ + Δx) − u′(ξ) = u″(η) Δx,

where ξ ≤ η ≤ ξ + Δx.

Consequently:

[u(x + Δx) − 2u(x) + u(x − Δx)]/Δx² = u″(η),

where x − Δx < η < x + Δx.

If the derivative u″(x) is continuous and the value of Δx is sufficiently small, then u″(η) will be only slightly different from u″(x). Thus our second derivative is arbitrarily close to the difference quotient in question. In exactly the same way it may be shown, for example, that the mixed second derivative ∂²u/∂x∂y can be approximately represented by the formula:

∂²u/∂x∂y ≈ [u(x + Δx, y + Δy) − u(x + Δx, y) − u(x, y + Δy) + u(x, y)]/(Δx Δy).
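These approximations are easy to check numerically (the test functions and step sizes are chosen arbitrarily):

```python
import numpy as np

# Check the difference-quotient approximations on u(x) = sin x and
# v(x, y) = sin x cos y (arbitrary test functions).
x, y, h, k = 0.7, 0.3, 1e-4, 1e-4

u = np.sin
d2 = (u(x + h) - 2 * u(x) + u(x - h)) / h**2
print(d2, -np.sin(x))                    # second derivative: u'' = -sin x

v = lambda a, b: np.sin(a) * np.cos(b)
mixed = (v(x + h, y + k) - v(x + h, y) - v(x, y + k) + v(x, y)) / (h * k)
print(mixed, -np.cos(x) * np.sin(y))     # mixed derivative ∂²v/∂x∂y
```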

We return now to our partial differential equation.

For definiteness, let us assume that we are dealing with the Laplace equation in two independent variables:

∂²u/∂x² + ∂²u/∂y² = 0.

Further, let the unknown function u be given on the boundary S of the domain Ω. As an approximation we assume that:

∂²u/∂x² ≈ [u(x + Δx, y) − 2u(x, y) + u(x − Δx, y)]/Δx²,
∂²u/∂y² ≈ [u(x, y + Δy) − 2u(x, y) + u(x, y − Δy)]/Δy².

If we put Δx = Δy = h, then:

∆u ≈ [u(x + h, y) + u(x − h, y) + u(x, y + h) + u(x, y − h) − 4u(x, y)]/h².

Now let us cover the domain Ω with a square net with vertices at the points x = kh, y = lh (figure 4). We replace the domain by the polygon consisting of those squares of our net that fall inside Ω, so that the boundary of the domain is changed into a broken line. We take the values of the unknown function on this broken line to be those given on the boundary S. The Laplace equation is then approximated by the equation:

u(x + h, y) + u(x − h, y) + u(x, y + h) + u(x, y − h) − 4u(x, y) = 0

for all interior points of the domain. This equation may be rewritten in the form:

u(x, y) = ¼[u(x + h, y) + u(x − h, y) + u(x, y + h) + u(x, y − h)].

Then the value of u at any point of the net, for example the point 1 in figure 4, is equal to the arithmetic mean of its values at the four adjacent points.

We assume that inside the polygon there are N points of our net. At every such point we will have a corresponding equation. In this manner we get a system of N algebraic equations in N unknowns, the solution of which gives us the approximate values of the function u on the domain Ω.

It may be shown that for the Laplace equation the solution may be found to any desired degree of accuracy.
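A compact sketch of the net method, solved by the successive averaging mentioned below (the grid size, iteration count and boundary data are assumptions; the boundary data x² − y² is itself harmonic, so the exact answer is known on the whole grid):

```python
import numpy as np

# Method of nets for Δu = 0 on the unit square, solved by successive
# approximations: each interior value is repeatedly replaced by the
# mean of its four neighbours.
n = 41
s = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(s, s, indexing='ij')
exact = X**2 - Y**2                               # harmonic reference

u = np.zeros((n, n))
u[0, :], u[-1, :] = exact[0, :], exact[-1, :]     # copy boundary values
u[:, 0], u[:, -1] = exact[:, 0], exact[:, -1]

for _ in range(5000):
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                            + u[1:-1, 2:] + u[1:-1, :-2])

print("max error against the exact harmonic function:",
      np.abs(u - exact).max())
```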

The method of finite differences reduces the problem to the solution of a system of N equations in N unknowns, where the unknowns are the values of the desired function at the knots of some net.

Further the method of finite differences can be shown to be applicable to other problems of mathematical physics: to other differential equations and to integral equations. However its application in many cases involves a number of difficulties.

It may turn out that the solution of the system of N algebraic equations in N unknowns, constructed by the method of nets, either does not exist in general or gives a result that is quite far from the true one. This happens when the solution of the system of equations leads to accumulation of errors; the smaller we take the length of the sides of the squares in the net the more equations we get, so that the accumulated error may become greater.

In the example given previously of the Laplace equation, this does not happen. The errors in solving this system do not accumulate but, on the contrary, steadily decrease if we solve the system, for example, by a method of successive approximations. For the equation of heat flow and for the wave equation it is essential to choose the nets properly. For these equations we may get both good and bad results.

If we are going to solve either of these equations by the method of nets, after choosing the net for the values of t, we must not choose too fine a net for the space variables. Otherwise we get a very unsatisfactory system of equations for the values of the unknown function; its solution gives a result that oscillates rapidly with large amplitudes and is thus very far from the true one.

The great variety of possible results may best be seen in a simple numerical example. Consider the equation:

∂u/∂t = ∂²u/∂x², the equation of heat flow in the case in which the temperature does not depend on y or z. We take the mesh width of the net along the values of t equal to k and along the values of x equal to h.

Then our equation may be written approximately in the form:

[u(x, t + k) − u(x, t)]/k = [u(x + h, t) − 2u(x, t) + u(x − h, t)]/h².

If, for a certain mesh-point value of t, we know the values of u at the points x − h, x, and x + h, it is easy to find the value of u at the point x and the next mesh point t + k. Assume that the constant k, i.e., the mesh width in the net with respect to t, is already chosen. Let us consider two cases for the choice of h. We put h² = k in the first case and h² = 2k in the second and solve the following problem by the method of nets.

At the initial instant, u = 0 for all negative values of x, and u = 1 for all nonnegative values of x. Writing in one line the values of the unknown function u for a given instant, we obtain two tables: Table 1 for the case h² = k and Table 2 for the case h² = 2k. [The numerical tables are not reproduced here.]

In Table 2 we obtain values, for any given instant of time, which vary smoothly from point to point. This table gives a good approximation to the solution of the heat-flow equation. On the other hand, in Table 1, in which, as it would seem, the exactness should have been increased because of our finer division for the x-interval, the values of u oscillate very rapidly from positive values to negative ones and attain values that are much greater than the initially prescribed ones. It is clear that in this table the values are extraordinarily far from those that correspond to the true solution.
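The experiment of the two tables is easy to reproduce (a sketch; the number of grid points and time steps are arbitrary choices): with r = k/h², the scheme reads u(x, t + k) = u(x, t) + r[u(x + h, t) − 2u(x, t) + u(x − h, t)], and the two cases correspond to r = 1 and r = 1/2.

```python
import numpy as np

# Explicit scheme u(x, t+k) = u(x, t) + r [u(x+h) − 2u(x) + u(x−h)],
# r = k/h², started from the step data of the text (sizes assumed).
def run(r, steps=4):
    u = np.array([0.0] * 8 + [1.0] * 8)
    for _ in range(steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

print("r = 1   (h² = k): ", run(1.0))   # oscillates, leaves the range [0, 1]
print("r = 1/2 (h² = 2k):", run(0.5))   # varies smoothly from 0 to 1
```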

From these examples it is clear that if we wish to use the method of nets to get sufficiently accurate and reliable results, we must exercise great discretion in our choice of intervals in the net and must make preliminary investigations to justify the application of the method.

The solutions obtained by using the equations of mathematical physics for these or other problems of natural science give us a mathematical description of the expected course or the expected character of the physical events described by these equations.

Since the construction of a model is carried out by means of the equations of mathematical physics, we are forced to ignore, in our abstractions, many aspects of these events, to reject certain aspects as nonessential and to select others as basic, from which it follows that the results we obtain are not absolutely true. They are absolutely true only for that scheme or model that we have considered, but they must always be compared with experiment, if we are to be sure that our model of the event is close to the event itself and represents it with a sufficient degree of exactness.

The ultimate criterion of the truth of the results is thus practical experience only. In the final analysis, there is just one criterion, namely practical experience, although experience can only be properly understood in the light of a profound and well-developed theory.

If we consider the vibrating string of a musical instrument, we can understand how it produces its tones only if we are acquainted with the laws for superposition of characteristic oscillations. The relations that hold among the frequencies can be understood only if we investigate how these frequencies are determined by the material, by the tension in the string, and by the manner of fixing the ends. In this case the theory not only provides a method of calculating any desired numerical quantities but also indicates just which of these quantities are of fundamental importance, exactly how the physical process occurs, and what should be observed in it.

In this way a domain of science, namely mathematical physics, not only grew out of the requirements of practice but in turn exercised its own influence on that practice and pointed out paths for further progress.

Mathematical physics is very closely connected with other branches of mathematical analysis, but we cannot discuss these connections here, since they would lead us too far afield.

 III.

FUNCTIONALS, FRACTALS: ∆≥|3|

CALCULUS OF VARIATIONS

Examples of variational problems.

Before we study the calculus of variations as st-ates/ages of a physical/biological T.œ, we consider some classic problems.

The curve of fastest descent.

The problem of the brachistochrone, or the curve of fastest descent, was historically the first problem in the development of the calculus of variations. Its interest remains in that it is the simplest combination of a space and a time dimension with a curved form, which is faster than a lineal one, showing the least time principle, whereby time, hence function and motion, objectively dominates space, form and mind:

Among all curves connecting the points M1 and M2, it is required to find that one along which a mathematical point, moving under the force of gravity from M1, with no initial velocity, arrives at the point M2 in the least time.
To solve this problem we must consider all possible curves joining M1 and M2. If we choose a definite curve l, then to it will correspond some definite value T of the time taken for the descent of a material point along it. The time T will depend on the choice of l, and of all curves joining M1 and M2 we must choose the one which corresponds to the least value of T.

The problem of the brachistochrone may be expressed in the following way.
We draw a vertical plane through the points M1 and M2. The curve of fastest descent must obviously lie in it, so that we may restrict ourselves to such curves. We take the point M1 as the origin, the axis Ox horizontal, and the axis Oy vertical and directed downward (figure 1). The coordinates of the point M1, will be (0, 0); the coordinates of the point M2 we will call (x2, y2). Let us consider an arbitrary curve described by the equation:

y = ƒ(x),  0 ≤ x ≤ x2,    (1)

where f is a continuously differentiable function. Since the curve passes through M1 and M2, the function f at the ends of the segment [0, x2] must satisfy the conditions: ƒ(0) = 0, ƒ(x2) = y2.    (2)
If we take an arbitrary point M(x, y) on the curve, then the velocity v of a material point at this point of the curve will be connected with the y-coordinate of the point by the well-known physical relation: ½v² = gy, whence v = √(2gy).

The time necessary for a material point to travel along an element ds of arc of the curve has the value:

dt = ds/v = √(1 + y′²) dx / √(2gy),

and thus the total time of the descent of the point along the curve from M1 to M2 is equal to:

T = ∫_0^{x2} √(1 + y′²)/√(2gy) dx.    (3)

Finding the brachistochrone is equivalent to the solution of the following minimal problem: Among all possible functions (1) that satisfy conditions (2), find that one which corresponds to the least value of the integral (3).
The surface of revolution of least area.

Among the curves joining two points of a plane, it is required to find the one whose arc, by rotation around the axis Ox, generates the surface of least area.
We denote the given points by M1(x1, y1) and M2(x2, y2) and consider an arbitrary curve given by the equation: y = ƒ(x).    (4)

If the curve passes through M1 and M2, the function f will satisfy the conditions: ƒ(x1) = y1, ƒ(x2) = y2.    (5)
When rotated around the axis Ox this curve describes a surface with area numerically equal to the value of the integral:

S = 2π ∫_{x1}^{x2} y √(1 + y′²) dx.    (6)

This value depends on the choice of the curve, or equivalently of the function y = f(x). Among all functions (4) satisfying conditions (5) we must find the one which gives the least value to the integral (6).

 Uniform deformation of a membrane.

By a membrane we usually mean an elastic surface that is plane in the state of rest, bends freely, and does work only against extension. We assume that the potential energy of a deformed membrane is proportional to the increase in the area of its surface.
In the state of rest let the membrane occupy a domain B of the Oxy plane (figure 2). We deform the boundary of the membrane in a direction perpendicular to Oxy and denote by ϕ(M) the displacement of the point M of the boundary. Then the interior of the membrane is also deformed, and we are required to find the position of equilibrium of the membrane for a given deformation of its boundary:

With a great degree of accuracy we may assume that all points of the membrane are displaced perpendicularly to the plane Oxy. We denote by u(x, y) the displacement of the point (x, y). The area of the membrane in its displaced position will be:

∬B √(1 + ux² + uy²) dx dy.

If the deformations of the elements of the membrane are so small that we can legitimately ignore higher powers of ux and uy, this expression for the area may be replaced by a simpler one:

∬B [1 + ½(ux² + uy²)] dx dy.

The change in the area of the membrane is then equal to:

½ ∬B (ux² + uy²) dx dy,

so that the potential energy of the deformation will have the value:

W = μ ∬B (ux² + uy²) dx dy,    (7)

where μ is a constant depending on the elastic properties of the membrane.
Since the displacement of the points on the edge of the membrane is assumed to be given, the function u(x, y) will satisfy the condition:

u|l = ϕ(M)    (8)

on the boundary l of the domain B.
In the position of equilibrium the potential energy of the deformation must have the smallest possible value, so that the function u(x, y), describing the displacement of the points of the membrane, is to be found by solving the following mathematical problem: Among all functions u(x, y) that are continuously differentiable on the domain B and satisfy condition (8) on the boundary, find the one which gives the least value to the integral (7).

Extreme values of functionals and the calculus of variations.

These examples allow us to form some impression of the kind of problems considered, but to define exactly the position of the calculus of variations in mathematics, we must become acquainted with certain new concepts. We recall that one of the basic concepts of mathematical analysis is that of a function. In the simplest case the concept of functional dependence may be described as follows. Let M be any set of real numbers. If to every number x of the set M there corresponds a number y, we say that there is defined on the set M a function y = f(x). The set M is often called the domain of definition of the function.
The concept of a functional is a direct and natural generalization of the concept of a function and includes it as a special case.
Let M be a set of objects of any kind. The nature of these objects is immaterial at this time. They may be numbers, points of a space, curves, functions, surfaces, states or even motions of a mechanical system. For brevity we will call them elements of the set M and denote them by the letter x.
If to every element x of the set M there corresponds a number y, we say that there is defined on the set M a functional y = F(x).
If the set M is a set of numbers x, the functional y = F(x) will be a function of one argument. When M is a set of pairs of numbers (x1, x2) or a set of points of a plane, the functional will be a function y = F(x1, x2) of two arguments, and so forth.

For the functional y = F(x), we state the following problem:
Among all elements x of M find that element for which the functional y = F(x) has the smallest value.
The problem of the maximum of the functional is formulated in the same way.
We note that if we change the sign in the functional F(x) and consider the functional −F(x), the maximum (minimum) of F(x) becomes the minimum (maximum) of −F(x). So there is no need to study both maxima and minima; in what follows we will deal chiefly with minima of functionals.
In the problem of the curve of fastest descent, the functional whose minimum we seek will be the integral (3), the time of descent of a material point along a curve. This functional will be defined on all possible functions (1), satisfying condition (2).
In the problem of the position of equilibrium of a membrane, the functional is the potential energy (7) of the deformed membrane, and we must find its minimum on the set of functions u(x, y) satisfying the boundary condition (8).
Every functional is defined by two factors: the set M of elements x on which it is given and the law by which every element x corresponds to a number, the value of the functional. The methods of seeking the least and greatest values of a functional will certainly depend on the properties of the set M.
The calculus of variations is a particular chapter in the theory of functionals. In it we consider functionals given on a set of functions, and our problem consists of the construction of a theory of extreme values for such functionals.
This branch of mathematics became particularly important after the discovery of its connection with many situations in physics and mechanics. The reason for this connection may be seen as follows. As will be made clear later, it is necessary, in order that a function provide an extreme value for a functional, that it satisfy a certain differential equation. On the other hand, as was mentioned in the chapters describing differential equations, the quantitative laws of mechanics and physics are often written in the form of differential equations. As it turned out, many equations of this type also occurred among the differential equations of the calculus of variations. So it became possible to consider the equations of mechanics and physics as extremal conditions for suitable functionals and to state the laws of physics in the form of requiring an extreme value, in particular a minimum, for certain quantities. New points of view could thus be introduced into mechanics and physics, since certain laws could be replaced by equivalent statements in terms of “minimal principles.” This in turn opened up a new method of solving physical problems, either exactly or approximately, by seeking the minima of corresponding functionals.

 The Differential Equations of the Calculus of Variations

The Euler differential equation.

The reader will recall that a necessary condition for the existence of an extreme value of a differentiable function f at a point x is that the derivative f′ be equal to zero at this point: f′(x) = 0; or what amounts to the same thing, that the differential of the function be equal to zero here: df = f′(x) dx = 0.
Our immediate goal will be to find an analogue of this condition in the calculus of variations, that is to say, to set up a necessary condition that a function must satisfy in order to provide an extreme value for a functional.
We will show that such a function must satisfy a certain differential equation. The form of the equation will depend on the kind of functional under consideration. We begin with the so-called simplest integral of the calculus of variations, by which we mean a functional with the following integral representation:

I(y) = ∫_{x1}^{x2} F(x, y, y′) dx.    (9)

The function F, occurring under the integral sign, depends on three arguments (x, y, y′). We will assume it is defined and twice continuously differentiable with respect to the argument y′ for all values of this argument, and with respect to the arguments x and y in some domain B of the Oxy plane. Below it is assumed that we always remain in the interior of this domain.

Here y is a function of x:

y = y(x),  x1 ≤ x ≤ x2,    (10)

continuously differentiable on the segment [x1, x2], and y′ is its derivative.
Geometrically the function y(x) may be represented on the Oxy plane by a curve l over the interval [x1, x2]. The integral (9) is a generalization of the integrals (3) and (6), which we encountered in the problem of the curve of fastest descent and the surface of revolution of least area. Its value depends on the choice of the function y(x), or in other words of the curve l, and the problem of its minimum value is to be interpreted as follows:
Given some set M of functions (10) (curves l); among these we must find that function (curve l) for which the integral I(y) has the least value.
We must first of all define exactly the set of functions M for which we will consider the value of the integral (9). In the calculus of variations the functions of this set are usually called admissible for comparison. We consider the problem with fixed boundary values. The set of admissible functions is defined here by the following two requirements:
1. y(x) is continuously differentiable on the segment [x1, x2];
2. At the ends of the segment y(x) has values given in advance:

y(x1) = y1,  y(x2) = y2.    (11)

Otherwise the function y(x) may be completely arbitrary. In the language of geometry, we are considering all possible smooth curves over the interval [x1, x2] which pass through the two points A(x1, y1) and B(x2, y2) and can be represented by the equation (10). The function giving the minimum of the integral will be assumed to exist and we will call it y(x).
The following simple and ingenious arguments, which can often be applied in the calculus of variations, lead to a particularly simple form of the necessary condition which y(x) must satisfy. In essence they allow us to reduce the problem of the minimum of the integral (9) to the problem of the minimum of a function.
We consider the family of functions dependent on a numerical parameter α:

ȳ(x) = y(x) + αη(x).    (12)

In order that ȳ(x) be an admissible function for arbitrary α, we must assume that η(x) is continuously differentiable and vanishes at the ends of the interval [x1, x2]:

η(x1) = η(x2) = 0.    (13)

The integral (9) computed for ȳ will be a function of the parameter α:

Φ(α) = I(y + αη).

Since y(x) gives a minimum to the value of the integral, the function Φ(α) must have a minimum for α = 0, so that its derivative at this point must vanish:

Φ′(0) = ∫_{x1}^{x2} [Fy η + Fy′ η′] dx = 0,    (14)

where Fy and Fy′ denote the partial derivatives of F with respect to y and y′.

This last equation must be satisfied for every continuously differentiable function η(x) which vanishes at the ends of the segment [x1, x2]. In order to obtain the result which follows from this, it is convenient to transform the second term in condition (14) by integration by parts; since η vanishes at the ends of the segment, this gives:

∫_{x1}^{x2} [Fy − (d/dx)Fy′] η(x) dx = 0.    (15)

It may be shown that the following simple lemma holds.
Let the following two conditions be fulfilled:
1. The function f(x) is continuous on the interval [a, b];

2. The function η(x) is continuously differentiable on the interval [a, b] and vanishes at the ends of this interval.

Then f(x) ≡ 0 on [a, b] if for every such function η(x) the following integral is equal to zero:

∫ₐᵇ f(x) η(x) dx = 0.

For let us assume that at some point c the function f is different from zero, and show that then, in contradiction to the condition of the lemma, a function η(x) necessarily exists for which ∫ₐᵇ f(x) η(x) dx ≠ 0.

Since f(c) ≠ 0 and f is continuous, there must exist a neighborhood [α, β] of c in which f will be everywhere different from zero and thus will have a constant sign throughout:

We can always construct a function η(x) which is continuously differentiable on [a, b], positive on (α, β), and equal to zero outside of (α, β) (figure).
Such a function η(x), for example, is defined by the equations:

η(x) = (x − α)²(β − x)²  on [α, β],  η(x) = 0 outside [α, β].    (16)

Then ∫ₐᵇ f(x) η(x) dx = ∫_α^β f(x) η(x) dx.
The latter of these integrals cannot be equal to zero since, in the interior of the interval of integration, the product fη is different from zero and never changes its sign.
Since equation (15) must be satisfied for every η(x) that is continuously differentiable and vanishes at the ends of the segment [x1, x2], we may assert, on the basis of the lemma, that this can occur only in the case:

Fy − (d/dx)Fy′ = 0.    (17)

This equation is a differential equation of the second order with respect to the function y. It is called Euler’s equation.
We may state the following conclusion.
If a function y(x) minimizes the integral I(y), then it must satisfy Euler’s differential equation (17). In the calculus of variations, this last statement has a meaning completely analogous to the necessary condition df = 0 in the theory of extreme values of functions. It allows us immediately to exclude all admissible functions that do not satisfy this condition, since for them the integral cannot have a minimum, so that the set of admissible functions we need to study is very sharply reduced.
Solutions of equation (17) have the property that for them the derivative [(d/dα)I(y + αη)]α=0 vanishes for arbitrary η(x), so that they are analogous in meaning to the stationary points of a function. Thus it is often said that for solutions of (17) the integral I(y) has a stationary value.
In our problem with fixed boundary values, we do not need to find all solutions of the Euler equation but only those which take on the values y1, y2, at the points x1, x2.
We turn our attention to the fact that the Euler equation (17) is of the second order. Its general solution will contain two arbitrary constants: y = Φ (x, C1, C2). These must be defined so that the integral curve passes through the points A and B, so we have the two equations for finding the constants C1 and C2:   Φ (x1, C1, C2)= y1;  Φ (x2, C1, C2)=y2

In many cases this system has only one solution and then there will exist only one integral curve passing through A and B.
The search for functions giving a minimum for this integral is thus reduced to the solution of the following boundary-value problem for differential equations: On the interval [x1, x2] find those solutions of equation (17) that have the given values y1, y2 at the ends of the interval.
Frequently this last problem can be solved by using known methods in the theory of differential equations.
We emphasize again that every solution of such a boundary-value problem can provide only a suspected minimum and that it is necessary to verify whether or not it actually does give a minimum value to the integral. But in particular cases, especially in those occurring in the applications, Euler’s equation completely solves the problem of finding the minimum of the integral. Suppose we know initially that a function giving a minimum for the integral exists, and assume, moreover, that the Euler equation (17) has only one solution satisfying the boundary conditions (11). Then only one of the admissible curves can be a suspected minimum, and we may be sure, under these circumstances, that the solution found for the equation (17) indeed gives a minimum for the integral.
Example. It was previously established that the problem of the curve of fastest descent may be reduced to finding the minimum of the integral:

T = ∫_0^{x2} √(1 + y′²)/√(2gy) dx.

Euler’s equation for this integral has the first integral:

y(1 + y′²) = k,

from which, by integrating, we get: x = ±(k/2)(u − sin u) + C. Since the curve must pass through the origin, it follows that we must put C = 0.

In this way we see that the brachistochrone is the cycloid:

x = (k/2)(u − sin u),  y = (k/2)(1 − cos u).

The constant k must be found from the condition that this curve passes through the point M2(x2, y2).
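A numerical check that the cycloid actually beats the straight chord (the endpoint, g, and the quadrature are illustrative choices): along the cycloid the integrand simplifies, since ds = k sin(u/2) du and v = √(2gk) sin(u/2), so that dt = √(k/(2g)) du.

```python
import numpy as np

# Descent times T = ∫ √(1 + y'²) / √(2gy) dx for the cycloid
# x = (k/2)(u − sin u), y = (k/2)(1 − cos u) and for the straight
# chord to the same endpoint.  g, k and the endpoint are assumed.
g, k = 9.81, 2.0
u2 = np.pi
x2, y2 = (k / 2) * (u2 - np.sin(u2)), (k / 2) * (1 - np.cos(u2))

# On the cycloid dt = √(k/(2g)) du, so the time is simply:
T_cycloid = np.sqrt(k / (2 * g)) * u2

# Straight line y = (y2/x2) x, midpoint rule (integrable singularity at 0).
m = y2 / x2
N = 100000
xm = (np.arange(N) + 0.5) * (x2 / N)
T_line = np.sum(np.sqrt(1 + m**2) / np.sqrt(2 * g * m * xm)) * (x2 / N)

print(T_cycloid, T_line)   # ≈ 1.00 s versus ≈ 1.19 s: the cycloid wins
```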

Functionals depending on several functions.

The simplest functional in the calculus of variations (9) depended on only one function. In the applications such functionals will occur in those cases where the objects (or their behavior) are defined by only one functional dependence. For example, a curve in the plane is defined by the dependence of the ordinate of a point on its abscissa, the motion of a material point along an axis is defined by the dependence of its coordinate on time, etc.
But we must often deal with objects that cannot be defined so simply. In order to define a curve in space, we must know the functional dependence of two of its coordinates on the third. The motion of a point in space is defined by the dependence of its three coordinates on time, etc. Study of these more complicated objects leads to variational problems with several varying functions.
We will restrict ourselves to cases in which the functional depends on two functions y(x) and z(x), since the case of a larger number of functions does not differ in principle from this one.
We consider the following problem. Admissible pairs of functions y(x) and z(x) are defined by the conditions:
1. The functions:

y = y(x),  z = z(x)    (18)

are continuously differentiable on the segment [x1, x2];

2. At the ends of the segment these functions have given values:

y(x1) = y1,  y(x2) = y2;  z(x1) = z1,  z(x2) = z2.    (19)

Among all possible pairs of functions y(x) and z(x), we must find the pair that gives the least value to the integral:

I(y, z) = ∫_{x1}^{x2} F(x, y, z, y′, z′) dx.    (20)

In the three-dimensional space x, y, z, each pair of admissible functions corresponds to a curve l, defined by equations (18) and passing through the points A(x1, y1, z1) and B(x2, y2, z2).

We must find the minimum of the integral (20) on the set of all such curves.
We assume that the pair of functions giving the minimum of the integral (20) exists, and we will call these functions y(x) and z(x). Together with them we consider a second pair of functions:

ȳ(x) = y(x) + αη(x),  z̄(x) = z(x) + αζ(x),

where η(x) and ζ(x) are any continuously differentiable functions vanishing at the ends x1, x2 of the segment; ȳ and z̄ will then also be admissible, and for α = 0 they will coincide with the functions y, z. We substitute them in (20):

Φ(α) = I(y + αη, z + αζ).

The integral so derived will be a function of α. Since ȳ and z̄ coincide with y and z when α = 0, the function Φ(α) must have a minimum for α = 0. But at a minimum point the derivative of Φ must vanish: Φ′(0) = 0.

Computing the derivative gives:

Φ′(0) = ∫_{x1}^{x2} [Fy η + Fy′ η′ + Fz ζ + Fz′ ζ′] dx = 0,

or, if the terms in η′ and ζ′ are integrated by parts:

∫_{x1}^{x2} [Fy − (d/dx)Fy′] η dx + ∫_{x1}^{x2} [Fz − (d/dx)Fz′] ζ dx = 0.

This last equation must be satisfied for any two continuously differentiable functions η(x) and ζ(x) vanishing at the ends of the interval. Hence, from the basic lemma proved earlier, the following two conditions must be fulfilled:

Fy − (d/dx)Fy′ = 0,  Fz − (d/dx)Fz′ = 0.    (21)

Hence, if the functions y, z give a minimum for the integral (20), they must satisfy the system of Euler differential equations (21).
This result again allows us to replace a variational problem for the minimum of the integral (20) by a boundary-value problem in the theory of differential equations: On the interval [x1, x2], we must find those solutions y, z of the system of differential equations (21) that satisfy the boundary conditions (19).

As in the preceding case, this opens up a possible path for the solution of the minimal problem.
As an example of an application of the Euler system (21), let us consider the variational principle of Hamilton in Newtonian mechanics. We restrict ourselves to the simplest form of this principle.

Now the principle is related to the conservation principles. As the 3 parameters of measure in ∆ physics form an identity:

S/T (speed-distance) × T/S (density of active magnitude) = 1,

the flux ρv will be the fundamental parameter that encloses all others.

The other key balance is the 0-identity: not only the 1-identity of flux = speed × density, but the 0 for superposition dimensions, which is the balance of energies:

−U (potential energy) = T (kinetic energy).

Thus what the Hamiltonian does is to set their sum equal to zero, which is what the Lagrangian does at the smaller, finitesimal ‘differential’ scale.

We consider a material body of mass m and assume that the dimensions and form of the body may be ignored, so that we may consider it as a material point.
We assume that the point moves from its position M1(x1, y1, z1) at time t1 to the position M2(x2, y2, z2) at time t2. We also assume that the motion occurs under the laws of Newtonian mechanics and is caused by application of a force F(x, y, z, t) which depends on the position of the point and on the time t and possesses a potential function U(x, y, z, t). This last condition means the following: the components Fx, Fy, Fz of the force F along the coordinate axes are the partial derivatives of the function U with respect to the corresponding coordinates:

Fx = ∂U/∂x,  Fy = ∂U/∂y,  Fz = ∂U/∂z.

We assume the motion to be free, that is, not subject to any kind of constraints.
The equations of motion of Newton are:

mx″ = Fx,  my″ = Fy,  mz″ = Fz.

If the point obeys the laws of Newtonian mechanics, it moves in a completely determined manner. But together with these “Newtonian motions” of the point, let us consider other (non-Newtonian) motions, which for brevity we will call “admissible,” and which will be defined by two requirements only: that at time t1 the point is in the position M1 and at time t2 in the position M2.
How can we distinguish the “Newtonian motion” of the point from these other “admissible” motions? Such a possibility is given by the Ostrogradskiĭ-Hamilton principle.
We introduce the kinetic energy of the point:

T = (m/2)(x′² + y′² + z′²),

and the action integral:

I = ∫_{t1}^{t2} (T + U) dt.

The principle states: The “Newtonian motion” of the point is distinguished among all its “admissible” motions by the fact that it gives the action integral a stationary value.
The action integral I depends on three functions: x(t), y(t), z(t).
Since for all the motions under comparison the initial and final positions of the point are identical, the boundary values of these functions are fixed. We are dealing here with a variational problem for three varying functions with fixed values at the ends of the interval [t1, t2].
Previously we agreed to say that the integral has a stationary value for any curve which is an integral curve of the Euler equation (17). In our problem we are integrating the function:

F = (m/2)(x′² + y′² + z′²) + U(x, y, z, t),

which depends on three functions x(t), y(t), z(t), so that for a stationary value of the integral we must satisfy the system of three differential equations:

(d/dt)(∂F/∂x′) − ∂F/∂x = 0,  (d/dt)(∂F/∂y′) − ∂F/∂y = 0,  (d/dt)(∂F/∂z′) − ∂F/∂z = 0.

Since Fx = ∂U/∂x and ∂F/∂x′ = mx′, ···, the system of Euler equations is identical with the equations of motion of Newtonian mechanics, which provides a verification of the Ostrogradskiĭ-Hamilton principle.
The minimum problem for a multiple integral.

The last problem in the calculus of variations to which we wish to draw the attention of the reader is the problem of minimizing a multiple integral. Since the facts connected with the solution of such problems are similar for integrals of any multiplicity, we will confine ourselves to the simplest case, that of double integrals.
Let B be a domain in the Oxy plane, bounded by the contour l. The set of admissible functions is defined by the conditions:
1. u(x, y) is continuously differentiable on the domain B.
2. On l the function u takes given values:

u|l = f(M).    (22)
Among all such functions we must find the one which gives the minimum value for the integral:

I(u) = ∬B F(x, y, u, ux, uy) dx dy.    (23)
The given boundary values (22) for the function u in the space (x, y, u) determine a given space curve Γ, lying above l (cf. figure 2, Chapter VII). We consider all possible surfaces S passing through Γ and lying above B. Among these we want to find the one for which the integral (23) is minimal.
As before, we assume the existence of the minimizing function and denote it by u. At the same time we consider another function:

ū = u + αη,

where η(x, y) is any continuously differentiable function vanishing on l. Then the function:

Φ(α) = I(u + αη)

must have a minimum for α = 0. In this case its first derivative must be equal to zero for α = 0, Φ′(0) = 0, or:

∬B [Fu η + Fux ηx + Fuy ηy] dx dy = 0.    (24)
We transform the last two terms by Green’s formula:

∬B [Fux ηx + Fuy ηy] dx dy = ∮l η [Fux cos(n, x) + Fuy cos(n, y)] ds − ∬B η [∂/∂x (Fux) + ∂/∂y (Fuy)] dx dy.

The contour integral along l must vanish, since on the contour l the function η is equal to zero, so that condition (24) may be put in the form:

∬B η [Fu − ∂/∂x (Fux) − ∂/∂y (Fuy)] dx dy = 0.

This equation must be satisfied for every function η which is continuously differentiable and vanishes on the boundary l.
We may conclude, as before, that at all points of the domain B the equation:

Fu − ∂/∂x (Fux) − ∂/∂y (Fuy) = 0    (25)

must be satisfied.
So if the function u gives a minimum for the integral (23), it must satisfy the partial differential equation (25).
As in all the preceding problems, we have here established a connection between a variational problem of minimizing an integral and a boundary value problem for a differential equation (in this case partial).

Example. The displacement u(x, y) of points of a membrane with a deformed boundary is to be found from the condition of the minimum of the potential energy:

W = μ ∬B (ux² + uy²) dx dy,

for the given boundary values u|l = ϕ.

Omitting, for simplicity, the constant factor μ, we may set F = ux² + uy², so that equation (25) has the form:

∂²u/∂x² + ∂²u/∂y² = 0.

Thus the problem of determining the displacement of the points of a membrane has been reduced to that of finding a harmonic function u with given values on the boundary of the domain.

 

Let us now bring in the formalism of Hilbert spaces, even if this post is already too long and NOT YET DONE (and still incomplete, since many of the paragraphs taken from Aleksandrov’s excellent and ‘simple’ introduction to the discipline need further comments to vitalise and reinterpret analysis). So we shall study the rest of it, Hilbert spaces, as the final evolution of complex @nalytic geometry, where it makes better sense.

The Connection Between Functions of a Complex Variable and the Problems of Mathematical Physics

Connection with problems of hydrodynamics.

The Cauchy-Riemann conditions relate the problems of mathematical physics to the theory of functions of a complex variable. Let us illustrate this from the problems of hydrodynamics.

Examples of plane-parallel flow of a fluid.

We consider several examples. Let w = Az,    (30) where A is a complex quantity. From (29) it follows that: u + iv = Ā.
Thus the linear function (30) defines the flow of a fluid with constant vector velocity. If we set A = u0 − iv0 then, decomposing w into its real and imaginary parts, we have:

ϕ(x, y) = u0x + v0y,  ψ(x, y) = u0y − v0x,

so that the streamlines ψ = const will be straight lines parallel to the velocity vector (figure 7).
As a second example we consider the function: w=Az² where the constant A is real. In order to graph the flow, we first determine the streamlines. In this case: ψ (x,y) = 2 Axy and the equations of the streamlines are: xy = const.

These are hyperbolas with the coordinate axes as asymptotes (figure 8). The arrows show the direction of motion of the particles along the streamlines for A > 0. The axes Ox and Oy are also streamlines.
If the friction in the liquid is very small, we will not disturb the rest of the flow if we replace any streamline by a rigid wall, since the fluid will glide along the wall. Using this principle to construct walls along the positive coordinate axes (in figure 8 they are represented by heavy lines), we have a diagram of how the fluid flows irrotationally, in this case around a corner:
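A small numerical check of this picture (A, the starting point and the integration scheme are arbitrary choices): integrating a particle path with velocity u + iv equal to the conjugate of dw/dz should keep ψ = Im w = 2Axy constant, i.e., the particle stays on its hyperbola.

```python
import numpy as np

# For w = A z² (A real): velocity u + iv = conj(dw/dz) = conj(2Az),
# and ψ = Im w = 2Axy should stay constant along a particle path.
A = 1.0
z = 0.5 + 1.5j                        # starting point (assumed)
psi0 = (A * z**2).imag                # initial value of ψ = 2Axy

dt, steps = 1e-4, 10000               # crude Euler integration
for _ in range(steps):
    z = z + np.conj(2 * A * z) * dt   # dz/dt = u + iv

print(psi0, (A * z**2).imag)          # ψ nearly unchanged: same hyperbola
```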

An important example of a flow is given by the function: