**SUMMARY**

5D Povs: ANALYTIC PLANES.

I. 3 Frames of reference as reflection of the 3 topologies of T.œs: flat, hyperbolic and cylindrical.

II. Canonical curves as reflection of the world cycle and structure of super organisms.

III. @NALYTIC GEOMETRY AND MATHEMATICAL PHYSICS

IV. THE COMPLEX PLANE.

V. Vector spaces.

The Universe is a fractal super organism made of…

**Asymmetries** between its 4 dual components, which can either annihilate or evolve, which we shall call **balance≈become symmetric, or become perpendicular/antisymmetric**:

**S: Space**; an ENSEMBLE OF **ternary topologies (|+O≈ø)**… which make up the **3 physiological networks** (|-motion/limbs-potentials + O-particle/heads ≈ Ø-vital energy) of all *simultaneous super organisms.*

**∆: Planes of size** distributed in ∆±i relative *fractal scales* that come together as **∆º** super organisms, each one the sum of smaller **∑∆-1** super organisms… which *trace* their existence in a larger **∆+1 world…**

**ð: Time cycles**: a series of timespace actions of survival which, integrated as a whole, form the sequential **3 ages** of a cycle of existence, each one dominated by the activity of one of those 3 networks: motion-youth, or relative past, dominated by the motion systems (limbs, potentials); iterative present, dominated by the reproductive vital energy (body-waves); and informative 3rd age, or relative future, dominated by the informative systems, whose 'center' is:

**@: The active linguistic mind**, which reflects the infinite cycles of the outer world and controls those of its inner world through its languages of information, which guide its 5 survival actions: 3 simplex **a, e, i** finitesimal actions that exchange energy (**e**-ntropy feeding), motion (**a**-ccelerations) and **i**-nformation (perceptions) with other beings; and two complex actions, **o**-ffspring reproduction and social evolution from individuals into **U**-niversals, which maximize the duration in time and extension in space of the being.

Because the scientific method requires OBJECTIVE measure of the existence of a mind, which is NOT perceivable directly, we *infer its existence by the fact that a system performs the 5 external actions, which can be measured objectively, in the same manner we infer the existence of gravitational in-form-ative forces by their external actions upon massive objects,* hence eliminating the previous limit to a thorough understanding of the sentient, informative Universe. And we can further classify organisms into simplex minds – all of which must gauge information, move and feed to survive – and complex systems, those which can also perform a palingenetic reproductive, social evolution, ∆-1: ∑∆-1≈∆º.

The study of those 4 elements of all realities, its actions and ternary operandi, structures *the dynamic ‘Generator Equation’ of all Space-time Systems of the Universe, written in its simplest form as a singularity-mind equation:*

*O x ∞ = K*

*Or in dynamic way, S@<≈>∆ð.*

So that is the game: 3 asymmetries of scale, age and form, which can come together or annihilate, and which each language represents in different manners through those elements and its operandi.

In mathematics, with the duality of **inverse operations: + –, x ÷, √ xª and ∫∂.**

**Languages express the elements of reality and its operandi**

It is then clear that what languages, as synoptic mirrors of the mind, will try to do is to establish the basic relationships between the space, time and scale of the being, expressing them through its operandi, DEPENDING on the degree of perception the being has of reality and its scales – which might be reduced if the being is not fully aware of all the scales of existence, as most minds exist only in a single plane of reality.

So does mathematics, through combinations of:

Sum/subtraction -> multiplication/division -> power/logarithm; point -> line -> plane -> volume, and so on.
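The ladder of inverse operandi can be checked numerically; a minimal sketch in Python (the pairings are the ones listed above; the specific numeric values are arbitrary choices of ours):

```python
import math

x = 8.0

# Exact inverse pairs: sum <-> subtraction, multiplication <-> division.
assert (x + 3) - 3 == x
assert (x * 3) / 3 == x

# Inverse up to float rounding: power <-> logarithm, power <-> root.
assert math.isclose(math.log(3 ** x, 3), x)
assert math.isclose((x ** 2) ** 0.5, x)

# Integral <-> derivative: differentiating the antiderivative recovers f.
f = lambda t: t ** 2
F = lambda t: t ** 3 / 3      # antiderivative of f
h = 1e-5
assert math.isclose((F(2 + h) - F(2 - h)) / (2 * h), f(2), rel_tol=1e-6)
```

Each rung of the ladder undoes the one next to it, which is the sense in which the operandi come in dual pairs.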

And to do so, as a fractal can always be divided in sub-fractals, mathematical disciplines subdivide further at all levels in 5 elements.

**THE RASHOMON TRUTH OF MATHEMATICAL SYSTEMS**

It follows then, from the definition of the 5 elements of all systems, an immediate classification of the five fundamental sub-disciplines of mathematics, each specialised in the study of one of those 5 dimensions of space-time:

**S: ¬E Geometry** studies fractal points of **simultaneous space, ∆-1**, & its ∆º networks, within an ∆+1 world domain.

**T§: Number theory** studies time **sequences** and ∆-1 social numbers, which gather in ∆º functions, part of ∆+1 functionals.

**S≈T: ¬Ælgebra** studies ∆º a(nti)symmetries between space and time dimensions and its complex ∆+1 structures… Namely it is the science of the operandi <≈> translated into mathematical mirrors.

**∆±¡ st: ∆nalysis** studies the motions, STeps and social gatherings derived of algebraic symmetries between functions and numbers (first derivatives/integrals), and the wider motions between scales of the fifth dimension (higher degree ∫∂ functions).

**@: Analytic geometry**, finally, represents the different mental points of view, self-centred into a system of coordinates, or 'worldviews' of a fractal point, of which naturally emerge 3 'different' perspectives according to the 3 'sub-equations' of the fractal generator: $p: toroid POV < ST: Cartesian plane > ðƒ: polar coordinates.

To which we can add the specific @-humind elements (human-biased mathematics) and its errors of comprehension, limited by our ego paradox: **Philosophy of mathematics** and its 'selfie' axiomatic methods of truth, which try to 'reduce' the properties of the Universe to the limited description provided by the limited version of mathematics known as Euclidean math (with an added single 5th non-E postulate) and Aristotelian logic (A->B single causality). This limit must be expanded, as we do with non-Æ vital mathematics and the study of maths within culture, as a language of History, used mostly by the western military lineal tradition, closely connected with the errors of mathematical physics.

In this post, thus, we shall deal with the multiple aspects of @-nalytic geometry and philosophy of mathematics. As usual it is a work in progress, at a simpler level than future add-ons in the fourth line, and using some basic book texts of the Soviet school, which had the proper 'dialectic' logic and experimental sense of the discipline, which the western idealist, German-based axiomatic schools lack.

**I. 5D PLANES**

**THE 5D MAIN @NALYTIC PLANES.**

In the graph, as usual we shall find a Kaleidoscopic relationship between the Dimensions of reality and its ‘image-mirror’ in any element or language, in this case in the ‘frames of reference’ of mathematical minds.

So a more detailed 5D 'Rashomon' effect on the discipline of @nalytic geometry gives us 5 sub-planes, each one a main frame of reference that studies reality from a distorted perspective.

So the 5 ‘world views’ or topological forms and its planes of reference are related to the 5 dimensions of reality:

**3D hyperbolic Dimension:** the Cartesian, ∑∏ graph.

**1D vortex**: the polar, ð§ cyclical graph.

**2D Lineal motion:** the Lineal, cylindrical $t.

To which we can add several *spatial graphs that portray from a ‘static mind-view’ the upper and lower planes of the 4-5Dimensions:*

-All kind of phase spaces, which detach mathematical analysis from the light space-time reality of the human eye and *portray a static mental space-form with information relevant to the perceiver.*

*And 3 more generalised perspectives closely connected to those scalar dimensions.*

**-0-1 D Generator Dimension’s main plane:** It is the Unit circle, expressed mostly in probabilistic temporal terms (where 1 is the value of ‘existence’ – the happening of an event/form), coupled with its 1-∞ equivalent plane (Measure theory) or:

**-5D Social Dimension:** It is the complex plane, where the polynomial=scalar degrees of the coordinates are different (either a root vs. a real number, or, if we 'square' both axes as we do in ∆st, a squared double positive real line vs. ±i² real coordinates).

**-4D entropic Dimension:** Finally as a generalisation that breaks the whole into all its points of view, generalised coordinates, with infinite individual points of view representing the statistical ensembles of entropic particles, of which Hilbert spaces used in quantum physics and phase spaces, used in thermodynamics are the main varieties.

There is THUS, as usual, a close 'homeomorphism', to use some pedantic math jargon (:'a correspondence between two figures or surfaces or other geometrical objects, defined by a one-to-one mapping that is continuous in both directions':), between the 5 dimensions of reality and the 5 main graphs of mathematics – Cartesian, polar, cylindrical, complex and Hilbert's – as reality can always be seen from the point of view of the time functions and space forms of those 5 dimensions.

Each of those graphs will therefore BE USED mostly to study problems regarding forms and functions of those 5 Dimensions.
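As an illustration of the claim that the graphs are interchangeable views of the same reality, one point can be read in Cartesian, polar, cylindrical and complex form; a sketch using only standard coordinate conversions (the mapping of each graph to a '5D function' is the text's interpretation, not part of the math):

```python
import cmath
import math

# One point of the plane, plus a height for the cylindrical view.
x, y, z = 3.0, 4.0, 5.0

# Cartesian -> polar: radius and angle of 'perception' from the 0-point.
r, theta = math.hypot(x, y), math.atan2(y, x)
assert math.isclose(r, 5.0)

# Cylindrical = polar in the plane plus the lineal z-axis.
cylindrical = (r, theta, z)

# Complex plane: the same planar point packed into one number.
p = complex(x, y)
assert math.isclose(abs(p), r)
assert math.isclose(cmath.phase(p), theta)

# Round trip: the polar view recovers the Cartesian one (up to rounding).
assert math.isclose(r * math.cos(theta), x)
assert math.isclose(r * math.sin(theta), y)
```

The same point carries the same information in every frame; what changes is which feature (radius, angle, axis) the frame makes simple.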

**RECAP.**

The multiple planes of analytic geometry, do not hide its fundamental property: to *perceive reality from a given point of view, the 0 point, or @-mind, and create from that perspective a certain distorted worldview that caters to the function and form of the @-mind, selecting information of reality to form a given space, which might seem ‘reality’ to the mind but will always be ‘a representation in which the mind will exercise its territorial will’ to paraphrase Schopenhauer.*

Thus analytic geometry ultimately studies the Universe from the perspective of its 3±∆ main ST dimensional subspecies, or partial equations of the fractal generator equation of T.œs.

**THE HUMAN CARTESIAN MIND-PLANE.**

What is the most important of those planes to mankind? Obviously the one that defines the humind perspective, which is the one that has the dimensions of our 'space-time', which is LIGHT – which for that reason is both the limit of transmission of information perceivable by man, and a force of constant speed, *as we perceive it in the stillness of the mind.*

**Spaces as the a priori reality. The Cartesian 3D graph**

In the graph: since the humind is Cartesian, the first and most widely used plane of reference will be a 3D-∑∏ Cartesian flat/hyperbolic plane, which mimics – it is homeomorphic to – the 'vital' 3 perpendicular dimensions of light space-time.

As such our mental spatial reality is a distortion of the Universe suitable for perception in 3D±∆ social colours. And so immediately we observe a correspondence between the FUNCTIONS-FORMS-ACTIONS of light as an organism and its Dimensions used in the humind to build our worldview of the Universe and its T.œs:

That we reduce further light information is self-evident but treated in the Physical analysis of quantum uncertainty principles. SUFFICE to say that *we reduce from each bidimensional holographic plane, or topological variety of form, one dimension to obtain the 3 canonical perpendicular lineal dimensions, which correspond in the humind to the functions of its light-systems:*

-1D: Height is the function of information where animal systems will place its particle-heads.

-3D: Width is the dimension of present reproduction across which animal systems will multiply its cells and reach bilateral symmetry.

-2D: Length is the dimension of motion ALONG which systems will move.

But the mind also observes the social and entropic, evolving and devolving arrows of time to complete a 5D dimension of reality. And this is done with the social/devolving arrows of light, called frequency colours. So humans add the duality of:

-Red/magenta: dissolving/dying colours of entropy and 'tired' light as it dissolves into vacuum.

-Blue/violet: colours of maximal frequency/form, as blue stars generate its photons.

-Green/Yellow: colours of vital energy/information in the intermediate spectra, which are colours of processed plant energy and direct light.

So the codes of light and its dimensions create the ‘interpreted’ world of the humind.

This was clear to Descartes, who called his geometry not Universal BUT HUMAN, depicting the world as a limited mirror of the total Universe. Since it is the human light space-time mind, with its 3 perpendicular magnetic, electric and speed fields, that determines the eye-geometry – an electronic machine absorbing light – something Kant also explained when affirming our mind IS Euclidean because light is Euclidean.

*So it is fundamental to understand the homeomorphism between the 3 lineal dimensions and the bidimensional REAL topologies, which they simplify.*

**VARIOUS 5D-MIND GEOMETRIES**

**All minds depict with different languages a 5D Universe.**

The mind is a subjective perspective constructed with pixels of a language: it first appears 'simplex, disconnected', then builds limited mirror-images of 'space' (simpler present dimensions), then moves in time, locking into symmetries of colours, and finally builds background planes. So the 'subject' or mind selects meaningful '5D bits' to make of the language, finally, an efficient mirror of reality to act on it.

This happens also in mathematical minds. And both types of minds likely evolved in animal life, and today in huminds and future computers, *adding ever more complex layers of perception of those dimensions.*

As we perceive first ‘simple motion’ (2D), coupled with position & formal size (1D), next ‘depth’ and the relationships between those positions, ≈, (3D) and finally ‘colour’ (4D red, 5D blue).

The mathematical mind representation can then be seen as a sequential evolution in time, with the social human mind of scientists learning first a bidimensional, still 'Greek' geometric spatial form (2D), coupled with a social-number size-density (∆§). Then came 1D position, as the @-mind view of analytic geometry, a self-centred humind point of view, which soon allowed the establishment of 3D S=T algebraic symmetries… and finally the development of ∆nalysis and its infinitesimal parts and wholes closed the 5 great 'disciplines' of maths as mirrors of the 5D elements of the Universe.

Now if you are ‘starting’ to understand the entangled, symmetric, kaleidoscopic Universe, the sequence of growth in the perception of the complexity of any language as mirror of the Universe, is similar to the way any system develops in time, orders in space, and acts in space-time. So the kaleidoscopic views reinforce each other to create the ultimate order of all systems of reality. As all ultimately comes to a world cycle in time symmetric to an adjacent efficient organic structure in space.

This happened in the classic age of mathematics with Descartes, when analytic geometry finally allowed the mixing of the time view of discrete numbers and the spatial view of continuous points. And from then on, in layers of growing complexity, maths mirrored the S≈T dualities of the Universe, of which the superorganism≈time world cycle is the one that encompasses all others.

The birth of @nalytic geometry thus starts the classic age (as beauty=classicism is indeed an S=T, form and function balance), the mature, most expansive≈reproductive, fruitful age of mathematics, which soon gave birth to ∆nalysis of the scales of the Universe, completing the ∆@•ST elements of maths (S-geometry, S=T algebra, T-number theory, @nalytic geometry and ∆nalysis).

So as complexity grew, new connections of the S≈T 5D²imensions of the being BRANCHED analytic geometry into new dual 'connected' parts; and finally *the entangled, kaleidoscopic Universe connected* each discipline of maths, mirror of one of the 5 dimensions, to all the others (analysis, number theory, algebra and geometry all being put, in a mind-view, on a frame of reference).

And of course each discipline sub-branched internally through the Disomorphisms, first into the 3 'sub-mind views' or 3 states – the S-cylindrical POV/frame of reference, the T-polar and the ST-Cartesian 'hyperbolic' frame – and then into the ∆-subdisciplines of Hilbert and phase spaces.

So @nalytic geometry could both expand to the 3 'present topologies' of any super organism and define its properties in scales, but always from a homeomorphism with one of the 5D.

For all those reasons, we consider the mind view of @nalytic geometry, the key step to professionalize mathematics, and ∆nalysis its final frontier to complete its 5D mirror view.

*The key symmetry between 5D seeds/minds and @nalytic geometry/∆nalysis.*

To that aim, two features of the @nalytic plane, as mirror of the 2 informative still states of reality – the seed and the mind – are key to understand also its homeomorphism with the 2 sides of ∆nalysis.

WE REFER to a fundamental theorem of measure – the homeomorphism between:

- The 0-1 'generational seed interval', or unit sphere, which represents the growth of the 'unit element' from the singularity positioned at 0 to the membrane of the one at 1, with the vital energy contained inside, and…
- The 1-∞ universe, which represents the OUTER world mirror as seen within the infinitesimal mind, but paradoxically has more information, more infinitesimal details, than the N+Z line.

So we can find many magic congruent relationships between the 0-1 and 1-∞ 'scalings' of maths, which *apply to different parallelisms of scales in science: i.e. the quantum 0-1 probability sphere and the 1-∞ thermodynamic statistical ∆+1 scale.*
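The 0-1 ↔ 1-∞ correspondence invoked here can be made concrete with the reciprocal map x ↦ 1/x, which is a bijection between the open intervals (0, 1) and (1, ∞) and is its own inverse; a minimal sketch (the 'seed'/'world' reading of the two intervals is the text's, not standard measure theory):

```python
import math
import random

inv = lambda t: 1.0 / t   # maps (0,1) <-> (1,∞); applying it twice returns the point

random.seed(0)
for _ in range(1000):
    x = random.uniform(1e-6, 1.0 - 1e-6)  # a point of the 0-1 'seed' interval
    y = inv(x)
    assert y > 1.0                         # it lands in the 1-∞ 'world' interval
    assert math.isclose(inv(y), x)         # the round trip recovers the point
```

Every point of the bounded interval has exactly one partner in the unbounded one, which is the sense in which the two 'scalings' mirror each other.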

Of all of them the key relation is the understanding of the unit cell of a whole organism as a ‘finitesimal’ 1/n – a part of a whole social number, n, which is a theme more proper of T§-number theory and ∆nalysis.

ALL THIS said, THE GREAT advance of analytic geometry, besides being a mind-biased view *which illustrates the different distortions of each of the 5D 'organs' of a system and how they see reality,* WAS to add the Cartesian coordinates of sequential numbers, allowing the first clear S=T analysis of space-time symmetries, referring two completely different elements of reality: the continuous plane view and the discrete numerical view. This even if huminds were/are still unaware of the differences, shown in classic paradoxes of irrational numbers, as they insist (Dedekind cut, axiomatic method) on *making equal a discrete numerical system that has 'dark spaces' – which do NOT exist – to differentiate numbers between them, albeit infinitesimal cuts, vs. the 'white' continuous view that ideally considers the NO existence of wholes.*

We shall thus study in this post analytic geometry, not ONLY BUT MAINLY as the humind view WITH all its further derivations, which fuses the S-patial, continuous point-description and the T-emporal, algebraic, discrete numerical description of space-time events; as such it is the 3rd S≈T, classic age of geometry.

**3 AGES OF @NALYTIC GEOMETRY: S≈T**

*But to the synchronous view of the 5D adjacent minds of a supœrganism we can add an analysis of the metalanguage of @nalytic geometry.* That is, we shall apply the Disomorphic method in its main temporal view, with a temporal study of its evolution in the 3 ages of maths:

YOUTH OF ANALYTIC GEOMETRY

In that sense @nalytic geometry started with the work of Descartes and my fellow countryman Fermat. But it was really already embedded in the study of conics by the Greeks, which now could be fully represented in the Cartesian graph – in topological terms, the conversion of a conic into a plane, where the center of the conic becomes the point of view of the observer, or world of geometry.

We shall then, as usual, make a parallel analysis of the evolution of the discipline through 3 ages of increasing 'form', and of its structures in space – so we shall study also in @-nalytic geometry the different mind views of reality according to coordinates. Let us start with this simpler first spatial view of @G (ab.) and how @G mirrors the 3 points of view or states of any system, which distort our image-mirror of reality within a given mind.

Now, in the classification of the different subjects of MATHEMATICS as a mirror symmetry of the 5 dimensions of space-time – @nalytic geometry, ∆nalysis, S=T algebra, T-heory of numbers and S-geometry – as we enter the final 3rd age of mathematics, *all of them reflecting the kaleidoscope which is the Universe, where those 5 elements constantly merge to give us more complex 5D² reflections, an epistemological theme is how to order all subjects, when they are in fact today merged, as they are in experimental reality (we do NOT see 1D or 2D beings, but complex 'knots' of 5D elements).*

So in true form we should INCLUDE EVERYTHING in every post, with a particular bias given by the dominant element in its 'ceteris paribus' analysis.

This would make it very redundant, as it is in fact now, and would require – as I hope to do before dying, or rather lowering my 5D into a 4D entropic explosion of little faster-thinking insects – cutting off a lot of redundancies to make it readable.

Yet the division of the 5D st factors is largely a personal matter of distribution of subjects, which now has such 'extemporaneous' elements as the symbols accessible in wordpress (: well, more seriously, I am doing an effort. So this post, if we were to be strict, should be concerned WITH EVERYTHING IN MATHS, as everything in maths is viewed through human @-minds and its ternary functions/forms of perception, equivalent to the 3 parts of the organic Universe.

But that would be silly, so instead we shall use analytic frames of reference in all other posts but kept their theorems, when they are used for algebraic s=t or ∆nalysis or pure geometric thought, in this other sections.

Here then we shall only be concerned with the ‘frames of reference’ in themselves, and the 3 types of ‘functions/forms’ they describe, hyperbolic, euclidean or elliptic geometry and the dimensions we use to study each of those possible s<st>t symmetries, frames of reference of 3 lineal dimensions of 2D-manifolds and of 3D-volumes, and its related ∆st theorems.

This arguably could also be studied in geometry, from where analytic geometry starts. But as we are using the post on geometry mostly for the advanced third age of geometry, non-euclidean points with volume, fractal and topological geometries, which ARE the most important modern branches that connect with the understanding of T.œs and its organic parts and structures, we keep it for this post-taken into account also that analytic geometry in its origins was used mostly by mathematical physics to study precisely those 3 type of curves first in one and then in 2 dimensions.

It was also the beginning of its use for mathematical physics, so we shall include at least the original experimental understanding of maths as the best mirror of astrophysics – a stuff which obviously could belong to the articles on astrophysics, where we shall only consider Kepler's orbital geometries – and some comments on the biased views of lineal time introduced by Cartesian graphs, THE TRUE ORIGIN OF THAT ABSURDITY called lineal time and absolute time and space, which occurred to Newton just because he was drawing on the sacred language of God, the Cartesian graph, its ellipses and comets. So he thought that below reality there was such an infinite single line of time and space, drawn by God, his 'alter ego'.

In brief, we shall introduce analytic geometry, mathematical physics and expand ad maximal the analysis of the 3 type of @-frames and its relationships with ∆st of which the laws of INVERSION and GROWTH of DIMENSIONS and the understanding in vital terms of concepts such as ‘angles of perception, identity and continuity’ are the most important.

So instead of doing as usual an analysis of the ‘3 ages of the discipline’ in time, we might say we shall do here after a brief introduction to mind representation in @- geometry, an analysis in space of the 3 main sub-varieties of mind-views, frames of reference and geometric figures there exists.

**The a priori reality of other ‘systems’.**

The importance of the specific mind species and information we define with a given frame of reference cannot be stressed enough. We have repeated ad nauseam that 'spaces' are artificial constructs of the mind, built to 'navigate' reality with the features of the forces of information available to it. *They are the a priori 'Kantian' categories that deform our reality; and vice versa, knowing the properties of a given space is essential to know the type of mind and species that navigates thanks to and through it.*

For example, the heart and soul of quantum mechanics is contained in the Hilbert spaces that represent the state-spaces of quantum mechanical systems. The internal relations among states and quantities, and everything this entails about the ways quantum mechanical systems behave, are all woven into the structure of these spaces, embodied in the relations among the mathematical objects which represent them.

This means that understanding what a system is like according to quantum mechanics is inseparable from familiarity with the internal structure of those spaces. Know your way around Hilbert space, and become familiar with the dynamical laws that describe the paths that vectors travel through it, and you know everything there is to know, in the terms provided by the theory, about the systems that it describes.
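A minimal numerical sketch of that point, using the smallest quantum state space – a single qubit in C². Unitary evolution moves the state vector along a 'path' but never changes its length; our example gate is the standard Hadamard, chosen by us for illustration, not taken from the text:

```python
import math

# A qubit state is a unit vector (a, b) in C^2: |a|^2 + |b|^2 = 1.
a, b = complex(1, 0), complex(0, 0)      # the |0> basis state

# Apply a Hadamard gate: one unitary step of the vector's path.
s = 1 / math.sqrt(2)
a2, b2 = s * (a + b), s * (a - b)

# The norm is preserved, so measurement probabilities still sum to 1...
assert math.isclose(abs(a2) ** 2 + abs(b2) ** 2, 1.0)
# ...and the new state gives equal odds for either measurement outcome.
assert math.isclose(abs(a2) ** 2, 0.5)
assert math.isclose(abs(b2) ** 2, 0.5)
```

Knowing which vectors exist and which paths the dynamics allows is, in this small case, literally all the theory says about the system.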

Know your way around Euclidean light space-time with its 3 perpendicular coordinates and you will know a lot about how huminds and similar electronic systems perceive the Universe. *And be aware that such Universe is only a monad’s world, different from many others.*

But as we treat the general theory of ‘mind-worlds’ in our post on Non-E geometry, we shall leave it here at this stage.

We shall finish with the complex plane for mathematical physics and number theory (as it is the frame most closely related to temporal numbers) and close the subject with a short view on vector spaces and its more complex, purely spatial frames of reference 'across ∆-scales', that is, Hilbert spaces and functional operators.

Thus we shall close with 'far removed' Hilbert spaces, which are of interest to understand the most far removed scales of reality – those of undistinguishable zillions of particles… *In an absolute relative Universe it is NOT that important to know in so much detail a far removed scale such as quantum physics is – the galaxy/atom as viewed from humans has 'weird properties' because our 'perception of it' is limited, so we can only know certain simple dimensions of their structure, namely the most external and 'visible', MOSTLY 2D-motions.*

**1, 2, 3 MIND COORDINATES IN A SINGLE ∆-PLANE**

Descartes’ theory is based on two concepts: the concept of coordinates and the concept of representing by the coordinate method any algebraic equation with two unknowns in the form of a curve in the plane.

**Generator Equation. The 3 points of view and its geometries & topological planes.**

*There is always a first observer which starts an action of perception in space-time, from a perspective that usually is biased by the function of the observer (STi, Se or To coordinates), which correspond to the Cartesian, cylindrical and spherical/polar coordinates of science.*

The Universe is a sum of a series of actions, ∆aeiou. Of them, the first action is perception by an observer, ∆o, of a field of energy, ∆e, towards which it will move, ∆a, in order to feed, ∆e, and use that energy to reproduce its information, ∆i, iterating a form like itself, which will gather with clone forms to create a larger, ∆U universal social plane.

This is all we should describe when we reduce the total reality of any self to its minimal cyclical space-time actions.

But to do so from a merely formal point of view, we need first to consider the point of view that kicks off the world cycle of actions of any function of existence, ∆o, which is thus the minimal and first unit-action of the Universe.

And this point of view and its relative frame of reference, will start the comprehension of an external reality.

*Thus the departure point of any mathematical course, should be the definition of the 3 fundamental coordinates which any entity uses to ‘adapt’ the perception of reality to its mind-view and the equation of the mind, which in terms of mathematical co-ordinates writes:*

**0.** *O-point x ∞ Universe = constant, static frame of reference.*

Thus the observer is not only the dominant element of those 5 actions in the real Universe, but also the initial 'point of view' of any mathematical analysis; and analytic geometry is rightly the first branch of mathematics to be studied.

And as it turns out – in the first of many fascinating symmetries between 'iTS' (isomorphic, |-Space & O-Time, the abbreviation most commonly used for the 3 components of the Universe) and each science – 3 are the fundamental points of view and frames of reference of analytical geometry, each one belonging to a 'fundamental state' of being in the Universe.

The temporal, polar point of view, centred in the O-point and its external membrane, determined by the radius and angle of perception; the cylindrical, lineal, energetic point of view, determined by the lineal axis of the frame of reference or 'altitude', the Z coordinate; and finally the Cartesian 'hyperbolic plane', corresponding to the STi bodies & waves of physical and biological systems.

From those 3 ‘mathematical perspectives’ the Universe constructs its vital geometries that we call ‘existential beings’.

And so the observer’s causal, logic, cyclical, informative sentient properties that allow it to perceive time cycles are the first questions to inquire.

To, is the first element to describe in reality.

And it is a frame of reference, an observer, an inertial point in relative fixed form with no motion that can perceive and map the Universe, an O x ∞ constant mapping mind. The observer.

**Coordinates, the 3 main frames of reference.**

Now, when we move to 3 dimensions, the 3 frames of reference become the 3 bidimensional planes of any of the infinite fractal systems of the Universe:

Se(Toroid field/limb)<STi (Hyperbolic wave/body)>Ot (spherical head/particle)

Now, the fact that each of the 3 fundamental domains of a space-time organism corresponds to one of the 3 fundamental topologies of a 2-3-4D manifold – the planar/toroid, cylindrical≈hyperbolic waves/bodies and spherical particles/heads – and to one of the 3 fundamental 0-points of perception, shows how close mathematics is to reality.

In the graph, the 3 subspecies of 2-manifolds have their expression in 3 coordinates, where the Cartesian is taken as an 'infinitely growing toroid' space.

The Organisms of the Universe are assemblies of geodesic curves perceived by each of the 3 elements in its corresponding coordinates.

This gives also a simple rule to know what kind of element we are studying in different sciences and events (the Se, STi or Ot element):

'The simplest frame of reference in which a problem formulates indicates which is the ternary topological element we study.' For example, if we can formulate the problem in polar coordinates with an equation simpler than in Cartesian coordinates, we will be studying a TO element (as when we formulate a gravitational or charge problem in polar coordinates). If it is easier to formulate in the toroid or the simplified Cartesian graph (a toroid opened along a z-cut), it will be an Se, limb/field problem – which is the case of most physical problems of 'lineal motions' – and so on.
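The rule can be illustrated with the simplest such case, a circle: in polar coordinates centred on its singularity the equation collapses to a single constant, while the Cartesian form needs both variables; a sketch (the circle and radius are our choice of example):

```python
import math

R = 2.0

# Polar form of a circle around the 0-point: r(theta) = R, one constant.
polar_r = lambda theta: R

# Cartesian form: F(x, y) = x^2 + y^2 - R^2 = 0, two coupled variables.
cartesian_F = lambda x, y: x ** 2 + y ** 2 - R ** 2

# Both describe the same 'membrane': every polar point satisfies F = 0.
for k in range(8):
    theta = k * math.pi / 4
    x = polar_r(theta) * math.cos(theta)
    y = polar_r(theta) * math.sin(theta)
    assert math.isclose(cartesian_F(x, y), 0.0, abs_tol=1e-9)
```

The polar equation being the simpler of the two is what, in the text's rule, marks the circle as a 'TO element'.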

The conclusion of this duality is that we are mathematical organisms with topo-logic properties, which give birth to biological, organic assemblies of ternary functions/forms; and those systems do have an O-point of view, its frame of reference.

**The Cartesian revolution: hyperbolic PRESENT coordinates mix time and space.**

By the coordinates of a point in the plane Descartes means the abscissa and ordinate of this point, i.e., the numerical values x and y of its distances (with corresponding signs) to two mutually perpendicular straight lines (coordinate axes) chosen in this plane (see Chapter II). The point of intersection of the coordinate axes, i.e., the point having coordinates (0, 0) is called the origin.

With the introduction of coordinates Descartes constructed, so to speak, an “arithmetization” of the plane. Instead of determining any point geometrically, it is sufficient to give a pair of numbers x, y and conversely.

**The notion of comparison of equations with two unknowns with curves in the plane.**

Descartes’ second concept is the following. Up to the time of Descartes, when an algebraic equation in two unknowns F(x, y) = 0 was given, it was said that the problem was indeterminate, since from the equation it was impossible to determine these unknowns; any value could be assigned to one of them, for example to x, and substituted in the equation; the result was an equation with only one unknown y, for which, in general, the equation could be solved. Then this arbitrarily chosen x together with the so-obtained y would satisfy the given equation. Consequently, such an “indeterminate” equation was not considered interesting.

Descartes looked at the matter differently. He proposed that in an equation with two unknowns x be regarded as the abscissa of a point and the corresponding y as its ordinate. Then if we vary the unknown x, to every value of x the corresponding y is computed from the equation, so that we obtain, in general, a set of points which form a curve.

*So essentially Descartes gifted the static bidimensional plane geometry with a ‘variable motion’, and gave us the capacity to study an st-evolving system in space-time:*

Thus, to each algebraic equation with two variables, F(x, y) =0, corresponds a completely determined curve of the plane, namely a curve representing the totality of all those points of the plane whose coordinates satisfy the equation F(x, y) =0.

This observation of Descartes opened up an entire new science.

Since it was only needed to consider one coordinate an informative or time function, so that the other one would represent the ‘spatial function’ – which are inverse dimensions – to make it work magically and represent a ginormous number of s@≤≥∆ð relationships; something soon used by Galileo.

But there is more to that simple scheme from a GST p.o.v.

**The multiple S, ST, ∆ applications of Cartesian graphs.**

It soon followed that the discovery of a system which represented dimensional symmetries of space and time – both forms and motions – would yield an enormous capacity to model the Universe with the mind, as was the case – soon to be joined by the ∆nalysis of scales, with the awesome concepts of ∆-1 derivatives (finitesimals) and ∆+1 integrals, treated independently in these texts, and then by the complex plane, ‘magic’ on ∆-scales.

So Analytic geometry provides mappings on:

1: T<S: solving construction problems of the continuous with discrete temporal computation, such as the division of a segment in a given ratio; *thus adding time, hierarchical features to geometry.*

2: S>T: finding the equation of curves defined by a geometric property (for example, of an ellipse defined by the condition that the sum of distances to two given points is constant); thus again adding time functions to space forms, in *this case a key function to define 2 physical systems controlling with equal force, if we consider an interval of time, the same territory of space (Kepler’s 2nd law).*

3: S≈T: proving new geometric theorems algebraically (see, for example, the derivation of Newton’s theory of diameters); and conversely, representing an algebraic equation geometrically to clarify its algebraic properties (see, for example, the solution of third- and fourth-degree equations from the intersection of a parabola with a circle).

4: ∆: which will lead at the end of the road to the understanding of ∆nalysis, derivatives and integrals, *a notch above in the complexity of the ternary elements of reality.*

Thus, to the classic definition of analytic geometry as that part of mathematics which, applying the coordinate method, investigates geometric objects by algebraic means, we can now add the insight of knowing its direct homology with the S, T, ∆st elements of reality.

**THE 3 POVS OF MATHEMATICAL MINDS. |-cylindrical, O-polar, Ø-cartesian.**

The ternary nature of the universe will again be evident when we consider the other 2 canonical ‘coordinate systems’:

The other two planes then will be the polar Tiƒ plane and the Cylindrical, toroid plane, which will give us 3 different ‘views of the Universe’. And it follows naturally that by ‘changing’ the equations of systems from one frame of reference to another we *often change their topological analysis – a fundamental feature of quantum physics, described as a hyperbolic wave in Cartesian coordinates and as a particle-field in polar coordinates (Bohm’s model).*

Further on, the choice of coordinates in which the function/form is simpler often indicates the type of part-species we are analysing, according to the ‘generator equation’ of mind-coordinates:

Γ(generator of mind p.o.v.s): * [Spe (cylindrical) <ST-Cartesian> Tiƒ (polar)]∆i(complex) *

The most complete, thus, is the Cartesian coordinate system, with its negative and positive, inverse directions, *distorted from the point of view of the observer, self-centred in ∆o; which requires a bit of paradoxical logic, far more expanded in the articles on •-mathematical minds:*

Now, there are 2 other ‘spaces’ worth noticing, to explain all this fractal space-time complex world:

**Complex space**, ideal for studying ∏imespace world cycles, which we keep in the section on the T-heory of numbers.

**Hilbert space**, the space of the fifth dimension, in which each point is a world in itself, which we shall study at the end of the post.

**The Rashomon Truth and the a(nti)symmetries of creation on graphs. Parallel creation**

We have stressed the fractal division of all realities in 5 elements, and the fact that the most important of them are the operandi, as two ‘asymmetric’ beings – in analytic geometry, the Line and the Cycle – come together, *fusing in a creative way when their encounter is parallel, or destroying each other when it is perpendicular.*

We can then redefine a ginormous number of mathematical elements in geometry, vitalising their meaning with this essential ‘fourth postulate of ¬Æ geometry’, and we shall do so, slowly, as we keep improving those posts.

*Let us be clear: the most important law of the Universe is the fourth postulate of non-Æ geometry, and it will come up all over the place, as the ‘angle of communication’ determines so many facts in all sciences. So this is a huge field of @nalytic geometry to explore, which spreads to Analysis (a derivative is indeed a parallel, so when there are NO parallels there are NO derivatives, NO communication between ∆ø and ∆-1, and hence NO possibility to create a super organism of ∆±1 scales); and illuminates complex analysis (where the rules are more… complex)… And so on.*

At least in my youth exploring ∆st, when I did have a permanent Nirvana orgasm of perception of all those laws, I recall having had some of the biggest ones, literally, when I realized that after all the orgasm is the sensation of a cycle invaded by a line in parallel by penetration (:

More seriously, the *fundamental penetration of the line into the cycle, which moves along the line, increasingly ‘tightening’ its grip on it, is the cone that will generate all the curves called conics. So we do have two fundamental modes of mathematical creation by parallelism in the plane:*

- 2D creation in a holographic bidimensional 2-manifold by tangent parallelism.
- 3D creation by penetration of the line into the curve.
- All other approaches ‘cut’ perpendicularly and ‘destroy’ one of the two elements. As this is what perpendicularity is all about – it is real, it ‘cuts’, it ‘hurts’. Remember: in ∆st, we follow Gödel, Lobachevski and Einstein: mathematics, as all languages, is a real mirror of a higher reality.

All this said, the first thing we have to understand about graphs is that they *must be seen, as all systems of reality, under the kaleidoscopic view of the Rashomon truth. Indeed, we have just considered the @-pov on graphs, but we are left to explain 3 other povs: ∆, s and t. And then refer the graph to the specific reality we observe. Let us illustrate this with a single example, that of the graph of an exponential function, with a growing, accelerated speed towards its asymptote.*

**2D $:** From a spatial point of view, the growth of the function makes no sense, but its space below, the distance covered, does.

**1Dð:** The curve though, as most of them are Y(s) = ƒ(t) functions, does represent a time dimension perfectly; as it is not a constant line, the dimension is one of accelerated change, hence a 0-1 dimension of growth and/or a vortex of acceleration towards a singularity point (1D).

**∆±i:** And this, from the pov of scale, means the function accelerates towards an entropic dissolution (i.e., e⁻ª, which is a decaying entropic process) or inversely towards an emergence, as in resonances, distribution equations or the limiting case of distributions: a function that grows exponentially towards the 0-point, in the seemingly simplest function of them all, the Delta Function, whose integral is 1 even if it only exists in that 0-point – showing that indeed a point-particle is a fractal point with dimensionality 1, once it emerges in its ‘infinite density’ point of resonance into an ∆+1 plane. This fact incidentally proves that all infinities ARE finite ones of a higher plane: infinity IS just the limit of an ∆-plane; the whole is the infinity of the part; or else the delta would not emerge, when integrated between ∞ and –∞, as 1. Because once we cross a discontinuum of scale, we are IN ANOTHER TYPE of parameter, so infinity does NOT exist.
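The claim that the delta’s integral stays 1 while the function concentrates at a single point can be checked numerically with the standard delta approximant, a Gaussian of shrinking width. A minimal sketch (the function name and parameters are our own, for illustration):

```python
import math

def narrow_gaussian_integral(sigma, a=-1.0, b=1.0, n=200_000):
    """Midpoint-rule integral of a Gaussian 'delta approximant' of width sigma."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        total += math.exp(-x * x / (2 * sigma * sigma))
    return total * h / (sigma * math.sqrt(2 * math.pi))

# the integral stays ~1 as the peak grows ever narrower and taller
for sigma in (0.1, 0.01, 0.001):
    assert abs(narrow_gaussian_integral(sigma) - 1.0) < 1e-6
```

However narrow the peak, the area under it does not change; only in the limit does it become the Dirac delta proper.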

**Creation by parallelism, annihilation by perpendicularity.**

How dimensions combine to create form is an essential feature of the duality between symmetric parallelism and perpendicular annihilation (antisymmetry), which plays a special role in geometry. In 2 dimensions the tangent that gives origin to a derivative is the creative form of ∆-scales; the circle that turns around and peels off a wave cover is the creative form in S<≈>T dimensions.

In 3 dimensions the conic, which can be seen as a line penetrating and dragging a circle that turns faster in smaller spaces around it, is the ∆-creative form; the harmonic oscillator, in multiple of its devices, the S<≈>T creative form.

We shall then find for all systems of the Universe the dualities of tangential creation associated with different dimensionalities, split in the duality between past-to-future-to-past ∆§caling and S<≈>T present transformations of topological cycles into hyperbolic waves as the result of combining with a lineal motion.

**Topological emergence between planes.**

When we deal with annihilation by perpendicularity we also get 2 variations, as is logical: by ∆-scattering or by S<≠>T antisymmetry. But as annihilation ultimately means destruction of an ∆-scale, it derives in entropic dissolution. The results are often shown in exponential functions.

We can then think of the ‘change of planes’ as a perpendicularity against which the internal function (of momentum) ‘collides’, trying to push the ‘wall’ that separates scales without result, growing then in ‘inertial mass’ and no longer in speed. As ultimately the vital energy enclosed by a membrane always finds the membrane to be perpendicular, annihilating it very often: the military border in a nation, the barrier of the cattle, or the shepherd dog, the predators, etc.

The best known cases of those processes in physics are related to the hypothetical impossibility of a function to cross a discontinuity between planes, which *is what the Lorentz transformations mean: as a mass comes closer to the relative infinite limit of its light space-time domain, its mass grows ‘theoretically’ towards infinity, as it cannot gain more speed.*

So its momentum mv ‘changes’ no longer in v but in m (as change cannot be stopped, the ∑∏ energy fed into the system must derive either into the singularity m or into the speed-membrane, in parallel to the larger whole galaxy membrane – Mach’s explanation of angular momentum). As this is no longer possible, since the part cannot MOVE FASTER than the whole (c-speed limit for the galactic space-time membrane), the vital energy no longer feeds the membrane but the singularity and its active scalar mass, the 0-1 Dimensional parameter of density reflected in the Dirac membrane.

So we can see geometrically or algebraically how this momentum becomes then ‘deviated’, either as a parallel angular momentum of the membrane, in lineal or cyclical fashion (itself a transformation of an SH motion from cyclical into lineal), or as a growth of mass.

**II. CURVES**

The interest of maths as a mirror of reality is its ‘simplicity’ to describe the basic laws and symmetries of space-time, and hence the properties of worldcycles and super organisms and ∆-planes. This reaches its final simplicity in the analysis of curves in a bidimensional plane, and the whys behind them, reflected in S=T SYMMETRIES. Let us explore some of those elements of ∆st isomorphism with mathematical mirrors.

**The cycloid. **

An example of this concept is found in the cycloid.

Fermat and Descartes struggled to understand the properties of the cycloid, a curve not studied by the ancients. The cycloid is traced by a point on the circumference of a circle as it rolls along a straight line, as shown in the figure.

Roberval first took up the challenge, by proving a conjecture of Galileo that the area enclosed by one arch of the cycloid is three times the area of the generating circle.

Christopher Wren in England found that the length (as measured along the curve) of one arch of the cycloid is eight times the radius of the generating circle, demolishing a speculation of Descartes that the lengths of curves could never be known. Such was the acrimony and national rivalry stirred up by the cycloid that it became known as the Helen of geometers because of its beauty and ability to provoke discord. Its importance in the development of mathematics was somewhat like solving the cubic equation, for a reason: the hidden beauty of the cycloid encodes:
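Both classical results quoted here – Roberval’s area (three times the generating circle, 3πr²) and Wren’s arc length (8r) – can be verified by direct numerical integration of the cycloid x = r(θ − sin θ), y = r(1 − cos θ). A minimal sketch (variable names are ours):

```python
import math

r = 1.0
N = 100_000
dth = 2 * math.pi / N
area = 0.0    # ∫ y dx over one arch, with dx/dθ = r(1 - cos θ)
length = 0.0  # ∫ sqrt((dx/dθ)² + (dy/dθ)²) dθ
for i in range(N):
    th = (i + 0.5) * dth              # midpoint rule
    dxdth = r * (1 - math.cos(th))
    dydth = r * math.sin(th)
    y = r * (1 - math.cos(th))
    area += y * dxdth * dth
    length += math.hypot(dxdth, dydth) * dth

assert abs(area - 3 * math.pi * r**2) < 1e-4   # Roberval: 3x the circle's area
assert abs(length - 8 * r) < 1e-4              # Wren: 8 radii
```

The closed forms follow analytically from ∫₀²π r²(1 − cos θ)² dθ = 3πr² and ∫₀²π 2r sin(θ/2) dθ = 8r.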

If we consider the moving cycle a world cycle of life and death, the point on the surface, the singularity-mind, transiting along a lineal sequential timeline, the cycloid is the simplest isomorphism to the properties of a life-death cycle, in which our ‘vital energy’ (the area enclosed by the singularity and the timeline) is split in 3 ages with a similar volume to that of the unit circle.

So the** ∆-1 generational o-1 age** is followed by 3 more ages of equal vital value:

2D: Max. $ x Min. ð= youth 3D: ∑=∏: maturity, 1D: Max. ð x Min. $: 3rd age…

4D: And then the cycle ends in the ‘landing’ on the point on the lineal entropic timeline, exhausted its vital energy.

Further on, the great mathematicians of Classical times interested in variational problems solved the famous problem of the brachistochrone – finding the shape of a curve with given start and end points along which a body will fall in the shortest possible time – when they realized it was part of an upside-down cycloid, traced by a point on the rim of the rolling circle.

The interest here being that the brachistochrone is the spatial symmetry of the law of least time; a key law of ∆st, as systems *try to achieve their actions and motions in the least possible time.*
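The least-time claim can be illustrated numerically: under idealized frictionless fall, the descent time along an inverted cycloid arch comes out below the time along a straight chute between the same endpoints. A sketch (the closed form π√(r/g) for the full arch is classical; variable names are ours):

```python
import math

g, r = 9.81, 1.0
# descent time along the cycloid x = r(t - sin t), y = r(1 - cos t), y downward
N = 200_000
t_cycloid = 0.0
px, py = 0.0, 0.0
for i in range(1, N + 1):
    th = math.pi * i / N
    x, y = r * (th - math.sin(th)), r * (1 - math.cos(th))
    ds = math.hypot(x - px, y - py)
    y_mid = (y + py) / 2
    if y_mid > 0:                      # v = sqrt(2 g y), by energy conservation
        t_cycloid += ds / math.sqrt(2 * g * y_mid)
    px, py = x, y

# straight frictionless chute to the same endpoint (pi*r, 2r)
X, Y = math.pi * r, 2 * r
L = math.hypot(X, Y)
t_line = math.sqrt(2 * L * L / (g * Y))

assert t_cycloid < t_line                                  # the cycloid wins
assert abs(t_cycloid - math.pi * math.sqrt(r / g)) < 1e-2  # classical value
```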

This simplest representation of the world cycle already shows the key new/old insight of ∆st into the curves represented in analytic geometry – to read them as spatial representations of temporal flows that follow the laws of T.œs.

This symmetry will become even more sophisticated when we observe the properties of…

**The canonical curves.**

Conics’ general equation is: **Ax² + Bxy + Cy² + Dx + Ey + F = 0**
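The type of curve hidden in that general equation can be read off the sign of the discriminant B² − 4AC of its quadratic part (degenerate cases aside). A minimal sketch (the helper name is ours):

```python
def classify_conic(A, B, C):
    """Classify the non-degenerate conic Ax² + Bxy + Cy² + Dx + Ey + F = 0
    by the sign of the discriminant of its quadratic part."""
    disc = B * B - 4 * A * C
    if disc < 0:
        return "ellipse"      # a circle when B == 0 and A == C
    if disc == 0:
        return "parabola"
    return "hyperbola"

assert classify_conic(1, 0, 1) == "ellipse"     # x² + y² = 1
assert classify_conic(0, 0, 1) == "parabola"    # y² = 2px
assert classify_conic(1, 0, -1) == "hyperbola"  # x² - y² = 1
```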

As such it is the canonical curve to define a holographic bidimensional manifold, the fundamental ST symmetry of space-time. And the cone, itself a circle moving along a line as *it decreases its size, represents, as does the cycloid, a fundamental motion of space-time between scales: ∆+1 ð>∆ ð and then forwards. So the 2 inverted cones form together an image of the 5D-4D inverse collapsing and expanding, evolving and entropic arrows of space-time.*

*But if the motion from the ∆+1 larger wholes to the ∆º singularity of the cone, and its expansion on the inverse cone, can be used, as we do in the general model, to represent a world cycle of existence, what mathematicians have focused on are slices of space-time along that trajectory.*

Descartes realized that curves in the plane are represented by second-degree equations with two variables whose general form represents an ellipse, a hyperbola, or a parabola; i.e., curves very well known to the mathematicians of antiquity.

So the first obvious fact is that ‘cyclical, time-like curves’ have 2 dimensions (degrees) as opposed to single-dimensional Spe-lines.

And with both, the line through which a cycle ‘moves’ and shrinks in an ‘accelerated’-timelike process we can construct the Universe.

This was the wonder of the Greeks and Desargues – who proved that all curves can be drawn from the conic, the simplest |xO=ø representation of the world cycle. So we can start translating the hidden meanings of those facts, as usual, to GST:

The ancient Greeks had already investigated in detail those curves obtained by intersecting a right circular cone by a plane. If the intersecting plane makes with the axis of the cone an angle ϕ of 90°, i.e., is perpendicular to it, then the section obtained is a circle.

It is easy to show that if the angle ϕ is smaller than 90°, but greater than the angle α which the generators of the cone make with its axis, then an ellipse is obtained. If ϕ is equal to α, a parabola results and if ϕ is smaller than α then we obtain a hyperbola as the section.

What this means in GST terms is that the *hyperbola is the opposite concept to the circle/ellipse, as closed and open, Tiƒ and Spe inverse ‘geometries’ which if we consider the y axis, the longitudinal entropic axis, and the x axis of the cycle, the informative one, yields an inverse Space-time graph within the dual cone, which can be used (and will be used) to represent world cycles and space-time events.*

**Ellipse, Hyperbola, and Parabola: The conics.**

Before investigating the general second degree equation, it is useful to examine some of its simplest forms.

The equation of a circle with center at the origin. First of all, we consider the equation: x²+y²=a²

It evidently represents a circle with center at the origin and radius a, as follows from the theorem of Pythagoras applied to the shaded right triangle, since whatever point (x, y) of this circle is taken, its x and y coordinates satisfy this equation, and conversely, if coordinates x, y of a point satisfy the equation, then the point belongs to the circle; i.e., the circle is the set of all those points of the plane that satisfy the equation.

The equation of an ellipse and its focal property. Let two points F1 and F2 be given, the distance between which is equal to 2c. We will find the equation of the locus of all points M of the plane, the sum of whose distances to the points F1 and F2 is equal to a constant 2a (where, of course, a is greater than c). Such a curve is called an ellipse and the points F1 and F2 are its foci.

Let us choose a rectangular coordinate system such that the points F1 and F2 lie on the Ox-axis and the origin is halfway between them. Then the coordinates of the points F1 and F2 will be (c, 0) and (–c, 0). Let us take an arbitrary point M with coordinates (x, y), belonging to the locus in question, and let us write that the sum of its distances to the points F1 and F2 is equal to 2a: √((x – c)² + y²) + √((x + c)² + y²) = 2a.

This equation is satisfied by the coordinates (x, y) of any point of the locus under consideration. Obviously the converse is also true, namely that any point whose coordinates satisfy the equation belongs to this locus. The equation is therefore the equation of the locus.
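The focal definition can be checked numerically against the standard parametrization of the ellipse x²/a² + y²/b² = 1, whose foci sit at (±c, 0) with c² = a² − b². A minimal sketch (parameter values are ours):

```python
import math

a, b = 5.0, 3.0
c = math.sqrt(a * a - b * b)       # focal half-distance: c² = a² - b²
F1, F2 = (c, 0.0), (-c, 0.0)

# walk the ellipse x²/a² + y²/b² = 1 and check the defining focal property
for k in range(360):
    t = 2 * math.pi * k / 360
    M = (a * math.cos(t), b * math.sin(t))
    s = math.dist(M, F1) + math.dist(M, F2)
    assert abs(s - 2 * a) < 1e-9   # sum of focal radii is the constant 2a
```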

And while *mathematicians* simplify it, the interest for the topological o-point of view remains precisely in its complete form.

**The parabola and its directrix.**

Thus we define the parabola as the graph of quadratic proportion and the hyperbola as the graph of inverse proportion. We recall that the graph of quadratic proportion y = kx² is a parabola and that the graph of inverse proportion **y = k/x**, i.e., **xy = k**, is a hyperbola. Let us then consider the parabola from the perspective of its focus.

We consider the equation **y² = 2px** and call the corresponding curve a parabola.

The point F lying on the Ox-axis with abscissa p/2 is called the focus of the parabola, and the straight line x = –p/2, parallel to the Oy-axis, is its directrix. Let M be any point of the parabola (figure 24), ρ the length of its focal radius MF, and d the length of the perpendicular dropped from it to the directrix. Let us compute ρ and d for the point M. From the shaded triangle we obtain ρ² = (x – p/2)² + y². As long as the point M lies on the parabola, we have y² = 2px, hence: ρ² = (x – p/2)² + 2px = (x + p/2)².

But directly from the figure it is clear that d = x + p/2. Therefore ρ² = d², i.e., ρ = d. The inverse argument shows that if for a given point we have ρ = d, then the point lies on the parabola. Thus a parabola is the locus of points equidistant from a given point F (called the focus) and a given straight line d (called the directrix):
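The focus–directrix property ρ = d can be verified numerically for points of y² = 2px (the value of p and the sample points are ours):

```python
import math

p = 2.0
F = (p / 2, 0.0)                   # focus of y² = 2px
for x in (0.0, 0.5, 1.0, 2.0, 10.0, 100.0):
    y = math.sqrt(2 * p * x)       # upper-branch point M = (x, y)
    rho = math.dist((x, y), F)     # focal radius
    d = x + p / 2                  # distance to the directrix x = -p/2
    assert abs(rho - d) < 1e-9     # the defining property of the parabola
```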

**The property of the tangent to a parabola. **

Let us examine an important property of the tangent to a parabola and its application in optics.

Since for a parabola y² = 2px we have 2y dy = 2p dx, it follows that the derivative, or the slope of the tangent, is equal to dy/dx = tan ϕ = p/y.

On the other hand, it follows directly from the figure that:

**tan γ = y/(x – p/2)**

But, since tan ϕ = p/y and y² = 2px, we have y/(x – p/2) = 2py/(y² – p²) = 2(p/y)/(1 – (p/y)²) = 2 tan ϕ/(1 – tan² ϕ) = tan 2ϕ;

i.e., γ = 2ϕ, and since γ = ϕ + ψ, therefore ψ = ϕ. Consequently, by virtue of the law of reflection (angle of incidence is equal to angle of reflection) a beam of light, starting from the focus F and reflected by an element of the parabola (whose direction coincides with the direction of the tangent) is reflected parallel to the Ox-axis, i.e., parallel to the axis of symmetry of the parabola.
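The chain γ = 2ϕ ⇒ ψ = ϕ is just the mirror law applied at the tangent, and the resulting parallelism can be checked numerically by reflecting the incident direction across the tangent line (a sketch; vector names are ours):

```python
import math

p = 2.0
F = (p / 2, 0.0)                          # focus of y² = 2px
for x in (0.5, 1.0, 3.0, 7.0):
    y = math.sqrt(2 * p * x)
    v = (x - F[0], y - F[1])              # incident ray: focus -> M
    t = (y, p)                            # tangent direction at M (slope p/y)
    n = math.hypot(*t)
    u = (t[0] / n, t[1] / n)
    dot = v[0] * u[0] + v[1] * u[1]
    # reflect the incident direction across the tangent (the local mirror)
    refl = (2 * dot * u[0] - v[0], 2 * dot * u[1] - v[1])
    assert abs(refl[1]) < 1e-9            # reflected ray parallel to Ox
```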

On this property of the parabola is based the construction of reflecting telescopes and modern antennae, as invented by Newton. If we manufacture a concave mirror whose surface is a so-called paraboloid of revolution, i.e., a surface obtained by the rotation of a parabola around its axis of symmetry, then all the light rays originating from any point of a heavenly body lying strictly in the direction of the “axis” of the mirror are collected by the mirror at one point, namely its focus. The rays originating from some other point of the heavenly body, being not exactly parallel to the axis of the mirror, are collected almost at one point in the neighborhood of the focus.

Thus, in the so-called focal plane through the focus of the mirror and perpendicular to its axis, the inverse image of the star is obtained; the farther away this image is from the focus, the more diffuse it will be, since it is only the rays exactly parallel to the axis of the mirror that are collected by the mirror at one point.

The image so obtained can be viewed in a special microscope, the so-called eyepiece of the telescope, either directly or, in order not to cut off the light from the star with one’s own head, after reflection in a small plane mirror, attached to the telescope near the focus (somewhat nearer than the focus to the concave mirror) at an angle of 45°.

The searchlight is based on the same property of the parabola. In it, conversely, a strong source of light is placed at the focus of a paraboloidal mirror, so that its rays are reflected from the mirror in a beam parallel to its axis. Automobile headlights are similarly constructed.

The parabolic being is thus a single ‘focus’, which is able to ‘focus’ the information and entropy of a given field, *inasmuch as the field comes to it, without the need of a second focus, which would complete the symbiosis between both.*

Indeed, in the case of an ellipse, as it is easy to show, the rays issuing from one of its foci Fl and reflected by the ellipse are collected at the other focus F2 (previous figure), and in the hyperbola the rays originating from one of its foci F1 are reflected by it as if they originated from the other focus F2:

*The directrices of the ellipse and the hyperbola.*

Like the parabola, the ellipse and the hyperbola have directrices, in this case two apiece. If we consider a focus and the directrix “on the same side with it,” then for all points M of the ellipse we have ρ/d = ε, where the constant ε is the eccentricity, which for an ellipse is always smaller than 1; and for all points of the corresponding branch of the hyperbola we also have ρ/d = ε, where ε is again the eccentricity, which for a hyperbola is always greater than 1.

Thus the ellipse, the parabola and one branch of the hyperbola are the loci of all those points in the plane for which the ratio of their distance ρ from the focus to their distance d from the directrix is constant. For the ellipse this constant is smaller than unity, for the parabola it is equal to unity, and for the hyperbola it is greater than unity. In this sense the parabola is the “limiting” or “transition” case from the ellipse to the hyperbola; born as the ellipse tears apart its 2 foci, which split entropically into two different entities, albeit maintaining their relative symmetry as two parts that were once entangled into one.
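The constant ratio ρ/d = ε can be verified numerically for the ellipse, whose directrix on the same side as the focus (c, 0) is the line x = a²/c (the hyperbola case is analogous; parameter values are ours):

```python
import math

a, b = 5.0, 3.0
c = math.sqrt(a * a - b * b)
eps = c / a                   # ellipse eccentricity, always < 1
x_dir = a * a / c             # directrix x = a²/c, same side as the focus (c, 0)

for k in range(1, 180):
    t = math.pi * k / 180
    M = (a * math.cos(t), b * math.sin(t))
    rho = math.dist(M, (c, 0.0))          # distance to the focus
    d = abs(x_dir - M[0])                 # distance to the directrix
    assert abs(rho / d - eps) < 1e-9      # the ratio is the constant ε
```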

We have now considered the most important second- order curves: the circle, the ellipse, the hyperbola, and the parabola. What other curves and generations are relevant, or at least exhaust the field of bidimensional geometries? Let us consider this question from a couple of ∆ST perspectives.

**There are no more curves.**

It turns out that if we select a suitable Cartesian coordinate system, then a second-degree equation with two variables always can be reduced to one of the following canonical forms:

Alas, once more the Universe appears as a simple structure of closed and open systems: the perfect circle, the split ellipse with 2 foci, the parabola, which further splits them, and the hyperbola, which through the y-axis of entropy sends both into different ‘height arrows’ of the ∆-scales of the fifth dimension.

Plus 3 varieties of lineal couples, the intersecting couple, the parallel couple and the identical ones, which again respond to the ternary symmetries of the Universe, whose profound meaning, relevant to the outcome of all events in space-time is studied in depth in the article dedicated to the 3rd postulate of non-Æ logic.

While in *3 dimensions, which as we have shown (Fermat’s theorem, superposition laws) are merely built by accumulation of reproduced, identical ‘social numbers’ of planes, one after another, the same curves, merely engrossed through the reproductive growth of a z-dimension, are still the same unique varieties:*

**Other |xO generations.**

Not only the conic can be generated by | x O dualities; basically all forms of nature can.

Consider the case of the rectilinear generators of a hyperboloid of one sheet.

It is not at all obvious that the hyperboloid of one sheet and the hyperbolic paraboloid can be obtained, just like the cone and the cylinder, by the motion of a straight line.

In the case of the hyperboloid, it is sufficient to prove this fact for a hyperboloid of revolution of one sheet x²/a² + y²/a² – z²/c² = 1, since the general hyperboloid of one sheet is obtained from it by a uniform expansion from the Oxz-plane, and under such an expansion any straight line goes into a straight line.

Let us intersect the hyperboloid of revolution with the plane y = a parallel to the Oxz-plane. Substituting y = a we obtain: x²/a² – z²/c² = 0.

But this equation together with y = a gives in the plane y = a a pair of intersecting lines: x/a – z/c = 0 and x/a + z/c = 0.

Thus we have already discovered that there is a pair of intersecting lines lying on the hyperboloid. If now we revolve the hyperboloid about the Oz-axis, then each of these lines obviously traces out the entire hyperboloid (graph). It is easy to show that:

1. two arbitrary straight lines of one and the same family so obtained do not lie in the same plane (i.e., they are skew lines);

2. any line of one of these families intersects all the lines of the other family (except its opposite, which is parallel to it);

3. three lines of one and the same family are not parallel to any one and the same plane.
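That these rulings really lie on the surface can be checked numerically for the revolution case (b = a); the parametrization below is the standard one, rotating the line found above, with names of our own choosing:

```python
import math

a, c = 2.0, 3.0   # hyperboloid of revolution x²/a² + y²/a² - z²/c² = 1

def on_hyperboloid(P):
    x, y, z = P
    return abs(x * x / a**2 + y * y / a**2 - z * z / c**2 - 1) < 1e-9

def ruling(theta, s):
    # line through (a cos θ, a sin θ, 0) with direction (-a sin θ, a cos θ, c)
    return (a * math.cos(theta) - s * a * math.sin(theta),
            a * math.sin(theta) + s * a * math.cos(theta),
            s * c)

# every point of every such line lies on the surface:
# x² + y² = a²(1 + s²) while z²/c² = s², so the equation reduces to 1 = 1
for th in (0.0, 0.7, 2.1, 4.5):
    for s in (-3.0, -1.0, 0.0, 0.5, 2.0):
        assert on_hyperboloid(ruling(th, s))
```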

Now, how do we write CONIC equations in GST terminology?

The first thing we realise is that it is a closed domain, either in the single cone or the dual one, and that those equations are written as G(x,y) = 1 for closed or ‘dual’ semi-closed hyperbolas, and as G(x,y) = 0 for the open parabola.

Since we have established that ∆-1 is the infinitesimal domain of 0-1, and ∆+1 the infinite domain of 1-∞ on the fractal scales of the Universe, it follows easily that we are working in the closed domain where 1 means the whole and 0 the singularity, and we can write them as: G(x,y) = 0+1. And so those closed curves really mean:

G(x, y) : ST= 1 (s) + 0 (t).

So that is what a conic means: the membrane, 1, the relative infinite outer enclosure, and the 0, the relative center or singularity, enclose the G(x,y), where all the combined points (<x, <y) form the inner region.

And since the Universe is, *for each layer of identical beings immersed in a gradient world, bidimensional*, the result is that those equations are the most pervading in all forms of Nature, further proving GST is real:

*In the graph, the Universe is bidimensional and holographic; polynomials reflect this fact in the degree of their equations.*

Yet only one-dimensional, 2-dimensional and 4-dimensional equations are truly real, being 3d equations mere superpositions as Fermat’s grand theorem proved (since x³+y³≠z³).
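The n = 3 case invoked here can at least be illustrated by brute force in a small range: no sum of two positive cubes is a cube, while sums of two squares give square results in abundance (a sketch; the bound N is arbitrary):

```python
# brute-force check in a small range: x³ + y³ = z³ has no positive solutions,
# while x² + y² = z² has many (the Pythagorean triples)
N = 60
cubes = {n**3 for n in range(1, 2 * N)}          # covers every candidate z³
no_cubic = not any((x**3 + y**3) in cubes
                   for x in range(1, N) for y in range(1, N))

squares = {n * n: n for n in range(1, 2 * N)}
triples = [(x, y, squares[x * x + y * y])
           for x in range(1, N) for y in range(x, N)
           if x * x + y * y in squares]

assert no_cubic
assert (3, 4, 5) in triples
```

The empty cubic search is of course no proof; the full n = 3 case goes back to Euler, and the general statement is Fermat’s last theorem.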

How then we deal with *equations which are NOT bidimensional or 4-dimensional representations of the real holographic Universe?*

**By Reducing equations to 2-holographic forms…**

Descartes in fact already realized this and presented a method for determining the real roots of third- and fourth-degree equations, both analytically and by studying *the intersection of the parabola y = x² with circles, which was the first representation of the holographic method of creation of the Universe:*

He first showed that the solution of an arbitrary third- or fourth-degree equation can be reduced to the solution of an equation: x⁴ + px² + qx + r = 0 (1)

which we shall call the ‘holographic equation’ (as it has only 1, 2, 4 polynomials).

Let the given third-degree equation be z³ + az² + bz + c = 0. Substituting z = x – a/3, we obtain:

The x²-terms in the expansion of the parentheses will cancel out, so that we get an equation of the form x³ + px + q = 0. Multiplying this equation by x, we bring it to the form (1) with r = 0, which also admits the root x = 0.

An equation of the fourth degree z⁴ + az³ + bz² + cz + d = 0 can be reduced to the form (1) by the substitution z = x – a/4. Hence, the solution of all third- and fourth-degree equations can be reduced to the solution of an equation of the form (1).
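That the substitution z = x − a/3 cancels the quadratic term of the cubic can be verified directly; expanding gives x³ + px + q with p = b − a²/3 and q = 2a³/27 − ab/3 + c. A minimal sketch (helper name and sample values are ours):

```python
def depressed_coeffs(a, b, c):
    """Coefficients p, q after substituting z = x - a/3 into z³ + az² + bz + c,
    which cancels the quadratic term: the result is x³ + px + q."""
    p = b - a * a / 3
    q = 2 * a**3 / 27 - a * b / 3 + c
    return p, q

a, b, c = 6.0, -4.0, 2.0
p, q = depressed_coeffs(a, b, c)
# the two polynomials agree at every sample point
for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    z = x - a / 3
    assert abs((z**3 + a * z * z + b * z + c) - (x**3 + p * x + q)) < 1e-9
```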

The solution of third- and fourth-degree equations by the intersection of a circle with the parabola y = x².

Let us first derive the equation of a circle with center (a, b) and radius R.

If (x, y) is any of its points, then the square of its distance to the point (a, b) is equal to (x – a)² + (y – b)².

The equation of the circle in question is: (x – a)² + (y – b)² = R² = 1+0 for a ‘unit’ circle (we add the singularity 0, to stress we have defined the 3 elements of the circle). Now we try to find the points of intersection of this circle with the parabola y = x². In order to do this, it is necessary to solve simultaneously the equation of the circle and the equation of the parabola. Substituting y from the second equation into the first, we obtain a fourth-degree equation in x: x⁴ + (1 – 2b)x² – 2ax + (a² + b² – R²) = 0.

If we choose a, b and R² such that 1 – 2b = p, –2a = q and a² + b² – R² = r,

then the ‘holographic equation’ (1) is obtained, and all its real roots are the abscissas of the points of intersection of the parabola y = x² with this circle. (In case r = 0, this circle passes through the origin and there are only 2 intersections; if R² < 0, equation (1) is known to have no real roots and there are no intersections.)
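The construction above can be checked numerically; a minimal sketch, assuming an illustrative cubic z³ − 7z² + 14z − 8 = 0 (with roots 1, 2, 4) chosen for the example:

```python
import math

# Illustrative cubic (an assumed example): z^3 - 7z^2 + 14z - 8 = 0.
a, b, c = -7.0, 14.0, -8.0

# Substituting z = x - a/3 cancels the z^2 term: x^3 + p*x + q = 0.
p = b - a * a / 3.0
q = 2.0 * a ** 3 / 27.0 - a * b / 3.0 + c

# Multiplying by x gives form (1), x^4 + p*x^2 + q*x + r = 0, with r = 0.
r = 0.0

# Match (x - A)^2 + (x^2 - B)^2 = R^2 against (1):
#   1 - 2B = p,  -2A = q,  A^2 + B^2 - R^2 = r.
A = -q / 2.0
B = (1.0 - p) / 2.0
R2 = A * A + B * B - r

def on_circle(x, tol=1e-9):
    """True if the parabola point (x, x^2) lies on the constructed circle."""
    return abs((x - A) ** 2 + (x * x - B) ** 2 - R2) < tol

# Each root z of the cubic appears as the abscissa x = z + a/3 of an
# intersection of the circle with y = x^2; x = 0 is the extra root from
# multiplying by x (r = 0: the circle passes through the origin).
intersections = [z + a / 3.0 for z in (1.0, 2.0, 4.0)] + [0.0]
assert all(on_circle(x) for x in intersections)
```

The same matching of coefficients works for any depressed quartic; only the circle’s center and radius change.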

But in GST this tells us more, when we perform the inverse *decomposition of a holographic quartic into its 2 ‘bidimensional forms’, the Tiƒ closed circle and the Spe-expansive parabola: ST (QUARTIC): Tiƒ ≈ Spe, ‘recorded’ with fire in my mind. Now, *most Tiƒ parts* are self-centred and have 2 roots only, where its Spe (limb-field) function intersects with its Tiƒ part, determining a closed inner space, which we shall call its body-wave; while some are all positive, having only one intersection with the parabola, whose deep meaning we shall consider:*

THE OSCULUM of the sphere on the other main open quadratic form makes us realise that the main curves allow us to define the conic game in vital terms. A sphere or polygon of 19² bidimensional = 361 ‘degrees’ of inner points, in the squared complex plane an R², becomes in a single plane a disk, which encloses its vital energy and whose actions must be defined internally. The parabola is thus its death into an explosion of upward entropy that ends that vital energy…

Let us now after this ‘classic introduction’ to basic @-geometry go much deeper connecting it with…

**THE GENERATOR’S TERNARY SYMMETRIES AND ITS S=T 1, 2, 3 DIMENSIONAL ANALYSIS**

There are 3 relationships in space-time between entities, which are part in non-Æ of the laws of the fourth postulate of similarity, and *of course, they are 3 because we can relate them to the 3 elements of the fractal generator.*

**ST:** Complementary **adjacency**, in which, in a single plane, *membranes of parts fuse into wholes*, and in multiple scales, parts become enclosed by an ‘envelope’ curve that becomes their membrain. Its main sub-postulates are the realm of **topology** proper.

**$t:** Darwinian **perpendicularity**, in which a membrain/enclosure is ‘torn’ and punctured by a penetrating perpendicular, causing the disruption of its organic structure. Its main postulates are the realm of **Non-Euclidean geometries.**

**§ð: Parallelism**, in which two systems *remain different without fusing their membrains, but maintain a distance to allow communication and social evolution into herds and network supœrganisms.* Its main postulates are the realm of **Affine geometry.**

The correspondence of those relationships with the 3 elements of the generator, $<ST>ð§, IS IMMEDIATE:

**– ST-Adjacency** allows pegging parts into present space-time complex dualities.

**-$-Perpendicularity** simplifies the broken being into its minimalist ‘lineal forms’, $t.

**-§-Parallelism** allows the social evolution of entities into larger §ocial scales.

They will define ‘ternary organisms’, in which the 3 topologies in 1, 2 or 3 s=t dimensions of a single space-time plane can be studied in ceteris paribus analysis or together, *but no more, as all other attempts to include more dimensions in a single plane are ‘inflationary fictions’ caused by the error of continuity – a waste of time for researchers too (:*

*DIMENSIONS*

Dimensions thus must also be considered besides *the 3 logic relationships.*

*And there are 3 levels of complexity in dimensions – lines, 2-manifolds and 3-D volumes – that also express the ternary generator:*

So for the 3 lineal coordinates, the equivalencies are immediate:

**1DΓ:*** $t: length/motion <ST width/reproduction> §ð: height/information.*

As lineal length is the shortest distance between two points, height is the projective geometry of perception from antennae to heads, and their product mixes them to reproduce in the width dimension, where you store your fat…

For 2-dimensional surfaces there is also a logical extension: from lines of length to flat planes; ST-reproductive widths that mix the other two elements become the hyperbolic geometry with its dual ± curvatures; and for height/information, the sphere is the volume that stores more information in lesser space. So in principle we must suggest the following 2D generator:

**2DΓ***: $t: plane/motion <ST hyperbola/reproduction> §ð: sphere/information.*

The graph shows also how the parallel property, becomes now more complex showing clearly some of its key ‘social properties’:

**– Spherical systems** are social, as they become tighter informative elements, causing the social evolution of points into supœrganisms of a higher ∆+1 scale.

–**Flat surfaces** maintain the parallelism ad infinitum. So they are ideal for network herding, in a balance between adjacency and connection.

– **Hyperbolic ST** vital energy, if left in the open without a closing membrane, will diverge into entropy, seeking ‘freedom’ and becoming unconnected.

**Angles of perception**

We can also consider other ‘vital properties’ till today merely treated in abstract terms, of fundamental importance to the vital geometry of the Universe.

Indeed, what is about ‘angles and triangles’ so intense in ∆st that the geometric mind-mirror is obsessed by ‘it’.

SIMPLE ENOUGH: Angles of perception define the capacity of a point of view to measure and obtain information from the external Universe, such that the smaller the angle, the less perceptive (5D) a system is, and the more dark space it will have.

**Minimal ‹: It is the case of a hyperbolic plane,** again showing that 2-manifold hyperboles tend to entropic freedom, minimising the inverse arrow of information, reason why they are indeed the herded vital energy of the ST-system.

**Maximal <:** And so inversely the triangle on the sphere, whose angles sum to more than 180°, makes *spherical beings in ∆-1, living as parts of an ∆-spherical world, more perceptive.*

**Medium «:** The plane is thus the middle term, with angles summing to 180°.

So here we come to the first *seeming contradiction as we expand our dimensions into the function/form of the next scale.*

The keen observer will indeed have noticed that the role of the 1D line *in its entropic function is being taken by the hyperbolic plane in 2D, transposing its functions with those of the plane, generated by the entropic line, which now takes the ST FUNCTIONS of the hyperbola.*

Why? The graph shows that they still keep their S-hortest, ƒ-astest (straightest) space-time trajectory in terms of lines, hyperbolas and circles, which means, *by the principle of least action that makes those paths overwhelming in experimental reality, that they are indeed related to and generated by them: ∑ lines = plane, ∑ hyperbolas = hyperbolic chair, ∑ circles = sphere.*

But its properties have definitely switched between $-lines and ST-HYPERBOLAS, into $-hyperbolas and ST-planes.

So *while the motions in time of the generator have been conserved (still the flat airplane and Formula 1 car move faster, the sphere is still the informative eye-head on top, the hyperbola combines both), the functions in space as we ’emerge’ from 1 to 2 dimensions HAVE been transposed.*

And this is one of the paradoxes of ‘growth in ∆-planes’, as we can regard 2D as a social gathering of 1D elements. *Functions become often inverted. And so while an elementary analysis might seem in abstract to relate lines to planes, circles to spheres and hyperbolas to Lobachevski’s geometries, the Universe, which is a constant iteration, transformation and merging of dimensional kaleidoscopes, has changed ‘again’.*

**The laws of functional inversion as we emerge into higher social planes.**

This is part indeed of an essential law we shall repeat ad nauseam: *when growing in social scales to form a new plane, functions change, most often becoming inverted.*

And the reason is obvious, the whole spherical micro point is the king of its inner world, but just a particle micro point in the larger whole, where its role is slavish to the super organism.

So the explanation of this change of vital roles is immediate when considering the Disomorphic laws of ∆st, which expressed in i-logic writes:

*∑|i-1 ≈ Øi, ∑Øi=|i+1*

This law indeed shows up all over the place in experimental systems: from biological systems, where lineal proteins become the hyperbolic elements whose multiple dimensional foldings control reproduction, to atoms of perfect cyclical form (iron), which become the strongest lineal element for the creation of entropic weapons in the ∆+1 scale.

We can sum this up in Shakespearean fashion: we are all buffoons or kings depending on our perspective. And it connects also with the fact that *as we grow in size perspective (Lobachevski’s r/k ratio), from being ‘cyclical’ beings we become moving dot-points tracing lines in the larger perceived flat world.*

(This is indeed the stuff which delights the paradoxical mind but has always made it so difficult for naive realism and its mathematical physicists to understand a dot of this.) So let us explore it, as NOTHING OF THE EXPERIMENTAL world can be understood without this law.

First, notice the one-to-one correspondence. We talked of **distance** as the sum of ‘minimal steps of measure’, which applies to transpositions, in the simplest form, with the stop-and-go, S>T steps of all motions in 5D² realities. So here we observe a particular case of this ‘motion through transformation of states’ of the being, *across the scales of the fifth dimension, symmetric to the change of states in timespace and topological functions=forms.*

An example will suffice. The best way to REPRODUCE=generate more dimensions by transposition from a line is to make it grow into a plane that grows into a cube. Yet each growth has a different function according to the GENERATOR EQUATION, which *changes topology and age and grows in ∆-scale – the 3 elements of any super organism – as its ternary generator shows:*

∆-1: $< ∆º ST > ∆+1>ð§

We did indeed tell you many times that the fractal generator encodes within it *ALL THE LAWS OF THE UNIVERSE. So it is ultimately the explanation of the ‘sliding’ of functions, 1 at a time, as we move from lineal limbs/potentials at ∆-1, into Ø-ST-iterative space-time, into ∆+1 O-spherical particles-heads of information.*

So the line is 1D lineal motion, the plane is ST hyperbolic iteration and the cube is the cyclical formal-function or §ð:

**The cube is** the ð§tate, excellent for social evolution into larger networks, the preferred crystal form for matter to evolve socially, from 3D to its social 5D grouping into minds; the closest form to become, by elliptic deformation, the sphere in another s-t beat: as it bloats, feeding on energy, it becomes a sphere; as it depletes, a cube.

And then again as the form shows, the cube displaces to form a line on the ∆+2 plane, NOT a fourth-dimensional spatial being, which does NOT exist.

So functions do slide and change but the pattern is encoded in the generator and so as usual ONCE we understand THE UNDERLYING ∆st basic laws of the Universe encoded on it, everything keeps falling into place.

Now, we saw that beyond 3 dimensions there are no more dimensions in a single plane, so the cube then generates a line of the larger scale, transposing its function again and completing a full zero-sum cycle. For the same reason, on close analysis the ’empty sphere’ (not a ball, only the surface) IS NOT, as the inner ball is, a volume that grasps information, but the topologically simplest, strongest membrane (normally made of strong triangles of entropic nature), acting as the entropic envelope that, in a non-Euclidean Klein disk, the inner vital space can never reach, because it would kill them.

So the sphere KILLS by enclosing and trapping, acting as in lion hunting of zebras, enclosing them with predator or shepherding functions – as dogs enclosing sheeple. Then the membrane will sharply penetrate the zebra herd perpendicularly and eat it up; the military border of a human social territory will give a coup d’état, collapse into the capital and conquer the world; the battle was lost once, at Cannae, Hannibal had encircled the prey (graph, where the red color of entropy is used and the dimension of motion of the horse allows to ‘close the dark spaces’ as the shepherd dog does).

So we are talking experimental reality here, as ∆st laws might seem abstract to you but are the stuff of which survival and existence is made.

In the graph, the classic conception of 3D geometries we use ad nauseam in this blog to explain the fractal generator of T.œs. Things are though a bit more complex when we add the laws of ‘transposition’ of functions as we move through the generator’s ternary symmetries in time (not explained here, as the illusionary separation between past, present and future is the hardest one to understand for huminds, so I won’t blow up your mind with past local travelling beyond the zero-sum death cycle).

**So for 3-dimensional elements, the realm of topology, those correspondences are not so immediate,** as things start to become multifunctional; and here it is the KEY LAW of ternary systems, its MULTIFUNCTIONALITY, which allows a ternary topology TO PLAY different roles in reality, acting as $, ST, and §ð beings (∂, easier than ð, often appears for 3D, ∫∂ being 4D)…

Ok, so first, what the classic roles of 3D variations are NOT in ∆st: notice the silly error of topologists, who consider a donut to be, as in the graph, a flat plane, just because you can cut the empty donut, spread it in 2D and alas, you have the plane – but *you have cut it, producing a non-topological transformation, as you have torn and changed it.*

To the point: the real equivalents are easy to obtain considering simply the main law of topology, which is the GENUS of each variation, such that obviously, as we grow in complexity when we slide along the generator’s functions-forms, the *form with minimal genus – that is, whose number of dissections is minimal – must be the simplest form-function.*

Where the genus is one less than the number of cuts NEEDED to break the form into its component parts, eliminating its adjacency (ask me why it is NOT the same number as the cuts – response: huminds love to make it difficult for students and to feel they are complicated geniuses).

**3D $:** So the sphere accepts just one cut and has zero genus, and so IT IS indeed the entropic membrane that controls and predates on the inner region, which however tries to keep to the middle, as the singularity in the center is no joke – you are between a ‘sword’ (the membrane that kills you by perpendicular penetration, as the invaginations of the biological stomach show) and a hard place (the singularity of maximal density that kills you by warping), as they say.

And indeed in biology, *where the laws of vital geometry are more self-evident, all predators have the same form of killing: they cut the ‘shortest’ part of the membrane, the neck, splitting the O-head and the Ø-body, and the thing is done with.*

**3D §ð:** Next comes the genus of the toroid, which cannot be cut with a single line – only transformed into a new variety, in this case from a ‘circle closed into itself’ into an open cylinder – and as such IT MUST be the singularity.

**3D ST:** As the number of parameters needed for the |-function is smaller than for the O-function, which is smaller than for the iterative/reproductive Ø-ST function, the latter is therefore the 3rd variety, the dual donut, whose genus is 6, as you can make 3 superficial cuts laterally (inside the two holes and outside the whole form) and 3 perpendicularly, in the bridge between the holes and between the donut and the outer world.

But all this is studied in topology, where we do copy-paste and enlarge this needed comment on the transposition laws (the fundamental laws of ∆st are repeated in different posts, or else you won’t ‘learn’ the essence of T.œs – anti-Goebbels’ method of repetition of truth, to memorise 🙂)

So we close here the section on the 3 frames of reference, knowing this is only the tip of the iceberg.

Since, as Descartes did, I will end this summary of his work saying:

‘I hope that posterity will judge me kindly, not only as to the things which I have explained, but also as to those which I have intentionally omitted so as to leave to others the pleasure of discovery.’ (:

**III. COMPLEX PLANE**

While most themes on complex numbers are treated in number theory, some indications on the complex plane are given in this section, and further parts are treated in Algebra, where they complete the closure of its polynomial functions.

In essence, the not-so-well understood property of complex numbers is that they are NOT only bidimensional numbers but *numbers of different dimensions, hence kept separated – as squared numbers and single numbers when we ‘square’ the coordinates, or real numbers and their roots as observed normally. And so we must differentiate first two key ∆st kaleidoscopic analyses:*

**REALITY AND COMPLEX NUMBERS.**

*Experimentally, the key duality is between its experimental use ‘directly’ as representation of real dualities in nature, which happens ONLY in connection with waves and its potentials or path of its motion vs. the indirect use to simplify calculus in which the ± inverse directions of a temporal symmetry are represented by them.*

Thus, there are two distinct areas that I would want to address when discussing complex numbers in real life:

- Real-life quantities that are naturally described by complex numbers rather than real numbers;
- Real-life quantities which, though they’re described by real numbers, are nevertheless best understood through the mathematics of complex numbers.

Examples of the first kind are fairly rare, whereas examples of the second kind occur all the time.

**1.** In electronics, the state of a circuit element is described by two real numbers (the voltage V across it and the current I flowing through it). A circuit element also may possess a capacitance C and an inductance L that (in simplistic terms) describe its tendency to resist changes in voltage and current respectively. These are much better described by complex numbers.

Rather than the circuit element’s state having to be described by two different real numbers V and I, it can be described by a single complex number z = V + i I. Similarly, inductance and capacitance can be thought of as the real and imaginary parts of another single complex number w = C + i L. The laws of electricity can then be expressed using complex addition and multiplication.
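A minimal sketch of how complex arithmetic packages circuit laws into single multiplications, using the standard engineering phasor/impedance form (component values and the 50 Hz source are illustrative assumptions; the text’s exact z = V + iI and w = C + iL pairings are its own framing):

```python
import cmath
import math

# In sinusoidal steady state each element gets a complex impedance, and
# Ohm's law collapses into one complex multiplication: V = Z * I.
R, L, C = 100.0, 0.5, 1e-6          # ohms, henries, farads (illustrative)
omega = 2 * math.pi * 50.0           # angular frequency of a 50 Hz source

Z = R + 1j * omega * L + 1 / (1j * omega * C)   # series R-L-C impedance
V = 10.0 + 0j                                    # 10 V source phasor
I = V / Z                                        # complex current phasor

magnitude = abs(I)                               # current amplitude, amperes
phase_deg = math.degrees(cmath.phase(I))         # how far current leads/lags
```

Magnitude and phase of both voltage and current are thus carried in one number each, which is the point of the paragraph above.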

*What we find then is that the potential-motion $t element is represented with a higher dimension (squared or real number) than the current flow (root or negative number). And this case can be found for other examples of relative ∆-1 potentials and ∆-waves:*

Another example is electromagnetism. Rather than trying to describe an electromagnetic field by two real quantities (electric field strength and magnetic field strength), it is best described as a single complex number, of which the electric and magnetic components are simply the real and imaginary parts.

What’s a little bit lacking in these examples of classic physics is why it is complex numbers (rather than just two-dimensional vectors) that are appropriate; i.e., the physical applications of complex multiplication, *which allows superposing together potentials and waves (∆-1>∆ dualities), and its parallel Galilean duality – lineal ∆-1 potentials and cyclical ∆-scales). So for example,* the effect of a beam of light passing through a medium which both reduces the intensity and shifts the phase can be found by multiplication by a single complex number, whose real, lineal potential represents the intensity and whose complex part represents the shift of its phase.
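The light-through-a-medium example can be sketched directly; the particular attenuation (half intensity) and phase shift (30°) are assumed for illustration:

```python
import cmath
import math

# A medium that halves the intensity and shifts the phase by 30 degrees
# acts on the beam's complex amplitude as ONE complex multiplication.
attenuation = math.sqrt(0.5)             # field factor: intensity is |E|^2
phase_shift = math.radians(30.0)

medium = attenuation * cmath.exp(1j * phase_shift)   # a single complex number

beam_in = 1.0 + 0j                        # unit-amplitude incoming beam
beam_out = beam_in * medium

intensity_out = abs(beam_out) ** 2                   # half the intensity
phase_out = math.degrees(cmath.phase(beam_out))      # shifted by 30 degrees
```

Chaining several media is then just the product of their complex numbers, which a pair of unconnected real quantities cannot express.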

And so the complex plane *is the biggest jump in complexity in the representation of two elements of the Generator together, which a frame of reference that uses a single ‘species’ cannot achieve*.

A third similar case which shows the generality of its use was the first to be found in science:

D’Alembert in 1752 discovered equations for fluid flow. His equations govern the velocity components u and v at a point (x, y) in a steady two-dimensional flow; they ARE better expressed as a combination of the velocity components, u + iv, forming a differentiable function of x + iy.
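The condition for u + iv to be a differentiable function of x + iy is the Cauchy–Riemann pair of equations, which can be checked numerically; a sketch assuming the toy pair u = x² − y², v = 2xy (i.e. f(z) = z², an illustration, not d’Alembert’s actual flow):

```python
# Cauchy-Riemann: du/dx = dv/dy and du/dy = -dv/dx must hold for
# u + iv to be differentiable as a function of x + iy.
def u(x, y):
    return x * x - y * y      # real part of z^2

def v(x, y):
    return 2.0 * x * y        # imaginary part of z^2

def partial(f, x, y, wrt, h=1e-6):
    """Central-difference partial derivative with respect to 'x' or 'y'."""
    if wrt == "x":
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

x0, y0 = 1.3, -0.7   # an arbitrary sample point
assert abs(partial(u, x0, y0, "x") - partial(v, x0, y0, "y")) < 1e-6
assert abs(partial(u, x0, y0, "y") + partial(v, x0, y0, "x")) < 1e-6
```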

**2.** Much more abundant is the second kind of application of complex numbers, which *disappear in the final results. Why?* Let us do an analogy. Think of measuring two populations: Population A, 200 people, 50 of them old; Population B, 1300 people, 13 of them old. You might say that the fraction of old people in population A is 50/200 (25%), while the fraction in population B is 13/1300 (approx. 1%), which is much less, so population B is a much younger population on the whole.

Now point out that you have used fractions, non-integer numbers, in a problem where they have no physical relevance. You can’t measure populations in fractions; you can’t have “half a person”, for example. The kind of numbers that have direct relevance to measuring numbers of people are the natural numbers; fractions are just as alien to this context as complex numbers are alien to most real-world measurements. But *not if you preserve the fractions as RATIO NUMBERS, which is a whole different concept. And so it happens when you consider complex numbers as combinations of bidimensional ‘square numbers’ (the real part) and 1-dimensional ± imaginary/conjugates.*

And yet, despite this, allowing ourselves to move from the natural numbers to the larger set of rational numbers enabled us to deduce something about the real world situation, even though measurements in that particular real world situation only involve natural numbers.
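The analogy’s arithmetic, written out: the comparison lives in ratio numbers, not in the natural numbers that count people.

```python
from fractions import Fraction

# The fractions from the text's two populations, kept as exact ratios.
old_A = Fraction(50, 200)     # population A: 50 old out of 200
old_B = Fraction(13, 1300)    # population B: 13 old out of 1300

assert old_A == Fraction(1, 4)     # 25%
assert old_B == Fraction(1, 100)   # 1%
assert old_B < old_A               # so B is the younger population
```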

Thus the larger set of complex numbers allows us to draw conclusions about real-world situations even when actual measurements in that particular real-world situation only involve the real numbers, *when we are able to ‘think’ in terms of holographic ST dual solutions*.

When trying to solve equations like **a y″ + b y′ + c y = 0** for the unknown function y, or any problem which involves the quadratic equation **a r² + b r + c = 0** for the variable r, those roots will appear, which *squared will often have a meaning, albeit different from the single meaning of a real number.* The starting and ending points of the argument involve only real numbers, but one can’t get from the start to the end without going through the complex numbers and understanding what they mean in those intermediate stages – as one cannot go from young to old age (single-valued) without going through the reproductive (double-valued) age, or go from S to T without an ST intermediate state. So we shall study all those cases, whenever they arise in different sciences, with the new insight of 2-manifolds and squared complex numbers.

**RASHOMON TRUTHS.**

Theoretically we can consider the Rashomon effect for complex number and consider its 4 different perspectives:

**T(s):** As inverse, TEMPORAL numbers.

**S(t):** As bidimensional REAL numbers (Bolyai) with their own rule of multiplication: (a,b) × (c,d) = (ac−bd, ad+bc).

**@:** As a self-centred polar frame of reference (r, α).

**∆:** And finally, when squaring the coordinates, as an ∆-frame of reference. As this is the new solution found by ∆st theory, we shall consider it in more detail.

# ∆: i²

**The ∆-symmetry of i-numbers.**

The second fundamental role of i-numbers must be related to scalar space-time, hence polynomial structures.

We have also stressed in many paragraphs the duality and differences between polynomials and ∫∂ functions, whose ‘magic’ closeness (as polynomials can be approximated by differentials through Taylor binomials) *responds to the social ‘lineal’ herd-like nature of polynomials vs. the organic, more complex ‘cyclic’ nature of ∫∂ systems.*

So if we combine the two concepts we can ‘reorganise the complex plane’ in polynomial terms ‘naturally’ by *squaring it, getting rid of the √ elements and its negative complex roots, to reflect a mapping of a fundamental principle of nature.*

The holographic principle: the bidimensionality of all real forms of nature, which are in its simplest forms, dual dimensions, and so the unit of reality IS not a single dimensional system/point but a bidimensional ST system. Ergo the proper way to use imaginary numbers is ‘squaring’ them.

Then we obtain a ‘realist’ graph, with an X² coordinate system in the real line, which can be projected further with a negative -y and a positive +y axis, often ‘inverse’ directions or symmetries of timespace.

And since the square of a negative number is positive, the X² main axis of bidimensional space-time units IS mirrored in the positive on both sides. So the proper way to represent the graph is by tumbling it and making a half-positive plane, with the now ± real axis (the i and -i axis) on the X coordinates and the square axis on the Y coordinates, which is much closer to the ‘REAL Universe’ of bidimensional T.œs of timespace moving around its 0-identity element in two inverse directions of time (-i²= -1; +i²=1 axis).

And suddenly we can start to understand many realist whys of complex numbers and complex spaces.

FIRST WE obviously deal with the ancient question on how to map out the dual solutions of the complex plane, for which Riemann invented his dual surfaces then pegged to each other. The answer however is now far more intuitive, as *there are no real solutions, since indeed when squared 1/2 of the map is redundant:*

In the graph, the Riemann way is now unnecessary as 1/2 of the plane is indeed redundant, reason why it gives us two solutions.
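The redundancy claim is a one-liner to verify: squaring sends z and −z to the same point, so half of the original plane collapses onto the squared plane (sample points are arbitrary):

```python
# Squaring maps z and -z to the SAME value, which is why every square
# has two opposite roots and half of the plane is redundant once squared.
samples = [complex(2, 3), complex(-1, 0.5), complex(0, -4)]
for z in samples:
    assert z * z == (-z) * (-z)   # both halves land on one squared value
```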

Next, we see the immediate application to quantum physics, where ‘probabilities’ are born of the product of the ‘two conjugates’, the positive and negative sides of the imaginary axis, which now loses its √ and so becomes positive and negative ‘inverse’ arrows/functions/forms, which combine into a bidimensional holographic real squared element. So the graph is an excellent form of representing the neutral present, ST, in the square real line, and the ± inverse S and T functions in the positive and negative conjugate axes, no longer imaginary.
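The conjugate product itself can be sketched: it is always a non-negative real number, the imaginary parts cancelling exactly (the sample amplitude is arbitrary):

```python
import cmath

# Born-rule style product: conj(psi) * psi = |psi|^2, purely real.
psi = 0.6 * cmath.exp(1.234j) + 0.3j      # an arbitrary complex amplitude
prob = psi.conjugate() * psi

assert prob.imag == 0.0                       # a*b - b*a cancels exactly
assert prob.real >= 0.0                       # fit to be a probability
assert abs(prob.real - abs(psi) ** 2) < 1e-12
```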

This is somehow acknowledged in many equations of physics, notably in those related to relativity, where we USE square functions (in praxis just to simplify calculus, but now we know they have ‘meaning’ in reality).

The other, even better known case is the use of square factors to define energy and momentum in relativistic physics:

So in special relativity we write: **s² = x² − c²t²**

WHERE THE negative factor of time means AN inverse motion/form. While the other 3 positive space parameters represent a lineal distance-stretching on the lower quantum potential=gravitational space-time where the light displaces, the negative ‘cyclical time parameter of the light wave’ represents the warping by light of a space-time-light function as it ‘forms’ the quantum field potential below (we assume it to be the ∆-1 gravitational scale): the speed of light space that drags and warps the gravitational potential into form appears as a negative element that slows down the motion.

So on one side the light wave moves lineally in the euclidean space-time below it and on the other hand it warps and reduces the distance by a factor which allows it to create the magnetic/electric field of energy and information, creating a new plane of ‘smaller space-time over the ‘larger, more entropic gravitational/quantum potential’:
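Whatever the interpretation, the formula itself is checkable: s² = x² − c²t² keeps its value under a Lorentz boost. A sketch in natural units (c = 1 and the sample event/velocity are assumptions for clarity):

```python
import math

# The interval s^2 = x^2 - c^2 t^2 is invariant under a Lorentz boost.
c = 1.0                       # natural units
x, t = 3.0, 1.5               # an arbitrary event
v = 0.6 * c                   # boost velocity
g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # Lorentz factor gamma

xp = g * (x - v * t)                      # boosted coordinates
tp = g * (t - v * x / c ** 2)

s2 = x ** 2 - (c * t) ** 2
s2_boosted = xp ** 2 - (c * tp) ** 2
assert math.isclose(s2, s2_boosted)       # same interval in both frames
```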

In the i-scale, unlike in a normal XY system, the exponential function is ‘lineal’ if we use the customary logarithmic graph, while the sinusoidal function of the wave, which as we said is present, grows exponentially through the i-scale, as IT IS THE PRESENT WAVE, WHICH ACCUMULATES AND GROWS constantly in its value. In the normal Cartesian graph, by contrast, the sinusoidal wave becomes repetitive and the exponential grows to infinity.

**ð**

The graphs show a very useful function on the complex plane that mimics the passing of time in world cycles towards an internal i-point. Complex planes are thus better coordinate systems for cyclical times. And its simpler polar form shows it is indeed the perfect plane for ∆t analysis. Notice that as X becomes a ‘whole’ of a larger scale, it paradoxically becomes smaller in ‘volume’ but speeds up in time, according to ∆-metric.

Notice also that the second graph is NOT defined at x,y = 0, the singularity (spatial view of the complex plane), or final point of the world cycle – alas, its point of death (temporal view).

So we could state the following:

- Complex numbers in the cartesian plane model the whole super organism in scalar space.
- Complex numbers in the polar plane, model the world cycle of the super organism in time.

Alas, finally we know what all that fuss about imaginary numbers means and why they have so many applications in all systems of sciences. The number of insights this ‘insight’ reveals for each science is thus enormous.

**t>T<ð**

**Complex numbers as expressions of the 3 states: potential past <present wave> future particle**

**Complex space** is ideal for studying ∏imespace world cycles like the previous ones. Since in a complex space, the ‘negative’ arrow inverse to the positive one (Spe<=>Tiƒ) *is kept separated as* lateral i-numbers, by the convention – initially a fortuitous error of interpretation – that the minus of √-1 *cannot be taken out of the √.*

*It can, but as this was not understood, mathematicians maintained the Tiƒ arrow represented by a negative root as a different coordinate. And the result of* its combinations with real numbers gave origin to the complex numbers ℂ, which *are the best numbers to represent the 2 inverse arrows of world cycles of existence.*

A complex plane can represent a tumbled world cycle with the positive real as the 2D $t, the negative real as 1D ðƒ and the squared positive as the ∑∏ dimension.

i=√-1, is the root of an inverse number, and represents the information of a bidimensional system.

In that sense, we will use the lateral concept of Gauss for the imaginary units. These numbers are of the form *x* + *iy* where *x* and *y* are real numbers and *i* = √(−1) .

It is then fascinating to observe how slow was the realisation that imaginary numbers are rotary numbers related to time cycles. In fact the concept of a time cycle does not exist in science, so the imaginary number is related to a specific time cycle, that of physical waves, *which, being in ∆-3 scales, have a maximal speed of time. Hence they can complete a time cycle in a very brief span, making it possible to study them as a recurrent cyclical function of i.*
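The ‘rotary number’ reading can be sketched in one loop: multiplying by i turns a point 90°, and four multiplications close one full cycle, since i⁴ = 1:

```python
# Multiplication by i is a quarter turn; four of them close the cycle.
z = complex(1, 0)            # start on the positive real axis
quarter_turns = [z]
for _ in range(4):
    z = z * 1j               # one 90-degree step of the cycle
    quarter_turns.append(z)

assert quarter_turns[1] == 1j     # after 90 degrees
assert quarter_turns[2] == -1     # after 180 degrees
assert quarter_turns[3] == -1j    # after 270 degrees
assert quarter_turns[4] == 1      # back to the start: a closed cycle
```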

Consider the most famous example of the use of complex numbers in mathematical physics, *not as a calculus device but as an integral part of its form*:

It is Schrödinger’s non-time-dependent wave equation of quantum physics. As it is not time-dependent, it means it is deterministic: it represents a complete worldcycle of past, present and future, fully ‘integrated in the equation’ and related to the ħ constant of angular momentum (the closed worldcycle ‘spin’ of the wave or particle), which cannot and should not be broken apart.

And as such it becomes the operator of angular momentum, of ‘time worldcycles’ of waves and particles in quantum physics – even if quantum physicists don’t know it (:

So we can study the 3 arrows of time of the quantum wave, conceding the imaginary number in this case to the angular momentum, or membrane of the wave, which will then surface often in connection with the particle envelope (itself origin of all other extensions of quantum physics, hence the importance of the example). So we write in 5D:

∆ø: Vψ ≈ ħ²/2m ∇²ψ + iħ ∂ψ/∂t

The ‘wave-state’ is a present state; the PARTICLE, or angular momentum that envelops the wave, is the function which carries the i-number (right side). And the way *to properly write the equation is therefore moving the 2 ħ factors to the right, to form a nice complex number equivalent to the potential that generates both the particle and its kinetic energy.*
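As a numeric sanity check of the wave reading, a free plane wave can be verified against the standard time-dependent form of the equation (V = 0, ħ = m = 1 and the sample point are assumptions for the sketch):

```python
import cmath

# A free plane wave psi = exp(i(kx - wt)) with w = hbar*k^2/(2m) satisfies
# i*hbar dpsi/dt = -(hbar^2/2m) d^2psi/dx^2, checked by finite differences.
hbar = m = 1.0
k = 2.0
w = hbar * k * k / (2.0 * m)

def psi(x, t):
    return cmath.exp(1j * (k * x - w * t))

x0, t0, h = 0.4, 0.1, 1e-4
dpsi_dt = (psi(x0, t0 + h) - psi(x0, t0 - h)) / (2 * h)
d2psi_dx2 = (psi(x0 + h, t0) - 2 * psi(x0, t0) + psi(x0 - h, t0)) / (h * h)

lhs = 1j * hbar * dpsi_dt
rhs = -(hbar ** 2) / (2.0 * m) * d2psi_dx2
assert abs(lhs - rhs) < 1e-5      # both sides of the equation agree
```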

So in the right side we have now the kinetic≈entropic energy of the particle or singularity parameter and its membrane.

While in the other side we have the quantum wave itself, its vital energy, which is the potential energy of the wave, Vψ.

And both are balanced. In this manner, we obtain *a third interpretation besides Schrödinger’s expression and Bohm’s interpretation, closely related to this one.*

So the equation understood in terms of the generator reads:

Singularity (kinetic≈entropic, moving energy) + angular momentum (represented by i-numbers) = vital energy (potential energy)

And as any other T.œ, where the singularity and angular momentum enclose its vital energy in balance with it, so does the wave.

So in quantum physics, as physicists realise with certain surprise, imaginary numbers are not mere ‘artefacts of calculus’ but ‘REAL’ in their meaning, expressed in the graph above.

We shall of course extend its ‘reality’ to explain why actually *they are also real in electromagnetism, even if the humind prefers to obtain as a final result a real number, which is less real (: but appears as ‘real’ to the unreal human mind (quip intended).*

In terms of past, present and future (left side), which is a more general view of the ternary elements on imaginary equations, an i-number rotates a function of existence from a relative past/future to a relative future/past, through a 90° middle angle, that is, the middle ST balanced point, in which the present becomes a bidimensional function, with a REAL, + (relative past-entropic coordinates) and an imaginary, negative (future coordinates) element.

This needs a couple of clarifications: first, we move ‘backwards’ in a complex i-graph from future to past as we rotate through i-scales, inasmuch as most graphs are measuring the ‘motion’ of lineal entropic time, which is the *relative Spe function.*

So the negative side becomes the time function, something which mathematical physics makes obvious, when considering an attractive, informative force-field-potential negative.

**FUNCTIONS OF A COMPLEX VARIABLE**

**All together now: pi, e and i…**

The understanding of complex numbers as a better reflection of the scalar Universe should have an obvious use, in mapping out ‘easily’ with less deformation, the fundamental motions we have described in our analysis of geometry and algebraic geometrical groups.

By combinations of rotations, translations, and expansions/shrinkages (spiral motions), you can do all 5-dimensional motions on reality. And as the Cx. plane adds a ‘scaling’ factor (real vs. √ scales) *and a rotary factor (±i rotations), it allows us to connect and represent in symmetric relationships the 3 motions of reality. Let us study this proposition in depth.*

An “application” of the cx. plane is that a certain class of complex numbers behave as **rotation operators.**

For example, draw the usual real and imaginary axes, and plot any point on it (say 3 + 5i). Multiply this number by i, and you get (−5 + 3i). If you plot this new point, you’ll find that it is the original point rotated about the origin by 90 degrees counterclockwise. This works for ANY complex number: multiply by i, and you’ll rotate it by 90 degrees.

Multiply any complex number by cos(x) + i*sin(x), and you’ll rotate the number about the origin by an angle x.

In the same way, **adding a fixed complex number is equivalent to a translation.**

And** multiplying by a real number expands or contracts the values.**
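These three operations can be verified in a few lines; a minimal sketch using Python’s built-in complex type, with illustrative sample points:

```python
import cmath, math

z = 3 + 5j

# multiplying by i rotates the point 90 degrees counterclockwise
assert z * 1j == -5 + 3j

# multiplying by cos(x) + i*sin(x) rotates by an angle x
x = math.pi / 3
rotated = z * (math.cos(x) + 1j * math.sin(x))
assert abs(abs(rotated) - abs(z)) < 1e-12                        # modulus kept
assert abs(cmath.phase(rotated) - (cmath.phase(z) + x)) < 1e-12  # angle shifted

# adding a fixed complex number is a translation
assert z + (1 - 2j) == 4 + 3j

# multiplying by a real number expands or contracts
assert 2 * z == 6 + 10j
```

One multiplication does what a 2×2 rotation matrix does in real coordinates, which is the point being made about the economy of the complex plane.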

In 3 dimensions, it’s not so easy, as it involves something called matrix multiplication. So *as we have obtained a closer mirror of reality with the use of 5D, the Cx. plane does so by establishing bidimensional numbers, power coordinates and rotary i-parameters.*

IF WE consider its translational and rotary power, the complex plane as a bidimensional form is the most perfect expression of the dynamic 3D ∑∏ wave.

If we consider its scalar properties it is the more sophisticated way to understand the ∆±i motions which expand or *shrink a sphere, the only form deformable to infinity without tear and change of topology (Poincaré Conjecture), to a point-mind or to a dissolving bang of entropy.*

*The duality of ∆±i arrows in the sphere, which Poincare’s conjecture expresses for any dimensionality is perfectly shown by the imaginary rotating sphere in terms of e, pi and i – all together now to express the transformations between ∆-e-scalar motions and rotary motions of reality:*

In the graph, the expression of a world cycle of existence, with broad applications for physical sciences (quantum, electromagnetism) as it describes the ‘unitarian’ cycle of light and electrons (dense fractals of light photons), in its cyclical turns around an atom that constrains its motion, or in its displacement ‘in freedom’ across space, as a function of present iterative motions:

Tiƒ (k) / Spe (µ) = ST (c²)    (1/√(εµ) = c in Maxwell’s expression).

The graph shows one of the essential trans-formations of dimensions in the Universe, which form a world cycle of its own, between a ‘real’ potential-limb ∆-1 plane, from where it emerges by ‘subtraction’ of an exponential e^it quantity, and an ∆+1 particle-head defined by the sinusoidal functions, which seen in a ‘real’ plane would be flattened into an elliptical orbit, such as:
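A minimal numeric check of that e^it machinery — the unit circle, the sinusoidal pair, and the flattening into an ellipse when the axes are scaled (a and b are illustrative values, not taken from the text):

```python
import cmath, math

# e^{it} decomposes into the sinusoidal pair (cos t, sin t) on the unit circle
for t in [0.0, 1.0, 2.5, math.pi]:
    z = cmath.exp(1j * t)
    assert abs(z - complex(math.cos(t), math.sin(t))) < 1e-12
    assert abs(abs(z) - 1.0) < 1e-12      # always on the unit circle

# 'all together now: pi, e and i' -- Euler's identity
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-12

# seen in a 'real' plane with unequal axes, the circle flattens into an ellipse
a, b = 3.0, 2.0
t = 0.8
x, y = a * math.cos(t), b * math.sin(t)
assert abs(x**2 / a**2 + y**2 / b**2 - 1) < 1e-12
```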

Now, when we transfer the imaginary circle to the squared circle it is an interesting process, because we obtain an expansion of one dimension which we can then trace laterally to get the R² SQUARE, AND WE SEE HOW ‘REPRODUCTION’ of a sine and cosine circle mimics both reality and the imaginary squared graph, something we express in the realisation that the paths in the imaginary axis might vary but repeat cyclically the form in the squared z-axis.

So the complex planes mimic better the generation actions of s=st=t symmetries, with ST in its square plane and S and T in inverted planes, as we have done in the graph with Einstein’s metric.

Thus it is a good introduction to the complex plane and its imaginary numbers in ∆st terms and explains why it has so many possibilities even in the more confusing i-scale without squares.

Let us then with the usual help of Aleksandrov’s book, bring about some comments on the core knowledge of the Complex Plane and its functions, which we shall use for the fourth line of specific studies of different disciplines.

**III. ****@-GEOMETRY & PHYSICS**

The theme of Geometry and physics is obviously well beyond anything this author can develop in a few notes, even if he ever comes to complete the fourth line – doubtful, as with no signs this year, 2018, when I finally consider the *work, despite its shortcomings, to deserve word of mouth and a huge following by university students on planet Earth,* it is very steep for me to take the computer ‘again’… past an age in which mental laziness sets in… So the reader, specially if a physicist, should not expect more than some marginal comments. As usual we tend to use for all themes of mathematical physics the books of the Russian school of dialectics. So we won’t be quoting them.

**The importance of bidimensional curves****: Holographic physics.**

Once we understand bidimensionality we can enlighten physics in its simplest mathematical statements, which historically dealt with the laws of astronomy force and motion, heavily drawn before analysis became the meaning of m.p. with the canonical:

- Bidimensional ‘open’ curves – parabolas: entropic motions, as in cannonball shots.
- Closed curves: cycles, spheres, ellipses: used in informative motions – as in gravitational and charge vortices/clocks.
- And in-between, St-hyperbolas, used in st-ratios: st balances, st-systems, st-constants of nature and 5D=st metric equations; as in Energy laws or the Boyle law:
**P(t) x V(s) = K(st)**

Let us then consider only an example, the conic curves and its relationship with gravitational forces and 5D metric equations.

Now conics acquire a new perspective under the holographic principle of a Universe built on bidimensional ensembles, where most ‘ternary dimensions’ are layers of reproduced bidimensional surfaces or ‘branched networks’, spread on the ‘holes’ of a 3rd dimension. And so we distinguish 2 kinds of conics:

-Time like conics, circles and ellipses, which close into themselves creating a clear ternary structure, with an external membrane closing an internal space, self-centred in one or two points separated lineally by a factor of eccentricity.

-Space-like conics, parabolas and hyperbolas; which apparently are open systems without closure, but in fact preserve both, the central point of view, the internal territory and the membrane, *albeit open to let the world circulate through it.*

So we can understand the conics as a dynamic transformation between $pe (open) < ≈ > ðiƒ (closed) states of an ST being, with a single parameter to measure them, eccentricity; whereas the most perfect *bidimensional being is one of 0-eccentricity, where the ‘2 foci’ of the central singularity, which can be any S/T VARIATION, are both equal in space and time (a single point) – the circle. Which therefore must be considered, as the Greeks had it, the perfect form; and all others deformations of it.*

Consider for example…

**The ellipse as the result of “contraction” of a circle.**

We consider a circle with center at the origin and radius a. By the theorem of Pythagoras its equation is x² + Y² = a², where we have written Y instead of y, since y will be needed later. Let us see what this circle is contracted into if we “contract” the plane to the Ox-axis with coefficient b/a:

After this “contraction” the x-values of all points remain the same, but the Y-values become equal to y = (b/a)Y, i.e., Y = (a/b)y. Substituting (a/b)y for Y in the above equation of the circle, we will have x² + (a²/b²)y² = a², i.e., x²/a² + y²/b² = 1, as the equation, in the same coordinate system, of the curve obtained from the given circle by contraction to the Ox-axis. As we see, we obtain an ellipse. Thus we have proved that an ellipse is the result of a “contraction” of a circle.

From the fact that an ellipse is a “contraction” of a circle, many properties of the ellipse follow directly:

For example, since any vertical strip of the circle under its contraction to the Ox-axis does not change its width and its length is multiplied by b/a, the area of this strip after contraction is equal to its initial area multiplied by b/a, and since the area of the circle is equal to πa², the area of the corresponding ellipse is equal to **πa²(b/a) = πab.**
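That strip argument can be replayed numerically; a sketch with assumed semiaxes a = 3, b = 2, summing vertical strips by the midpoint rule:

```python
import math

a, b = 3.0, 2.0    # illustrative semiaxes, not from the text
n = 200000         # number of vertical strips

def strip_area(half_width_fn):
    """Midpoint-rule area under vertical strips of full height 2*f(x)."""
    dx = 2 * a / n
    total = 0.0
    for i in range(n):
        x = -a + (i + 0.5) * dx
        total += 2 * half_width_fn(x) * dx
    return total

circle = strip_area(lambda x: math.sqrt(max(a*a - x*x, 0.0)))
ellipse = strip_area(lambda x: (b / a) * math.sqrt(max(a*a - x*x, 0.0)))

assert abs(circle - math.pi * a * a) < 1e-3      # pi a^2
assert abs(ellipse - circle * (b / a)) < 1e-9    # each strip scaled by b/a
assert abs(ellipse - math.pi * a * b) < 1e-3     # hence pi a b
```

Each strip of the ellipse is the corresponding strip of the circle multiplied by b/a, so the totals differ by exactly that factor — which is the whole proof.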

Now, we shall see the fundamental quality of GST – *to offer at least 2 or 3 ∆ST causalities to any event of reality. *We cannot be exhaustive, but a simple example will suffice, on a property of cubics and ellipses, with profound implications in physics.

This remarkable result shows even more clearly the ‘squaring’ nature of π and the subtle difference between a² and a × a. A square *is a perpendicular product, hence of 2 different dimensions; a product happens only in one dimension – a duality which ‘must’ exist in the fractal Universe both for scalars (single numbers that represent mostly time frequencies and densities of ‘past populations’) and vectors (dot vs. cross product), as the power and the cross product DO create a new dimension. Numbers are forms, said Plato, and Pythagoras drew them in figures – facts essential to relate numbers in time and points in space, forgotten by the lineal plane and the overdevelopment of abstract algebra.*

**All together now: cubics representing the 3 elements, singularity, lineal membrane and body-wave in algebraic space.**

Back to the cubic question, the next fella after Descartes to work on them was Newton, with a beautiful description of the ‘3rd layered dimensions of a being’ and its inner structure – such as the form of the st SPACE contained by an enclosing curve, itself centred in a single point – by 2 methods: one obtained by Newton from his study of cubics, and one obtained from the contraction of a ‘sphere’ into an ellipse:

Let an nth-order curve be given, i.e., a curve which is represented by an nth-degree algebraic equation in two unknowns; then an arbitrary straight line intersecting it has in general n common points with it. Let M be the point of the secant that is the “center of gravity” of these points of its intersection with the given nth-order curve, i.e., the center of gravity of a set of n equal point masses situated at these points. It turns out that if we take all possible sets of mutually parallel secants and for each of them consider these centers of mass M, then for any given set of parallel secants all the points M lie on a straight line.

Newton called this line the “diameter” of the nth-order curve corresponding to the given direction of the secants.

In case the curve is of the 2nd order (n = 2) the center of gravity of two points is simply the midpoint between them, so that the locus of midpoints of parallel chords of a second-order curve is a straight line, a result that for the ellipse, as well as for the hyperbola and the parabola, was already well known to the ancients. But this was proved by them, even though only for these partial cases, with quite difficult geometric arguments, and here a new general theorem, unknown to the ancients, is proved in an entirely simple way.

In ∆st theory it reveals an even deeper truth: *systems become ‘compressed’ into smaller networks and final singularity points, which control the entire force, motion and organisation of the system.*

Thus if we consider the surface enclosed by the disk, structurally sustained by the network of lines, itself communicated at equal distances by the line, and finally the line focused in the ‘center of gravity’, we have built an ∆+2>∆+1>∆>o scalar structure. And here we realise why analytic geometry works, as it does compress mentally geometric surfaces into sequences of numbers of lesser ‘volume’ of information that ‘commands’, logically the whole.

This property of diameters – that if parallel secants of an ellipse are given, then their midpoints lie on a straight line, can be shown also from the contraction of ellipses in the following way:

We perform the inverse expansion of the ellipse into the circle. Under this expansion parallel chords of the ellipse go into parallel chords of the circle, and their midpoints into the midpoints of these chords. But the midpoints of parallel chords of a circle lie on a diameter, i.e., on a straight line, and so the midpoints of parallel chords of the ellipse also lie on a straight line. Namely, they lie on that line which is obtained from the diameter of the circle under the “contraction” which sends the circle into the ellipse.

We go back to the equation of the ellipse:

√((x + c)² + y²) + √((x − c)² + y²) = 2a

and its simplification by moving one radical to the right side and squaring twice, which gives (a² − c²)x² + a²y² = a²(a² − c²).

And set **a² – c² = b²** to obtain:** x²/a²+y²/b²=1**

The coordinates (x, y) of any point M of the ellipse thus satisfy this equation. It can be shown on the other hand that if the coordinates of a point satisfy this equation then they also satisfy the original, more complex one. Consequently, the equation x²/a² + y²/b² = 1 is the equation of the ellipse:

Substituting y = 0 in the equation of the ellipse, we obtain x = ±a, i.e., a is the length of the segment OA, which is called the major semiaxis of the ellipse. Analogously, substituting x = 0, we obtain y = ±b, i.e., b is the length of the segment OB, which is called the minor semiaxis of the ellipse.

The number ε = c/a is called the eccentricity of the ellipse, so that, since c < a, the eccentricity of an ellipse is less than 1. In the case of a circle, c = 0 and consequently ε = 0; both foci are at one point, the center of the circle (since OF₁ = OF₂ = 0).

As the eccentricity grows the 2 points separate but the *points still control the area of the system, which can be shown by * the method of drawing the curve with a thread connected to both:
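The thread construction is just the invariant ρ₁ + ρ₂ = 2a; a numeric sketch with assumed semiaxes a = 5, b = 3:

```python
import math

a, b = 5.0, 3.0                 # illustrative semiaxes
c = math.sqrt(a*a - b*b)        # focal distance, c^2 = a^2 - b^2
f1, f2 = (-c, 0.0), (c, 0.0)    # the two foci where the thread is tied

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# all along the curve the 'thread' keeps rho1 + rho2 = 2a
for i in range(63):
    t = 0.1 * i
    p = (a * math.cos(t), b * math.sin(t))
    assert abs(dist(p, f1) + dist(p, f2) - 2*a) < 1e-9

eccentricity = c / a
assert eccentricity < 1         # always below 1 for an ellipse
```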

And this simple fact, if we apply to one of the points the concept of ‘motion as indistinguishable from distance’, explains the orbital laws, in which the planet and the sun together scan the same ‘gravitational area’. In the graph, the aerolar law of a physical vortex might imply, as so many have sought, a second ‘elliptical focus’ of the sun, playing the eccentricity focus. But there is nothing there.

Indeed, the empty focus is along the major axis: the line joining the positions of perihelion (closest to the Sun) and aphelion (furthest from the Sun) in Earth’s orbit.

Since the “centre” of the major axis is halfway between these two points, you can calculate (and visualize) the position of the empty focus in this manner.

Draw an ellipse. Place the Sun at one focus (on the major axis, a bit off to one side). Mark the Sun “S” and the centre “C”.

On the “short side” of the Sun, along the orbit, where the major axis cuts the orbit, that is the perihelion “P”. At the opposite end is the Aphelion “A”.

In 2010, Perihelion was on January 3, at a distance of 147,097,907 km

In 2010, Aphelion was on July 6, at 152,096,520 km.

Total length of the major axis (from P to A) is the sum = 299,194,427 km

The semi-major-axis (distance from P to C and from C to A) is half of that (149,597,213.5 km).

Since we know that the distance PS is 147,097,907 km, then the distance SC must be the difference PC – PS = 2,499,306.5 km

The Sun is 2.5 million km to the “January” side of the centre.

By symmetry, the empty focus is 2,499,306.5 km on the “July” side of C

CE = 2,499,306.5

SE = SC + CE = 2*CE = 4,998,613 km.
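The arithmetic above can be replayed verbatim:

```python
# Reproducing the 2010 perihelion/aphelion arithmetic from the text
perihelion = 147_097_907      # km, distance PS on January 3
aphelion   = 152_096_520      # km, distance SA on July 6

major_axis = perihelion + aphelion
assert major_axis == 299_194_427

semi_major = major_axis / 2
assert semi_major == 149_597_213.5

sun_to_centre = semi_major - perihelion       # SC = PC - PS
assert sun_to_centre == 2_499_306.5

# the empty focus sits symmetrically on the other side of the centre
sun_to_empty_focus = 2 * sun_to_centre        # SE = SC + CE = 2*CE
assert sun_to_empty_focus == 4_998_613.0
```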

If you had looked towards the Sun on July 6, you could have imagined the empty focus to be on the line between us and the Sun, at a distance of 5 million km from the Sun (or roughly 147 million km from Earth).

There is nothing at that point, since it is only a mathematical point that results from our definitions of an ellipse.

So the explanation is the GST explanation: *all distances are motions. So if we make the static planet a motion, with an eccentricity one, what truly matters of the ellipse – that 2 points ‘sweep the same area’ working together, stays. An orbit, indeed in the physical analysis of GST becomes a dual system of a membrane with more momentum (the Spe-planet) and a singularity with more gravity/mass (the Tiƒ), surrounding and absorbing the lower ∆-1 scale of gravitational points.*

**THE SUPERORGANISM OF PHYSICAL SYSTEMS IN MATHEMATICAL PHYSICS. ITS 3 PARTS.**

For example, the length of the sides of a rectangle completely determines its area, the volume of a given amount of gas at a given temperature is determined by the pressure of its walls, and the elongation of a given metallic rod is determined by its scalar temperature. It was uniformities of this sort that served as the origin of the concept of function.

**T: Dynamic forces follow such a line of thought: another physical example in the ‘ellipse of inertia’ of a plate.**

Following such a line of thought we have another physical example in the ‘ellipse of inertia’ of a plate.

Let the plate be of uniform thickness and homogeneous material, for example a zinc plate of arbitrary shape. We rotate it around an axis in its plane. A body in rectilinear motion has, as is well known, an inertia with respect to this rectilinear motion that is proportional to its mass (independently of the shape of the body and the distribution of the mass). Similarly, a body rotating around an axis, for instance a flywheel, has inertia with respect to this rotation.

But in the case of rotation, the inertia is not only proportional to the mass of the rotating body but also depends on the distribution of the mass of the body with respect to the axis of rotation, since the inertia with respect to rotation is greater if the mass is farther from the axis. For example, it is very easy to bring a stick at once into fast rotation around its longitudinal axis. But if we try to bring it at once to fast rotation around an axis perpendicular to its length, even if the axis passes through its midpoint, we will find that unless this stick is very light, we must exert considerable effort.

It is possible to show that the inertia of a body with respect to rotation about an axis, the so-called moment of inertia of the body relative to the axis, is equal to ∑rᵢ²mᵢ (where by ∑rᵢ²mᵢ we mean the sum **r₁²m₁ + r₂²m₂ + … + rₙ²mₙ**), if we think of the body as decomposed into very small elements, with mᵢ as the mass of the ith element and rᵢ the distance of the ith element from the axis of rotation, the summation being taken over all elements.

Now, skipping its proof, the following remarkable result can be obtained: Whatever may be the form and size of a plate and the distribution of its mass, the magnitude of its moment of inertia (more precisely, of the quantity ρ inversely proportional to the square root of the moment of inertia) with respect to the various axes lying in the plane of the plate and passing through the given point O, is characterized by a certain ellipse. This ellipse is called the ellipse of inertia of the plate relative to the point O. If the point O is the center of gravity of the plate, then the ellipse is called its central ellipse of inertia.

The ellipse of inertia plays a great role in mechanics; in particular, it has an important application in the strength of materials. In the theory of strength of materials, it is proved that the resistance to bending of a beam with given cross section is proportional to the moment of inertia of its cross section relative to the axis through the center of gravity of the cross section and perpendicular to the direction of the bending force.

Let us clarify this by an example. We assume that a bridge across a stream consists of a board that sags under the weight of a pedestrian passing over it. If the same board (no thicker than before) is placed “on its edge,” it scarcely bends at all, i.e., a board placed on its edge is, so to speak, stronger. This follows from the fact that the moment of inertia of the cross section of the board (it has the shape of an elongated rectangle that we may think of as evenly covered with mass) is greater relative to the axis perpendicular to its long side than relative to the axis parallel to its long side. If we set the board not exactly flat nor on edge but obliquely, or even if we do not take a board at all but a rod of arbitrary cross section, for example a rail, the resistance to bending will still be proportional to the moment of inertia of its cross section relative to the corresponding axis. The resistance of a beam to bending is therefore characterized by the ellipse of inertia of its cross section, which *becomes therefore its ‘core-singularity element’, often controlled by its central point/s*.
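The flat-versus-edge claim follows from the discretised sum ∑rᵢ²mᵢ quoted earlier; a sketch with made-up board dimensions (width 30 units, thickness 2, total mass normalised to 1):

```python
# Discretised sum r_i^2 * m_i for a rectangular cross section (the 'board')
width, thick = 30, 2
n = 400                      # grid subdivisions per side

def moment(about_long_side):
    """Sum r^2 * dm over a grid of equal point masses dm."""
    dm = 1.0 / (n * n)       # total mass normalised to 1
    total = 0.0
    for i in range(n):
        for j in range(n):
            x = (i + 0.5) / n * width - width / 2   # along the long side
            y = (j + 0.5) / n * thick - thick / 2   # along the thickness
            r = y if about_long_side else x         # distance to the chosen axis
            total += r * r * dm
    return total

flat    = moment(about_long_side=True)    # board lying flat on the stream
on_edge = moment(about_long_side=False)   # board placed on its edge

# exact centroidal values for unit mass are thick^2/12 and width^2/12
assert abs(flat - thick**2 / 12) < 1e-3
assert abs(on_edge - width**2 / 12) < 0.05
assert on_edge > flat    # the edge-on board resists bending far more
```

With these proportions the edge-on moment is larger by a factor of (width/thick)² = 225, which is why the board “scarcely bends at all” on edge.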

**The logic expansion of the concept of dual elliptical territories.**

Now following this kind of thought, of ellipses as collaborative locus of 2 ‘complementary species’, we can apply the ‘logic’ of the concept to anything and in fact define ‘eccentricity’ lines as the essential form of a wave of communication between 2 points (2nd Non-E Postulate):

So a couple with a son *is a GST ellipse, where both parents are constantly keeping a similar distance between them. And a territorial animal couple is also a logic ellipse, tending to the territory as one moves to hunt while the other stays to breed.*

*Any relationship is a naked ellipse (without the external membrane), joined by the focal line that shares entropy and form between them.*

Steel beams often have an S-shaped cross section; for such beams the cross section and its ellipse of inertia are such that the greatest resistance to bending is in the z direction. When they are used, for example as roof rafters under a load of snow and their own weight, they work directly against bending in a direction close to this most advantageous one.

Again this result can only be understood in terms of ‘2 planes’ the ∆-plane of the beam and the ∆-1 plane of the gravitational field, and *the dominant nature of the major axis line that communicates the inner structure of the entity.*

**The open curves: parabolas and hyperbolas**

Now once we have identified what is truly relevant about bidimensional curves, as opposed to single ones that represent only a part of the being – to belong to a full ternary organism, with 3 parts:

- THE FOCAL POINT OR SINGULARITY: @
- THE MEMBRANE OR CYCLICAL CURVE: ð§
- THE VITAL SPACE OR ENERGY BETWEEN THEM: ST

We can consider ‘open curves’, in which the intermediate space is fully opened, and their meaning as representations of key elements of T.Œs (Timespace organisms).

And the wonder of them is that in those open systems the key elements *will still be determined by the focal singularities and the relative balance of their ‘co-invariant’ product in relationship to the membrane.*

So they can represent the ‘metric equations’ of co-invariance 5D systems, and in fact, the hyperbola will be the best representation of any function:

S x T = K, which is by definition the equation of… the hyperbola and its focal property.

Indeed, consider this equation…** x²/a²-y²/b²=1 **…representing a curve which is called a hyperbola:

In the special case a = b the so-called rectangular hyperbola plays the same role among hyperbolas as the circle plays among ellipses.

In this case, if we rotate the coordinate axes by 45° the equation in the new coordinates (x′, y′) will have the form: x’ y’ = k:
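That 45° rotation can be checked numerically; a sketch taking points on x′y′ = k and rotating them back to the (x, y) frame (k is an illustrative value):

```python
import math

k = 1.5                       # the constant in x'y' = k, chosen for the sketch
theta = -math.pi / 4          # undoing the 45-degree rotation of the axes

for xp in (0.5, 1.0, 2.0, 4.0):
    yp = k / xp               # a point on the rectangular hyperbola x'y' = k
    # rotate the point by theta to return to the original (x, y) frame
    x = xp * math.cos(theta) - yp * math.sin(theta)
    y = xp * math.sin(theta) + yp * math.cos(theta)
    # there it satisfies x^2 - y^2 = 2k, i.e. the form x^2/a^2 - y^2/a^2 = 1
    # with a = b = sqrt(2k)
    assert abs((x*x - y*y) - 2*k) < 1e-9
```

So the two ways of writing the rectangular hyperbola — the difference form and the product form x′y′ = k — are the same curve seen from frames 45° apart, which is why both serve as ‘modes’ of the metric equation.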

And we shall use both modes to fully grasp fundamental metric equation of systems in the fifth dimension.

Now in the previous hyperbola, if we denote by c a number such that** c² = a² + b²,** then it is possible to show that a hyperbola is the locus of all points the difference of whose distances to the points Fl and F2 on the Ox-axis with abscissas c and –c is a constant:** ρ2 – ρ1 = 2a. **The points F1 and F2 are called the foci.
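The focal property ρ₂ − ρ₁ = 2a can be verified on the parametrised branch (a·cosh u, b·sinh u); a sketch with assumed semiaxes:

```python
import math

a, b = 2.0, 1.5                # illustrative semiaxes
c = math.sqrt(a*a + b*b)       # for the hyperbola, c^2 = a^2 + b^2
f1, f2 = (c, 0.0), (-c, 0.0)   # the foci F1 and F2

# parametrise the right branch as (a*cosh u, b*sinh u)
for u in (-1.5, -0.4, 0.0, 0.7, 2.0):
    x, y = a * math.cosh(u), b * math.sinh(u)
    rho1 = math.hypot(x - f1[0], y - f1[1])   # distance to the near focus
    rho2 = math.hypot(x - f2[0], y - f2[1])   # distance to the far focus
    assert abs((rho2 - rho1) - 2*a) < 1e-9    # the constant difference
```

Where the ellipse keeps the sum of the two focal distances constant, the hyperbola keeps their difference constant — the ‘inverse’ preservation the text points at.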

And so the fundamental relationship between the curve and the 2 foci is preserved in an inverse ‘resting manner’; which qualifies the hyperbola as the entropic state of the ellipse, its time-reversed figure, an aforementioned property of importance for ‘complex GST analysis’, well beyond the scope of these texts.

The hyperbola is different from the ellipse, as it is purely algebraic in ‘phase space’, with variables in which the hyperbola is NOT a real form, but a mental form to represent *the metric equation of 5D, in which Spe x ðiƒ = K:*

Consider a simple formula for the pressure P due to a liquid column: **P = ρ × g × h**

It is obvious, knowing that density is a measure of the density of form, or information of a system; h, the height dimension of information; and g, acceleration, a parameter of a time vortex and its growing frequency, that *pressure is a pure Tiƒ parameter, with a value product of a time dimension (frequency acceleration), an informative dimension (height), and a time dimension (‘density’). Moreover, we can put the 3 ‘elements’ in terms of time as a measure of the ‘past’ value of the system (its density), the present value (its height) and the future value (its acceleration downwards), and then make a deep philosophical statement about the constancy of pressure.*

Yet if pressure is the Tiƒ parameter, it follows that expansive volume is the pure SPACE-entropy parameter, and so we shall immediately postulate according to 5D metric the existence of a co-invariant relationship:

**P(t) x V (s) = K (st)**

Where K will turn out to be the cyclical space-time vibration of temperature.
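A minimal numeric sketch of that co-invariance along one isotherm; the gas sample (one mole of ideal gas at 300 K) is an illustrative assumption, not taken from the text:

```python
# Boyle's law P x V = K along one isotherm, with assumed sample values
n, R, T = 1.0, 8.314, 300.0       # moles, gas constant J/(mol K), temperature K
K = n * R * T                     # the 'st constant' of the isotherm

volumes = (0.01, 0.02, 0.05, 0.1)         # m^3
pressures = [K / V for V in volumes]      # each point of the hyperbola

# P x V stays invariant across every point of the curve
for P, V in zip(pressures, volumes):
    assert abs(P * V - K) < 1e-9

# and halving the volume doubles the pressure: the S x T inversion
assert abs(pressures[1] / pressures[0] - volumes[0] / volumes[1]) < 1e-12
```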

This ‘dimensional analysis’ is thus *an entire new fruitful perspective on mathematical physics, akin to the dimensional analysis of classic physics, but far more profound in significance.*

In the graph, Boyle’s law amounts to yet another ‘5D metric’ equation, which we can plot with a straight line departing from O, crossing all different Ti, for equivalent PxV values, maximised in the central region of the asymptotic curves.

*ALL THIS* REVEALS the whys and Ðisomorphisms of a simple mathematical equation, which for a physicist means merely ρ, the density in kg/m³, g = 10 m/s², the acceleration due to gravity, and h, the height or depth of liquid in meters, used to calculate the praxis and future behaviour of a liquid in motion.

But what we have written is essentially the equation of potential energy, PE=m x g x h, which we will indeed define when studying actions and hamiltonians, the ultimate equations of 5D physics (as well as 4D physics), as the time-like component of ‘present space-time energy’.
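The identification of P = ρ × g × h with potential energy can be checked with a line of arithmetic; a sketch with assumed values (water, the rounded g = 10 m/s² used above, a 2 m column and an arbitrary parcel):

```python
rho, g, h = 1000.0, 10.0, 2.0    # kg/m^3, m/s^2, m -- illustrative values
V = 0.5                          # m^3, an arbitrary parcel of the liquid

P = rho * g * h                  # pressure of the liquid column, in Pa
m = rho * V                      # mass of the parcel
PE = m * g * h                   # its potential energy, PE = m * g * h

assert P == 20000.0              # 20 kPa at 2 m of water depth
assert abs(P * V - PE) < 1e-9    # P * V is exactly the potential energy m*g*h
```

Since m = ρV, the identity P·V = ρgh·V = m·g·h holds exactly: pressure is potential energy per unit volume, which is the sense in which P = ρgh is ‘essentially the equation of potential energy’.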

**THE RELATIONSHIP OF THE COMPLEX PLANE WITH ∆ST**

All this said, the great importance of the complex plane IS at the level of representation of the different ∆±i planes of a super organism, which is the field of complex numbers studied in analytic geometry.

And it has to do a lot with the conversion of the exponential e function from the whole plane into the unit plane of the sinusoidal ∆-1 function, by virtue of the negative repetitive nature of the i-factor, which translates ∆º scales into ∆-1 unit circle elements, or in real nature, the scale of the individual into the cellular scale. So we shall define in a classic discourse the mathematics of it, and intersect some comments referential to ∆st meanings. We shall bring of this extensive theme two great fields – that of the differentiability of the complex plane, which being a ‘scalar plane’ of the fifth dimension allows multiple differentiability both in space (as social scales grow indefinitely) and in time (as they can represent one world cycle after another). And then consider the classic application to physics, through the Cauchy-Riemann condition that *represents the necessary laws of a(nti)symmetry between the ∆±1 scales shown in the iy and rx ordinates for differentiability – hence for communication between ∆±i planes to exist – and how they translate into real concepts of flow, potentials and Laplace equations, becoming the canonical representation of a physical system at the ∆-1 ‘S-limbs/potentials’ scale and ∆+1 particle scale.*

We are now equipped with some basic understanding of complex numbers to enter into more detail, considering its relationship with the other ‘Rashomon truths’ of mathematics and its fundamental use in physics and reality at large, *to represent two scales of the fifth dimension, in most cases of mathematical physics a ‘basis’ or potential ∆-1 ‘scale’ of larger value (the real part) and its emerging ‘variable’, a ‘relative derivative’ of lesser form (the imaginary part).*

**The general concept of a function of a complex variable and the differentiability of functions.**

Power series allow us to define analytic functions of a complex variable. However, it is of interest to study the basic operations of analysis for an arbitrary function of a complex variable and in particular the operation of differentiation. Here we uncover very deep-lying facts connected with the differentiation of functions of a complex variable. As we will see on the one hand, a function, having a first derivative at all points in a neighborhood of some point z0, necessarily has derivatives of all orders at z0, and further, it can be expanded in a power series centered at this point; i.e., it is analytic. Thus, if we consider differentiable functions of a complex variable, we return immediately to the class of analytic functions. On the other hand, a study of the derivative uncovers the geometric behavior of functions of a complex variable and the connections of the theory of these functions with problems in mathematical physics.

In view of what has been said, we will, in what follows, call a function analytic at the point z0, if it has a derivative at all points of some neighborhood of z0.

We will say, following the general definition of a function, that a complex variable w is a function of the complex variable z if some law exists which allows us to find the value of w, given the value of z.

Every complex number z = x + iy is represented by a point (x, y) on the Oxy plane, and the numbers w = u + iv will also be represented by points on an Ouv plane, the plane of the function. Then from the geometric point of view a function of a complex variable w = f(z) defines a law of correspondence between the points of the Oxy plane of the argument z and the points of the Ouv plane of the value w of the function. In other words, a function of a complex variable determines a transformation of the plane of the argument to the plane of the function. To define a function of a complex variable means to give the correspondence between the pairs of numbers (x, y) and (u, v); defining a function of a complex variable is thus equivalent to defining two functions: **u = u (x, y); v = v (x, y).**

The derivative of a function of a complex variable is defined formally in the same way as the derivative of a function of a real variable. The derivative is the limit of the difference quotient of the function: **ƒ′(z) = lim Δw/Δz as Δz → 0.**

If we assume that the two real functions u and v, making up w = f(z), have partial derivatives with respect to x and y, this is still not a sufficient condition that the derivative of the function f(z) exists. The limit of the difference quotient, as a rule, depends on the direction in which the points z′ = z + Δz approximate the point z (figure 3). For the existence of the derivative f′(z), it is necessary that the limit does not depend on the manner of approach of z′ to z. Consider, for example, the case when z′ approaches z parallel to the axis Ox or parallel to the axis Oy. In the first case Δz = Δx, and the difference quotient for Δx → 0 converges to: **∂u/∂x + i ∂v/∂x; **in the second case Δz = iΔy, and the quotient converges to: **∂v/∂y – i ∂u/∂y.**

If the function w = f(z) has a derivative, these two expressions must be equal, and thus: **∂u/∂x = ∂v/∂y; ∂u/∂y = –∂v/∂x. (24)**

Satisfying these equations is a necessary condition for the existence of the derivative of the function w = u + iv. It can be shown that condition (24) is not only necessary but also sufficient (if the functions u and v have a total differential). We will not give a proof of the sufficiency of conditions (24), which are called the Cauchy-Riemann equations.
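The Cauchy-Riemann test can be sketched numerically. A minimal check, assuming only central finite differences (the functions and the evaluation point are chosen for illustration): the analytic function z² satisfies (24), while the non-analytic conjugation z̄ violates it.

```python
def partials(f, x, y, h=1e-6):
    """Finite-difference partials of u = Re f, v = Im f at (x, y).
    Returns (u_x, v_x, u_y, v_y)."""
    fx = (f(complex(x + h, y)) - f(complex(x - h, y))) / (2 * h)
    fy = (f(complex(x, y + h)) - f(complex(x, y - h))) / (2 * h)
    return fx.real, fx.imag, fy.real, fy.imag

f = lambda z: z * z          # analytic: satisfies Cauchy-Riemann
g = lambda z: z.conjugate()  # not analytic: violates them

ux, vx, uy, vy = partials(f, 1.3, -0.7)
assert abs(ux - vy) < 1e-6 and abs(uy + vx) < 1e-6   # (24) holds

ux, vx, uy, vy = partials(g, 1.3, -0.7)
assert abs(ux - vy) > 1.9    # here u_x = 1 but v_y = -1
```

The same probe applied to any polynomial or to eᶻ gives the same agreement, since all are analytic.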

It is easy to establish the fact that the usual rules for differentiating functions of a real variable carry over without alteration to functions of a complex variable. Certainly this is true for the derivative of the function zⁿ and for the derivative of a sum, a product, or a quotient. The method of proof remains exactly the same as for functions of a real variable, excepting only that in place of real quantities, complex ones are to be understood. This shows that every polynomial in z: **P(z) = a₀ + a₁z + … + aₙzⁿ**

is an everywhere differentiable function. Any rational function, equal to the quotient of two polynomials: **R(z) = P(z)/Q(z), **is differentiable at all points where the denominator is not zero.

In order to establish the differentiability of the function w = eᶻ, we may use the Cauchy-Riemann conditions. In this case, on the basis of formula (20): **u = eˣ cos y, v = eˣ sin y; **we substitute these functions in (24) and show that the Cauchy-Riemann equations are satisfied. The derivative may be computed, for example, by formula (22). This gives: **deᶻ/dz = eᶻ. **On the basis of formula (17) it is easy to establish the differentiability of the trigonometric functions and the validity of the formulas known from analysis for the values of their derivatives.
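The direction-independence of the limit can also be seen directly for eᶻ: a small sketch (step size and sample point chosen for illustration) comparing the difference quotient taken parallel to Ox with the one taken parallel to Oy.

```python
import cmath

z = complex(0.4, 1.1)
h = 1e-7
# difference quotient with z' approaching z parallel to Ox ...
along_x = (cmath.exp(z + h) - cmath.exp(z)) / h
# ... and parallel to Oy
along_y = (cmath.exp(z + 1j * h) - cmath.exp(z)) / (1j * h)

# both approach the same limit, the derivative e^z
assert abs(along_x - cmath.exp(z)) < 1e-5
assert abs(along_y - cmath.exp(z)) < 1e-5
```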

The function Ln z. We will not give here an investigation of all the elementary functions of a complex variable. However, it is important for our purposes to become acquainted with some of the properties of the function Ln z. As in the case of the real domain, we set: **w = Ln z, if z = eʷ. **In order to analyze the function Ln z, we write the number z in trigonometric form: **z = r (cos ϕ + i sin ϕ).**

Applying the multiplication theorem to eʷ = eᵘ⁺ⁱᵛ, we get: **eʷ = eᵘ (cos v + i sin v). **Equating the two expressions derived for z, we have: **eᵘ = r (α); cos v + i sin v = cos ϕ + i sin ϕ (β).**

Since u and r are real numbers, from formula (α) we derive **u = ln r**

where ln r is the usual value of the natural logarithm of a real number. Equation (β) can be satisfied only if **cos v = cos ϕ, sin v = sin ϕ**, and in this case v and ϕ must differ by a multiple of 2π: **v = ϕ + 2πn, **where for any integer n equation (β) will be satisfied. On the basis of the expressions derived for u and v: **Ln z = ln r + i (ϕ + 2πn). (25)**

Formula (25) defines the function Ln z for all values of the complex number z that are different from zero. It gives the definition of the logarithm not only for positive numbers but also for negative and complex numbers.

The expression derived for the function Ln z contains an arbitrary integer n. This means that Ln z is a multiple-valued function. For any value of n we get one of the possible values of the function Ln z; fixing n = 0, for example, gives the principal value: **ln z = ln r + iϕ.**

However, the different values of Ln z, as can be shown, are organically related to one another. In fact, let us fix, for example, the value n = 0 at the point z0 and then let z move continuously around a closed curve C, which surrounds the origin and returns to the point z0 (figure 4). During the motion of z, the angle ϕ will increase continuously, and when z moves around the entire closed contour, ϕ will increase by 2π. In this manner, fixing the value of the logarithm at z0: **(Ln z)₀ = ln r₀ + iϕ₀, **and changing this value continuously while moving z along the closed curve surrounding the origin, we return to the point z0 with another value of the function: **(Ln z)₀ = ln r₀ + i (ϕ₀ + 2π).**

This situation shows us that we may pass continuously from one value of Ln z to another. For this the point need only travel around the origin continuously a sufficient number of times. The point z =0 is called a branch point of the function Ln z.
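This branching behavior is easy to reproduce numerically. A minimal sketch (the path discretization and the step-matching rule are our own choices, not part of the text): continue a chosen value of Ln z along a closed loop around the origin by keeping the imaginary part continuous from point to point, and observe the value return shifted by 2πi.

```python
import cmath, math

def continuous_log(path, w0):
    """Continue a chosen value w0 of Ln z along a discretized path,
    picking at each step the branch closest to the previous value."""
    w = w0
    for z in path[1:]:
        candidate = cmath.log(z)  # principal value, Im in (-pi, pi]
        k = round((w.imag - candidate.imag) / (2 * math.pi))
        w = candidate + 2j * math.pi * k
    return w

# closed loop around the origin, starting and ending at z0 = 1
n = 200
loop = [cmath.exp(2j * math.pi * t / n) for t in range(n + 1)]
w_end = continuous_log(loop, cmath.log(loop[0]))

# after one full turn the logarithm comes back shifted by 2*pi*i
assert abs(w_end - 2j * math.pi) < 1e-9
```

A loop that does not enclose z = 0 would return the original value, which is exactly why the cut from the origin to infinity makes the branch single-valued.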

If we wish to restrict consideration to only one value of the function Ln z, we must prevent the point z from describing a closed curve surrounding the point z = 0. This may be done by drawing a continuous curve from the origin to infinity and preventing the point z from crossing this curve, which is called a cut. If z varies over the cut plane, then it never changes continuously from one value of Ln z to another and thus, starting from a specific value of logarithm at any point z0, we get at each point only one value of the logarithm. The values of the function Ln z selected in this way constitute a single-valued branch of the function.

For example, if the cut lies along the negative part of the axis Ox, we get a single-valued branch of Ln z by restricting the argument to the limits: **(2k – 1) π < ϕ < (2k + 1) π, **where k is an arbitrary integer.

Considering a single-valued branch of the logarithm, we can study its differentiability. Putting **u = ln √(x² + y²), v = ϕ (x, y),**

it is easy to show that Ln z satisfies the Cauchy-Riemann conditions, and its derivative, calculated for example by formula (22), will be equal to: **d Ln z/dz = 1/z. **We emphasize that the derivative of Ln z is also a single-valued function.

**The Connection Between Functions of a Complex Variable and the Problems of Mathematical Physics**

Connection with problems of hydrodynamics. The Cauchy-Riemann conditions relate the problems of mathematical physics to the theory of functions of a complex variable. Let us illustrate this from the problems of hydrodynamics.

Among all possible motions of a fluid an important role is played by the steady motions. This name is given to motions of the fluid for which there is no change with time in the distribution of velocities in space. For example, an observer standing on a bridge and watching the flow of the river around a supporting pillar sees a steady flow. Sometimes a flow is steady for an observer in motion on some conveyance. In the case of a steamship travelling through rough water, the flow will appear nonsteady to an observer on the shore but steady to one on the ship. To a passenger seated in an airplane that is flying with constant velocity, the flow of the air as disturbed by the plane will still appear to be a steady one.

For steady motion the velocity vector V of the particle of the fluid passing through a given point of space does not change with time. If the motion is steady for a moving observer, then the velocity vector does not change with time at points having constant coordinates in a coordinate system which moves with the observer.

Among the motions of a fluid great importance has been attached to the class of plane-parallel motions. These are flows for which the velocity of the particles is everywhere parallel to some plane and the distribution of the velocities is identical on all planes parallel to the given plane.

If we imagine an infinitely extended mass of fluid, flowing around a cylindrical body in a direction perpendicular to a generator, the distribution of velocities will be the same on all planes perpendicular to the generator, so that the flow will be plane-parallel. In many cases the motion of a fluid is approximately plane-parallel. For example, if we consider the flow of air in a plane perpendicular to the wing of an airplane, the motion of the air may be considered as approximately plane-parallel, provided the plane in question is not very close either to the fuselage or to the tip of the wing.

We will show how the theory of functions of a complex variable may be applied to the study of steady plane-parallel flow.

Here we will assume that the liquid is incompressible, i.e., that its density does not change with change in pressure. This assumption holds, for example, for water, but it can be shown that even air may be considered incompressible in the study of its flow, if the velocity of the motion is not very large. The hypothesis of incompressibility of air will not produce a noticeable distortion if the velocities of motion do not exceed the range of 0.6 to 0.8 of the velocity of sound (330 m/sec).

The flow of a liquid is characterized by the distribution of the velocities of its particles. If the flow is plane-parallel, then it is sufficient to determine the velocities of the particles in one of the planes parallel to which the motion occurs.

We will denote by V(x, y, t) the velocity vector of the particle passing through the point with coordinates x, y at the instant of time t. In the case of steady motion, V does not depend on time. The vector V will be given by its projections u and v on the coordinate axes. We consider the trajectories of particles of the fluid. In the case of steady motion, there is no change with time in the velocities of the successive particles issuing from a given point in space. If we know the field of the velocities, i.e., if we know the components of the velocity as functions of x, y, then the trajectories of the particles may be determined by using the fact that the velocity of a particle is everywhere tangent to the trajectory. This gives: **dy/dx = v (x,y) / u (x,y)**

The equation so obtained is the differential equation for the trajectories. The trajectory of a particle in a steady motion is called a streamline. Through each point of the plane passes exactly one streamline.

An important role is played here by the so-called stream function. For a fixed streamline C0, let us consider the imaginary channel bounded by the following four walls: one wall is the cylindrical surface (with generators perpendicular to the plane of the flow) passing through the streamline C0; the second wall is the same cylindrical surface for a neighboring streamline C1; the third is the plane of the flow; and the fourth is a parallel plane at unit distance (figure 5). If we consider two arbitrary cross sections of our channel, denoted by γ1 and γ2, then the quantity of fluid passing through the sections γ1 and γ2 in unit time will be the same. This follows from the fact that the quantity of fluid inside the part of the channel marked off by C0, C1 and γ1, γ2 cannot change, because of the constant density; and since the side walls of the channel C0 and C1 are formed by streamlines, no fluid passes through them. Consequently the same amount of fluid must leave in unit time through γ1 as enters through γ2.

Now by the stream function we mean the function ψ(x, y) that has a constant value on the streamline C1, equal to the quantity of liquid passing in unit time through the cross section of the channel constructed on the curves C0 and C1.

The stream function is defined only up to an arbitrary constant, depending on the choice of the initial streamline C0. If we know the stream function, then the equations for the streamlines are obviously: **ψ (x,y) = Const.**

We now wish to express the components of the velocity of the flow at a given point M(x, y) in terms of the derivatives of the stream function. To this end we consider the channel formed by the streamline C through the point M(x, y) and a neighboring streamline C′ through a nearby point M′(x, y + Δy), together with the two planes parallel to the plane of flow and a unit distance apart. Let us compute the quantity of the liquid q passing through the section MM′ of the channel during time dt.

On the one hand, from the definition of the stream function:** q = (ψ’ – ψ) dt.**

On the other hand, q is equal (figure 6) to the volume of the solid formed by drawing the vector V dt from each point of the section MM′. If MM′ is small, we may assume that V is constant over the whole of MM′ and is equal to the value of V at the point M. The area of the base of the parallelepiped so constructed is Δy × 1 (in figure 6 the unit thickness is not shown), and the altitude is the projection of the vector V dt on the Ox axis, i.e., u dt so that:

**q≈ u ∆y dt** and thus** u ∆y ≈ ∆ψ. **Dividing this equation by Δy, and passing to the limit, we get: u =∂ψ /∂y. A similar argument gives the second component** v = – ∂ψ/∂x.**
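The two formulas u = ∂ψ/∂y, v = –∂ψ/∂x can be checked with finite differences. A minimal sketch, where the stream function of a uniform flow (ψ = u₀y – v₀x, our illustrative choice) should return exactly the constant velocity (u₀, v₀):

```python
def velocity_from_stream(psi, x, y, h=1e-6):
    """u = dpsi/dy, v = -dpsi/dx, by central differences."""
    u = (psi(x, y + h) - psi(x, y - h)) / (2 * h)
    v = -(psi(x + h, y) - psi(x - h, y)) / (2 * h)
    return u, v

# uniform flow with velocity (u0, v0): psi = u0*y - v0*x
u0, v0 = 3.0, 1.5
psi = lambda x, y: u0 * y - v0 * x

u, v = velocity_from_stream(psi, 0.7, -0.2)
assert abs(u - u0) < 1e-6 and abs(v - v0) < 1e-6
```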

To define the field of the velocity vectors, we introduce, in addition to the stream function, another function, which arises from considering the rotation of small particles of the liquid. If we imagine that a particular particle of the fluid were to become solidified, it would in general have a rotatory motion. However, if the motion of the fluid starts from rest and if there is no internal friction between particles, then it can be shown that rotation of the particles of the fluid cannot begin. Motions of a fluid in which there is no rotation of this sort are called irrotational; they play a fundamental role in the study of the motion of bodies in a fluid. In the theory of hydromechanics it is shown that for irrotational flow there exists a second function ϕ(x, y) such that the components of the velocity are expressed by the formulas: **u = ∂ϕ/∂x, v = ∂ϕ/∂y; **the function ϕ is called the velocity potential of the flow. Later, we will consider motions with a velocity potential.

Comparison of the formulas for the components of the velocity from the stream function and from the velocity potential gives the following remarkable result.

The velocity potential ϕ(x, y) and the stream function ψ(x, y) for the flow of an incompressible fluid satisfy the Cauchy-Riemann equations: **∂ϕ/∂x = ∂ψ/∂y; ∂ϕ/∂y = –∂ψ/∂x. **Thus the function **w = ϕ (x, y) + iψ (x, y) **is a differentiable function of a complex variable. Conversely, if we choose an arbitrary differentiable function of a complex variable, its real and imaginary parts satisfy the Cauchy-Riemann conditions and may be considered as the velocity potential and the stream function of the flow of an incompressible fluid. The function w is called the characteristic function of the flow.

Let us now consider the significance of the derivative of w. Using, for example, formula (22), we have: **dw/dz = ∂ϕ/∂x + i ∂ψ/∂x = u – iv,**

or, taking complex conjugates: **u + iv = (dw/dz)‾, (29) **where the bar over dw/dz denotes the complex conjugate.

Consequently, the velocity vector of the flow is equal to the conjugate of the value of the derivative of the characteristic function of the flow.
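This conjugate relation is easy to probe numerically. A minimal sketch (the characteristic function and evaluation point are chosen for illustration); since w is analytic, the derivative may be approximated along the real axis alone.

```python
def velocity(w, z, h=1e-6):
    """Velocity u + iv of the flow with characteristic function w,
    as the conjugate of dw/dz (derivative taken along the real axis,
    which is legitimate because w is analytic)."""
    dw = (w(z + h) - w(z - h)) / (2 * h)
    return dw.conjugate()

A = complex(3.0, -1.5)                   # w = A z, so dw/dz = A
V = velocity(lambda z: A * z, complex(0.4, 0.8))
assert abs(V - A.conjugate()) < 1e-6     # u + iv = conjugate of A
```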

**Examples of plane-parallel flow of a fluid.**

We consider several examples. Let: **w = Az, (30) **where A is a complex constant. From (29) it follows that: **u + iv = Ā.**

Thus the linear function (30) defines the flow of a fluid with constant velocity vector. If we set: **A = u₀ – iv₀, **then, decomposing w into its real and imaginary parts, we have: **ϕ = u₀x + v₀y, ψ = u₀y – v₀x, **so that the streamlines **u₀y – v₀x = const. **will be straight lines parallel to the velocity vector (figure 7).

As a second example we consider the function: **w = Az², **where the constant A is real. In order to graph the flow, we first determine the streamlines. In this case: **ψ (x, y) = 2Axy, **and the equations of the streamlines are: **xy = const.**

These are hyperbolas with the coordinate axes as asymptotes (figure 8). The arrows show the direction of motion of the particles along the streamlines for A > 0. The axes Ox and Oy are also streamlines.

If the friction in the liquid is very small, we will not disturb the rest of the flow if we replace any streamline by a rigid wall, since the fluid will glide along the wall. Using this principle to construct walls along the positive coordinate axes (in figure 8 they are represented by heavy lines), we have a diagram of how the fluid flows irrotationally, in this case around a corner:

An important example of a flow is given by the function: **w = a (z + R²/z), **where a and R are positive real quantities.

The stream function will be: **ψ = ay (1 – R²/(x² + y²)). **In particular, taking the constant equal to zero, we have either y = 0 or x² + y² = R²; thus, a circle of radius R is a streamline. If we replace the interior of this streamline by a solid body, we obtain the flow around a circular cylinder. A diagram of the streamlines of this flow is shown in figure 9. The velocity of the flow may be defined from formula (29) by: **u + iv = (a (1 – R²/z²))‾. **At a great distance from the cylinder we find: **u + iv → a; **i.e., far from the cylinder the velocity tends to a constant value and thus the flow tends to be uniform. Consequently, this function defines the flow which arises from the passage around a circular cylinder of a fluid which is in uniform motion at a distance from the cylinder.
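Both properties of the cylinder flow can be confirmed in a few lines. A minimal sketch, assuming the form w = a(z + R²/z) with illustrative values a = 2, R = 1: the circle |z| = R is a streamline (ψ = Im w vanishes there), and the velocity tends to the constant a far from the cylinder.

```python
import cmath

a, R = 2.0, 1.0
w = lambda z: a * (z + R**2 / z)   # characteristic function of the flow

# the circle |z| = R is a streamline: psi = Im w = 0 on it
for t in range(1, 12):
    z = R * cmath.exp(1j * t)
    assert abs(w(z).imag) < 1e-12

# far from the cylinder the velocity conj(dw/dz) tends to a
z_far = complex(1e6, 1e6)
dwdz = a * (1 - R**2 / z_far**2)
assert abs(dwdz.conjugate() - a) < 1e-9
```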

**Applications to other problems of mathematical physics.**

The theory of functions of a complex variable has found wide application not only in wing theory but in many other problems of hydrodynamics.

However, the domain of application of the theory of functions is not restricted to hydrodynamics; it is much wider than that, including many other problems of mathematical physics. To illustrate, we return to the Cauchy-Riemann conditions: **∂u/∂x = ∂v/∂y; ∂u/∂y=-∂v/∂x **and deduce from them an equation which is satisfied by the real part of an analytic function of a complex variable.

If the first of these equations is differentiated with respect to x, and the second with respect to y, we obtain by addition: **∂²u/∂x² + ∂²u/∂y²=0. **This equation is known as the Laplace equation. A large number of problems of physics and mechanics involve the Laplace equation. For example, if the heat in a body is in equilibrium, the temperature satisfies the Laplace equation. The study of magnetic or electrostatic fields is connected with this equation. In the investigation of the filtration of a liquid through a porous medium, we also arrive at the Laplace equation. In all these problems involving the solution of the Laplace equation the methods of the theory of functions have found wide application.
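That the real part of an analytic function satisfies the Laplace equation can be checked with a five-point stencil. A minimal sketch (step size and sample point chosen for illustration): u = Re z³ = x³ – 3xy² is harmonic, while x² + y² is not.

```python
def laplacian(u, x, y, h=1e-4):
    """Five-point finite-difference approximation of u_xx + u_yy."""
    return (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
            - 4 * u(x, y)) / h**2

# real part of the analytic function z^3: harmonic
u = lambda x, y: x**3 - 3 * x * y**2
assert abs(laplacian(u, 1.2, -0.5)) < 1e-4

# x^2 + y^2 is not harmonic: its Laplacian is 4
assert abs(laplacian(lambda x, y: x**2 + y**2, 1.2, -0.5) - 4) < 1e-4
```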

Not only the Laplace equation but also the more general equations of mathematical physics can be brought into connection with the theory of functions of a complex variable. One of the most remarkable examples is provided by planar problems in the theory of elasticity.

**The Connection of Functions of a Complex Variable with Geometry**

*Geometric properties of differentiable functions. *

As in the case of functions of a real variable, a great role is played in the theory of analytic functions of a complex variable by the geometric interpretation of these functions. Broadly speaking, the geometric properties of functions of a complex variable have not only provided a natural means of visualizing the analytic properties of the functions but have also given rise to a special set of problems. The range of problems connected with the geometric properties of functions has been called the geometric theory of functions. As we said earlier, from the geometric point of view a function of a complex variable w = f(z) is a transformation from the z-plane to the w-plane. This transformation may also be defined by two functions of two real variables: **u = u (x,y); v= v (x,y).**

If we wish to study the character of the transformation in a very small neighborhood of some point, we may expand these functions in Taylor series and restrict ourselves to the leading terms of the expansion: **u ≈ u₀ + (∂u/∂x)(x – x₀) + (∂u/∂y)(y – y₀); v ≈ v₀ + (∂v/∂x)(x – x₀) + (∂v/∂y)(y – y₀),**

where the derivatives are taken at the point (x₀, y₀). Thus, in the neighborhood of a point, any transformation may be considered approximately as an affine transformation.

Let us consider the properties of the transformation effected by an analytic function near the point z = x + iy. Let C be a curve issuing from the point z; on the w-plane the corresponding points trace out the curve Γ, issuing from the point w. If z′ is a neighboring point and w′ is the point corresponding to it, then for z′ → z we will have w′ → w and: **(w′ – w)/(z′ – z) → ƒ′(z). (34)**

This fact may be formulated in the following manner.

The limit of the ratio of the lengths of corresponding chords in the w-plane and in the z-plane at the point z is the same for all curves issuing from the given point z, or as it is also expressed, the ratio of linear elements on the w-plane and on the z-plane at a given point does not depend on the curve issuing from z.

The quantity |f′(z)|, which characterizes the magnification of linear elements at the point z, is called the coefficient of dilation at the point z.

We now suppose that at some point z the derivative f′(z) ≠ 0, so that f′(z) has a uniquely determined argument. Let us compute this argument, using (34): **arg ƒ′(z) = lim [arg (w′ – w) – arg (z′ – z)]; **but arg (w′ – w) is the angle β′ between the chord ww′ and the real axis, and arg (z′ – z) is the angle α′ between the chord zz′ and the real axis. If we denote by α and β the corresponding angles for the tangents to the curves C and Γ at the points z and w (figure 14), then for z′ → z we have α′ → α and β′ → β, so that in the limit we get: **arg ƒ′(z) = β – α. (36)**

This equation shows that arg f′(z) is equal to the angle ϕ through which the direction of the tangent to the curve C at the point z must be turned to assume the direction of the tangent to the curve Γ at the point w. From this property arg f′(z) is called the rotation of the transformation at the point z.

From equation (36) the reader can easily derive the following propositions.

As we pass from the z-plane to the w-plane, the tangents to all curves issuing from a given point are rotated through the same angle.

If C1 and C2 are two curves issuing from the point z, and Γ1 and Γ2 are the corresponding curves from the point w, then the angle between Γ1 and Γ2 at the point w is equal to the angle between C1, and C2 at the point z.

In this manner, for the transformation effected by an analytic function, at each point where f′(z) ≠ 0, all linear elements are changed by the same ratio, and the angles between corresponding directions are not changed.

Transformations with these properties are called conformal transformations.
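The angle-preserving property can be illustrated numerically. A minimal sketch (the map z², the base point, and the two directions are our own illustrative choices): compare the angle between two curves leaving z₀ with the angle between their images.

```python
import cmath

def image_direction(f, z0, d, eps=1e-6):
    """Approximate direction of the image curve at f(z0) for a
    curve leaving z0 along the unit direction d."""
    return cmath.phase(f(z0 + eps * d) - f(z0))

f = lambda z: z * z
z0 = complex(1.0, 1.0)     # f'(z0) = 2 z0 != 0: the map is conformal here

d1, d2 = cmath.exp(0.3j), cmath.exp(1.1j)   # two directions, 0.8 rad apart
b1 = image_direction(f, z0, d1)
b2 = image_direction(f, z0, d2)

# the angle between the image curves equals the angle between the originals
assert abs((b2 - b1) - 0.8) < 1e-5
```

At z = 0, where f′ vanishes, the same probe would show the angle doubled, in agreement with the discussion of w = zⁿ below.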

From the geometric properties just proved for transformations near a point at which f′(z0) ≠ 0, it is natural to expect that in a small neighborhood of z0 the transformation will be one-to-one; i.e., not only will each point z correspond to only one point w, but also conversely each point w will correspond to only one point z. This proposition can be rigorously proved.

To show more clearly how conformal transformations are distinguished from various other types of transformations, it is useful to consider an arbitrary transformation in a small neighborhood of a point. If we consider the leading terms of the Taylor expansions of the functions u and v effecting the transformation, we get, as before, an approximating affine transformation: if in a small neighborhood of the point (x₀, y₀) we ignore the terms of higher order, then our transformation will act like this affine transformation. The affine transformation has an inverse if its determinant does not vanish: **∆ = (∂u/∂x)(∂v/∂y) – (∂u/∂y)(∂v/∂x) ≠ 0.**

If Δ = 0, then to describe the behavior of the transformation near the point (x0, y0) we must consider terms of higher order.

In case u + iv is an analytic function, we can express the derivatives with respect to y in terms of the derivatives with respect to x by using the Cauchy-Riemann conditions, from which we get: **∆ = (∂u/∂x)² + (∂v/∂x)² = |ƒ′(z₀)|²;**

i.e., the transformation has an inverse when f′(z₀) ≠ 0. If we set f′(z₀) = r (cos ϕ + i sin ϕ), then: **∂u/∂x = r cos ϕ, ∂v/∂x = r sin ϕ,**

and the transformation near the point (x₀, y₀) will have the form: **u ≈ u₀ + r [(x – x₀) cos ϕ – (y – y₀) sin ϕ]; v ≈ v₀ + r [(x – x₀) sin ϕ + (y – y₀) cos ϕ].**

These formulas show that in the case of an analytic function w = u + iv, the transformation near the point (x₀, y₀) consists of rotation through the angle ϕ and dilation with coefficient r. In fact, the expressions inside the brackets are the well-known formulas from analytic geometry for rotation in the plane through an angle ϕ, and multiplication by r gives the dilation.

To form an idea of the possibilities when f′(z) = 0, it is useful to consider the function: **w = zⁿ. (37) **The derivative of this function, w′ = nzⁿ⁻¹, vanishes for z = 0. The transformation (37) is most conveniently considered by using polar coordinates or the trigonometric form of a complex number. Let: **z = r (cos ϕ + i sin ϕ), w = ρ (cos θ + i sin θ). **Using the fact that in multiplying complex numbers the moduli are multiplied and the arguments added, we get: **ρ = rⁿ, θ = nϕ. **From the last formula we see that the ray ϕ = const. of the z-plane transforms into the ray θ = nϕ = const. in the w-plane. Thus an angle α between two rays in the z-plane will transform into an angle of magnitude β = nα. The transformation of the z-plane into the w-plane ceases to be one-to-one. In fact, a given point w with modulus ρ and argument θ may be obtained as the image of each of the n points with moduli: **r = ⁿ√ρ **and arguments: **ϕ = (θ + 2πk)/n, k = 0, 1, …, n – 1. **When raised to the power n, the moduli of the corresponding points will all be equal to ρ and their arguments will be equal to: **θ + 2πk; **and since changing the value of the argument by a multiple of 2π does not change the geometric position of the point, all the images on the w-plane are identical.
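The n preimages can be exhibited explicitly. A minimal sketch (n, ρ and θ are illustrative values): construct the n points with modulus ⁿ√ρ and arguments (θ + 2πk)/n and verify that each is carried to the same w by zⁿ.

```python
import cmath, math

n = 5
w = cmath.rect(32.0, 1.0)     # target point: modulus rho = 32, argument theta = 1
rho, theta = abs(w), cmath.phase(w)

# the n preimages: modulus rho**(1/n), arguments (theta + 2*pi*k)/n
roots = [cmath.rect(rho ** (1 / n), (theta + 2 * math.pi * k) / n)
         for k in range(n)]

for z in roots:
    assert abs(z ** n - w) < 1e-9                    # each maps to the same w
assert len({round(z.real, 6) for z in roots}) == n   # and they are all distinct
```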

**Conformal transformations. **

If an analytic function w = f(z) takes a domain D of the z-plane into a domain Δ of the w-plane in a one-to-one manner, then we say that it effects a conformal transformation of the domain D into the domain Δ.

The great role of conformal transformations in the theory of functions and its applications is due to the following almost trivial theorem.

If ζ = F(w) is an analytic function on the domain Δ, then the composite function F[f(z)] is an analytic function on the domain D. This theorem results from the equation: **∆ζ/∆z = (∆ζ/∆w)•(∆w/∆z).**

In view of the fact that the functions ζ = F(w) and w = f(z) are analytic, we conclude that both factors on the right side have a limit, and thus at each point of the domain D the quotient Δζ/Δz has a unique limit dζ/dz. This shows that the function ζ = F[f(z)] is analytic.

The theorem just proved shows that the study of analytic functions on the domain Δ may be reduced to the study of analytic functions on the domain D. If the geometric structure of the domain D is simpler, this fact simplifies the study of the functions.

The most important class of domains on which it is necessary to study analytic functions is the class of simply connected domains. This is the name given to domains whose boundary consists of one piece, as opposed to domains whose boundary falls into several pieces (for example, the domains illustrated in figures 15b and 15c).

We note that sometimes we are interested in investigating functions on a domain lying outside a curve rather than inside it. If the boundary of such a domain consists of only one piece, then the domain is also called simply connected (figure 15d).

At the foundations of the theory of conformal transformations lies the following remarkable theorem of Riemann.

For an arbitrary simply connected domain Δ, it is possible to construct an analytic function which effects a conformal transformation of the circle with unit radius and center at the origin into the given domain in such a way that the center of the circle is transformed into a given point w0 of the domain Δ, and a curve in an arbitrary direction at the center of the circle transforms into a curve with an arbitrary direction at the point w0. This theorem shows that the study of functions of a complex variable on arbitrary simply connected domains may be reduced to the study of functions defined, for example, on the unit circle.
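For the disk itself, a conformal transformation carrying the center to a prescribed point w₀ can be written down explicitly. A minimal sketch (not the general Riemann construction; the Möbius map below and the choice of w₀ are our own illustration): f(z) = (z + w₀)/(1 + w̄₀z) maps the unit circle onto itself and sends 0 to w₀.

```python
import cmath

w0 = complex(0.3, -0.4)     # chosen image of the center, |w0| < 1
f = lambda z: (z + w0) / (1 + w0.conjugate() * z)

assert abs(f(0) - w0) < 1e-12          # the center goes to w0

# the boundary goes to the boundary: |f(z)| = 1 whenever |z| = 1
for t in range(8):
    z = cmath.exp(2j * cmath.pi * t / 8)
    assert abs(abs(f(z)) - 1) < 1e-12
```

Composing such maps with rotations z → e^{iα}z realizes the remaining freedom in Riemann's theorem, the direction prescribed at w₀.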

We will now explain in general outline how these facts may be applied to problems in the theory of the wing of an airplane. Let us suppose that we wish to study the flow around a curved profile of arbitrary shape.

If we can construct a conformal transformation of the domain outside the profile to the domain outside the unit circle, then we can make use of the characteristic function for the flow around the circle to construct the characteristic function for the flow around the profile.

Let ζ be the plane of the circle, z the plane of the profile, and ζ = f(z) a function effecting the transformation of the domain outside the profile to the domain outside the circle.

We denote by a the point of the circle corresponding to the edge A of the profile and construct the circulatory flow past the circle with one of the streamlines leaving the circle at a (figure 16). This function will be denoted by W(ζ).

The streamlines of this flow are defined by the equation: **Ψ = const.**

We now consider the function: **w(z) = W[ƒ(z)], **and set: **w = ϕ + iψ.**

We show that w(z) is the characteristic function of the flow past the profile with a streamline leaving the profile at the point A. First of all the flow defined by the function w(z) is actually a flow past the profile.

To prove this, we must show that the contour of the profile is a streamline, i.e., that on the contour of the profile: **ψ (x, y) = C. **This follows from **ψ (x, y) = Ψ (ξ, η)**, since the points (x, y) lying on the profile correspond to the points (ξ, η) lying on the circle, where Ψ(ξ, η) = const.

It is also simple to show that A is a stagnation point for the flow, and it may be proved that by suitable choice of velocity for the flow past the circle, we may obtain a flow past the profile with any desired velocity.

The important role played by conformal transformations in the theory of functions and their applications gave rise to many problems of finding the conformal transformation of one domain into another of a given geometric form. In a series of simple but useful cases this problem may be solved by means of elementary functions. But in the general case the elementary functions are not enough. As we saw earlier, the general theorem in the theory of conformal transformations was stated by Riemann, although he did not give a rigorous proof. In fact, a complete proof required the efforts of many great mathematicians over a period of several decades.

In close connection with the different approaches to the proof of Riemann’s theorem came approximation methods for the general construction of conformal transformations of domains. The actual construction of the conformal transformation of one domain onto another is sometimes a very difficult problem. For investigation of many of the general properties of functions, it is often not necessary to know the actual transformation of one domain onto another, but it is sufficient to exploit some of its geometric properties. This fact has led to a wide study of the geometric properties of conformal transformations. To illustrate the nature of theorems of this sort we will formulate one of them.

Let the circle of unit radius on the z-plane with center at the origin be transformed into some domain (figure 17). If we consider a completely arbitrary transformation of the circle into the domain Δ, we cannot make any statements about its behavior at the point z = 0. But for conformal transformations we have the following remarkable theorem.

The dilation at the origin does not exceed four times the radius r of the circle with center at w0 inscribed in the domain: **|ƒ′(0)| ≤ 4r**.

Various questions in the theory of conformal transformations were considered in a large number of studies by Soviet mathematicians. In these works exact formulas were derived for many interesting classes of conformal transformations, methods for approximate calculation of conformal transformations were developed, and many general geometric theorems on conformal transformations were established.

**Quasi-conformal transformations.**

Conformal transformations are closely connected with the investigation of analytic functions, i.e., with the study of a pair of functions satisfying the Cauchy-Riemann conditions: **∂u/∂x = ∂v/∂y; ∂u/∂y = −∂v/∂x**
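As a concrete illustration (not from the original text), the Cauchy-Riemann conditions can be checked numerically for f(z) = z², whose real and imaginary parts are u = x² − y² and v = 2xy. A minimal sketch, with an arbitrary sample point:

```python
# Finite-difference check of the Cauchy-Riemann conditions for f(z) = z^2,
# whose real and imaginary parts are u = x^2 - y^2 and v = 2xy.
# The sample point (x0, y0) is arbitrary.
def u(x, y):
    return x * x - y * y

def v(x, y):
    return 2 * x * y

def partials(g, x, y, h=1e-6):
    gx = (g(x + h, y) - g(x - h, y)) / (2 * h)
    gy = (g(x, y + h) - g(x, y - h)) / (2 * h)
    return gx, gy

x0, y0 = 1.3, -0.7
ux, uy = partials(u, x0, y0)
vx, vy = partials(v, x0, y0)
print(abs(ux - vy))   # du/dx - dv/dy: close to 0
print(abs(uy + vx))   # du/dy + dv/dx: close to 0
```

Both residuals vanish to within finite-difference accuracy, as the conditions require.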

But many problems in mathematical physics involve more general systems of differential equations, which may also be connected with transformations from one plane to another, and these transformations will have specific geometric properties in the neighborhood of points in the Oxy plane. To illustrate, we consider the following example of differential equations:

**∂u/∂x = p(x, y) ∂v/∂y; ∂u/∂y = −p(x, y) ∂v/∂x** (38)

In this manner, from equations (38) it follows that at every point the infinitesimal ellipse that is transformed into a circle has its semiaxes completely determined by the transformation, both with respect to their direction and to the ratio of their lengths. It can be shown that this geometric property completely characterizes the system of differential equations (38); i.e., if the functions u and v effect a transformation with the given geometric property, then they satisfy this system of equations. In this way, the problem of investigating the solutions of equations (38) is equivalent to investigating transformations with the given properties.

We note, in particular, that for the Cauchy-Riemann equations this property is formulated in the following manner.

An infinitesimal circle with center at the point (x0, y0) is transformed into an infinitesimal circle with center at the point (u0, v0).

A very wide class of equations of mathematical physics may be reduced to the study of transformations with the following geometric properties.

For each point (x, y) of the argument plane, we are given the direction of the semiaxes of two ellipses and also the ratio of the lengths of these semiaxes. We wish to construct a transformation of the Oxy plane to the Ouv plane such that an infinitesimal ellipse of the first family transforms into an infinitesimal ellipse of the second with center at the point (u, v).

The idea of studying transformations defined by systems of differential equations made it possible to extend the methods of the theory of analytic functions to a very wide class of problems. Lavrent’ev and his students developed the study of quasiconformal transformations and found a large number of applications to various problems of mathematical physics, mechanics, and geometry. It is interesting to note that the study of quasiconformal transformations has proved very fruitful in the theory of analytic functions itself. Of course, we cannot dwell here on all the various applications of the geometric method in the theory of functions of a complex variable.

**The Line Integral; Cauchy’s Formula and Its Corollaries**

*Integrals of functions of a complex variable. *

In the study of analytic functions the concept of the integral of a function of a complex variable plays a very important role. Corresponding to the definite integral of a function of a real variable, we here deal with the integral of a function of a complex variable along a curve. We consider in the plane a curve C beginning at the point z0 and ending at the point z, and a function f(z) defined on a domain containing the curve C. We divide the curve C into small segments (figure 18) at the points z0, z1, ···, zn = z and form the sum: **S = ƒ(z0)(z1 − z0) + ƒ(z1)(z2 − z1) + ··· + ƒ(zn−1)(zn − zn−1)**

If the function f(z) is continuous and the curve C has finite length, we can prove, just as for real functions, that as the number of points of division is increased and the distance between neighboring points decreases to zero, the sum S approaches a completely determined limit. This limit is called the integral along the curve C and is denoted by: **∫C ƒ(z) dz**

We note that in this definition of the integral we have distinguished between the beginning and the end of the curve C; in other words, we have chosen a specific direction of motion on the curve C.

It is easy to prove a number of simple properties of the integral.

1. The integral of the sum of two functions is equal to the sum of the integrals of the individual functions:

All these properties are obvious for the approximating sums and carry over to the integral in passing to the limit.

5. If the length of the curve C is equal to L and if everywhere on C the inequality **|ƒ(z)| ≤ M** holds, then **|∫C ƒ(z) dz| ≤ ML**
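A small numerical sketch of property 5 (not from the original text): we approximate ∫C z² dz on the straight segment from 0 to 1 + i by a midpoint Riemann sum and compare |∫| with ML. The choice of f and of the segment is arbitrary.

```python
# Numerical sketch of the ML bound: f(z) = z^2 on the straight segment
# from 0 to 1+1j.  The integral is approximated by a midpoint Riemann sum;
# M is the maximum of |f| sampled along the segment.
f = lambda z: z * z
z0, z1 = 0 + 0j, 1 + 1j
n = 10000
integral = 0j
for k in range(n):
    mid = z0 + (z1 - z0) * (k + 0.5) / n
    integral += f(mid) * (z1 - z0) / n

L = abs(z1 - z0)                                  # length of the segment
M = max(abs(f(z0 + (z1 - z0) * k / n)) for k in range(n + 1))
print(abs(integral) <= M * L)                     # True
print(abs(integral - (z1 ** 3 - z0 ** 3) / 3))    # ~0 (antiderivative z^3/3)
```

Here |∫| ≈ 0.94 while ML ≈ 2.83, so the bound holds with room to spare.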

Let us prove this property. It is sufficient to prove the inequality for the sum S, since then it will carry over in the limit for the integral also. For the sum

But the sum in the second factor is equal to the sum of the lengths of the segments of the broken line inscribed in the curve C with vertices at the points zk. The length of the broken line, as is well known, is not greater than the length of the curve, so that: **|S| ≤ ML**

We consider the integral of the simplest function f(z) = 1. Obviously in this case: **∫C dz = z − z0**

This result shows that for the function f(z) = 1 the value of the integral for all curves joining the points z0 and z is the same. In other words, the value of the integral depends only on the beginning and end points of the path of integration. But it is easy to show that this property does not hold for arbitrary functions of a complex variable. For example, if f(z) = x, then a simple computation shows that **∫C1 x dz ≠ ∫C2 x dz**

where C1 and C2 are the paths of integration shown in figure 19.

We leave it to the reader to verify these equations.
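The path dependence for f(z) = x = Re z can also be seen numerically. The broken-line paths below (right-then-up versus up-then-right between 0 and 1 + i) are a stand-in for the figure-19 paths, which are not reproduced here; the point is only that the two values differ.

```python
# Integrals of f(z) = Re z from 0 to 1+1j along two different broken-line
# paths, each approximated by a midpoint Riemann sum over straight segments.
def integrate(f, points, n=2000):
    total = 0j
    for a, b in zip(points, points[1:]):
        for k in range(n):
            z = a + (b - a) * (k + 0.5) / n
            total += f(z) * (b - a) / n
    return total

f = lambda z: z.real
I1 = integrate(f, [0, 1, 1 + 1j])      # right, then up
I2 = integrate(f, [0, 1j, 1 + 1j])     # up, then right
print(I1)   # approximately 0.5 + 1j
print(I2)   # approximately 0.5
```

The two integrals differ, confirming that for f(z) = x the value depends on the path and not only on its endpoints.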

A remarkable fact in the theory of analytic functions is the following theorem of Cauchy.

If f(z) is differentiable at every point of a simply connected domain D, then the integrals over all paths joining two arbitrary points of the domain z0 and z are the same.

We will not give a proof of Cauchy’s theorem here, but refer the interested reader to any course in the theory of functions of a complex variable. Let us mention here some important consequences of this theorem.
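By contrast, for the differentiable function f(z) = z² the two broken-line paths from 0 to 1 + i (right-then-up and up-then-right) give equal integrals, as Cauchy's theorem predicts. A quick numerical check, not from the original text:

```python
# For the differentiable function f(z) = z^2, Cauchy's theorem predicts
# equal integrals along any two paths from 0 to 1+1j; we compare two
# broken-line paths using a midpoint Riemann sum over straight segments.
def integrate(f, points, n=4000):
    total = 0j
    for a, b in zip(points, points[1:]):
        for k in range(n):
            z = a + (b - a) * (k + 0.5) / n
            total += f(z) * (b - a) / n
    return total

f = lambda z: z * z
I1 = integrate(f, [0, 1, 1 + 1j])
I2 = integrate(f, [0, 1j, 1 + 1j])
print(abs(I1 - I2))                    # ~0
print(abs(I1 - (1 + 1j) ** 3 / 3))     # ~0: both equal F(1+1j) - F(0), F = z^3/3
```

Both paths produce the value of the antiderivative z³/3 at the endpoint, illustrating the path independence the theorem guarantees.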

First of all, Cauchy’s theorem allows us to introduce the indefinite integral of an analytic function. For let us fix the point z0 and consider the integral along curves connecting z0 and z: **F(z) = ∫ ƒ(ζ) dζ**, taken over a curve from z0 to z.

Here we may take the integral over any curve joining z0 and z, since changing the curve does not change the value of the integral, which thus depends only on z. The function F(z) is called an indefinite integral of f(z).

An indefinite integral of f(z) has a derivative equal to f(z).

In many applications it is convenient to have a slightly different formulation of Cauchy’s theorem, as follows:

If f(z) is everywhere differentiable in a simply connected domain, then the integral over any closed contour lying in this domain is equal to zero: **∮ ƒ(z) dz = 0**

This is obvious, since for a closed contour the beginning and end coincide, so that z0 = z and they may be joined by a null path, along which the integral is zero.

By a closed contour we will understand a contour traversed in the counterclockwise direction. If the contour Γ is traversed in the clockwise direction, we will denote it by Γ⁻.

**The Cauchy integral. **

On the basis of the last theorem we can prove the following fundamental formula of Cauchy, which expresses the value of a differentiable function at interior points of a closed contour in terms of the values of the function on the contour itself: **ƒ(z) = 1/(2πi) ∮C ƒ(ζ)/(ζ − z) dζ**

We give a proof of this formula. Let z be fixed and ζ be an independent variable. The function: **ϕ(ζ) = ƒ(ζ)/(ζ − z)**

will be continuous and differentiable at every point ζ inside the domain D, with the exception of the point ζ = z, where the denominator vanishes, a circumstance that prevents the application of Cauchy’s theorem to the function ϕ(ζ) on the contour C.

We consider a circle Kρ with center at the point z and radius ρ, and show that: **∮C ϕ(ζ) dζ = ∮Kρ ϕ(ζ) dζ**

To this end we construct the auxiliary closed contour Γρ, consisting of the contour C, the path γρ connecting C with the circle, and the circle Kρ, taken with the opposite orientation (figure 20). The contour Γρ is indicated by arrows. Since the point ζ = z is excluded, the function ϕ(ζ) is differentiable everywhere inside Γρ and thus: **∮Γρ ϕ(ζ) dζ = 0**
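Cauchy's formula f(z) = (1/2πi) ∮C f(ζ)/(ζ − z) dζ can be verified numerically. The sketch below (my own illustration, not from the text) takes f(ζ) = e^ζ, the unit circle as C, and an arbitrary interior point z:

```python
import cmath

# Numerical check of Cauchy's formula f(z) = (1/(2*pi*i)) * contour integral
# of f(zeta)/(zeta - z), with f = exp and C the unit circle; the contour
# integral is approximated by a midpoint Riemann sum over small chords.
f = cmath.exp
z = 0.3 + 0.2j                        # arbitrary point inside the unit circle
n = 20000
total = 0j
for k in range(n):
    t0 = 2 * cmath.pi * k / n
    t1 = 2 * cmath.pi * (k + 1) / n
    a = cmath.exp(1j * t0)            # chord endpoints on the circle
    b = cmath.exp(1j * t1)
    zeta = cmath.exp(1j * (t0 + t1) / 2)
    total += f(zeta) / (zeta - z) * (b - a)
value = total / (2j * cmath.pi)
print(abs(value - f(z)))              # close to 0
```

The reconstructed value agrees with e^z to high accuracy, exactly as the formula asserts.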

**Expansion of differentiable functions in a power series.**

We apply Cauchy’s theorem to establish two basic properties of differentiable functions of a complex variable.

Every function of a complex variable that has a first derivative in a domain D has derivatives of all orders.

In fact, inside a closed contour C our function may be expressed by the Cauchy integral formula: **ƒ(z) = 1/(2πi) ∮C ƒ(ζ)/(ζ − z) dζ**

The function of z under the sign of integration is a differentiable function; thus, differentiating under the integral sign, we get: **ƒ′(z) = 1/(2πi) ∮C ƒ(ζ)/(ζ − z)² dζ**

Under the integral sign there is again a differentiable function; thus we can again differentiate, obtaining: **ƒ″(z) = 2!/(2πi) ∮C ƒ(ζ)/(ζ − z)³ dζ**

Continuing the differentiation, we get the general formula: **ƒ⁽ⁿ⁾(z) = n!/(2πi) ∮C ƒ(ζ)/(ζ − z)ⁿ⁺¹ dζ**

In this manner we may compute the derivative of any order. To make this proof completely rigorous, we need also to show that the differentiation under the integral sign is valid. We will not give this part of the proof.
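The general formula f⁽ⁿ⁾(z) = n!/(2πi) ∮C f(ζ)/(ζ − z)ⁿ⁺¹ dζ can likewise be tested numerically. For f = exp every derivative is again exp, which gives a convenient check; the circle radius R = 1 is an arbitrary choice in this sketch:

```python
import cmath
import math

# Numerical check of f^(n)(z) = n!/(2*pi*i) * contour integral of
# f(zeta)/(zeta - z)^(n+1), for f = exp (all of whose derivatives are exp).
# C is a circle of radius R about z, parametrized by the midpoint rule.
def contour_derivative(f, z, n, m=20000, R=1.0):
    total = 0j
    for k in range(m):
        t = 2 * math.pi * (k + 0.5) / m
        zeta = z + R * cmath.exp(1j * t)
        dzeta = 1j * R * cmath.exp(1j * t) * (2 * math.pi / m)
        total += f(zeta) / (zeta - z) ** (n + 1) * dzeta
    return math.factorial(n) * total / (2j * math.pi)

z = 0.1 - 0.4j
for n in (1, 2, 3):
    print(abs(contour_derivative(cmath.exp, z, n) - cmath.exp(z)))  # each ~0
```

Each computed derivative matches e^z, so the contour formula delivers derivatives of every order from values of f on the circle alone.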

The second property is the following:

If f(z) is everywhere differentiable on a circle K with center at the point a, then f(z) can be expanded in a Taylor series: **ƒ(z) = ƒ(a) + ƒ′(a)(z − a) + ƒ″(a)/2! (z − a)² + ··· + ƒ⁽ⁿ⁾(a)/n! (z − a)ⁿ + ···**

which converges inside K.

In §1 we defined analytic functions of a complex variable as functions that can be expanded in power series. This last theorem says that every differentiable function of a complex variable is an analytic function. This is a special property of functions of a complex variable that has no analogue in the real domain. A function of a real variable that has a first derivative may fail to have a second derivative at every point.

We prove the theorem formulated in the previous paragraphs.

Let f(z) have a derivative inside and on the boundary of the circle K with center at the point a. Then inside K the function f(z) can be expressed by the Cauchy integral: **ƒ(z) = 1/(2πi) ∮K ƒ(ζ)/(ζ − z) dζ** (43)

Using the fact that the point z lies inside the circle and ζ is on the circumference, we get: **|(z − a)/(ζ − a)| < 1** (44)

so that from the basic formula for a geometric progression: **1/(ζ − z) = 1/[(ζ − a) − (z − a)] = ∑ (z − a)ⁿ/(ζ − a)ⁿ⁺¹** (45)

and the series on the right converges. Using (44) and (45), we can represent formula (43) in the form of a series under the integral sign. We now apply term-by-term integration to the series inside the brackets. (The validity of this operation can be established rigorously.) Removing the factor (z − a)ⁿ, which does not depend on ζ, from the integral sign in each term, we get a power series in (z − a). Now using the integral formulas for the sequence of derivatives, we may write its coefficients as ƒ⁽ⁿ⁾(a)/n!, which yields the Taylor series.

We have shown that differentiable functions of a complex variable can be expanded in power series. Conversely, functions represented by power series are differentiable. Their derivatives may be found by term-by-term differentiation of the series. (The validity of this operation can be established rigorously.)

**Entire functions.** A power series gives an analytic representation of a function only in some circle. This circle has a radius equal to the distance to the nearest point at which the function ceases to be analytic, i.e., to the nearest singular point of the function.

Among analytic functions it is natural to single out the class of functions that are analytic for all finite values of their argument. Such functions are represented by power series converging for all values of the argument z, and are called entire functions of z. If we consider expansions about the origin, then an entire function will be expressed by a series of the form: **ƒ(z) = c0 + c1z + c2z² + ···**

If in this series all the coefficients, from a certain one on, are equal to zero, the function is simply a polynomial, or an entire rational function. If in the expansion there are infinitely many terms that are different from zero, then the entire function is called transcendental.

Examples of such functions are: **w = eᶻ, w = sin z, w = cos z**

In the study of properties of polynomials, an important role is played by the distribution of the roots of the equation: **P(z) = 0**

More generally, we may raise the question of the distribution of the points at which the polynomial takes a given value A: **P(z) = A**

The fundamental theorem of algebra says that every polynomial takes a given value A in at least one point. This property cannot be extended to an arbitrary entire function. For example, the function w = eᶻ does not take the value zero at any point of the z-plane. However, we do have the following theorem of Picard: Every entire function assumes every arbitrarily preassigned value an infinite number of times, with the possible exception of one value.

The distribution of the points of the plane at which an entire function takes on a given value A is one of the central questions in the theory of entire functions.

The number of roots of a polynomial is equal to its degree. The degree of a polynomial is closely related to the rapidity of growth of |P(z)| as |z| → ∞. In fact, we can write: **|Pₙ(z)| = |z|ⁿ · |aₙ + aₙ₋₁/z + ··· + a₀/zⁿ|**

and since for |z| → ∞ the second factor tends to |aₙ|, a polynomial of degree n, for large values of |z|, grows like |aₙ| · |z|ⁿ. So it is clear that for larger values of n, the growth of |Pₙ(z)| for |z| → ∞ will be faster, and also the polynomial will have more roots. It turns out that this principle is also valid for entire functions. However, for an entire function f(z), generally speaking, there are infinitely many roots, and thus the question of the number of roots has no meaning. Nevertheless, we can consider the number of roots n(r, a) of the equation **ƒ(z) = a** in a circle of radius r, and investigate how this number changes with increasing r.

The rate of growth of n(r, a) proves to be connected with the rate of growth of the maximum M(r) of the modulus of the entire function on the circle of radius r. As stated earlier, for an entire function there may exist one exceptional value of a for which the equation may not have even one root. For all other values of a, the rate of growth of the number n(r, a) is comparable to the rate of growth of the quantity ln M(r). We cannot give more exact formulations here for these laws.
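For the entire function e^z this growth law can be made concrete (my own illustration): the roots of e^z = a are z = ln |a| + i(arg a + 2πk), so n(r, a) can be counted directly and compared with ln M(r) = r. A sketch with the arbitrary value a = 2 + i:

```python
import cmath
import math

# Roots of e^z = a lie at z = ln|a| + i(arg a + 2*pi*k).  We count how
# many fall inside the circle |z| < r and compare with ln M(r) = r,
# expecting n(r, a) to grow like r/pi, i.e. like (1/pi) * ln M(r).
a = 2 + 1j
x0 = math.log(abs(a))
phi = cmath.phase(a)

def n_roots(r):
    K = int(r / (2 * math.pi)) + 2          # enough values of k to cover |z| < r
    return sum(1 for k in range(-K, K + 1)
               if abs(complex(x0, phi + 2 * math.pi * k)) < r)

for r in (10, 50, 100):
    print(r, n_roots(r), round(r / math.pi, 2))
```

The counts track r/π closely, in line with the comparability of n(r, a) and ln M(r) described above.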

The properties of the distribution of the roots of entire functions are connected with problems in the theory of numbers and have enabled mathematicians to establish many important properties of the Riemann zeta function, on the basis of which it is possible to prove many theorems about prime numbers.

**On analytic representation of functions**.

We saw previously that in a neighborhood of every point where a function is differentiable it may be defined by a power series. For an entire function the power series converges on the whole plane and gives an analytic expression for the function wherever it is defined. In case the function is not entire, the Taylor series, as we know, converges only in a circle whose circumference passes through the nearest singular point of the function. Consequently the power series does not allow us to compute the function everywhere, and so it may happen that an analytic function cannot be given by a power series on its whole domain of definition. For a meromorphic function an analytic expression giving the function on its whole domain of definition is the expansion in principal parts.

If a function is not entire but is defined in some circle or if we have a function defined in some domain but we want to study it only in a circle, then the Taylor series may serve to represent it. But when we study the function in domains that are different from circles, there arises the question of finding an analytic expression for the function suitable for representing it on the whole domain. A power series giving an expression for an analytic function in a circle has as its terms the simplest polynomials anzn. It is natural to ask whether we can expand an analytic function in an arbitrary domain in a more general series of polynomials. Then every term of the series can again be computed by arithmetic operations, and we obtain a method for representing functions that is once more based on the simplest operations of arithmetic. The general answer to this question is given by the following theorem.

An analytic function, given on an arbitrary domain whose boundary consists of one curve, may be expanded in a series of polynomials: **ƒ(z) = P₁(z) + P₂(z) + ··· + Pₙ(z) + ···**

The theorem formulated gives only a general answer to the question of expanding a function in a series of polynomials in an arbitrary domain but does not yet allow us to construct the series for a given function, as was done earlier in the case of the Taylor series. This theorem raises rather than solves the question of expanding functions in series of polynomials. Questions of the construction of the series of polynomials, given the function or some of its properties, questions of the construction of more rapidly converging series or of series closely related to the behavior of the function itself, and questions of the structure of a function defined by a given series of polynomials all represent an extensive development of the theory of approximation of functions by series of polynomials. In the creation of this theory a large role has been played by Soviet mathematicians, who have derived a series of fundamental results.

**Uniqueness Properties of Analytic Functions.**

One of the most remarkable properties of analytic functions is their uniqueness, as expressed in the following theorem.

If in the domain D two analytic functions are given that agree on some curve C lying inside the domain, then they agree on the entire domain.

The proof of this theorem is very simple. Let f1(z) and f2(z) be the two functions analytic in the domain D and agreeing on the curve C. The difference: **ϕ(z) = ƒ1(z) − ƒ2(z)**

will be an analytic function on the domain D and will vanish on the curve C. We now show that ϕ(z) = 0 at every point of the domain D. In fact, if in the domain D there exists a point z0 (figure 21) at which ϕ(z0) ≠ 0, we extend the curve C by a curve Γ leading to the point z0 and proceed along Γ toward z0 as long as the function remains equal to zero on Γ. Let ζ be the last point of Γ that is accessible in this way. If ϕ(z0) ≠ 0, then ζ ≠ z0, and on a segment of the curve Γ beyond ζ the function ϕ(z), by the definition of the point ζ, will not be equal to zero. We show that this is impossible. In fact, on the part Γζ of the curve Γ up to the point ζ, we have ϕ(z) = 0. We may compute all derivatives of the function ϕ(z) on Γζ using only the values of ϕ(z) on Γζ, so that on Γζ all derivatives of ϕ(z) are equal to zero. In particular, at the point ζ: **ϕ(ζ) = ϕ′(ζ) = ϕ″(ζ) = ··· = 0**

Let us expand the function ϕ(z) in a Taylor series at the point ζ. All the coefficients of the expansion vanish, so that we get: **ϕ(z) = 0** in some circle with center at the point ζ, lying in the domain D. In particular, it follows that the equation ϕ(z) = 0 must be satisfied on some segment of the curve Γ lying beyond ζ. The assumption ϕ(z0) ≠ 0 has thus led to a contradiction, and the theorem is proved.

This theorem shows that if we know the values of an analytic function on some segment of a curve or on some part of a domain, then the values of the function are uniquely determined everywhere in the given domain. Consequently, the values of an analytic function in various parts of the argument plane are closely connected with one another.

To realize the significance of this uniqueness property of an analytic function, it is only necessary to recall that the general definition of a function of a complex variable allows any law of correspondence between values of the argument and values of the function. With such a definition there can, of course, be no question of determining the values of a function at any point by its values in another part of the plane. We see that the single requirement of differentiability of a function of a complex variable is so strong that it determines the connection between values of the function at different places.

We also emphasize that in the theory of functions of a real variable the differentiability of a function does not in itself lead to any similar consequences. In fact, we may construct examples of functions that are infinitely often differentiable and agree on some part of the Ox axis but differ elsewhere. For example, a function equal to zero for all negative values of x may be defined in such a manner that for positive x it differs from zero and has continuous derivatives of every order. For this it is sufficient, for example, to set, for x > 0:

**ƒ(x) = e^(−1/x)**

**Analytic continuation and complete analytic functions. **

The domain of definition of a given function of a complex variable is often restricted by the very manner of defining the function. Consider a very elementary example. Let the function be given by the series: **ƒ(z) = 1 + z + z² + ···** (49)

This series, as is well known, converges in the unit circle and diverges outside this circle. Thus the analytic function given by formula (49) is defined only in this circle. On the other hand, we know that the sum of the series (49) in the circle |z| < 1 is expressed by the formula **ƒ(z) = 1/(1 − z)** (50). Formula (50) has meaning for all values of z ≠ 1. From the uniqueness theorem it follows that expression (50) represents the unique analytic function agreeing with the sum of the series (49) in the circle |z| < 1. So this function, given at first only in the unit circle, has been extended to the whole plane.
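The relation between the geometric series and the closed formula 1/(1 − z) is easy to see numerically (sample points are my own, arbitrary choices): inside the unit circle the partial sums converge to the formula; outside they diverge, while the formula itself remains meaningful.

```python
# Partial sums of the geometric series 1 + z + z^2 + ... versus the
# closed formula 1/(1-z): agreement inside the unit circle, divergence
# outside, while the formula itself stays well defined.
def partial_sum(z, n):
    return sum(z ** k for k in range(n))

inside = 0.5 + 0.3j
print(abs(partial_sum(inside, 200) - 1 / (1 - inside)))   # ~0
outside = 2 + 1j
print(abs(partial_sum(outside, 50)))    # astronomically large: the series diverges
print(1 / (1 - outside))                # yet the formula is well defined here
```

This is exactly the analytic continuation described above: the formula extends the function far beyond the circle where the series defines it.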

If we have a function f(z) defined inside some domain D, and there exists another function F(z) defined in a domain Δ, containing D, and agreeing with f(z) in D, then from the uniqueness theorem the value of F(z) in Δ is defined in a unique manner.

The function F(z) is called the analytic continuation of f(z). An analytic function is called complete if it cannot be continued analytically beyond the domain on which it is already defined. For example, an entire function, defined for the whole plane, is a complete function. A meromorphic function is also a complete function; it is defined everywhere except at its poles. However there exist analytic functions whose entire domain of definition is a bounded domain. We will not give these more complicated examples.

The concept of a complete analytic function leads to the necessity of considering multiple-valued functions of a complex variable. We show this by the example of the function **Ln z = ln r + iϕ**, where r = |z| and ϕ = arg z. If at some point z0 = r0(cos ϕ0 + i sin ϕ0) of the z-plane we consider some initial value of the function, **Ln z0 = ln r0 + iϕ0**, then our analytic function may be extended continuously along a curve C. As was mentioned earlier, it is easy to see that if the point z describes a closed path C0, issuing from the point z0 and circling around the origin (figure 22), and then returning to the point z0, we find at the point z0 the original value of ln r0, but the angle ϕ is increased by 2π. This shows that if we extend the function Ln z in a continuous manner along the path C0, we increase its value by 2πi in one circuit of the contour. If the point z moves along this closed contour n times, then in place of the original value we obtain **Ln z0 + 2πni**.

These remarks show that on the complex plane we are unavoidably compelled to consider the connection between the various values of Ln z. The function Ln z has infinitely many values. With respect to its multiple-valued character, a special role is played by the point z = 0, around which we pass from one value of the function to another. It is easy to establish that if z describes a closed contour not surrounding the origin, the value of Ln z is not changed. The point z = 0 is called a branch point of the function Ln z.
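This branch behavior can be traced numerically (a sketch of my own, not from the text): continuing the logarithm continuously along a circle around the origin, by always choosing at each step the branch value nearest the previous one, gains exactly 2πi per circuit.

```python
import cmath
import math

# Continuing log z continuously along a circle around the origin: at
# each step we pick the branch value nearest the value carried so far.
# After one full circuit the logarithm has gained exactly 2*pi*i.
def continue_log(path, w_start):
    w = w_start
    for b in path[1:]:
        principal = cmath.log(b)
        k = round((w - principal).imag / (2 * math.pi))
        w = principal + 2j * math.pi * k
    return w

n = 1000
circle = [2 * cmath.exp(2j * math.pi * k / n) for k in range(n + 1)]
w0 = cmath.log(2)                     # initial value at z0 = 2
w1 = continue_log(circle, w0)
print(w1 - w0)                        # approximately 2*pi*i
```

Circling the branch point z = 0 once shifts the value by 2πi, just as described above; a contour not surrounding the origin would leave the value unchanged.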

In general, if for a function f(z), in a circuit around the point a, we pass from one of its values to another, then the point a is called a branch point of the function f(z).

Let us consider a second example. Let: **w = ⁿ√z**

As noted previously, this function is also multiple-valued and takes on n values. All the various values of our function may be derived from the single one: **w = ⁿ√r (cos ϕ/n + i sin ϕ/n)**

by describing a closed curve around the origin, since for each circuit around the origin the angle ϕ will be increased by 2π.

In describing the closed curve (n − 1) times, we obtain from the first value of the root all the remaining (n − 1) values. Going around the contour an nth time leads back to the first value,

i.e., we return to the original value of the root.
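A quick numerical sketch of this (the choices z = 1 + i and n = 5 are arbitrary): each circuit around the origin multiplies the current root by e^{2πi/n}, and after n circuits we return to the start.

```python
import cmath
import math

# The n values of the nth root of z: each circuit around the origin
# multiplies the current value by exp(2*pi*i/n); after n circuits we
# are back at the starting value.
z = 1 + 1j
n = 5
r, phi = abs(z), cmath.phase(z)
first = r ** (1 / n) * cmath.exp(1j * phi / n)
values = [first * cmath.exp(2j * math.pi * k / n) for k in range(n)]
for w in values:
    print(abs(w ** n - z))                 # each value really is an nth root
after_n_circuits = first * cmath.exp(2j * math.pi * n / n)
print(abs(after_n_circuits - first))       # the nth circuit returns the start
```

All n values raised to the nth power reproduce z, and the nth circuit closes the cycle.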

**Riemann surfaces for multiple-valued functions.** There exists an easily visualized geometric manner of representing the character of a multiple-valued function.

We consider again the function Ln z, and on the z-plane we make a cut along the positive part of the axis Ox. If the point z is prevented from crossing the cut, then we cannot pass continuously from one value of Ln z to another. If we continue Ln z from the point z0, we can arrive only at the same value of Ln z.

The single-valued function found in this manner in the cut z-plane is called a single-valued branch of the function Ln z. All the values of Ln z are distributed on an infinite set of single-valued branches: **(Ln z)ₙ = ln r + i(ϕ + 2nπ), where 0 ≤ ϕ < 2π and n = 0, ±1, ±2, ···**

It is easy to show that the nth branch takes on the same value on the lower side of the cut as the (n + 1)th branch has on the upper side.

To distinguish the different branches of Ln z, we imagine infinitely many examples of the z-plane, each of them cut along the positive part of the axis Ox, and map onto the nth sheet the values of the argument z corresponding to the nth branch. The points lying on different examples of the plane but having the same coordinates will here correspond to one and the same number x + iy; but the fact that this number is mapped on the nth sheet shows that we are considering the nth branch of the logarithm.

In order to represent geometrically the fact that the nth branch of the logarithm, on the lower part of the cut of the nth plane, agrees with the (n + 1)th branch of the logarithm on the upper part of the cut in the (n + 1)th plane, we paste together the nth plane and the (n + 1)th, connecting the lower part of the cut in the nth plane with the upper part of the cut in the (n + 1)th plane. This construction leads us to a many-sheeted surface, having the form of a spiral staircase (figure 23). The role of the central column of the staircase is played by the point z = 0.

If a point, in circling around the origin, passes from one sheet to another, then the complex number returns to its original value, but the function Ln z passes from one branch to another.

The surface so constructed is called the Riemann surface of the function Ln z. Riemann first introduced the idea of constructing surfaces representing the character of multiple-valued analytic functions and showed the fruitfulness of this idea.

Let us also discuss the construction of the Riemann surface for the function w=√z . This function is double-valued and has a branch point at the origin.

We imagine two examples of the z-plane, placed one on top of the other and both cut along the positive part of the axis Ox. If z starts from z0 and describes a closed contour C containing the origin, then √z passes from one branch to the other, and thus the point on the Riemann surface passes from one sheet to the other. To arrange this, we paste the lower border of the cut in the first sheet to the upper border of the cut in the second sheet. If z describes the closed contour C a second time, then √z must return to its original value, so that the point on the Riemann surface must return to its original position on the first sheet. To arrange this, we must now attach the lower border of the second sheet to the upper border of the first sheet. As a result we get a two-sheeted surface, intersecting itself along the positive part of the axis Ox. Some idea of this surface may be obtained from figure 24, showing the neighborhood of the point z = 0.
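The two-sheeted structure can be mimicked numerically (my own sketch, with the radius 4 an arbitrary choice): tracking √z continuously along one circuit around the origin flips the sign, i.e., lands on the other sheet, and a second circuit restores the original value.

```python
import cmath
import math

# Tracking sqrt(z) continuously around the origin: at each step choose
# the square root nearest the value carried so far.  One circuit flips
# the sign (the other sheet); a second circuit returns the start value.
def continue_sqrt(path, w_start):
    w = w_start
    for b in path[1:]:
        cand = cmath.sqrt(b)
        w = cand if abs(cand - w) < abs(-cand - w) else -cand
    return w

n = 2000
one_loop = [4 * cmath.exp(2j * math.pi * k / n) for k in range(n + 1)]
two_loops = [4 * cmath.exp(4j * math.pi * k / n) for k in range(n + 1)]
w0 = 2 + 0j
print(continue_sqrt(one_loop, w0))     # approximately -2: the second sheet
print(continue_sqrt(two_loops, w0))    # approximately +2: back to the first
```

This matches the pasting of the two sheets described above: one circuit crosses to the second sheet, the next returns to the first.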

In the same way we can construct a many-sheeted surface to represent the character of any given multiple-valued function. The different sheets of such a surface are connected with one another around branch points of the function. It turns out that the properties of analytic functions are closely connected with the geometric properties of Riemann surfaces. These surfaces are not only an auxiliary means of illustrating the character of a multiple-valued function but also play a fundamental role in the study of the properties of analytic functions and the development of methods of investigating them. Riemann surfaces formed a kind of bridge between analysis and geometry in the region of complex variables, enabling us not only to relate to geometry the most profound analytic properties of the functions but also to develop a whole new region of geometry, namely topology, which investigates those geometric properties of figures which remain unchanged under continuous deformation.

One of the clearest examples of the significance of the geometric properties of Riemann surfaces is the theory of algebraic functions, i.e., functions obtained as the solution of an equation **ƒ(z, w) = 0**, the left side of which is a polynomial in z and w. The Riemann surface of such a function may always be deformed continuously into a sphere or else into a sphere with handles.

The characteristic property of these surfaces is the number of handles. This number is called the genus of the surface and of the algebraic function from which the surface was obtained. It turns out that the genus of an algebraic function determines its most important properties.

**Conclusion**

The theory of analytic functions arose in connection with the problem of solving algebraic equations. But as it developed it came into constant contact with newer and newer branches of mathematics. It shed light on the fundamental classes of functions occurring in analysis, mechanics, and mathematical physics. Many of the central facts of analysis could at last be made clear only by passing to the complex domain. Functions of a complex variable received an immediate physical interpretation in the important vector fields of hydrodynamics and electrodynamics and provided a remarkable apparatus for the solution of problems arising in these branches of science. Relations were discovered between the theory of functions and problems in the theory of heat conduction, elasticity, and so forth.

General questions in the theory of differential equations and special methods for their solution have always been based to a great extent on the theory of functions of a complex variable. Analytic functions entered naturally into the theory of integral equations and the general theory of linear operators. Close connections were discovered between the theory of analytic functions and geometry. All these constantly widening connections of the theory of functions with new areas of mathematics and science show the vitality of the theory and the continuous enrichment of its range of problems.

In our survey we have not been able to present a complete picture of all the manifold ramifications of the theory of functions. We have tried only to give some idea of the widely varied nature of its problems by indicating the basic elementary facts for some of the various fundamental directions in which the theory has moved. Some of its most important aspects, its connection with the theory of differential equations and special functions, with elliptic and automorphic functions, with the theory of trigonometric series, and with many other branches of mathematics, have been completely ignored in our discussion. In other cases we have had to restrict ourselves to the briefest indications. But we hope that this survey will give the reader a general idea of the character and significance of the theory of functions of a complex variable.

**VECTOR SPACES**

Vector spaces are finally the other great expansion of polydimensional frames of reference, and the original frame of reference that evolved through Hilbert spaces into the formalism in which we study complex quantum reality – ultimately galaxy-atoms DO hold so much information that it is a feat we can actually extract the relevant information needed to determine their 2D motions.

We are thus obliged to deal with Hilbert spaces, despite their relative complexity, even in this second line, to close our first article on math’s sub-disciplines, specifically on those which create mind spaces to extract proper information from the Universe; namely those 2 *fundamental complex planes: imaginary planes of ‘square 2-manifolds’ (or its inverse S∂ square-root imaginary plane), and vector spaces, where a vector is also a ‘dynamic’ 2-manifold with more motion than the imaginary plane, as one of its elements is a formal, spatial parameter (usually an active magnitude), and the other is usually a time-motion-speed magnitude.*

So we can relate them to each other through the basic duality of S-top vs. T-motion, affirming that, of the 2 ∆±1 manifold frames of reference:

-complex spaces are the spatial static view and…

-vector spaces are the dynamic, temporal view.

Then, the second element, more developed in vector spaces, is the departure from a single humind frame of reference with its 3 ‘visual’, perpendicular light space-time ‘basis/co-ordinates’ by adopting generalised co-ordinates – that is, co-ordinates for each point/item *as if it were a fractal broken space of its own, which it truly is, since we then move from the subjective continuous human view to the sum of all the different particle views. And the awesome finding is that despite this enormous multiplication of kaleidoscopic perspectives, we do have the capacity to probe the envelopes of those masses of points of view, which gather orderly into a wave-body form that can be treated with single parameters of information, in the same way the zillions of cells of the body gather into synchronous, simultaneous space-time systems.*

This is then the underlying meaning of Hilbert spaces, which have infinite orthogonal vectorial dimensions, as the fractal discontinuous Universe does. But one where *there are enough ‘limits’ to establish differential tools that allow us to localise quanta (derivatives) and, vice versa, to group masses of fractal points into integral wholes.*

So Hilbert spaces can then define experimentally the duality of discrete quantum systems gathered into more orderly wholes with wave forms.

Yet we need to understand that those dimensions are not, like the 0-1-5 Dimensions of the fractal Universe, GLOBAL DIMENSIONS AND SYMMETRIES, but very local, individual dimensions: *orthogonal bases in a Hilbert space are NOT ‘real’ global dimensions, but local and also mental, hence ‘logic dimensions’, where the concept of perpendicularity also has some of the aspects of the vital non-E geometry explained in the 4th postulate of Non-A Logic; where perpendicularity is not only a geometrical ‘image’ but also an i-logic relationship of ‘disruption’, ‘predation’ and ‘penetration’, and merging of elements into new ‘forms’, related to the vital ways in which ‘fractal points’=T.œs relate to each other.*

Let us then start slowly by a classic definition of a vector space, of the Hilbert type, which is ALL about the existence of **orthogonal=perpendicular basis≈coordinates** and the key operation between vectors (written with Dirac Kets as |vector>) called a **dot product:**

A vector space is then a set of vectors closed under addition and multiplication by constants, meaning that operating on them with ± and ×c gives another vector belonging to that space.

Any collection of N mutually orthogonal vectors of length 1 in an N-dimensional vector space then constitutes an orthonormal basis for that space. Let |A1>, … , |AN> be such a collection of unit vectors. Then every vector in the space can be expressed as a sum of the form:

**|B> = b1|A1> + b2|A2> + … + bN|AN>**
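Since the |Ai> are orthonormal, each coefficient bi is recovered as the dot product of |Ai> with |B>. A minimal numeric sketch in Python (the basis and the vector |B> are invented purely for illustration):

```python
import math

def dot(u, v):
    """Scalar (dot) product of two vectors given as lists of floats."""
    return sum(ui * vi for ui, vi in zip(u, v))

# An orthonormal basis of 3-space: the standard basis rotated 45 degrees
# in the x-y plane, so the example is not completely trivial.
s = math.sqrt(0.5)
A1 = [s, s, 0.0]
A2 = [-s, s, 0.0]
A3 = [0.0, 0.0, 1.0]
basis = [A1, A2, A3]

B = [3.0, 1.0, 2.0]               # an arbitrary vector |B>

# Each coefficient b_i is simply <A_i|B>.
coeffs = [dot(Ai, B) for Ai in basis]

# Reconstruct |B> = b1|A1> + b2|A2> + b3|A3>.
B_rebuilt = [sum(c * Ai[k] for c, Ai in zip(coeffs, basis))
             for k in range(3)]
```

The reconstruction reproduces B up to floating-point rounding, which is exactly the expansion |B> = b1|A1> + … + bN|AN> stated above.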

Fair enough. The sum of vectors and their multiplication by a constant is already explained in our analysis of algebraic operations. It merely ‘reduces a series of parts’ into a new social whole by adding a dimension within the system itself. But what really establishes a new reality IS the dot product. Since *it reduces the information of two bidimensional vectors into a single scalar; and as such it is truly an ST>S TRANSFORMATION.*

An inner product space is a vector space on which the operation of vector multiplication has been defined, and the dimension of such a space is the maximum number of nonzero, mutually orthogonal vectors it contains.

One of the most familiar examples of a Hilbert space is the Euclidean space consisting of three-dimensional vectors, denoted by ℝ³, and equipped with the dot product. The dot product takes two vectors x and y, **and produces a real number x·y.** It satisfies the properties:

It is symmetric in x and y: **x · y = y · x.**

It is linear in its first argument: **(ax1 + bx2) · y = a x1 · y + b x2 · y** for any scalars a, b, and vectors x1, x2, and y.

It is positive definite: for all vectors x, x · x ≥ 0 , with equality if and only if x = 0.
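These three defining properties can be spot-checked numerically; a minimal Python sketch (the particular vectors and scalars are arbitrary choices):

```python
def dot(x, y):
    """Real inner product of two vectors given as lists of floats."""
    return sum(a * b for a, b in zip(x, y))

x  = [1.0, -2.0, 3.0]
x1 = [0.5, 4.0, -1.0]
x2 = [2.0, 0.0, 1.0]
y  = [-1.0, 1.0, 2.0]
a, b = 3.0, -2.0

# 1. Symmetry: x . y = y . x
symmetric = abs(dot(x, y) - dot(y, x)) < 1e-12

# 2. Linearity in the first argument: (a x1 + b x2) . y = a x1.y + b x2.y
combo = [a * u + b * v for u, v in zip(x1, x2)]
linear = abs(dot(combo, y) - (a * dot(x1, y) + b * dot(x2, y))) < 1e-12

# 3. Positive definiteness: x . x >= 0, with equality only for the zero vector
positive = dot(x, x) > 0 and dot([0.0, 0.0, 0.0], [0.0, 0.0, 0.0]) == 0.0
```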

An operation on pairs of vectors that, like the dot product, satisfies these three properties is known as a (real) inner product. A vector space equipped with such an inner product is known as a (real) inner product space. Every finite-dimensional inner product space is also a Hilbert space. The basic feature of the dot product that connects it with Euclidean geometry is that it is related to both the length (or norm) of a vector, denoted ||x||, and to the angle θ between two vectors x and y by means of the formula: **x · y = ||x|| ||y|| cos θ**

How can we interpret this PRODUCT in terms of vital mathematics and its st components? The most obvious interpretation is this: the ‘biggest’ predator vector, B in the graph, IS THE DOMINANT ELEMENT, as A is projected on it. Whatever the ST elements of those vectors mean, which will vary for different uses, *unlike the cross product, which is creative, reproductive, the dot product is entropic, destructive, as the result is the ‘absorption’ of the |A| cos θ = X component of one of the vectors by the other, which becomes expanded in the X-axis variable (whatever this variable is); for all effects A disappears, leaving no trace of its motion/form≈position, and we obtain a scalar which quantifies the result of this ‘absorption’. So we can classify the 2 fundamental ‘products’ of vectors by the duality of the 4th Non-Æ postulate:*

- Dot products are darwinian, destroying one vector, which is first reduced to its X-parameter – systematically the ‘real’ element, normally the momentum, energy or body element of the system – while the Y-parameter of form, information, is discarded in a classic darwinian action of feeding (the ‘particle’-head element or Y coordinate disappears). Indeed, if we use the XY graph, as in most cases, to quantify the 2 COMPLEMENTARY PARTS OF THE BEING, ∑∏(body-wave)>ð (head-particle), in physical systems this process is equivalent to the predator event of cutting the head, throwing it out and eating the body to multiply your inner cellular energy in the X-direction of your body-motion-momentum.
- Cross products are reproductive, creative, as a third ‘offspring-dimension-form’ is created as a fusion of the other two:

**3rd dimension production=multiplication =reproduction**

In the graph, products can be of multiple, different ST dimensions, which accounts for the richness of their ‘propositions’. A vectorial product is one of its commonest forms, as it combines ST or TS dimensions; BUT as both ‘present’ products differ in orientation, this product, unlike other SS or TT products, is non-commutative: **b × a = −(a × b)**. In this case it gives birth to two different orientations in space; though for more complex products of multiple ‘S-T’ dimensions, which *can define as a matrix of parameters a T.Œ PARTICLE in full,* the non-commutativity can give origin to different particles (quantum physics).

Vectors thus become the essential mode of defining an ST holographic element, with a 0Dimension scalar number that defines the singularity point, and a direction of motion in space (x, y, z parameters from an @nalytic frame of reference; but in generalised objective coordinates, a lineal 1D parameter of distance=speed per time frequency, which measures the T-steps or cyclical motion of the • point active magnitude).

Now, the difference between both types of vectorial product is very important to fully grasp reality as it is.

The perpendicular product seems at first contradictory, because the vectors seem to diverge in orientation. But this is because we put the arrow *on the wrong side. It should be at the origin where they collide, and that is the dot in which the two vectors become a still spatial parameter. It is then also applicable to the ‘collapse’ of multiple flows into a non-euclidean fractal point, in which they become a scalar parameter. And that is how in fact a Hilbert space ‘collapses’ in quantum physics an infinite number of generalised parameters, allowing us to calculate wholes, and giving a vital sense to the extremely abstract jargon of quantum physicists.*

On the other hand the creative product, which is also used in physics to describe another fundamental scale, that of electromagnetic forces, IS SYMBIOTIC, creative, merging and helping the two components to act symbiotically as one. And again the ‘mental space’ of the cross product is misleading, as it seems to contradict the 4th postulate of symbiotic parallelism vs. darwinian perpendicularity: the two look as if they touch each other perpendicularly, but in fact the electric charge and the magnetic field ARE always parallel, in the sense that they are the singularity and membrane of the ELECTRIC T.œ, never touching each other, as there are no magnetic monopoles; hence they strengthen each other, creating a new force and increasing the speed, and curving, increasing the information of, the particle under the magnetic field.

**Geometric comparison**.

The magnitude of the cross product can be interpreted as the positive area of the parallelogram having a and b as sides: **|a × b| = |a| |b| sin θ**

One can also compute the volume V of a parallelepiped having a, b and c as edges by using a combination of a cross product and a dot product, called the scalar triple product (see Figure): **V = |(a × b) · c|**

Because the magnitude of the cross product goes by the sine of the angle between its arguments, the cross product can be thought of as a measure of perpendicularity in the same way that the dot product is a measure of parallelism. Given two unit vectors, their cross product has a magnitude of 1 if the two are perpendicular and a magnitude of zero if the two are parallel. The dot product of two unit vectors behaves just oppositely: it is zero when the unit vectors are perpendicular and 1 if the unit vectors are parallel.

Unit vectors enable two convenient identities: the dot product of two unit vectors yields the cosine (which may be positive or negative) of the angle between the two unit vectors. The magnitude of the cross product of the two unit vectors yields the sine (which will always be positive).
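Both identities are easy to verify numerically; a short Python sketch with an invented pair of unit vectors at 30° to each other:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    """Cross product of two 3-vectors."""
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def norm(u):
    return math.sqrt(dot(u, u))

theta = math.radians(30)
u = [1.0, 0.0, 0.0]                          # unit vector along x
v = [math.cos(theta), math.sin(theta), 0.0]  # unit vector at 30 degrees to u

cos_from_dot   = dot(u, v)           # dot product gives the cosine
sin_from_cross = norm(cross(u, v))   # magnitude of cross gives the sine
```

As stated above, the dot product recovers cos θ (a measure of parallelism) and the magnitude of the cross product recovers sin θ (a measure of perpendicularity).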

*So we can establish a parallel superposition principle for the dot product and a perpendicularity one for the cross product.*

**HILBERT SPACES AS REFLECTION OF FRACTAL, MULTIPLE QUANTA AND ITS WAVE-SOCIAL SYSTEMS.**

If we consider all systems ultimately a reflection of the ideal form, an ‘open ST-ball’ (the area enclosed by the curve) with a T-singularity (the center of reference of the mathematical representation) and an S-membrane (the curve), we can easily translate the findings of ∆nalysis into GST laws.

Thus while those problems can be stated purely as questions of mathematical technique, they have a far wider importance because they possess a broad variety of interpretations for all the species of the ∆ST world; once we apply the ternary method and study the technique in St, Ts, or ∆ problems (as usual pure S and pure T is not measurable, as S has no form, T doesn’t communicate, even though we ab. St as s and Ts as T).

**– St**. The simplest results will give us the value of the ST body-wave measured with a parameter of space (area, volume, etc.). I.e:

The area inside a curve, for instance, is of direct interest in land measurement: how many acres does an irregularly shaped plot of land contain? But the same technique also determines the mass of a uniform sheet of material bounded by some chosen curve or the quantity of paint needed to cover an irregularly shaped surface.

**– Ts.** However if we apply analysis to *the study of the symmetric sequential accumulative processes of time, we can do the equivalent calculus on a function of space. So ∆nalysis will be a key element to study processes of growth and decay of systems along their ∆-scales, most often ∆±1 inverse processes, and so it connects directly with the reproductive and entropic, dying phases of space-time beings; i.e. to calculate the total fuel consumption of a rocket or the reproductive growth of a top predator population… And so on.*

i.e, these techniques can be used to find the total distance traveled by a vehicle moving at varying speeds, *by accumulating sequentially the ‘area’ below the curve of speed.*

**– ∆:** *And finally analysis expanded into Hilbert and Function spaces, where each point is a function in itself of the lower scale, whose sum can be considered to integrate into a finite ‘whole one’, a vector in the case of a Hilbert or Banach space (Spe-function space):*

In the graph, 3 representations of Hilbert spaces, which are made of non-euclidean fractal points with an inner 5th dimension (usually an Spe-vectorial field with a dot product in Hilbert spaces, which by definition are ‘complete’ because, as the real numbers do, they ‘penetrate’ into their inner regions), made of finitesimal elements, such as the vibrations of a string, which in time are potential motions of the creative future encoded in its functions (second graph).

The 3 graphs show the 3 main symmetries of the Universe, lineal spatial forces, cyclical time frequencies and the ‘wormholes’ between the ∆ and ∆-1 scales of the 5th dimension (ab. ∆), which structure the Universe, the first of them better described with ‘vector-points’ of a field of Hilbert space and the other 2 symmetries of time cycles/frequencies and scales with more general function spaces.

They are part of the much larger concept of a function space, which can represent any ∆±1 dual system of the fifth dimension. They grasp the scalar structure of ∆nalysis, where points are fractal, non-euclidean, with a volume which grows when we come closer to them, so ∞ parallels can cross them – 5th Non-E postulate: so point stars become worlds and point cells living beings. When those ∞ lines are considered future paths of time that the point can perform, they model ‘parallel universes’ both in time (i.e. the potential paths of the point as a vector) and space (i.e. the different modes of the volume of information of the point, described by a function, when the function represents a complete volume of inner parts, which are paradoxically larger in number than the whole – the set of sets is larger than the set; Cantor Paradox). Thus function spaces are the ideal structure to express the fractal scales of the fifth dimension, and are used to represent the operators of quantum physics.

– •: No less important will be the use of analysis for mind-perspectives and the comprehension of the ‘Galilean paradoxes’ and symmetries between motion in time and distances/curvatures in space.

I.e.: the mathematical technique for finding a tangent line to a curve at a given point can also be used to calculate the curvature or steepness of the curve, which then becomes, in its time symmetry, a measure of the acceleration of that curve, which in physical space-time becomes a measure of the attractive force of the system.

So less directly, ∆nalysis is related to the extremely important question of the calculation of instantaneous velocity or other instantaneous rates of change, such as the cooling of a warm object in a cold room or the propagation of a disease organism through a human population; *by constantly switching the study of the ∆st process between its S, t and ∆ components.*

This post begins with a brief introduction to the historical background of analysis and to basic concepts such as numbers and infinities, functions, continuity, infinite series, and limits, all of which are necessary for an understanding of analysis.

Following this introduction is a full technical review, from calculus to nonstandard analysis.

**ST: SIMULTANEOUS FUTURE PATHS.**

The rise and spread of functional analysis in the 20th century had two main causes. On the one hand it became desirable to interpret from a uniform point of view the copious factual material accumulated in the course of the 19th century in various, often hardly connected, branches of mathematics.

The fundamental concepts of functional analysis were formed and crystalized under various aspects and for various reasons. Many of the fundamental concepts of functional analysis emerged in a natural fashion in the process of development of the calculus of variations, in problems on oscillations (in the transition from the oscillations of systems with a finite number of degrees of freedom to oscillations of continuous media), in the theory of integral equations, in the theory of differential equations both ordinary and partial (in boundary problems, problems on eigenvalues, etc.) in the development of the theory of functions of a real variable, in operator calculus, in the discussion of problems in the theory of approximation of functions, and others.

Functional analysis permitted an understanding of many results in these domains from a single point of view and often promoted the derivation of new ones.

In recent decades the preparatory concepts and apparatus were then used in a new branch of theoretical physics: quantum mechanics.

On the other hand, the investigation of mathematical problems connected with quantum mechanics became a crucial feature in the further development of functional analysis itself: It created, and still creates at the present time, fundamental branches of this development.

Functional analysis has not yet reached its completion by far. On the contrary, undoubtedly in its further development the questions and requirements of contemporary physics will have the same significance for it as classical mechanics had for the rise and development of the differential and integral calculus in the 18th century.

It is impossible here to include in this chapter all, or even only all the fundamental, problems of functional analysis. Many important branches exceed the limitations of this book. Nevertheless, by confining ourselves to certain selected problems, we wish to acquaint the reader with some fundamental concepts of functional analysis and to illustrate as far as possible the connections of which we have spoken here. These problems were analyzed mainly at the beginning of the 20th century on the basis of the classical papers of Hilbert, who was one of the founders of functional analysis. Since then functional analysis has developed very vigorously and has been widely applied in almost all branches of mathematics; in partial differential equations, in the theory of probability, in quantum mechanics, in the quantum theory of fields, etc. Unfortunately these further developments of functional analysis cannot be included in our account. In order to describe them we would have to write a separate large book, and therefore, we restrict ourselves to one of the oldest problems, namely the theory of eigenfunctions.

**n-Dimensional Space**

In what follows we shall make use of the fundamental concepts of n-dimensional space. Although these concepts have been introduced in the chapters on linear algebra and on abstract spaces, we do not think it superfluous to repeat them in the form in which they will occur here. For scanning through this section it is sufficient that the reader should have a knowledge of the foundations of analytic geometry.

We know that in analytic geometry of three-dimensional space a point is given by a triplet of numbers (f1, f2, f3), which are its coordinates. The distance of this point from the origin of coordinates is equal to:

**√(f1² + f2² + f3²)**

If we regard the point as the end of a vector leading to it from the origin of coordinates, then the length of the vector is also equal to √(f1² + f2² + f3²). The cosine of the angle between nonzero vectors leading from the origin of coordinates to two distinct points A(f1, f2, f3) and B(g1, g2, g3) is defined by the formula:

**cos ϕ = (f1g1 + f2g2 + f3g3) / (√(f1² + f2² + f3²) · √(g1² + g2² + g3²))**

From trigonometry we know that |cos ϕ| ≤ 1. Therefore we have the inequality:

**|f1g1 + f2g2 + f3g3| / (√(f1² + f2² + f3²) · √(g1² + g2² + g3²)) ≤ 1**

and hence always:

**(f1g1 + f2g2 + f3g3)² ≤ (f1² + f2² + f3²)(g1² + g2² + g3²)    (1)**

This last inequality has an algebraic character and is true for any arbitrary six numbers (f1, f2, f3) and (g1, g2, g3), since any six numbers can be the coordinates of two points of space. All the same, the inequality (1) was obtained from purely geometric considerations and is closely connected with geometry, and this enables us to give it an easily visualized meaning.

In the analytic formulation of a number of geometric relations, it often turns out that the corresponding facts remain true when the triplet of numbers is replaced by n numbers. For example, our inequality (1) can be generalized to 2n numbers (f1, f2, ···, fn) and (g1, g2, ···, gn). This means that for any arbitrary 2n numbers (f1, f2, ···, fn) and (g1, g2, ···, gn) an inequality analogous to (1) is true, namely:

**(f1g1 + f2g2 + ··· + fngn)² ≤ (f1² + f2² + ··· + fn²)(g1² + g2² + ··· + gn²)**
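The n-number form of the inequality can be spot-checked with arbitrary values; a quick Python sketch (the two quintuples are invented):

```python
# Check the Cauchy inequality (sum f_i g_i)^2 <= (sum f_i^2)(sum g_i^2)
# for two arbitrary 5-tuples of numbers.
f = [3.0, -1.0, 4.0, 1.0, -5.0]
g = [2.0, 7.0, 1.0, -8.0, 2.0]

lhs = sum(fi * gi for fi, gi in zip(f, g)) ** 2        # squared scalar product
rhs = sum(fi**2 for fi in f) * sum(gi**2 for gi in g)  # product of squared lengths
```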

This inequality, of which (1) is a special case, can be proved purely analytically. In a similar way many other relations between triplets of numbers derived in analytic geometry can be generalized to n numbers. This connection of geometry with relations between numbers (numerical relations), of which the cited inequality is an example, becomes particularly lucid when the concept of an n-dimensional space is introduced:

A collection of n numbers (f1, f2, ···, fn) is called a point or vector of n-dimensional space (we shall more often use the latter name). The vector (f1, f2, ···, fn) will from now on be abbreviated by the single letter f.

Just as in three-dimensional space on addition of vectors their components are added, so we define the sum of the vectors f = {f1, f2, ···, fn} and g = {g1, g2, ···, gn} as the vector:

**f + g = {f1 + g1, f2 + g2, ···, fn + gn}**

The product of the vector f = {f1, f2,···, fn} by the number λ is the vector λf = {λf1, λf2, ···, λfn}.

The length of the vector f = {f1, f2, ···, fn}, like the length of a vector in three-dimensional space, is defined as:

**|f| = √(f1² + f2² + ··· + fn²)**

The angle ϕ between the two vectors f = {f1, f2, ···, fn} and g = {g1, g2, ···, gn} in n-dimensional space is given by its cosine in exactly the same way as the angle between vectors in three-dimensional space. For it is defined by the formula:

**cos ϕ = (f1g1 + f2g2 + ··· + fngn) / (√(f1² + ··· + fn²) · √(g1² + ··· + gn²))**

The scalar product of two vectors is the name for the product of their lengths by the cosine of the angle between them. Thus, if f = {f1, f2, ···, fn} and g = {g1, g2, ···, gn}, then since the lengths of the vectors are:

**√(f1² + f2² + ··· + fn²) and √(g1² + g2² + ··· + gn²)**

respectively, their scalar product, which is denoted by (f, g), is given by the formula:

**(f, g) = f1g1 + f2g2 + ··· + fngn    (3)**

In particular, the condition of orthogonality (perpendicularity) of two vectors is the equation cos ϕ = 0; i.e., (f, g) = 0.

By means of the formula (3) the reader can verify that the scalar product in n-dimensional space has the following properties:

**1. (f, g) = (g, f).**

**2. (λf, g) = λ(f, g).**

** 3. (f, g1 + g2) = (f, g1) + (f, g2).**

**4. (ƒ,ƒ)≥0,** and the equality sign holds for f = 0 only, i.e., when f1 = f2 = ··· = fn =0.

The scalar product of a vector f with itself (f, f) is equal to the square of the length of f.

The scalar product is a very convenient tool in studying n-dimensional spaces. We shall not study here the geometry of an n-dimensional space but shall restrict ourselves to a single example.

As our example we choose the theorem of Pythagoras in n-dimensional space: The square of the hypotenuse is equal to the sum of the squares of the sides. For this purpose we give a proof of this theorem in the plane which is easily transferred to the case of an n-dimensional space.

Let f and g be two perpendicular vectors in a plane. We consider the right-angled triangle constructed on f and g (figure 1). The hypotenuse of this triangle is equal in length to the vector f + g. Let us write down in vector form the theorem of Pythagoras in our notation. Since the square of the length of a vector is equal to the scalar product of the vector with itself, Pythagoras’ theorem can be written in the language of scalar products as follows: **(ƒ+g, ƒ+g)=(ƒ,ƒ)+(g,g)**

The proof immediately follows from the properties of the scalar product. In fact:**(ƒ+g, ƒ+g)=(ƒ,ƒ)+(ƒ,g)+(g,ƒ)+(g,g)**

And the two middle summands are equal to zero owing to the orthogonality of f and g.

In this proof we have only used the definition of the length of a vector, the perpendicularity of vectors, and the properties of the scalar product. Therefore nothing changes in the proof when we assume that f and g are two orthogonal vectors of an n-dimensional space. And so Pythagoras’ theorem is proved for a right-angled triangle in n-dimensional space.

If three pairwise orthogonal vectors f, g and h are given in n-dimensional space, then their sum f + g + h is the diagonal of the right-angled parallelepiped constructed from these vectors (figure 2) and we have the equation: **(ƒ+g+h, ƒ+g+h)=(ƒ,ƒ)+(g,g)+(h,h)**

which signifies that the square of the length of the diagonal of a parallelepiped is equal to the sum of the squares of the lengths of its edges. The proof of this statement, which is entirely analogous to the one given earlier for Pythagoras’ theorem, is left to the reader. Similarly, if in an n-dimensional space there are k pairwise orthogonal vectors **f1, f2, ···, fk**, then the equation:

**(f1 + f2 + ··· + fk, f1 + f2 + ··· + fk) = (f1, f1) + (f2, f2) + ··· + (fk, fk)**

which is just as easy to prove, signifies that the square of the length of the diagonal of a “k-dimensional parallelepiped” in n-dimensional space is also equal to the sum of the squares of the lengths of its edges.
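A small Python check of this k-vector Pythagoras theorem, with three invented pairwise orthogonal vectors in 4-dimensional space:

```python
def dot(u, v):
    """Scalar product (f, g) = sum of component products."""
    return sum(a * b for a, b in zip(u, v))

# Three pairwise orthogonal vectors in 4-dimensional space (invented example).
f1 = [1.0, 1.0, 0.0, 0.0]
f2 = [1.0, -1.0, 0.0, 0.0]
f3 = [0.0, 0.0, 2.0, 0.0]

# Confirm pairwise orthogonality: every scalar product is zero.
assert dot(f1, f2) == dot(f1, f3) == dot(f2, f3) == 0.0

# The diagonal of the "3-dimensional parallelepiped" is f1 + f2 + f3.
diagonal = [a + b + c for a, b, c in zip(f1, f2, f3)]

lhs = dot(diagonal, diagonal)                       # squared length of diagonal
rhs = dot(f1, f1) + dot(f2, f2) + dot(f3, f3)       # sum of squared edge lengths
```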

**Hilbert Space (Infinite-Dimensional Space)**

Connection with n-dimensional space. The introduction of the concept of n-dimensional space turned out to be useful in the study of a number of problems of mathematics and physics. In its turn this concept gave the impetus to a further development of the concept of space and to its application in various domains of mathematics. An important role in the development of linear algebra and of the geometry of n-dimensional spaces was played by problems of small oscillations of elastic systems. Let us consider the following classical example of such a problem (figure).

Let AB be a flexible string spanned between the points A and B. Let us assume that a weight is attached at a certain point C to the string. If it is moved from its position of equilibrium, it begins to oscillate with a certain frequency ω, which can be computed when we know the tension of the string, the mass m and the position of the weight. The state of the system at every instant is then given by a single number, namely the displacement y1 of the mass m from the position of equilibrium of the string.

Now let us place n weights on the string AB at the points C1, C2, ···, Cn. The string itself is taken to be weightless. This means that its mass is so small that compared with the masses of the weights it can be neglected. The state of such a system is given by n numbers y1, y2, ···, yn equal to the displacements of the weights from the position of equilibrium. The collection of numbers y1, y2, ···, yn can be regarded (and this turns out to be useful in many respects) as a vector (y1, y2, ···, yn) of an n-dimensional space.

The investigation of the small oscillations that take place under these circumstances turns out to be closely connected with fundamental facts of the geometry of n-dimensional spaces. We can show, for example, that the determination of the frequency of the oscillations of such a system can be reduced to the task of finding the axes of a certain ellipsoid in n-dimensional space.

Now let us consider the problem of the small oscillations of a string spanned between the points A and B. Here we have in mind an idealized string, i.e., an elastic thread having a finite mass distributed continuously along the thread. In particular, by a homogeneous string we understand one whose density is constant.

Since the mass is distributed continuously along the string, the position of the string can no longer be given by a finite set of numbers y1, y2, ···, yn, and instead the displacement y(x) of every point x of the string has to be given. Thus, the state of the string at each instant is given by a certain function y(x).

The state of a thread with n weights attached at the points with the abscissas x1, x2, ···, xn, is represented graphically by a broken line with n members (figure 4), so that when the number of weights is increased, then the number of segments of the broken line increases correspondingly. When the number of weights grows without bound and the distance between adjacent weights tends to zero, we obtain in the limit a continuous distribution of mass along the thread, i.e., an idealized string. The broken line that describes the position of the thread with weights then goes over into a curve describing the position of the string:

So we see that there exists a close connection between the oscillations of a thread with weights and the oscillations of a string. In the first problem the position of the system was given by a point or vector of an n-dimensional space. Therefore it is natural to regard the function f(x) that describes the position of the oscillating string in the second case as a vector or a point of a certain infinite-dimensional space. A whole series of similar problems leads to the same idea of considering a space whose points (vectors) are functions f(x) given on a certain interval.

This example of oscillation of a string, to which we shall return again in §4, suggests to us how we shall have to introduce the fundamental concepts in an infinite-dimensional space.

Hilbert space. Here we shall discuss one of the most widespread concepts of an infinite-dimensional space of the greatest importance for the applications, namely the concept of the Hilbert space.

A vector of an n-dimensional space is defined as a collection of n numbers fi, where i ranges from 1 to n. Similarly a vector of an infinite-dimensional space is defined as a function f(x), where x ranges from a to b.

Addition of vectors and multiplication of a vector by a number is defined as addition of the functions and multiplication of the function by a number.

The length of a vector f in an n-dimensional space is defined by the formula:

**|f| = √(f1² + f2² + ··· + fn²)**

Since for functions the role of the sum is taken by the integral, the length of the vector f(x) of a Hilbert space is given by the formula:

**|f| = √( ∫ₐᵇ f²(x) dx )**

The distance between the points f and g in an n-dimensional space is defined as the length of the vector f − g, i.e., as:

**√( (f1 − g1)² + (f2 − g2)² + ··· + (fn − gn)² )**

Similarly the “distance” between the elements f(t) and g(t) in a functional space is equal to:

**√( ∫ₐᵇ [f(t) − g(t)]² dt )**

called the mean-square deviation of the functions f(t) and g(t). Thus, the mean-square deviation of two elements of Hilbert space is taken to be a measure of their distance.
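This mean-square distance can be approximated numerically; a sketch in Python using a simple midpoint rule (the choice of sin and cos on [0, 2π] is ours for illustration; there ∫(sin t − cos t)² dt = 2π, so the distance should come out as √(2π)):

```python
import math

def hilbert_distance(f, g, a, b, n=10000):
    """Mean-square deviation sqrt( integral_a^b (f - g)^2 dt ), midpoint rule."""
    h = (b - a) / n
    s = sum((f(a + (k + 0.5) * h) - g(a + (k + 0.5) * h)) ** 2
            for k in range(n))
    return math.sqrt(s * h)

# Distance between sin and cos over one full period [0, 2*pi].
d = hilbert_distance(math.sin, math.cos, 0.0, 2 * math.pi)
```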

Let us now proceed to the definition of the angle between vectors. In an n-dimensional space the angle ϕ between the vectors f = {fi} and g = {gi} is defined by the formula:

**cos ϕ = ∑ fi gi / (√(∑ fi²) · √(∑ gi²))**

In an infinite-dimensional space the sums are replaced by the corresponding integrals, and the angle ϕ between the two vectors f and g of Hilbert space is defined by the analogous formula:

**cos ϕ = ∫ₐᵇ f(t) g(t) dt / (√(∫ₐᵇ f²(t) dt) · √(∫ₐᵇ g²(t) dt))**

This expression can be regarded as the cosine of a certain angle ϕ, provided the fraction on the right-hand side has an absolute value not greater than one, i.e., if:

**( ∫ₐᵇ f(t) g(t) dt )² ≤ ∫ₐᵇ f²(t) dt · ∫ₐᵇ g²(t) dt**

This inequality in fact holds for two arbitrary functions f (t) and g (t). It plays an important role in analysis and is known as the Cauchy-Bunjakovskiĭ inequality. Let us prove it.

Let f(x) and g(x) be two functions, not identically equal to zero, given on the interval (a, b). We choose arbitrary numbers λ and μ and form the expression: **∫_a^b [λƒ(x) − μg(x)]² dx = λ²A − 2λμC + μ²B (8)** where we have set **A = ∫_a^b ƒ²(x) dx, B = ∫_a^b g²(x) dx, C = ∫_a^b ƒ(x)g(x) dx.**

Since the function [λf(x) – μg(x)]² under the integral sign is nonnegative, we have the following inequality: **λ²A − 2λμC + μ²B ≥ 0 (9)**

This inequality is valid for arbitrary values of λ and μ; in particular we may set: **λ = 1/√A, μ = 1/√B**

Substituting these values of λ and μ in (9), we obtain: **C/√(A·B) ≤ 1**

When we replace A, B and C by their expressions in (8), we finally obtain the Cauchy-Bunjakovskiĭ inequality.
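The inequality is easy to check numerically. The sketch below verifies C² ≤ A·B for one illustrative pair of functions on (−1, 1); the particular choice of f and g is an assumption:

```python
import numpy as np

# Numerical check of the Cauchy-Bunjakovskii inequality
# (integral of f*g)^2 <= (integral of f^2)*(integral of g^2).
n = 100_000
h = 2.0 / n
x = -1.0 + (np.arange(n) + 0.5) * h   # midpoint grid on (-1, 1)
f = np.exp(x)
g = x ** 3 - x

A = np.sum(f * f) * h
B = np.sum(g * g) * h
C = np.sum(f * g) * h
cos_phi = C / np.sqrt(A * B)          # a genuine cosine: |cos_phi| <= 1
```

Because the fraction is bounded by 1 in absolute value, an angle ϕ between the two "vectors" is well defined.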

In geometry the scalar product of vectors is defined as the product of their lengths by the cosine of the angle between them. The lengths of the vectors f and g in our case are equal to: **√( ∫_a^b ƒ²(x) dx ) and √( ∫_a^b g²(x) dx );**

the cosine of the angle between them is defined by the formula given previously. When we multiply out these expressions, we arrive at the following formula for the scalar product of two vectors of Hilbert space: **(ƒ, g) = ∫_a^b ƒ(x) g(x) dx**

From this formula it is clear that the scalar product of the vector f with itself is the square of its length.

If the scalar product of the nonzero vectors f and g is equal to zero, it means that cos ϕ = 0, i.e., that the angle ϕ ascribed to them by our definition is 90°.

Therefore functions f and g for which: **∫_a^b ƒ(x) g(x) dx = 0** are called orthogonal.

Pythagoras’ theorem (see §1) holds in Hilbert space as in an n-dimensional space. Let **ƒ1(x), ƒ2(x), ···, ƒN(x)** be N pairwise orthogonal functions and: **ƒ(x) = ƒ1(x) + ƒ2(x) + ··· + ƒN(x)**

Then the square of the length of f is equal to the sum of the squares of the lengths of f1, f2, ···, fN.

Since the lengths of vectors in Hilbert space are given by means of integrals, Pythagoras’ theorem in this case is expressed by the formula: **∫_a^b ƒ²(x) dx = ∫_a^b ƒ1²(x) dx + ∫_a^b ƒ2²(x) dx + ··· + ∫_a^b ƒN²(x) dx**

The proof of this theorem does not differ in any respect from the one given previously (§1) for the same theorem in n-dimensional space.
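A quick numerical illustration (an addition, not part of the text): sin x and sin 2x are orthogonal on (−π, π), each of squared length π, so the squared length of their sum should be 2π:

```python
import numpy as np

# Pythagoras' theorem in Hilbert space for the orthogonal pair
# sin x and sin 2x on (-pi, pi): ||f1 + f2||^2 = ||f1||^2 + ||f2||^2.
n = 200_000
h = 2 * np.pi / n
x = -np.pi + (np.arange(n) + 0.5) * h   # midpoint grid over a full period
f1, f2 = np.sin(x), np.sin(2 * x)

lhs = np.sum((f1 + f2) ** 2) * h                 # integral of (f1 + f2)^2
rhs = np.sum(f1 ** 2) * h + np.sum(f2 ** 2) * h  # integral of f1^2 plus f2^2
```

Both sides come out equal to 2π up to rounding, as the theorem predicts.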

So far we have not made precise what functions are to be regarded as vectors in Hilbert space.

For such functions we have to take all those for which: **∫_a^b ƒ²(x) dx** has a meaning.

It might appear natural to confine ourselves to continuous functions, for which this integral always exists.

However, the theory of Hilbert space becomes more complete and natural if this integral is interpreted in a generalized sense, namely as a Lebesgue integral.

This extension of the concept of integrals (and correspondingly of the class of functions to be discussed) is necessary for functional analysis in the same way as a strict theory of the real numbers is necessary for the foundation of the differential and integral calculus. Thus, the generalization of the ordinary concept of an integral that was created at the beginning of the 20th century in connection with the development of the theory of functions of a real variable turned out to be quite essential for functional analysis and the branches of mathematics connected with it.

**Expansion by Orthogonal Systems of Functions**

If in a plane two arbitrary mutually perpendicular vectors e1 and e2 of unit length are chosen (figure), then every vector of the same plane can be decomposed in the directions of these two vectors, i.e., can be represented in the form: ƒ= a1e1+a2e2

where a1 and a2 are equal to the projections of the vector f on the directions of e1 and e2. Since the projection of f on an axis is equal to the product of the length of f by the cosine of the angle between f and the axis, we can write, remembering the definition of the scalar product: **a1 = (ƒ, e1), a2 = (ƒ, e2).**

Similarly if in a three-dimensional space any three mutually perpendicular vectors e1, e2, e3 of unit length are chosen, then every vector f in this space can be written in the form: **ƒ = a1e1 + a2e2 + a3e3, where ai = (ƒ, ei).**

In Hilbert space we can also consider systems of pairwise orthogonal vectors of the space, i.e., functions: **ϕ1(x), ϕ2(x), ···, ϕn(x), ···** Such systems of functions are called orthogonal and play an important role in analysis.

They occur in very diverse problems of mathematical physics, integral equations, approximate computations, the theory of functions of a real variable, etc. The ordering and unification of the concepts relating to such systems formed one of the motivations that led at the beginning of the 20th century to the creation of the general concept of a Hilbert space. Let us give a precise definition.

A system of functions: **ϕ1(x), ϕ2(x), ···, ϕn(x), ···** is called orthogonal if any two functions of the system are orthogonal, i.e., if: **∫_a^b ϕi(x) ϕk(x) dx = 0 for i ≠ k (13)**

In three-dimensional space we required that the vectors of the system should be of unit length. Recalling the definition of the length of a vector, we see that in the case of Hilbert space this requirement can be written as follows: **∫_a^b ϕn²(x) dx = 1 (14)**

A system of functions satisfying the conditions (13) and (14) is called orthonormal.

Let us give examples of such systems of functions.

1. On the interval (−π, π) we consider the sequence of functions: **1, cos x, sin x, cos 2x, sin 2x, …, cos nx, sin nx, …**

Any two functions of this sequence are orthogonal to each other. This can be verified by the simple computation of the corresponding integrals. The square of the length of a vector in Hilbert space is the integral of the square of the function. Thus, the squares of the lengths of the vectors of the sequence: **1, cos x, sin x, cos 2x, sin 2x, …, cos nx, sin nx, …**

are the integrals: **∫_−π^π 1² dx = 2π, ∫_−π^π cos² nx dx = π, ∫_−π^π sin² nx dx = π**

i.e., the vectors of our sequence are orthogonal, but not normalized. The length of the first vector of the sequence is equal to **√(2π)**, and all the others are of length **√π**. When we divide every vector by its length, we obtain the orthonormal system of trigonometric functions: **1/√(2π), cos x/√π, sin x/√π, cos 2x/√π, sin 2x/√π, …**
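These orthogonality and normalization claims can be checked by direct numerical integration; the sketch below (an addition, using a midpoint rule over the full period) spot-checks a few of them:

```python
import numpy as np

# Spot-check orthogonality and normalization of the trigonometric
# system on (-pi, pi) by numerical integration (midpoint rule).
n = 100_000
h = 2 * np.pi / n
x = -np.pi + (np.arange(n) + 0.5) * h
integral = lambda y: np.sum(y) * h

cross1 = integral(np.cos(x) * np.sin(x))             # ~ 0: orthogonal
cross2 = integral(np.cos(x) * np.cos(2 * x))         # ~ 0: orthogonal
len_sq_cos = integral(np.cos(x) ** 2)                # ~ pi: length sqrt(pi)
unit = integral((np.cos(x) / np.sqrt(np.pi)) ** 2)   # ~ 1 after normalizing
```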

This system is historically one of the first and most important examples of orthogonal systems. It appeared in the works of Euler, D. Bernoulli, and d’Alembert in connection with problems on the oscillations of strings. The study of it plays an essential role in the development of the whole of analysis.

The appearance of the orthogonal system of trigonometric functions in connection with problems on oscillations of strings is not accidental. Every problem on small oscillations of a medium leads to a certain system of orthogonal functions that describe the so-called characteristic oscillations of the given system. For example, in connection with problems on the oscillations of a sphere there appear the so-called spherical functions, in connection with problems on the oscillations of a circular membrane or a cylinder there appear the so-called cylinder functions, etc.

2. We can give an example of an orthogonal system of functions in which every function is a polynomial. Such an example is the sequence of Legendre polynomials: **Pn(x) = (1/(2ⁿ·n!)) dⁿ/dxⁿ (x² − 1)ⁿ** i.e., Pn(x) is (apart from a constant factor) the nth derivative of (x² − 1)ⁿ. Let us write down the first few polynomials of this sequence: **P0(x) = 1, P1(x) = x, P2(x) = (3x² − 1)/2, P3(x) = (5x³ − 3x)/2**

Obviously Pn(x) is a polynomial of degree n. We leave it to the reader to convince himself that these polynomials form an orthogonal sequence on the interval (−1, 1).
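The reader with a computer at hand can convince himself numerically; the sketch below (an addition) checks one off-diagonal and one diagonal integral using NumPy's Legendre module, in the standard normalization where ∫P_n² dx = 2/(2n+1):

```python
import numpy as np
from numpy.polynomial import legendre

# Check orthogonality of Legendre polynomials on (-1, 1) numerically:
# integral of P2*P3 is 0; integral of P3*P3 is 2/(2*3+1) = 2/7.
n = 200_000
h = 2.0 / n
x = -1.0 + (np.arange(n) + 0.5) * h   # midpoint grid on (-1, 1)

def P(k):
    # coefficient list [0,...,0,1] selects the k-th Legendre polynomial
    return legendre.legval(x, [0.0] * k + [1.0])

off_diag = np.sum(P(2) * P(3)) * h    # ~ 0
diag = np.sum(P(3) * P(3)) * h        # ~ 2/7
```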

Expansion by orthogonal systems of functions. Just as in three-dimensional space every vector can be represented in the form of a linear combination of three pairwise orthogonal vectors e1, e2, e3 of unit length: **ƒ = a1e1 + a2e2 + a3e3** so in a functional space there arises the problem of the decomposition of an arbitrary function f in a series with respect to an orthonormal system of functions, i.e., of the representation of f in the form: **ƒ(x) = a1ϕ1(x) + a2ϕ2(x) + ··· + anϕn(x) + ··· (15)**

Here the convergence of the series (15) to the function f has to be understood in the sense of the distance between elements in Hilbert space. This means that the mean-square deviation of the partial sum sn(x) = a1ϕ1(x) + a2ϕ2(x) + ··· + anϕn(x) from f(x) tends to zero as n increases: **∫_a^b [ƒ(x) − sn(x)]² dx → 0 for n → ∞**

This convergence is usually called “convergence in the mean.”

Expansions in various systems of orthogonal functions often occur in analysis and are an important method for the solution of problems of mathematical physics. For example, if the orthogonal system is the system of trigonometric functions on the interval (−π, π): **1, cos x, sin x, cos 2x, sin 2x, …, cos nx, sin nx, …** then this expansion is the classical expansion of a function in a trigonometric series: **ƒ(x) = a0/2 + a1 cos x + b1 sin x + a2 cos 2x + b2 sin 2x + ···**

Let us assume that an expansion (15) is possible for every function f of a Hilbert space, and let us find its coefficients an. For this purpose we multiply both sides of the equation scalarly by one and the same function ϕm of our system. We obtain the equation: **(ƒ, ϕm) = am** since (ϕk, ϕm) = 0 for k ≠ m and (ϕm, ϕm) = 1. We see that, as in ordinary three-dimensional space (see the beginning of this section), the coefficients am are equal to the projections of the vector f in the direction of the vectors ϕm.

Recalling the definition of the scalar product, we see that the coefficients of the expansion of f(x) by the normal orthogonal system of functions ϕ1(x), ϕ2(x), ···, ϕn(x), ··· are: **an = (ƒ, ϕn) = ∫_a^b ƒ(x) ϕn(x) dx (18)**

As an example let us consider the normal orthogonal trigonometric system of functions mentioned previously. Then the coefficients of the trigonometric series are: **an = (1/π) ∫_−π^π ƒ(x) cos nx dx, bn = (1/π) ∫_−π^π ƒ(x) sin nx dx**

So we have obtained the formula for the computation of the coefficients of the expansion of a function in trigonometric series, assuming of course that this expansion is possible.
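As a numerical sketch (an addition to the text), the coefficients can be computed by quadrature and the "convergence in the mean" observed directly; the test function f(x) = x is an illustrative choice:

```python
import numpy as np

# Trigonometric coefficients a_k = (1/pi) * integral of f*cos(kx),
# b_k = (1/pi) * integral of f*sin(kx), for f(x) = x on (-pi, pi),
# and the mean-square error of the partial sums.
m = 100_000
h = 2 * np.pi / m
x = -np.pi + (np.arange(m) + 0.5) * h
f = x.copy()

def partial_sum(N):
    s = np.full_like(x, np.sum(f) * h / (2 * np.pi))   # a0/2 term
    for k in range(1, N + 1):
        a_k = np.sum(f * np.cos(k * x)) * h / np.pi
        b_k = np.sum(f * np.sin(k * x)) * h / np.pi
        s += a_k * np.cos(k * x) + b_k * np.sin(k * x)
    return s

err5 = np.sum((f - partial_sum(5)) ** 2) * h    # mean-square error, 5 terms
err50 = np.sum((f - partial_sum(50)) ** 2) * h  # noticeably smaller, 50 terms
```

The mean-square deviation shrinks as more terms are included, which is exactly convergence in the sense of distance in Hilbert space.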

We have established the form of the coefficients (18) of the expansion of the function f(x) by an orthogonal system of functions under the assumption that this expansion holds. However, an infinite orthogonal system of functions ϕ1, ϕ2, ···, ϕn, ··· may turn out to be insufficient for every function of a Hilbert space to have such an expansion. For such an expansion to be possible, the system of orthogonal functions must satisfy an additional condition, namely the so-called condition of completeness.

An orthogonal system of functions is called complete if it is impossible to add to it even one function, not identically equal to zero, that is orthogonal to all the functions of the system.

It is easy to give an example of an incomplete orthogonal system. For this purpose we choose an arbitrary orthogonal system, for example that of the trigonometric functions, and remove one of the functions of the system, for example cos x. The remaining infinite system of functions: **1, sin x, cos 2x, sin 2x, …, cos nx, sin nx, …**

is orthogonal as before, but of course it is not complete, since the function cos x which we have excluded is orthogonal to all the functions of the system.

If a system of functions is incomplete, then not every function of a Hilbert space can be expanded by it. For if we attempt to expand by such a system a nonzero function f0(x) that is orthogonal to all the functions of the system, then by (18) all the coefficients turn out to be zero, whereas the function f0(x) is not equal to zero.

The following theorem holds: If a complete orthonormal system of functions ϕ1(x), ϕ2(x), ···, ϕn(x), ··· in a Hilbert space is given, then every function f(x) can be expanded in a series by functions of this system: **ƒ(x) = a1ϕ1(x) + a2ϕ2(x) + ··· + anϕn(x) + ··· (15)**

Here the coefficients an of the expansion are equal to the projections of the vector f on the elements of the normal orthogonal system: **an = (ƒ, ϕn) = ∫_a^b ƒ(x) ϕn(x) dx**

Pythagoras’ theorem in Hilbert space, which was established in §2, enables us to find an interesting relation between the coefficients ak and the function f(x). We denote by rn(x) the difference between f(x) and the sum of the first n terms of its series; i.e.: **rn(x) = ƒ(x) − [a1ϕ1(x) + a2ϕ2(x) + ··· + anϕn(x)]**

The function rn(x) is orthogonal to **ϕ1(x), ϕ2(x), ···, ϕn(x)**. Let us verify, for example, that it is orthogonal to ϕ1(x), i.e., that: **∫_a^b rn(x) ϕ1(x) dx = 0** Indeed, this integral is equal to (ƒ, ϕ1) − a1(ϕ1, ϕ1) = a1 − a1 = 0, since the system is orthonormal.

Thus in the equation: **ƒ(x) = rn(x) + a1ϕ1(x) + a2ϕ2(x) + ··· + anϕn(x) (19)** the individual terms on the right-hand side are orthogonal to each other. Hence, by Pythagoras’ theorem as formulated in §1, the square of the length of f(x) is equal to the sum of the squares of the lengths of the summands on the right-hand side in (19); i.e.: **∫_a^b ƒ²(x) dx = ∫_a^b rn²(x) dx + a1² ∫_a^b ϕ1²(x) dx + ··· + an² ∫_a^b ϕn²(x) dx** Since the system of functions ϕ1, ϕ2, ···, ϕn is normalized [equation (14)], we have: **∫_a^b ƒ²(x) dx = ∫_a^b rn²(x) dx + a1² + a2² + ··· + an²** and since for a complete system ∫_a^b rn²(x) dx → 0 as n → ∞, in the limit we obtain: **∫_a^b ƒ²(x) dx = a1² + a2² + ··· + an² + ··· (21)**

This relation states that the integral of the square of a function is equal to the sum of the squares of the coefficients of its expansion by a closed orthogonal system of functions. If the condition (21) holds for an arbitrary function of the Hilbert space, it is called the condition of completeness.

We wish to draw attention to the following important question. Which numbers ak can be the coefficients of the expansion of a function in Hilbert space? The equation (21) asserts that for this purpose the series: **a1² + a2² + ··· + an² + ···**

must converge. Now it turns out that this condition is also sufficient; i.e., a sequence of numbers ak is the sequence of coefficients of the expansion by an orthogonal system of functions in Hilbert space if and only if the series converges.
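Relation (21) can also be watched taking shape numerically. The sketch below (an addition, with the illustrative choice f(x) = x, which is odd, so only the sine coefficients survive) accumulates squared coefficients and compares them with ∫f² dx = 2π³/3:

```python
import numpy as np

# Parseval/completeness relation for f(x) = x on (-pi, pi): the sum of
# squared coefficients in the orthonormal trigonometric system
# approaches the integral of f^2, which is 2*pi^3/3.
m = 200_000
h = 2 * np.pi / m
x = -np.pi + (np.arange(m) + 0.5) * h
f = x

# coefficients against the normalized vectors sin(kx)/sqrt(pi);
# the cosine coefficients vanish because f is odd
coeff_sq = [(np.sum(f * np.sin(k * x) / np.sqrt(np.pi)) * h) ** 2
            for k in range(1, 201)]
partial = sum(coeff_sq)
target = 2 * np.pi ** 3 / 3
```

With 200 coefficients the partial sum is already within a fraction of a percent of the integral, and it always stays below it (Bessel's inequality).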

We remark that this fundamental theorem holds if Hilbert space is interpreted as the collection of all functions with integrable square in the sense of Lebesgue (see §2). If we were to confine ourselves in Hilbert space, for example, to the continuous functions, then the solution of the problem as to which numbers ak can be the coefficients of an expansion would become unnecessarily complicated.

The arguments given here are only one of the reasons that have led to the use of an integral in a generalized (Lebesgue) sense in the definition of Hilbert space.

**Integral Equations**

In this section the reader will become acquainted with one of the most important and, historically, one of the first branches of functional analysis, namely the theory of integral equations, which has also played an essential role in the subsequent development of functional analysis. Quite apart from internal requirements of mathematics [for example, boundary problems for partial differential equations (Chapter VI)], various problems of physics were of great importance in the development of the theory of integral equations. Side by side with differential equations, the integral equations are, in the 20th century, one of the most important means of the mathematical investigation of various problems of physics. In this section we shall give a certain amount of information concerning the theory of integral equations. The facts we shall explain here are closely connected and have essentially sprung up (directly or indirectly) in connection with the study of small oscillations of elastic systems.

**The problem of small oscillations of elastic systems.**

We return to the problem of small oscillations. Let us find equations that describe such oscillations. For the sake of simplicity we assume that we are dealing with the oscillation of a linear elastic system. As examples of such systems we can take, say, a string of length l or an elastic rod. We shall assume that in the position of equilibrium our elastic system is situated along the segment (0, l) of the x-axis. We apply a unit force at the point x. Under the action of this force all the points of the system receive a certain displacement. The displacement arising at the point y (figure 8) is denoted by k(x, y).

The function k(x, y) is a function of two points: the point x at which the force is applied, and the point y at which we measure the displacement. It is called the influence function (Green’s function).

From the law of conservation of energy, we can deduce an important property of the Green’s function k(x, y), namely the so-called reciprocity law: The displacement arising at the point y under the action of a force applied at the point x is equal to the displacement arising at the point x under the action of the same force applied at the point y. In other words, this means that: **k(x, y) = k(y, x) (22)**

Let us find, for example, the Green’s function for the longitudinal oscillations of an elastic rod (figure 8 illustrated transverse displacements). We consider a rod AB of length l fixed at the ends (figure 9). At the point C we apply a force f acting in the direction of B. Under the action of this force the rod is deformed and the point C is shifted into the position C′. We denote the magnitude of the shift of C by h. Let us find the value of h. By means of h we can then find the shift at an arbitrary point y. For this purpose we shall make use of Hooke’s law, which states that the force is proportional to the relative extension (i.e., to the ratio of the amount of displacement to the length). A similar relation holds for compressions.

Under the action of the force f the part AC of the rod is stretched. We denote the reaction arising here by T1. At the same time the part CB of the rod is compressed, giving rise to a reaction T2. By Hooke’s law: **T1 = κ·h/x, T2 = κ·h/(l − x)** where κ is the coefficient of proportionality that characterizes the elastic properties of the rod. The equilibrium of the forces acting at the point C gives us: **T1 + T2 = ƒ, hence h = ƒ·x(l − x)/(κl)**

In order to find the displacement arising at a certain point y on the segment AC, i.e., for y < x, we note that it follows from Hooke’s law that under an extension of the rod the relative extension (i.e., the ratio of the displacement of the point to its distance from the fixed end) does not depend on the position of the point. We denote the displacement of the point y by k.

Then by comparing the relative displacements at the points x and y we obtain: **k/y = h/x, i.e., k = ƒ·y(l − x)/(κl)**

Similarly, if the point lies on the segment CB (y > x), we obtain: **k = ƒ·x(l − y)/(κl)**

Bearing in mind that the Green’s function k(x, y) is the displacement at the point y under the action of a unit force applied at the point x, we see that for the longitudinal oscillations of an elastic rod the Green’s function has the form: **k(x, y) = y(l − x)/(κl) for y ≤ x; k(x, y) = x(l − y)/(κl) for y ≥ x**

In a more or less similar way we could have found the Green’s function for a string. If the tension of the string is T and the length l, then under the action of a unit force applied at the point x the string assumes the form illustrated in figure 8, and the displacement k(x, y) at the point y is given by the formula: **k(x, y) = y(l − x)/(Tl) for y ≤ x; k(x, y) = x(l − y)/(Tl) for y ≥ x** which coincides with the Green’s function for the rod that we have derived, with T in place of κ.
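The piecewise Green's function and its reciprocity law can be coded in a few lines; this is a sketch with illustrative defaults L = T = 1 (the symbols follow the text, with T standing for the tension of the string or the elastic coefficient κ of the rod):

```python
# Green's function of a string (or rod) of length L with fixed ends:
# the displacement at y due to a unit force applied at x.
def k(x, y, L=1.0, T=1.0):
    if y <= x:
        return y * (L - x) / (T * L)
    return x * (L - y) / (T * L)

# reciprocity law (22): k(x, y) = k(y, x)
symmetric = k(0.3, 0.7) == k(0.7, 0.3)
```

The symmetry holds exactly because the two branches swap into each other when x and y are exchanged.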

In terms of the Green’s function we can express the displacement of the system from its position of equilibrium provided that it is acted upon by a continuously distributed force of density f(y). Since on an interval of length Δy there acts a force f(y) Δy, which we can regard approximately as concentrated at the point y, under the action of this force at the point x there arises a displacement k(x, y)f(y) Δy. The displacement under the action of the whole load is approximately equal to the sum: **∑ k (x,y) ƒ(y)∆y.**

Passing to the limit for Δy → 0, we see that the displacement u(x) at the point x under the action of the force f(y) distributed along the system is given by the formula: **u(x) = ∫_0^l k(x, y) ƒ(y) dy (23)**

Let us assume that our elastic system is not subject to the action of external forces. If it is displaced from its position of equilibrium, it then begins to move. These motions are called the free oscillations of the system.

Now let us write down in terms of the Green’s function k(x, y) the equation that the free oscillations of the elastic system in question have to obey. For this purpose we denote by u(x, t) the displacement from the position of equilibrium at the point x and the instant of time t. Then the acceleration of x at the time t is equal to ∂2u(x, t)/∂t2.

If ρ is the linear density of the system, i.e., ρ dy the mass of the element of length dy, then by a fundamental law of mechanics we obtain the equation of motion by replacing in (23) the force f(y) dy by the product of the mass and the acceleration, [∂²u(y, t)/∂t²] ρ dy, taken with the opposite sign.

Thus, the equation of the free oscillations has the form: **u(x, t) = −ρ ∫_0^l k(x, y) ∂²u(y, t)/∂t² dy**

An important role in the theory of oscillations is played by the so-called harmonic oscillations of the elastic system, i.e., the motions for which: **u(x, t) = U(x) sin ωt**

They are characterized by the fact that every fixed point performs harmonic oscillations (moves according to a sinusoidal law) with a certain frequency ω, and that this frequency is one and the same for all the points x.

Later on we shall see that every free oscillation is composed of harmonic oscillations. We set: **u (x,t) = U (x) sin ωt**

in the equation of the free oscillations and cancel sin ωt. Then we obtain the following equation to determine the function U(x): **U(x) = ρω² ∫_0^l k(x, y) U(y) dy (24)** Such an equation is called a homogeneous integral equation for the function U(x).

Obviously the equation (24) has for every ω the uninteresting solution U(x) ≡ 0, which corresponds to the state of rest. Those values of ω for which there exist other solutions of the equation (24), different from zero, are called the eigenfrequencies of the system.

Since nonzero solutions do not exist for every value of ω, the system can perform free oscillations only with definite frequencies. The smallest of these is called the fundamental tone of the system, and the remaining ones are overtones.

Now it turns out that for every system there exists an infinite sequence of eigenfrequencies, the so-called frequency spectrum: ω1,ω2,…,ωn,…

The nonzero solution Un(x) of the equation (24) corresponding to the eigenfrequency ωn gives us the form of the corresponding characteristic oscillation.

For example, if the elastic system is a string stretched between the points 0 and l and fastened at these points, then the possible frequencies of the characteristic oscillations of the system are equal to: **ωn = nπa/l**

where a is a coefficient depending on the density and the tension of the string, namely: **a = √(T/ρ)**

The fundamental tone is here **ω1 = a(π/l)**, and the overtones are **ω2 = 2ω1, ω3 = 3ω1, ···, ωn = nω1**. The form of the corresponding harmonic oscillations is given by the equation: **Un(x) = sin (nπx/l)**

They are illustrated for n = 1, 2, 3, 4 in figure 10.
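These eigenfrequencies can be recovered by discretizing the integral equation itself; the sketch below (a Nyström-type discretization, an addition with the illustrative choice T = ρ = l = 1, so that ωn = nπ) turns the kernel into a symmetric matrix and reads ω off its eigenvalues:

```python
import numpy as np

# Discretize U = rho*omega^2 * integral of k(x,y) U(y) dy for a string
# with T = rho = 1, l = 1 on a midpoint grid. The eigenvalues mu of the
# kernel matrix give omega = 1/sqrt(mu), approximately n*pi.
n = 400
h = 1.0 / n
x = (np.arange(n) + 0.5) * h
X, Y = np.meshgrid(x, x, indexing="ij")
K = np.where(Y <= X, Y * (1 - X), X * (1 - Y)) * h   # k(x,y) * dy

mu = np.sort(np.linalg.eigvalsh(K))[::-1]   # real: the kernel is symmetric
omegas = 1.0 / np.sqrt(mu[:3])              # approx pi, 2*pi, 3*pi
```

The corresponding eigenvectors approximate the sine shapes sin(nπx) of figure 10.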

So far we have discussed free oscillations of elastic systems. Now if an exterior harmonic force acts on the elastic system during the motion, then, in determining the harmonic oscillations under the action of this force, we arrive for the function u(x) at the so-called inhomogeneous integral equation: **u(x) = ρω² ∫_0^l k(x, y) u(y) dy + h(x)** where h(x) is determined by the external force.

**Properties of integral equations.**

Previously we have become acquainted with examples of integral equations: **ƒ(x) = λ ∫_a^b k(x, y) ƒ(y) dy (26)** and **ƒ(x) = λ ∫_a^b k(x, y) ƒ(y) dy + h(x) (27)** the first of which was obtained in the solution of the problem on the free oscillations of an elastic system, and the second in the discussion of forced oscillations, i.e., oscillations under the action of external forces.

The unknown function in these equations is f(x). The given function k(x, y) is called the kernel of the integral equation. The equation (27) is called an inhomogeneous linear integral equation, and the equation (26) is homogeneous. It is obtained from the inhomogeneous one by setting h(x) = 0.

It is clear that the homogeneous equation always has the zero solution, i.e., the solution f(x) ≡ 0. A close connection exists between the solutions of the inhomogeneous and the homogeneous integral equations. By way of example we mention the following theorem: If the homogeneous integral equation has only the zero solution, then the corresponding inhomogeneous equation is soluble for every function h(x).

If for a certain value λ a homogeneous equation has the solution f(x), not identically equal to zero, then this value λ is called an eigenvalue and the corresponding solution f(x) an eigenfunction. We have seen earlier that when an integral equation describes the free oscillations of an elastic system, then the eigenvalues are closely connected with the frequencies of the oscillations of the system (namely λ = ρω2). The eigenfunctions then give the form of the corresponding harmonic oscillations.

In the problems on oscillations it followed from the law of conservation of energy that: **k(x, y) = k(y, x) (28)**

A kernel satisfying the condition (28) is called symmetric.

The eigenfunctions and eigenvalues of an equation with a symmetric kernel have a number of important properties. One can prove that such an equation always has a sequence of real eigenvalues: **λ1, λ2, ···, λn, ···**

To every eigenvalue there correspond one or several eigenfunctions. Here eigenfunctions corresponding to distinct eigenvalues are always orthogonal to each other.*
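Both properties are visible after discretization, where a symmetric kernel becomes a symmetric matrix; the sketch below (an addition, with the illustrative kernel k(x, y) = min(x, y)) checks that the eigenvalues come out real and the eigenvectors pairwise orthogonal:

```python
import numpy as np

# A symmetric kernel, sampled on a grid, is a symmetric matrix: its
# eigenvalues are real and eigenvectors of distinct eigenvalues are
# orthogonal, mirroring the properties stated for integral equations.
n = 300
x = np.linspace(1.0 / n, 1.0, n)
K = np.minimum.outer(x, x)              # K[i, j] = min(x_i, x_j), symmetric

vals, vecs = np.linalg.eigh(K)          # real eigenvalues, orthonormal columns
dot01 = np.dot(vecs[:, 0], vecs[:, 1])  # ~ 0: orthogonal "eigenfunctions"
```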

Thus, for every integral equation with a symmetric kernel the system of eigenfunctions is an orthogonal system of functions. There arises the question of when this system is complete, i.e., when every function of the Hilbert space can be expanded in a series by a system of eigenfunctions of the integral equation. In particular, if the equation: **∫_a^b k(x, y) ƒ(y) dy = 0** is satisfied only for f(y) ≡ 0, then the system of eigenfunctions of the integral equation: **ƒ(x) = λ ∫_a^b k(x, y) ƒ(y) dy**

is a complete orthogonal system.

Thus, every function f(x) with integrable square can in this case be expanded in a series by eigenfunctions. By discussing various types of integral equations, we obtain a general and powerful method of proving that various important orthogonal systems are closed, i.e., that the functions are expandable in series by orthogonal functions. By this method we can prove the completeness of the system of trigonometric functions, of cylinder functions, spherical functions, and many other important systems of functions.

The fact that an arbitrary function can be expanded in a series by eigenfunctions means in the case of oscillations that every oscillation can be decomposed into a sum of harmonic oscillations. Such a decomposition yields a method that is widely applicable in solving problems on oscillations in various domains of mechanics and physics (oscillations of elastic bodies, acoustic oscillations, electromagnetic waves, etc.).

The development of the theory of linear integral equations gave the impetus to the creation of the general theory of linear operators of which the theory of linear integral equations forms an organic part. In the last few decades the general methods of the theory of linear operators have vigorously contributed to the further development of the theory of integral equations.

**Linear Operators and Further Developments of Functional Analysis**

In the preceding section we have seen that problems on the oscillations of an elastic system lead to the search for the eigenvalues and eigenfunctions of integral equations. Let us note that these problems can also be reduced to the investigation of the eigenvalues and eigenfunctions of linear differential equations.* Many other physical problems also lead to the task of computing the eigenvalues and eigenfunctions of linear differential or integral equations.

Let us give one more example. In modern radio technology the so-called wave guides are widely used for the transmission of electromagnetic oscillations of high frequencies, i.e., hollow metallic tubes in which electromagnetic waves are propagated. It is known that in a wave guide only electromagnetic oscillations of not too large a wave length can be propagated. The search for the critical wave length amounts to a problem on the eigenvalues of a certain differential equation.

Problems on eigenvalues occur, moreover, in linear algebra, in the theory of ordinary differential equations, in questions of stability, etc.

So it became necessary to discuss all these related problems from one single point of view. This common point of view is the general theory of linear operators. Many problems on eigenfunctions and eigenvalues in various concrete cases came to be fully understood only in the light of the general theory of operators. Thus, in this and a number of other directions, the general theory of operators turned out to be a very fruitful research tool in those domains of mathematics in which it is applicable.

In the subsequent development of the theory of operators, quantum mechanics played a very important role, since it makes extensive use of the methods of the theory of operators. The fundamental mathematical apparatus of quantum mechanics is the theory of the so-called self-adjoint operators. The formulation of mathematical problems arising in quantum mechanics was and still is a powerful stimulus for the further development of functional analysis.

The operator point of view on differential and integral equations turned out to be extremely useful also for the development of practical methods for approximate solutions of such equations.

**Fundamental concepts of the theory of operators.**

Let us now proceed to an explanation of the fundamental definitions and facts in the theory of operators.

In analysis we have come across the concept of a function. In its simplest form this was a relation that associates with every number x (the value of the independent variable) a number y (the value of the function). In the further development of analysis it became necessary to consider relations of a more general type.

Such more general relations are discussed, for example, in the calculus of variations, where we associate with every function a number. If with every function a certain number is associated, then we say that we are given a functional. As an example of a functional we can take the association between an arbitrary function: **y = ƒ(x) (a ≤ x ≤ b)** and the arc length of the curve represented by it. We obtain another example of a functional if we associate with every function: **y = ƒ(x) (a ≤ x ≤ b)** its definite integral: **∫_a^b ƒ(x) dx**

If we regard f(x) as a point of an infinite-dimensional space, then a functional is simply a function of the points of the infinite-dimensional space. From this point of view the problems of the calculus of variations concern the search for maxima and minima of functions of the points of an infinite-dimensional space.

In order to define what we mean by a continuous functional it is necessary first to define what we mean by proximity of two points of an infinite-dimensional space. In §2 we gave the distance between two functions f(x) and g(x) (points of an infinite-dimensional space) as: **√( ∫_a^b [ƒ(x) − g(x)]² dx )**

This method of assigning a distance in infinite-dimensional space is often used, but of course it is not the only possible one. In other problems other methods of giving the distance between functions may turn out to be better. We may point, for example, to the theory of approximation of functions (see Chapter XII, §3), where the distance between functions, which characterizes the measure of proximity of the two functions f(x) and g(x), is given, for example, by the formula: **max |ƒ(x) − g(x)|**

Other methods of giving a distance between functions are used in the investigation of functionals in the calculus of variations. Distinct methods of giving the distance between functions lead us to distinct infinite-dimensional spaces.

Thus, various infinite-dimensional (functional) spaces differ from each other by their set of functions and by the definition of distance between them. For example, if we take the set of all functions with integrable square and define distance as: **√( ∫_a^b [ƒ(x) − g(x)]² dx )** then we arrive at the Hilbert space that was introduced in §2; but if we take the set of all continuous functions and define distance as max | f(x) − g(x) |, then we obtain the so-called space C. For a given kernel k(x, y), the formula: **g(x) = ∫_a^b k(x, y) ƒ(y) dy** indicates a rule by which every function f(x) is set in correspondence with another function g(x).

Such a correspondence, which assigns to one function f another function g, is called an operator.

We shall say that we are given a linear operator A in a Hilbert space if we have a rule by which we associate with every function f another function g. The correspondence need not be given for all the functions of the Hilbert space. In that case the set of those functions f for which there exists the function g = Af is called the domain of definition of the operator A (similar to the domain of definition of a function in ordinary analysis). The correspondence itself is usually denoted as follows: **g = Aƒ**

The linearity of the operator means that the sum of the functions f1 and f2 is associated with the sum of Af1 and Af2, and the product of f and a number λ with the function λAf; i.e.: **A(ƒ1 + ƒ2) = Aƒ1 + Aƒ2, A(λƒ) = λAƒ**

Occasionally continuity is also postulated for linear operators; i.e., it is required that the convergence of a sequence of functions fn to a function f should imply that the sequence Afn converges to Af.

Let us give examples of linear operators.

1. Let us associate with every function f(x) the function: **F(x) = ∫₀ˣ ƒ(t) dt**

i.e., the indefinite integral of f. The linearity of this operator follows from the ordinary properties of the integral, i.e., from the fact that the integral of the sum is equal to the sum of the integrals and that a constant factor can be taken out of the integral sign.
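
A quick numerical check of this linearity (my own sketch, not from the text, assuming NumPy; the grid and the test functions are arbitrary choices):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 501)
dx = x[1] - x[0]

def A(f):
    # (Af)(x) = integral of f from 0 to x, computed by the trapezoid rule
    return np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) * dx / 2)))

f1, f2, lam = np.cos(x), x ** 2, 3.0
additive_ok = np.allclose(A(f1 + f2), A(f1) + A(f2))     # A(f1+f2) = Af1 + Af2
homogeneous_ok = np.allclose(A(lam * f1), lam * A(f1))   # A(λf) = λ Af
print(additive_ok, homogeneous_ok)
```

Both checks succeed to machine precision, exactly because the integral of a sum is the sum of the integrals and a constant factor can be taken out of the integral sign.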

2. Let us associate with every differentiable function f(x) its derivative f′(x). This operator is usually denoted by the letter D; i.e.: **ƒ′(x) = Dƒ(x)**

Observe that this operator is not defined for all the functions of the Hilbert space but only for those that have a derivative belonging to the Hilbert space. These functions form, as we have said previously, the domain of definition of this operator.

3. The examples 1 and 2 were examples of linear operators in an infinite-dimensional space. But examples of linear operators in finite-dimensional spaces have occurred in other chapters of this book; thus, affine transformations were investigated there. If an affine transformation of a plane or of space leaves the origin of coordinates fixed, then it is an example of a linear operator in two-dimensional or three-dimensional space. The linear transformations of an n-dimensional space now appear as linear operators in n-dimensional space.

4. In the integral equations we have already met a very important and widely applicable class of linear operators in a functional space, namely the so-called integral operators. Let us choose a certain definite function k(x, y). Then the formula: **g(x) = ∫k(x, y)ƒ(y) dy**

associates with every function f a certain function g. Symbolically we can write this transformation as follows:

**g=Aƒ**

The operator A in this case is called an integral operator. We could mention many other important examples of integral operators.
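
On a grid, such an integral operator becomes plain matrix-vector multiplication. A hedged sketch (my own illustration, not from the text; the kernel k(x, y) = min(x, y) and the midpoint quadrature are assumptions made for the example):

```python
import numpy as np

n = 200
y = (np.arange(n) + 0.5) / n          # midpoint grid on [0, 1]
K = np.minimum.outer(y, y)            # symmetric kernel k(x, y) = min(x, y)
A = K / n                             # quadrature: (Af)(x_i) ≈ Σ_j k(x_i, y_j) f(y_j) / n

f = np.sin(np.pi * y)
g = A @ f                             # g = Af, the action of the integral operator
print(g[:3])
```

The matrix A inherits the symmetry of the kernel, a point that matters below when self-adjointness is discussed.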

In §4 we spoke of the inhomogeneous integral equation:

In the notation of the theory of operators this equation can be rewritten as follows:

**ƒ = λAƒ + h (33)** where λ is a given number, h a given function (a vector of an infinite-dimensional space), and f the required function. In the same notation the homogeneous equation can be written as follows: **ƒ = λAƒ (34)**.
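
Equation (33) can be solved numerically by replacing A with a matrix and solving the finite linear system (I − λA)f = h. A minimal sketch (my own illustration; the kernel min(x, y), the midpoint quadrature, and the choices of λ and h are all assumptions for the example):

```python
import numpy as np

n = 200
y = (np.arange(n) + 0.5) / n
A = np.minimum.outer(y, y) / n        # discretized integral operator (assumed kernel)

lam = 0.5                             # the given number λ
h = np.ones(n)                        # the given function h (arbitrary choice)
f = np.linalg.solve(np.eye(n) - lam * A, h)   # solves f = λ A f + h

residual = np.max(np.abs(f - lam * (A @ f) - h))   # how well (33) is satisfied
print(residual)
```

The residual is at machine-precision level, confirming that the discretized f satisfies equation (33) on the grid.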

The classical theorems on integral equations, such as, for example, the theorem formulated in §4 on the connection between the solvability of the inhomogeneous and the corresponding homogeneous integral equation, are not true for every operator equation. However, one can indicate certain general conditions to be imposed on the operator A under which these theorems are true.

These conditions are stated in topological terms and express that the operator A should carry the unit sphere (i.e., the set of vectors whose length does not exceed 1) into a compact set.

**Eigenvalues and eigenvectors of operators.**

The problem of eigenvalues and eigenfunctions of an integral equation, to which we were led by problems on oscillations, can be formulated as follows: to find the values λ for which there exists a nonzero function f satisfying the equation: **ƒ(x) = λ∫k(x, y)ƒ(y) dy**. As before, this equation can be written as follows: **ƒ = λAƒ, or Aƒ = (1/λ)ƒ (35)**

Now we shall understand by A an arbitrary linear operator. Then a vector f satisfying the equation (35) is called an eigenvector of the operator A, and the number 1/λ the corresponding eigenvalue. Since the vector (1/λ)ƒ coincides in direction with the vector f (differs from f only by a numerical factor), the problem of finding eigenvectors can also be stated as the problem of finding nonzero vectors f that do not change direction under the transformation A.

This way of looking at the eigenvalues enables us to unify the problem of eigenvalues of integral equations (if A is an integral operator), differential equations (if A is a differential operator), and the problem of eigenvalues in linear algebra (if A is a linear transformation in finite-dimensional space; see Chapter VI and Chapter XVI). In the case of three-dimensional space this problem arises in the search for the so-called principal axes of an ellipsoid.

In the case of integral equations a number of important properties of the eigenfunctions and eigenvalues (for example the reality of the eigenvalues, the orthogonality of the eigenfunctions, etc.) are consequences of the symmetry of the kernel, i.e., of the equation k(x, y) = k(y, x).

For an arbitrary linear operator A in a Hilbert space the analogue of this property is the so-called self-adjointness of the operator.

The condition for an operator A to be self-adjoint in the general case is that for any two elements f1 and f2 the equation:

**(Aƒ1,ƒ2)=(ƒ1,Aƒ2) **holds, where (Af1, f2) denotes the scalar product of the vector Af1 and the vector f2.

In problems of mechanics the condition of self-adjointness of an operator is usually a consequence of the law of conservation of energy. Therefore it is satisfied for operators connected with, say, oscillations for which there is no loss (dissipation) of energy.

The majority of operators that occur in quantum mechanics are also “self-adjoint.”

Let us verify that an integral operator with a symmetric kernel k(x, y) is self-adjoint. In fact, in this case Aƒ1 is the function: **∫k(x, y)ƒ1(y) dy**

Therefore the scalar product (Aƒ1, ƒ2), which is equal to the integral of the product of this function with ƒ2, is given by the formula: **(Aƒ1, ƒ2) = ∫∫k(x, y)ƒ1(y)ƒ2(x) dy dx**

The equation (Af1, f2) = (f1, Af2) is an immediate consequence of the symmetry of the kernel k(x, y).

Arbitrary self-adjoint operators have a number of important properties that are useful in the applications of these operators to the solution of a variety of problems. Indeed, the eigenvalues of a self-adjoint linear operator are always real and the eigenfunctions corresponding to distinct eigenvalues are orthogonal to each other.

Let us prove, for example, the last statement. Let λ1 and λ2 be two distinct eigenvalues of the operator A, and f1 and f2 eigenvectors corresponding to them. This means that: **Aƒ1 = λ1ƒ1, Aƒ2 = λ2ƒ2 (36)**

We form the scalar product of the first equation (36) with f2, and of the second with f1. Then we have: **(Aƒ1, ƒ2) = λ1(ƒ1, ƒ2), (Aƒ2, ƒ1) = λ2(ƒ2, ƒ1) (37)**

Since the operator A is self-adjoint, we have** (Af1, f2) = (Af2, f1).** When we subtract the second equation (37) from the first, we obtain:

**0 = (λ1-λ2) ( ƒ1, ƒ2).**

Since λ1 ≠ λ2, we have (f1, f2) = 0, i.e., the eigenvectors f1 and f2 are orthogonal.
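
The statement just proved can be illustrated numerically (my own sketch, not from the text, assuming NumPy): for a randomly generated symmetric matrix, standing in for a self-adjoint operator, the computed eigenvalues are real and the eigenvectors form an orthonormal system.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                     # a symmetric ("self-adjoint") matrix

vals, vecs = np.linalg.eigh(A)        # eigh exploits symmetry; vals come out real
orth_err = np.max(np.abs(vecs.T @ vecs - np.eye(n)))  # deviation from orthonormality
print(vals.dtype, orth_err)
```

The printed dtype is a real floating type (no imaginary parts appear), and the deviation from orthonormality of the eigenvectors is at round-off level.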

The investigation of self-adjoint operators has brought clarity into many concrete problems and questions connected with the theory of eigenvalues. Let us dwell in more detail on one of them, namely on the problem of the expansion by eigenfunctions in the case of a continuous spectrum.

In order to explain what a continuous spectrum means, let us turn again to the classical example of the oscillation of a string. Earlier we have shown that for a string of length l the characteristic frequencies of oscillation can assume the sequence of values:

**ωₙ = nπa/l (n = 1, 2, 3, …)**

where a is a constant determined by the physical properties of the string. Let us plot the points of this sequence on the numerical axis Oλ. When we increase the length l of the string, the distance between any two adjacent points of the sequence decreases, and they fill the numerical axis more densely. In the limit, when l → ∞, i.e., for an infinite string, the eigenfrequencies fill the whole numerical semiaxis **λ ≥ 0**. In this case we say that the system has a continuous spectrum.

We have already said that for a string of length l the expansion in a series by eigenfunctions is an expansion in a series by sines and cosines of n(π/l)x, i.e., in a trigonometric series:

**ƒ(x) = ∑ₙ (aₙ cos n(π/l)x + bₙ sin n(π/l)x)**

For the case of an infinite string we can again show that a more or less arbitrary function can be expanded by sines and cosines. However, since the eigenfrequencies are now distributed continuously along the numerical line, this is not an expansion in a series, but in a so-called Fourier integral:

**ƒ(x) = ∫₀∞ (a(λ) cos λx + b(λ) sin λx) dλ**

The expansion in a Fourier integral was already well known and widely used in the 19th century in the solution of various problems of mathematical physics.
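
The convergence of such an expansion can be watched directly. A minimal sketch (my own illustration, not from the text, assuming NumPy): expand f(x) = x(l − x), which vanishes at both ends of the string, by the eigenfunctions sin(nπx/l) and compare partial sums.

```python
import numpy as np

l = 1.0
x = np.linspace(0.0, l, 2001)
dx = x[1] - x[0]
f = x * (l - x)                        # target function (my choice), f(0) = f(l) = 0

def partial_sum(N):
    s = np.zeros_like(x)
    for n in range(1, N + 1):
        phi = np.sin(n * np.pi * x / l)
        c = (2.0 / l) * np.sum(f * phi) * dx   # Fourier sine coefficient (Riemann sum)
        s += c * phi
    return s

err5 = np.max(np.abs(f - partial_sum(5)))
err50 = np.max(np.abs(f - partial_sum(50)))
print(err5, err50)
```

The maximum error shrinks markedly as more eigenfunctions are taken, which is the series-convergence picture that the Fourier integral generalizes to the infinite string.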

However, in more general cases with a continuous spectrum many problems referring to an expansion of functions by eigenfunctions were not properly clarified. Only the creation of the general theory of self-adjoint operators brought the necessary clarity to these problems.

Let us mention still another set of classical problems that have been solved on the basis of the general theory of operators. The discussion of oscillations involving dissipation (scattering) of energy belongs to such problems.

In this case we can again look for free oscillations of the system in the form u(x)ϕ(t). However, in contrast to the case of oscillations without dissipation of energy, the function ϕ(t) is not simply cos ωt, but has the form e^(−kt) cos ωt, where k > 0. Thus, the corresponding solution has the form u(x)e^(−kt) cos ωt. In this case every point x again performs oscillations (with frequency ω), but the oscillations are damped, because for t → ∞ the amplitude of these oscillations, containing the factor e^(−kt), tends to zero.

It is convenient to write the characteristic oscillations of the system in the complex form u(x)e^(−iλt), where in the absence of friction the number λ is real and in the presence of friction λ is complex.

The problem of the oscillations of a system with dissipation of energy again leads to a problem on eigenvalues, but this time not for self-adjoint operators. A characteristic feature here is the presence of complex eigenvalues indicative of the damping of the free oscillations.
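
A minimal sketch of that feature (my own illustration, not from the text, assuming NumPy; the frequency and damping values are arbitrary): writing the damped oscillation x″ + 2kx′ + ω₀²x = 0 as a first-order system, the eigenvalues come out as the complex pair −k ± i√(ω₀² − k²), whose negative real part is exactly the damping factor in e^(−kt) cos ωt.

```python
import numpy as np

omega0, k = 2.0, 0.3                  # illustrative natural frequency and damping
M = np.array([[0.0, 1.0],
              [-omega0 ** 2, -2.0 * k]])   # x'' + 2k x' + ω0² x = 0 as a 1st-order system

vals = np.linalg.eigvals(M)           # complex pair −k ± i·sqrt(ω0² − k²)
print(vals)
```

The matrix M is not symmetric, so nothing forces its eigenvalues to be real: the appearance of complex eigenvalues here is the numerical face of the non-self-adjointness described above.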

**Connection of functional analysis with other branches of mathematics and quantum mechanics. **

We have already mentioned that the creation of quantum mechanics gave a decisive impetus to the development of functional analysis. Just as the rise of the differential and integral calculus in the 18th century was dictated by the requirements of mechanics and classical physics, so the development of functional analysis was, and still is, the result of the vigorous influence of contemporary physics, principally of quantum mechanics. The fundamental mathematical apparatus of quantum mechanics consists of the branches of mathematics relating essentially to functional analysis. We can only briefly indicate the connections existing here, because an explanation of the foundations of quantum mechanics exceeds the framework of this post and we keep it for the 4th line.

In quantum mechanics the state of the system is given in its mathematical description by a vector of Hilbert space. Such quantities as energy, impulse, and moment of momentum are investigated by means of self-adjoint operators. For example, the possible energy levels of an electron in an atom are computed as eigenvalues of the energy operator. The differences of these eigenvalues give the frequencies of the emitted quantum of light and thus define the structure of the radiation spectrum of the given substance. The corresponding states of the electron are here described as eigenfunctions of the energy operator.

The solution of problems of quantum mechanics often requires the computation of eigenvalues of various (usually differential) operators. In some complicated cases the precise solution of these problems turns out to be practically impossible. For an approximate solution of these problems the so-called perturbation theory is widely used, which enables us to find from the known eigenvalues and functions of a certain self-adjoint operator A the eigenvalues of an operator A1 slightly different from it.
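
The first-order version of that idea can be sketched for symmetric matrices (my own illustration, not from the text, assuming NumPy; the random matrices and the size of the perturbation are arbitrary choices): if A1 = A + εB with ε small, each eigenvalue of A1 is approximately λᵢ + ε(vᵢ, Bvᵢ), where λᵢ, vᵢ are the known eigenvalues and eigenvectors of A.

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 30, 1e-4
B0 = rng.standard_normal((n, n)); A0 = (B0 + B0.T) / 2   # unperturbed self-adjoint operator
C0 = rng.standard_normal((n, n)); B = (C0 + C0.T) / 2    # symmetric perturbation

vals, vecs = np.linalg.eigh(A0)
# first-order perturbation theory: λ_i(A0 + εB) ≈ λ_i + ε (v_i, B v_i)
predicted = vals + eps * np.diag(vecs.T @ B @ vecs)
exact = np.linalg.eigvalsh(A0 + eps * B)
err = np.max(np.abs(predicted - exact))
print(err)
```

The discrepancy is of order ε², i.e., far smaller than the perturbation itself, which is what makes the method useful when the exact eigenvalue problem for A1 is intractable.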

In quantum physics WE are using the position basis, the momentum basis, or the energy basis, because they represent the singularity (position), the membrane (momentum) or its conversion through an SHF, a simple harmonic function, into a lineal wave of motion, and the energy or vital space between them, which are the 3 elements of any system of reality (T.œ).

What this simply means is that there is a natural correspondence between:

- 1D: Position and singularity or active magnitude ‘scalar’, valuation; which is the ultimate meaning of a point-0-particle that becomes the 0-1 dimension as a ‘Dirac function’ of value 1.
- 2D: The membrane or constrain, which becomes the angular momentum of the system, transformed into lineal momentum-motion as the *angular momentum develops a wave of communication through a sinusoidal/Fourier function of (boson) transmission of energy and information.*
- 3D: Which add in several ways, mostly through superposition, 1+2 = 3D, into a wave function (Schrödinger’s energy equation) that fills up the vital space (Hilbert space that includes all the possible configurations of the wave and its enclosed particles).

It is then when the specific eigenvalues and forms that the ternary system can take are expressed as position (singularity), momentum (membrane) and energy values that characterise the system (its constants).

Yet the minimum measure we can obtain will be a ‘planckton’, an h-Planck of vital energy, appropriately defined as the uncertain piece we ABSORB TO OBSERVE; so the 3 dimensions can only be related by an equation of uncertainty, position × momentum ≥ h/2, one of the many expressions of the ∑∏ = s x t 5D metric – in this case the minimal quanta of energy of a quantum system.

Those simple elements appear always in mathematical equations, which are for that reason so often ternary games of the s ‘operator’ ð = ∑∏.

For example, in its most general expression angular momentum expresses the value and relationship of the membrane with its singularity through a common space with a self-centred radius: m (membrane) × v × r (radius of the vital space) around o (singularity center).

Quantum physics is then expressed in a complex Hilbert space just because such space is complex enough to accommodate all the various possible states of an ∆-3 minimal scale of reality, where by sxt=k 5D metric will have maximal number of forms-populations and time-speeds.

To which we might also add ‘perturbations’ of other systems and scales, which makes it all more complex – and even more so because of the pedantic probability choice of formalism that further complicates the business, as all operators, systems and ensembles have to be normalised to get the value 1 (of those processes the Delta function is the only one worth mentioning at this stage, as we have shown it rightly gives us the value of dimension 1 for the zero particle).

Needless to say with so much confusion perturbation theory has not yet received a full mathematical foundation, which is an interesting and important mathematical problem.

Independently of the approximate determination of eigenvalues, we can often say a good deal about a given problem by means of qualitative investigation. This investigation proceeds in problems of quantum mechanics on the basis of the symmetries existing in the given case. As examples of such symmetries we can take the properties of symmetry of crystals, spherical symmetry in an atom, symmetry with respect to rotation, and others. Since the symmetries form a group, the group methods (the so-called representation theory of groups) enable us to answer a number of problems without computation. As examples we may mention the classification of atomic spectra, nuclear transformations, and other problems. Thus, quantum mechanics makes extensive use of the mathematical apparatus of the theory of self-adjoint operators. At the same time the continued contemporary development of quantum mechanics leads to a further development of the theory of operators by placing new problems before this theory.

The influence of quantum mechanics and also the internal mathematical developments of functional analysis have had the effect that in recent years algebraic problems and methods have played a significant role in functional analysis. This intensification of algebraic tendencies in contemporary analysis can well be compared with the growth of the value of algebraic methods in contemporary theoretical physics in comparison with the methods of physics of the 19th century.

In conclusion, we wish to emphasize once more that functional analysis is one of the rapidly developing branches of contemporary mathematics. Its connections and applications in contemporary physics, differential equations, approximate computations, and its use of general methods developed in algebra, topology, the theory of functions of a real variable, etc., make functional analysis one of the focal points of contemporary mathematics.