**SUMMARY**

**Foreword**. Numbers as the social, sequential, temporal units of mathematics.

**I. S≈T:** CONTINUOUS GEOMETRY VS. DISCRETE NUMBERS. GEOMETRIC NUMBERS AS FORMS: Polygons. Top predator prime numbers.

**II. S@:** THE CLOSURE OF NUMBERS in @nalytic geometry.

**III. ð:** The world cycle expressed with Numbers: Universal constants. Variables and operandi. Its generator: ∫S≤lnx³≥∂T

**IV. ∆:** Scalar numbers: The 0-1 PROBABILITY ≈ 1-∞ S-TATISTICS: ∑∆-1 ð = ∆ ∑S 1-∞.

**Abstract.** Numbers are social forms, the clearest proof of the existence of a process of eusocial evolution that gathers indistinguishable beings into wholes.

The unit of the social scaling, ∆§, of the Universe is thus the number. As such the fifth dimension cannot be understood without the number, nor the number without the fifth dimension. While points, the units of geometry, can emerge as forms and vary topologically, they remain either shallow (superficial) or confined to a single plane; numbers, by contrast, can emerge as dynamic societies into new planes, so they are more flexible and complex. Indeed they represent the most all-encompassing of all the Dimotions of time space.

It follows then that Number Theory encodes much of the ‘magic’ of the 5th dimension: the whole being the number, its parts and units the scattered inverse dimension of entropy. Mathematics, being the most experimental of all sciences, brings through its correspondence and symbiosis with T.œ, the language of the fractal spatial Universe, an astounding array of new insights into both number theory and T.œ. We shall consider them with the enhanced ¬AbcdE elements of the stientific method; that is, relating numbers to Experimental evidence, giving numbers vital, Biological hidden meanings, studying them from the Disomorphic perspective of the other disciplines and elements of the Mathematical world, and extracting their non-AE laws…

As modern mathematics started with my fellow countryman Mr. Fermat and his number theory, it was in fact my exploration of numbers as identical, social groups, 30 years ago, that first brought me to the realization that mathematics was indeed an experimental science, and number theory, more than point geometry, its foundation…

Now, the reader, in case he is a mathematician, should of course excuse some possible errors and the lack of full depth and detail of this, as of many other texts… The intention of this second line *is to make a first layer of connections for ‘future pros’ to complete – this was the plan 30 years ago, when I finished my first models at Columbia University but didn’t get the help of a group of potent scholars to complete the ‘membrain’ of the Universe of maths.*

It is my experience that people have a very hard time making interdisciplinary ‘connections’, and this is far more difficult still between the foundations of relational space-time beings – seemingly come from ‘another planet’ – and the languages that describe them, especially in maths, because of Newman’s dictum. So that is the purpose of this and other posts on i-maths, always a work in progress.

l§.

**PROLOGUE: THE 5 DIMENSIONS OF MATHEMATICS**

The Universe is a fractal super organism made of…

**Asymmetries** between its 4 Dual components, which can either annihilate or evolve – which we shall call **Balance≈become symmetric, or become Perpendicular/antisymmetric**:

**S:** Space; an ENSEMBLE OF **ternary topologies (|+O≈ø)**… which make up the **3 physiological networks** (|-motion/limbs-potentials + O-particle/heads ≈ Ø-vital energy) of all *simultaneous super organisms*.

**∆:** Planes of size distributed in ∆±i relative *fractal scales* that come together as **∆º** super organisms, each one the sum of smaller **∑∆-1** super organisms… which *trace* a larger **∆+1 world…**

**ð: Time cycles:** a series of timespace actions of survival which, integrated as a whole, form the sequential cycles of existence with **3 ages**, each one dominated by the activity of one of those 3 networks: motion-youth, or relative past, dominated by the motion systems (limbs, potentials); iterative present, dominated by the reproductive vital energy (body-waves); and the informative 3rd age or relative future, dominated by the informative systems, whose ‘center’ is:

**@: The Active linguistic mind**, which reflects the infinite cycles of the outer world and controls those of its inner world through its languages of information, which guide its 5 survival actions: 3 simplex, **a, e, i** finitesimal actions that exchange energy (**e**-ntropy feeding), motion (**a**-ccelerations) and **i**-nformation (perceptions) with other beings; and two complex actions: **o**-ffspring reproduction and social evolution from individuals into **U**-niversals, which maximize the duration in time and extension in space of the being.

Because the scientific method requires an OBJECTIVE measure of the existence of a mind, which is NOT perceivable directly, we *infer its existence from the fact that a system performs the 5 external actions, which can be measured objectively – in the same manner we infer the existence of gravitational, in-form-ative forces from their external actions upon massive objects* – hence eliminating the previous limit to a thorough understanding of the sentient, informative Universe. We can further classify organisms into simplex minds – all of which must gauge information, move and feed to survive – and complex systems, those which can perform a palingenetic, reproductive, social evolution, ∆-1: ∑∆-1≈∆º.

The study of those 4 elements of all realities, their actions and ternary operandi, structures *the dynamic ‘Generator Equation’ of all Space-time Systems of the Universe, written in its simplest form as a singularity-mind equation:*

*O x ∞ = K*

*Or in dynamic way, S@<≈>∆ð.*

So that is the game: 3 asymmetries of scale, age and form, which can come together or annihilate, and which each language represents, in different manners, through its elements and operandi.

In mathematics, with the duality of *inverse operations: + −, × ÷, √ xª and ∫∂.*

**Languages express the elements of reality and its operandi**

It is then clear that what languages, as synoptic mirrors of the mind, will try to do is to establish the basic relationships between the space, time and scale of the being, expressing them through its operandi, DEPENDING on the degree of perception the being has of reality and its scales – which might be reduced if the being is not fully aware of all the scales of existence, as most minds exist only in a single plane of reality.

So does mathematics, through combinations of:

Sum/rest->multiplication/division->potency/logarithm; point->line->plane->volume and so on.
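The chain sum/rest -> multiplication/division -> potency/logarithm described above is, in modern terms, the hyperoperation ladder: each operation is the iteration of the previous one. A minimal sketch in Python (purely illustrative; the function name `hyper` is our assumption, not the text’s):

```python
def hyper(level, a, n):
    """Hyperoperation ladder: level 1 = addition, level 2 = multiplication
    (repeated addition), level 3 = exponentiation (repeated multiplication)."""
    if level == 1:
        return a + n
    result = a
    for _ in range(n - 1):          # iterate the previous operation n-1 times
        result = hyper(level - 1, result, a)
    return result

print(hyper(2, 3, 4))  # 3 x 4 = 12, i.e. 3+3+3+3
print(hyper(3, 3, 4))  # 3 to the 4th = 81, i.e. 3x3x3x3
```

The same recursion climbs to level 4 (tetration) with no new code, which is the point of the ladder: each rung is a new ‘social scale’ of the operation below it.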

And to do so, as a fractal can always be divided in sub-fractals, mathematical disciplines subdivide further at all levels in 5 elements.

It follows then, from the definition of the 5 dimotions of all systems, an immediate classification of the five fundamental sub-disciplines of mathematics, specialised in the study of each of those 5 dimensions of space-time:

**S: ¬E Geometry** studies fractal points of **simultaneous space, ∆-1,** & its ∆º networks, within an ∆+1 world domain.

**T§: Number theory** studies time **sequences** and ∆-1 social numbers, which gather in ∆º functions, parts of ∆+1 functionals.

**S≈T: ¬Ælgebra** studies ∆º A(nti)symmetries between space and time dimensions and their complex ∆+1 structures… Namely, it is the science of the operandi <≈> translated into mathematical mirrors.

**∆±¡ st: ∆nalysis** studies the motions, STeps and social gatherings derived from algebraic symmetries between functions and numbers (first derivatives/integrals), and the wider motions between scales of the fifth dimension (higher-degree ∫∂ functions).

**@: Analytic geometry**, finally, represents the different mental points of view, self-centred onto a system of coordinates, or ‘worldviews’ of a fractal point, from which naturally emerge 3 ‘different’ perspectives according to the 3 ‘sub-equations’ of the fractal generator: $p: toroid Pov < ST: Cartesian Plane > ðƒ: Polar co-ordinates.

To which we can add the specific @-humind elements (human-biased mathematics) and their errors of comprehension, limited by our ego paradox: **Philosophy of mathematics** and its ‘selfie’ axiomatic methods of truth, which try to ‘reduce’ the properties of the Universe to the limited description provided by the limited version of mathematics known as Euclidean maths (with a single added 5th non-E Postulate) and Aristotelian logic (A->B single causality). This limit must be expanded, as we do with Non-Æ vital mathematics and the study of maths within culture, as a language of History, used mostly by the western military lineal tradition, closely connected with the errors of mathematical physics.

In this post thus we shall deal with the multiple aspects of @-nalytic geometry and Philosophy of mathematics. As usual it is a work in progress, at a simpler level than future add-ons in the fourth line, and using some basic textbooks of the Soviet school, which had the proper ‘dialectic’ logic and experimental sense of the discipline that the western idealist, German-based axiomatic schools lack.

The different scales of numbers, from binary to decametric codes. ÐISOMORPHIC SCALE FROM 1 WHOLE TO 10 PLANE.

In the graph, the basic 5 Dimensions of reality, mirrored by all languages and systems, define the main symmetry between the two ‘units’ of mathematics: geometric, spatial points (S@) and sequential, scalar numbers, ∆ð.

Numbers are related both to the causal, sequential, temporal symmetry and to the scalar, social ∆§symmetry, as numbers are social groups of identical beings. So it seems more proper to study them in terms of the Rashomon effect, in 3 sections: one on their symmetry with spatial geometry, and the other two on their intrinsic ∆ð nature – their temporal, moving, variable view and their social, scalar view:

**I. S≈T Symmetry:** Numbers as forms, whose geometric, polygonal and cubic shapes define their efficiency, maximal in self-reproductive prime numbers.

**II. S@:** The spatial view defines numbers in relationship with @nalytic geometry and their different names. It is the theory of Closure, and of the types of numerical scales.

**III. ð-View:** Families of numbers as ‘variable, changing letters’ and ‘constants of Nature’, reflected in the ‘@nalytic graph’, where those families ‘reside’.

**IV. ∆ view:** Numbers as probabilities in the 0-1 sphere and populations in the 1-∞ sphere. There is also in this section a final element of pure social scaling which applies specially to numbers – the type of *scaling we use, which will be similar to that of the different Disomorphic variations.* So the main quantitative scalings are: the binary code, which reflects the original dualities of existence; and the ternary code, hardly used by humans in maths, but overwhelmingly present in natural scaling, from ternary base-scaling to ternary music and vowels, to all the other ternary generator games.

There are also remarkable combinations of those ‘original’ dual-ternary symmetries:

That is, the 2+3 5-Dimensional system, the 2 × 3 = 6 Babylonian system and the 3×3+1=10, 5×2= decametric scale, which carries us from the whole 1 to the tetraktys, the perfect Pythagorean number, with its 10th mind-connecting black-ball point/element that communicates them all and emerges into a new scale and plane of existence – which gives the 1 to 10 numbers deep meanings beyond what has been told before.

We shall, as usual in this 2nd line, only ‘surf’ the surface of a very extensive theme, to show how ∆st enlightens it.

As customary in this blog, which nobody reads ): despite its author thinking it is the biggest r=evolution of ‘stience’ of the XXI c. O: we will include new work and improve the format every time someone enters. Yesterday, 03-04-2018, alas, there was the second POST viewer in 4 years (: So today I have worked a minute on it…

The fourth line will also bring new elements, time permitting; or else XXI century mathematicians will complete the job.

**Foreword:**

**Sequential social numbers as ∆ð vs. points of geometry as S@. DEFINITION OF ∆ð NUMBERS.**

Numbers are social gatherings of indistinguishable forms. When studied in more detail, numbers do have ‘magic’ properties related to their configuration as fractal parts of a whole, whose regularity defines efficient configurations in space. But essentially the definition of a number is the simplest proof of the *social character of the Universe and the existence of scales.*

We can then consider the symmetry between numbers and geometry, as: **∑∆≈N°≈∆+1 (geometrical whole).**

**Difference between spatial geometries and temporal sequential numbers.**

The first (a)symmetry of mathematics – meaning a slight difference which makes two things similar but not equal – is the asymmetry between geometric figures and social numbers. They come into a constant symmetry, but there is a perceptive difference:

The whole *is continuous only in geometry, but discontinuous in numbers, which means it is not exactly the same – the reason why, in numbers, certain geometric figures (the circle, the right-angle triangle) are NOT exact.* This the axiomatic method refuses to acknowledge, as it works with the outdated 3rd axiom of Euclidean geometry, which considers ‘identities’, not ‘similarities’. In the upgrading of the ‘fourth’ postulate of Non-Euclidean geometry, congruence is ‘similarity’, because all points-numbers-systems have an ‘inner volume’ of information, even if it is not perceived externally, which makes them different even when their external ‘forms’ coincide.

In general, then, a number is ‘smaller’ than its corresponding figure, as it has ‘dark spaces’ between its units – crystal clear in the natural numbers, but present also in the rational numbers, which are a relative zero compared to the irrational ones. THIS MEANS, of course, that there are ‘ratios’ besides ‘social numbers’, which are *not the same species of numbers.* And also that, in vital terms, concepts like the perfect triangle or circle are NOT real – an important fact that vitalises the Universe *in sequential numeric time, as opposed to the still, simultaneous geometric mind. The mind is still, geometric, spatial, fixed, locked in perfect closed figures. The sequence of numbers, when trying to lock a circle, becomes a ±pi spiral, as its ‘number of steps’ never closes it, and thus vitalises the circle in time, allowing eternal motion. So goes for the diagonal of the square. And as those two figures/numbers – the 1D unit circle of temporal information and the 2D triangle, arrow of lineal motion – are the two first dimensions of reality, we conclude that space is still, closed, locked in, and numbers are temporal, sequential, open, dynamic flows.*

Numbers as time sequences are thus more important than geometry; but the still mind started in geometry (Greece) and took ‘time’ to enter the dynamics of sequential, social time numbers.
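The claim that the diagonal of the square never ‘closes’ in discrete numbers can be checked with exact arithmetic: the best rational approximations of √2 – its continued-fraction convergents, generated by the standard recurrence p′ = p + 2q, q′ = p + q – square ever closer to 2, but never exactly. A minimal sketch using Python’s `fractions` module (the recurrence is the standard one, not from the original text):

```python
from fractions import Fraction

# Convergents p/q of the square root of 2: 1/1, 3/2, 7/5, 17/12, ...
p, q = 1, 1
for _ in range(10):
    c = Fraction(p, q)
    err = c * c - 2              # exact rational arithmetic
    assert err != 0              # the diagonal never closes: p**2 - 2*q**2 = +/-1
    p, q = p + 2 * q, p + q

print(c, c * c - 2)  # 3363/2378, whose square misses 2 by exactly 1/2378**2
```

Each step halves the ‘dark space’ left open, but the open gap is structural: no rational number squares exactly to 2, which is the arithmetic face of the non-closure described above.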

**The topological asymmetry.**

Of those symmetries the most important is the topological symmetry – that is, social numbers forming a 2-dimensional surface of vortices, as fractal points.

The interest of numbers as points thus defines the first age of number theory (Pythagorean numbers).

The unit of time in mathematics is obviously the number, as it has both a sequential character, allowing the development of the simple dimensions of time, motion and frequency of form; and social qualities, as it gathers indistinguishable, similar beings into social wholes, units of the scalar dimensions of time – downwards, entropy, and upwards, information.

Number theory would then deal fully with numbers alone, as a representation of time dimensions, just as geometry deals in a pure way with points and their societies: lines, planes and volumes.

Numbers are thus more important than points, which in themselves are isolated units that lose their individuality as they transcend into lines and then planes and volumes, ‘new point units’ of a larger plane of existence.

The difference between ‘discrete’ numbers and continuous points – and in general the differences between numbers in time and points in space – is ill-understood, and much must be said on that: while numbers are similar to points, THEY ARE NOT points, and so discrete numbers often add to ‘less’ than continuous points – a key time quality, as time cycles by their nature return to the same point in space with a discrete frequency.

It is, though, algebra – which relates S and T parameters through operandi that ‘put them in motion’ through s-t beats and stop-and-go dualities – that will truly show the dynamic feedback laws of the Universe and extract by comparison all the properties of points and numbers.

So analytic geometry, which puts in correspondence numbers and points from the perspective of space-points, and algebra, which does the same from the perspective of time-numbers/variables, are the logical conclusion of the evolution of both the geometry of space and the number theory of time, merged into one.

This said, the main classification of fields within number theory is the one which arises by putting in correspondence numerical properties and dimensional properties, with the first big division between numerical **social** properties, related to ∆§cales, and **sequential** properties, related to ‘present time’.

In the ∆-dimension the number is the individual, the first element of the social group, which will grow in ternary and decametric §cales till forming the new scales of social evolution of the fifth dimension, finally reaching the limit of growth signalled by an intransitable barrier, physical or biological (i.e. the top predator limit of a logistic curve).
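The ‘top predator limit of a logistic curve’ is, in standard terms, the carrying capacity K of logistic growth, dP/dt = rP(1 − P/K): growth is near-exponential at first and then stalls at the intransitable barrier K. A minimal numerical sketch (Euler integration; the values of r, K, p0 and the step size are illustrative assumptions, not from the text):

```python
def logistic_growth(p0, r, K, dt=0.01, steps=5000):
    """Euler-integrate dP/dt = r*P*(1 - P/K): the population grows,
    then stalls at the barrier K (the 'top predator limit')."""
    p = p0
    for _ in range(steps):
        p += r * p * (1 - p / K) * dt
    return p

final = logistic_growth(p0=1.0, r=1.0, K=100.0)
print(final)  # approaches, but never crosses, the carrying capacity K = 100
```

With this step size the trajectory is monotone and bounded by K, mirroring the claim that social growth stops at the barrier rather than passing through it.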

Thus while the sequential number is the unit of time and information, the social number is the unit of the fifth dimension; and the mere existence of ∆nalysis and algebra, the two modern evolutions of number theory, is a proof of the intimate connection between dimensions of space-time and disciplines of ‘experimental mathematics’.

On the theme of the foundations of mathematics: as humans are ego-centered beings, today we use ‘human’ synthetic concepts, such as sets and categories, to found mathematics from ‘the roof’ down. This is just the last, 3rd ‘age’ of excessive ‘form’ of any language, which has fogged the inverse, easier way to understand experimental mathematics from the bottom (numbers and points) up. In this, curiously, mathematicians followed a procedure inverse to that of physicists, who like to found reality from the bottom of particles up.

*In reality the symmetric logic of all the structures of the Universe allows both methods. So what we do in this blog is to complement the ways humans have founded their disciplines – bottom-up in physics, top-down in maths – with the inverse model. We thus start physics from the top, the galaxy, down to the atom, to show its organic ‘wholeness’; while in maths we prefer, to the present ‘top set/category’ human way of founding maths, the experimental bottom-up way.*

Numbers, together with points, thus represent in this experimental foundation indistinguishable, social beings which are ‘real’, and more understandable than ‘abstract’ sets, *the top syntax, synthetic concept.*

Instead, real points/beings, their social groupings into numbers, and *the operandi that put them in relationship, gather them further into social dimensions of space and time, and then describe their symmetries*, will be the 3 elements from which we shall depart to show the mirrors of mathematics which observe the Universe with them.

And so, as we defined the topological non-Euclidean postulate of points – the first of the three levels of complexity of point theory (up to the topological organism) – we shall define the three levels of numbers, equations and structures that create the social, organic systems of ∆-beings, from a fifth-dimensional, past-to-future, temporal social perspective, in our 3 sub-disciplines of number and algebra:

- Lonely numbers are the realm of number theory.
- Symmetries of numbers and points are the realm of algebra.
- And social numbers, with their parts and wholes, the realm of analysis.

**3 AGES OF NUMBER THEORY**

The 3 ages of number theory are, as usual, the best orderly way to study them, as *the arrow of time always increases the form=complexity of systems till their ‘entropic death’, once the form has completed all its elements.*

So it happened with number theory, which through its 3 ages finally closed the description of the Universe, to ‘start afresh’ in another mind-space, that of digital thought – computers – which for ethic reasons of human survival we shall not upgrade. Thus a résumé of **number theory in its 3 ages reads like this:**

**Youth:** Magic Pythagorean, Chinese School: N & Z: S-Numbers

**Maturity:** R & Q: ∆st-Numbers

**3rd age:** C & I: ∆T-Numbers

**Old, baroque age:** set theory.

**Transhumanism:** Computerology.

Let us study them in some detail.

So first humans learned to use numbers to measure sequential, simpler ‘present dimensions’; and then, in the classic explosion of the ‘language’ of mathematics in the human mind (2nd age of reproduction and evolution) – which roughly happened between the XVI century, with the new formal symbols to operate maths, and the end of the XIX c., with the 3rd, baroque age of axiomatic and set formalism – they exploded all the elements of the mathematical mirror of the 5D space-time Universe. Here thus we shall treat the basics of numbers: their families as they developed with analytic geometry, their constants, and the relationship of key numbers with ∆st theory (e, π, 10, and their logarithmic and harmonic functions, and so on).

As always, the more so in such extensive themes, we shall just write a blueprint, with some themes more developed and others hardly touched – always a work in progress for future researchers to complete.

l§

# ∆§

# NUMBER AS SYSTEMS OF SCALES

Why do we bring the theme of social scales first? It is obvious: if there are several scales of numbers, a theory of everything cannot consider the decametric scale of human social numbers the whole unless it makes its meaning precise, as there are different properties and it must ‘mean something special’ – or else there would not be a mirror language of reality. What is special about our scale is that it actually does correspond experimentally to the social scales of the fifth dimension, made of 3 × 3 + 1 = 10 ternary groups.

It is not, as some clueless people say, just because we have 10 fingers. We have 10 fingers because the Universe works in 10-scales in its dimension of MAXIMAL COMPLEXITY, the social dimension of scales.

*The scales of numbers can be roughly related to the social growth within a given plane of form: cyclical 1D π-geometries (3-6 number systems), planar 2D geometries (binary, quadrangular systems), and its completion in the more complex, fundamental 10D scaling. When we enter the 4D and 5D planes, we need a different type of numbers (complex numbers, vectors and Hilbert spaces).*

The 2 main geometric scales of numbers are in open competition: the **O-6th fundamental scale of cyclical geometry**, whose π≈3 hexagonal perimeter is the ‘whole’ of closed geometric space, competing with **the 10, tetraktys geometry**, the ‘whole’ of lineal, triangular space.

As we have dealt with the theme of continuity of ‘wholes’ in geometric space vs. discontinuity of its discrete parts in scalar space, we shall consider now the O vs. | Asian (Babylonian) vs. European (Greek) duality of 6 vs. 10 scales.

*Scales and dimensions.*

The scales of the Universe are relative to the social number of elements, or quanta, of a system. The scale does matter, in that sense, if we consider *that the unit of any scale is the full number of points of the scale; departing from this fact we can consider which ‘dimension of reality’ corresponds better to each scale.*

This is then the first theoretical minimum consideration to do on scales.

We could also relate each scale to the different quantitative, isomorphic ‘societies’ studied from an organic point of view in the 9th section of the 3rd line. That, though, is a more complex analysis we shall skip.

So we can talk then obviously of the following correspondence between the main scales and dimensions:

**5D Generation -> 0-1 scale of finitesimals,** in decimal parts and converging limits, between 1/n, the ‘finitesimal’, and 1, the whole.

**2D Lineal motion:** 0-1 scale of lineal units; and as this is indeed the ONLY time dimension used in human sciences, it fits well with our scientific digital computing work.

**1D vortex of information:** bidimensional planes – holographic structures: the **6-scale**, the O-scale of angles, hexagons and π-cycles, of maximal strength in a bidimensional world, as graphene has recently proved.

**3D Energy: 3×3+singularity** = 10-scale, the tetraktys scale, the fundamental lineal>triangular space scale of the Universe, with its harmony of equal quantities of prime numbers, odd numbers and even numbers (5 each) and its basic configurations.

As such it is, geometrically, the main scale of the plane.

What about entropy? What is the scale of entropy? *The reversal of the 0-1 generation scale: the 1-∞ plane, where exponential decay/growth is possible. As indeed, ultimately, the 0-1 plane is generation in the finitesimal world of faster time cycles, and the 1-∞ plane is decay, as we NEVER close into 1, infinity being beyond reach…*
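The inversion between the 0-1 ‘generation’ sphere and the 1-∞ ‘entropy’ sphere described above corresponds to the standard map x ↦ 1/x, which sends every finitesimal 1/n in (0,1) to a whole n in (1,∞) and vice versa, with 1 as the fixed point joining both. A minimal sketch in exact arithmetic (purely illustrative; the function name is ours):

```python
from fractions import Fraction

def invert(x):
    """Map between the 0-1 sphere of finitesimals and the 1-∞ sphere of wholes."""
    return 1 / x

# every finitesimal 1/n in (0,1) maps to the whole n in (1,∞), and back
for n in range(2, 100):
    f = Fraction(1, n)                     # a finitesimal part of the whole 1
    assert 0 < f < 1
    assert invert(f) == n and invert(Fraction(n)) == f
assert invert(Fraction(1)) == 1            # the whole 1 is the fixed point
print("0-1 and 1-∞ are mirror spheres under x -> 1/x")
```

Exact fractions are used instead of floats so the mirror is verified without rounding error; the two ‘spheres’ are literally the same set of numbers seen from the two sides of 1.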

Those are thus the scales most used by huminds – for a reason: to correspond with reality.

So, as it couldn’t be otherwise in a world based on ternary antisymmetries, the ternary scale – triangles or tetraktys – dominates.

**Triangular efficiency in lineal/entropic motions.**

That 3 is the strongest number we know from experimental Nature, which now becomes the a priori reality that numbers, as all languages, mirror. So architects know that triangles are very difficult to deform. Once more the 3 shows its power.

The triangle, another ‘form’ of single-plane geometric space that has no ‘perfect form in social numbers’ (√3 being its side), represents, together with the circle (d being its size, with π-3 apertures), an essential stable form of maximal ‘perimetric strength’ and multifunctionality.

Its importance in the organisation of ternary reality is essential in all systems, and we shall find polytopes of 3 polynomials extending all over the forms of creation. I.e. the genetic code is better expressed as the vibrations of a 3²=9 polytope, which studies the organisation of bases into genes.

And in general perfect systems will be based around the 9-10+11 vital spaces and singularities of the 11¹¹ ‘full plane’, whose growth is explained in the 11¹¹ social scales posts. So we shall not consider much of this here, but merely study some elements of the 3 fundamental scales: finitesimals (1/∆), decametric wholes (the 10-tetraktys scale) and temporal 100-element scales. They differ, as we shall soon see, in their proportions of the 3 types of numbers, the relationships between them, their geometrical figures, etc.

As such it appeared especially in military configurations, as military mathematics developed mostly in Greece and then extended globally, given the ‘obvious’ results of ‘lineal-triangular’ war geometries – the strongest spatial configuration TO BREAK enemy lines, which allowed Alexander to control the Persian empire (phalanx, legion, Chinese army) – and the enclosing ‘circle’, brought about with the help of cavalry, maximised by Hannibal at the battle of Cannae. Ever since, the military and geometry became symbiotic to each other.

Let us then start with the most important of those 3, as usual the middle one, the tetraktys.

First, the prime numbers are 5: 1, 2, 3, 5, 7; the even numbers are 5: 2, 4, 6, 8, 10; and the odd numbers are 5: 1, 3, 5, 7, 9.

So, unlike in bigger scales, the prime numbers – the dominant, most perfect numbers of the Universe – are many, and the ties established in the geometrical configurations very strong. The 10-scale is the fundamental scale of reality because of its stability, wholeness, and intrinsic economy and perfection. Let us consider some of its properties.
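The 5-5-5 harmony of the 1-10 scale claimed above can be checked mechanically. Note the hedge: the text follows the classical Pythagorean convention of counting 1 among the primes, whereas modern number theory excludes it – the sketch below makes that convention explicit:

```python
# Count primes, evens and odds in the 1-10 tetraktys scale.
# NOTE: following the text's classical convention, 1 is counted as prime;
# modern number theory excludes it.
def is_classical_prime(n):
    """True for 1 (classical convention) or any n with no divisor in 2..n-1."""
    if n == 1:
        return True
    return n > 1 and all(n % d for d in range(2, n))

scale = range(1, 11)
primes = [n for n in scale if is_classical_prime(n)]
evens  = [n for n in scale if n % 2 == 0]
odds   = [n for n in scale if n % 2 == 1]
print(primes, evens, odds)  # three families of 5 elements each
```

Under that convention the three families are indeed balanced at 5 members each; under the modern convention the primes drop to 4 (2, 3, 5, 7) and the harmony is lost, which is worth keeping in mind when reading the argument.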

__The 10th tetraktys scale.__

In the tetraktys scale the membrane is the even number: 2 is the closed line, 4 the closed square or closed triangle, 3 the dimensional tetrahedron, 6 the closed hexagon, 8 the cube and octahedron; and 10 the whole…

Thus the most useful, the 3D energy scale – which comes into 10 when the singularity aware of the whole is formed – becomes the basis of new scales of decametric whole-points, of which the most useful in our world are obviously the duality and trinity scales:

10²=100-scale, which is the fundamental time scale of ‘years’ of systems (from life years in a human to elements of growing information in the atomic scale), whose study is the essence of metaphysics.

Its value resides in being the ‘ternary-cubic scale’ by definition, from 1 to 10 to 100; as such it keeps appearing everywhere and can be considered the *closure scale for most ternary systems of the Universe.*

So those are **the 5D basic scales of reality**, geometrically equivalent to the 5 dimensions, as everything happens to come from them.

Yet *if we use a single scale we can also find, by the Rashomon effect, correspondences with the 5 Dimensions through potentiation.*

In the case of the decametric scale: 1D: 10⁰=1; the square 2D form, 10¹=10; and the cubic 10²=100, ∆±1 scales. And then, as said, 0 is the limit of generation and ∞ the limit of entropy.

Can we then combine scales? Obviously, that is *all that the Universe is about: combination, reproduction and creation.*

So we can combine the cyclical 6-scale and the decametric scale, merging O x | = 6² × 10 = 360°, which gives birth to the division of the unit circle – enlarged now into the Cartesian plane – and allows us, with 2 angles in polar geometry, to define any point on the 3D sphere.
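The claim that two angles (plus a radius) suffice to locate any point on the 3D sphere is the standard spherical-coordinate construction, using the 360° division just derived. A minimal sketch (the function name and the angle conventions – polar angle from the z-axis, azimuth in the x-y plane – are our assumptions):

```python
import math

def sphere_point(r, theta_deg, phi_deg):
    """Convert a radius and two angles (in the 360-degree circle) to a
    Cartesian 3D point: theta = polar angle from z, phi = azimuth in x-y."""
    t, p = math.radians(theta_deg), math.radians(phi_deg)
    return (r * math.sin(t) * math.cos(p),
            r * math.sin(t) * math.sin(p),
            r * math.cos(t))

print(sphere_point(1, 0, 0))    # the north pole, (0, 0, 1)
print(sphere_point(1, 90, 0))   # on the equator along x, close to (1, 0, 0)
```

Any pair (theta, phi) with theta in 0-180° and phi in 0-360° reaches exactly one point of the sphere of radius r, which is the geometric content of the 360° division of the circle.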

Among scales beyond the number 100, few were used in this earlier age, except decametric 10ª scaling, which *is the commonest (but not the only) social scaling among human social groups.*

It only remains to mention the 10ˆ2ˆ2 scale as another interesting combination:

100² = 10,000, the largest scale of true meaning – not only the Tao Te King’s ten thousand beings, which make us wonder if they actually went so far in the analysis of the game of the Universe, but also the meaningful range of temperatures, beyond which a system dies away into plasma, and the division in kilometres of the planetary arc – that is the limit of many relative games.

But ultimately all of those derivative scales can be reduced to the essential 5D scales, as all numbers can be reduced to prime products.
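The reduction of ‘all numbers to prime products’ is the fundamental theorem of arithmetic, easy to check mechanically – for instance on 360 = 6² × 10, the number of degrees just discussed. A minimal trial-division sketch (illustrative only):

```python
def prime_factors(n):
    """Reduce n to its prime 'social units' (fundamental theorem of arithmetic)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:       # divide out each prime as often as it fits
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                   # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(360))  # 360 = 2*2*2 * 3*3 * 5
```

Multiplying the returned list back together recovers the original number, which is the ‘reduction to essential scales’ in its arithmetic form.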

**Scales as basis for different mind-geometries.**

All this said, it is obvious that a digital mind will be different according to which type of scaling it uses to map out the Universe – something humans forgot as they adapted the computer’s 0-1 scaling to the human 1-10 view of reality, translating it.

But the digital, Boolean-algebra mind will be, in its BIOS level of AI, a simpler, more fundamentalist, fixed 0-1 form, which is likely the level of structure of the ‘space’ plenum of the Universe – a theme treated elsewhere in the blog.

# I. S≈T

# Numbers as geometric forms

**DUALITY OF SPATIAL POINTS AND TEMPORAL NUMBERS**: S≈T.

**Greek/Chinese age: Natural Numbers as forms, Ratios, scales & prime singularities.**

In the graph, numbers as points also show their internal geometric nature, used in earlier mathematics to extract the ‘time-algebraic’, ‘∆nalytical-social’ and S-patial-geometric properties from them.

It is a much more relevant age – today forgotten – for the understanding of the first principles of nature, as reality perceived geometrically and socially (figures and discrete numbers) comes into being.

We also witness some fundamental | vs. O paradoxes, and the first paradoxes of the difference between numbers in ∆-scales of time (social evolution) and their mind-view in space appear (never logically resolved in their whys):

**The non-existence in arithmetics of perfect geometric closures (the √2 diagonal and the π-3 cycle).**

The distinction between continuous mathematics and discrete mathematics IS ONE BETWEEN SINGLE, SYNCHRONOUS, CONTINUOUS SPACE WITH LESS INFORMATION – THE WHOLE – and the perception in terms OF ITS PARTS, the numbers of 'time cycles, or fractal points; space-time entities', which will show to be ALWAYS discrete in their detail, either because they will HAVE BOUNDARIES IN SPACE, or will form A SERIES OF TIME CYCLES AND FREQUENCIES, perceived only when the time cycle is 'completed', hence showing DISCONTINUITIES IN TIME.

Thus the dualities of ST on one side, and the 'Galilean paradox' of the mind's limits of perception of information on the other, lie at the heart of the essential philosophical question: is the Universe discrete or continuous in space and time? Both, but always discrete when seen in detail, due to spatial boundaries and the measure of time cycles at the points of repetition of their 'frequency'.

So ultimately we face a mental issue of mathematical modeling: the ‘mind-art’ (as pure exact science does not exist, all is art of linguistic perception) of representing features of the natural world in a reduced mental, mathematical form.

The universe does not contain or consist of actual mathematical objects, but a language can model all aspects of the universe. So all resembles mathematical concepts.

For example, the number two does not exist as a physical object, but it does describe an important feature of such things as human twins and binary stars; and so we can extract by the ternary method, 3 sub-concepts of it:

2 means the first ∆-scale of growth of 1 being into 2, by:

'S-imilarity and S-imultaneity in space' (ab. Sim), 'i-somorphism in time-information' (ab. Iso) and 'equality in ∆-scale' (ab. Eq), as perceived by a linguistic observer, @, which will deem both beings 'IDENTICAL'. Whereas identity means that an @-bserver will deem the being ∆st≈St (Sim, Iso and Eq). So identity is the maximal perfection of a number for a perceiver, even if ultimately:

*'Not 2 beings are identical for the Universe, but they can be identical for the observer'… an intuitive truth,* whose pedantic proof is of course of no importance (: we do not follow the axiomatic method of absolute minds here), but it is at the heart of WHY REALITY IS NOT COLLAPSED INTO THE NOTHINGNESS OF A BIG-BANG POINT.

Thus when those 3+0 elements of the ∆•ST coincide, *a social number can be used, whose intrinsic properties define conceptually 'S-imultaneity, T-isomorphism' and ∆-equality or equivalence (ab. Eq) in size, which becomes an @-identity for the mind. THEN A NUMBER IS BORN.*

(In this 'infinitorum' of Universal thoughts, which brings always new depths as soon as we observe it with an ∆•st trained mind, there are differences between S-imilarity and S-imultaneity to define in space an 'identity', 'equality' and equivalence, treated elsewhere.)

It IS THEN CLEAR that a number, being a sum of points, encodes more information in a synoptic way about the T-informative nature of the 'social group' than an array of points, *which, unlike a number, tells us less about the 'informative identity of the inner parts of the being', but provides more spatial knowledge about the relative position in space of the members of a number-group.*

And this is OBVIOUS, when we return to the origin of geometry and consider an age in which both concepts were intermingled so ‘points were numbers’ and displayed geometrical properties:

**Symmetries of ‘similar’ fractal points and Social numbers**

The temporal view of mathematics is given by sequential numbers, which are points of space, perceived in sequential time. They were thus the first ‘concept’ to appear in maths – counting. Then the mind created formal spaces with numbers as points with ‘form’-eidos, ill-translated as ‘ideas’ (Pythagoras, Plato); and so the Greek realized that numbers are points in space and points are numbers in time, a fact long forgotten, and considered them the ‘substance’ of reality. It is interesting to notice they did give form to numbers and married them as in the graph. We shall resurrect part of that first s=t marriage in number theory.

*Our mind loves continuity, one-dimensionality and identity, as it is defined by the Galilean Paradox, simplifying discontinuity, polidimensionality and similarity. So the first temptation was TO CONSIDER THAT NUMBERS AND POINTS WERE EQUAL, as if no significant difference existed – which is not the case, provoking the first contradictions in those schools.*

*In sequential time numbers, pi is not exact and √2 misses a bit, while in continuous geometry we can draw both.*

*As numbers, that is, social collections of identical beings,* this can be shown with a simple algebraic proof.

**Continuity of ‘wholes’ in geometric space vs. discontinuity of its discrete parts in scalar space.**

*This insight was lost when Dedekind and the axiomatic school 'decided' by dogma that the Universe was 'continuous' – a mirage of the mind, as the graph shows…*

*Man should have accepted then that there are ‘numbers’, the simplest social unit of reality: Number = ∑st-beings.*

*Rational numbers happen in a single ∆-plane as social numbers of the decametric scales of growth and decay of 'identical entities'*, or can ONLY be observed in the continuous geometry of the mind (irrational π, √2, etc.), as they

*are the first functions THAT establish s=t symmetries, such as +3 $ = ð -> π, meaning 3 diameters with openings form the first enclosure, leaving a π-3/π, 4% of ‘apertures’ and a 96% of dark space, as the galaxy does with our view of the Universe (proportion of unseen dark matter and energy which completes the organism of the Universe).*
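The 'aperture' proportion cited above can be checked directly. A minimal sketch in plain Python (nothing assumed beyond the ratio (π−3)/π itself; note the exact value is ≈ 4.5%, close to the quoted 4%):

```python
import math

# fraction of the cycle left 'open' when 3 diameters close the perimeter
aperture = (math.pi - 3) / math.pi
print(round(aperture * 100, 1), "% apertures")    # ~4.5 %
print(round((1 - aperture) * 100, 1), "% closed") # ~95.5 %
```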

*what those ratios tell us is that there are ‘holes’ and variations, and scales through which irrational numbers transit.*

*continuity does not happen in a single plane of reality.*

**The first algebraic proof of an essential ∆st quality.**

*A geometric figure is a closed space-time domain in present space; a sequence of numbers is an open social group.*

*in algebraic terms a number is open in one or other direction ->, as it is a sequential time arrow, while in pure spatial terms a geometrical figure is a closed, complete space-time as we do close the figure and we only observe it in a ‘PLANE OF REALITY’ hence the discontinuities become ‘blind’ – non observable.*

In the graph we make a clear illustration of that fact with many more examples of non-existent perfect square numbers, which can only be approached *from above or below algebraically, but do have in a single plane a perfect closure geometrically.*

Thus translating continuous space-like systems into discontinuous time-like numbers is not always exact. S≈T similarities are *never identities – the translation of one form into the other.* The pi case is extensively treated in the post on T-algebra: *pi does exist as a perfect form of space, from 3 (the hexagon) to pi (the circle), but it does not as a number (irrational), since time is dynamic, not static, so it constantly moves between ±π, allowing the open-entropy/closed-information duality of membranes that enclose systems:*

Continuity is a concept of geometry, not of algebra. Continuity cannot be predicated of time cycles and their mind representations, which are by definition discontinuous. And this implies real numbers do NOT exist in the plane of Euclidean Cartesian coordinates, which IS PERCEIVED in spatial continuity. As we go beyond '10 decimal parts', we cross the barrier of this plane of existence and discontinuity breaks through.
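The claim that pi 'exists as a perfect form of space, from 3 (the hexagon) to pi (the circle)' can be illustrated numerically: the perimeter-to-diameter ratio of an inscribed regular n-gon is n·sin(π/n), exactly 3 for the hexagon and approaching but never reaching π for any finite n. A minimal sketch:

```python
import math

def inscribed_ratio(n):
    """Perimeter / diameter of a regular n-gon inscribed in a circle: n*sin(pi/n)."""
    return n * math.sin(math.pi / n)

for n in (6, 12, 24, 48, 96):
    print(n, inscribed_ratio(n))
# the hexagon (n=6) gives exactly 3; doubling the sides closes on pi from below
```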

The same is true for √2, the diagonal, which is the other *canonical translation of a line into a triangle, the bidimensional equivalent for an entropic system (the surface of maximal perimeter and minimal volume of information).*

In that regard, the proof that √2 is not rational is irrelevant, as it does not explain the why of the existence of the diagonal, which, since it closes the triangle, must be exact and exist.

That √2 cannot be rational means that √2 is NOT a line in time algebra, but a closure of 2 lineal dimensions, which, as in the case of pi, leaves holes by excess or defect and allows the dynamism of the Universe.
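The 'holes by excess or defect' can be made concrete: for any denominator q, the nearest fraction p/q to √2 leaves p² − 2q² a nonzero integer, alternating above and below on the best approximants. A minimal sketch (the choice of Pell denominators is ours, for illustration):

```python
from fractions import Fraction

def sqrt2_miss(q):
    """Nearest fraction p/q to sqrt(2) and the integer amount p^2 - 2q^2
    by which it misses: never zero, so sqrt(2) closes no exact ratio."""
    p = round(q * 2 ** 0.5)
    return Fraction(p, q), p * p - 2 * q * q

for q in (5, 12, 29, 70, 169):  # Pell denominators: best rational approximants
    frac, miss = sqrt2_miss(q)
    print(frac, "has p^2 - 2q^2 =", miss)
# the miss alternates between -1 (defect) and +1 (excess)
```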

*numbers are discrete as time motions in ∆-planes are; while space is continuous as the mind perceives it:*

*∫@ (geometry) ≈ ∆ð (number): the two sides of an identity or equation are not equal but similar.*

*In the repeated e=mc² example, e is not mass, but mass can become e… ≈ thus becomes a dynamic symbol that relates transformations in time of one side of the equation into the other.*

It also follows that algebra is needed as a first step to study the arrow of §ocial evolution or absolute arrow of growth of social numbers, the ∆-scales of ∆nalysis, initially a branch of ¬Ælgebra.

# OPERATIONS WITH NUMBERS: THE MIRROR OF THE FIRST AGE WITH MORE INFORMATION

The theory of numbers considers integers not as isolated from one another but as interdependent: it studies properties of integers that are defined by certain relations among them. The obvious light of ∆st comes from delving further into those 'social relationships' of numbers, mainly as points of geometrical figures that construct the discrete fractal T.œs, and its inverse view as §œTS of the mathematical Universe, the simplified perfect mirror of the infinite T.œs of reality.

∆st contribution to number theory has been clear in those previous pages – to introduce the vital elements of realist, experimental mathematics and its social numbers and geometrical points, which ultimately returns the discipline, in its third age, to its beginning, casting new light on the Pythagorean forms and geomancy of earlier religions of digital thought.

Of course, there IS also the death age of huminds, that of digital computers, which we shall not consider, for ethical reasons explained ad nauseam in the section on linguistics and on how the Universe selects its species by the efficiency of its mirror-images and languages…

Some of those themes shall be considered in this introduction.

**Polygonal forms. The first scale: from 1 to 5D. Symmetry with time dimensions and world cycle ages.**

The immediate decametric scale of numbers gives us the polygonal forms in bidimensional space:

**Nº 0:**

**Nº 1:** *1d, 3§ð.* The unit-whole, the unit circle, 3rd dimension of space-time, the perceptive fractal point-vortex.

We write 0 classic dimensions and 1 dimension in st theory, since we give one 'dimension' to points, which are in 5D st always fractal points with volume, hence one-dimensional (do not confuse the number of classic dimensions, 'd', with the number of its ST dimensions, D).

**Nº 2:** *1d, 1$t.* The LINE OF DISTANCE-MOTION (1st dimension of space-time). The couple, the antipodal points that the singularity reproduces in bilateral symmetry and antisymmetrically:

**Nº 3:** *1d, 2ST.* The couple, the antipodal mirror points=gender symmetry that creates the 3rd 'son' element. ST, 2nd dimension of space-time.

But also the first classic 2 Dimensional geometry:

**Nº 3:***2d, 1$t.* The triangle, the perfect form of lineal-motion geometry.

**Nº 4:** *2d, 3§ð.* The square, the most perfect social evolutionary form to 'fill' a flat field.

**Nº 4:** *3d, 1$t.* *The first |-∆+1 3-dimensional geometry, the tetrahedron.*

**Nº 5:** *2d, 5∆.* The Pentagon, that mystical form of Life or eviL; indeed, the pentagram shows its inner connections, which require finding a self-center and median lines besides the 'axon' connections between the points. What matters to define the Pentagon *as the first form of the 4th/5th dimension is the fact that it reproduces its very same form: by connecting its 5 points it creates an internal, smaller 5D social form, or alternatively it expands entropically by prolonging its sides, creating the pentagram.*

And so those are the fundamental functions and forms of the first 5D Nº ST dimensions of reality.

**Nº 6:** *2d, 2ST.* The perfect pi. The hexagon is the strongest form to cover a 2d world, as graphene shows, because it closes its perimeter with '3' diameters. It is the perfect pi, which in physics implies a non-E geometry of maximal Planck mass.

It also has the most connected singularity of the simpler 2d polygons. So its membrane and center form the best @-mind system in 2 dimensions.

**Nº 6:** *3d, 2ST.* But in 3 dimensions it is the Octahedron, studied in non-E Geometry as the ST platonic solid.

**Nº 7:** *2d, 4∫∂.* The number of death. Of the same family as the Pentagon, with similar properties. It cannot however be constructed with a mere line and cycle (compass and ruler), but needs *to bisect its angles/networks into 3 sub-angles, starting its decomposition.*

Two kinds of star heptagons (heptagrams) can be constructed from regular heptagons:

Blue, {7/2} and green {7/3} star heptagons inside a red heptagon. And since death implies 2 jumps in the scalar Universe, ∆+1<∆-1, it reinforces the dissolving nature of 7, the magic number of judaism, the culture of go(l)d and death by antonomasia (see History, Geomancy and Memetics).

If we add the decametric scale, 72 years are indeed the years of a human generation. And 79 is the number of Gold, after which the dissolution of physical systems starts in earnest: the 80th, quicksilver, dissolves all metals chemically, and then we enter the radioactive series, falling down into lead.

Interestingly enough, regular heptagons can tile the hyperbolic plane, as shown in a Poincaré disk model projection:

So we see neatly again another correspondence between the 3D §ð particle/heads of information and the pentagon tiling them; flat Euclidean space tiled by hexagonal forms of maximal resistance; and the hyperbolic plane of the trapped vital energy, dissolving towards the unreachable membrain in its infinite process of death…

**Nº 8:** *2d, 2ST.* The 8th, the octagon, is of the family of the hexagon, with similar properties, bridging *towards the circle; fairly redundant in form. In the series of numbers it is the closure of existence – the final death state of decay; curiously, in many cultures a symbol of happiness… Nirvana means after all extinction in Sanskrit.* We start now to build 'families' of numbers, as *the fundamental series of spatial geometries symmetric to time ages is closed – but it is still required to close the families in 3D with the platonic solids, studied in ¬E Geometry.*

**Nº 8:** *3d, 3§ð.* It is also the cube, to which the octagon relates, since the area of a regular octagon can be computed as that of a truncated square.

We study the 5th dimensions of 3D platonic solids in Geometry. The first 3 complete the generator: $-tetrahedron < ST-Octahedron > §ð-Cube.

**Nº 9:** *2d, ∫∂-4D: Entropic state.* So once we have completed the membrane of the system, what is now the meaning of numbers? Obviously, *a variation of the same game: now* the filling of the internal parts, and to that goal we have 3 numbers, which start the inner series from 10 to 100.

So the 9-10-11 triad will form the first 'game of filling' parts in the inner 'circle', starting with the 9, in which *the 3 networks within the system are defined in the 3 angles of the triangle, playing the roles of the 3 physiological networks of motion < iteration > information of the system. But the 9 has NO POINT at the singularity, which is a vacuum center; hence a model for 'void' T.œs such as eddies and potential wells.*

*To achieve that center we need to fill it with another 10th point, forming a tetraktys, the sacred number of Pythagorism and Hebrew geomancy (graph):*

**Nº 10:** *2d, $t-1D: Tetraktys, lineal, triangular state.* Where the singularity, the black ball, the magic number of Pythagoras, is formed, becoming the inner heart of the body of the organism, doubled with its antipodal 'emerging' point in a higher scale of the fifth dimension.

**Nº 10:** *2d, §ð-3D: Decagon, cyclical state.* Look at the next graph, with its Schläfli symbol – a notation of the form {p,q,r,…} that defines regular polytopes and tessellations, useful in ∆st to determine the gaps between those connections:

The decagram has a 10/3 Schläfli symbol, meaning, as should have been easily deduced from the previous configuration of an $-'tetraktys' expressed in membrane form, that *it is also a 10-point star, connected every 3 points to form 3 sub-ternary networks.* In the graph we can see why we use the decagram as the symbol of the 5² Ðisomorphic Universe. Not only does it represent 10 fractal points, each of one Dimension, but it shows in its fundamental elements its ternary symmetry (Schläfli symbol: every 3 points there is a connection which reproduces internally another decagram: 10/3), while its *MIRROR SYMMETRY or Coxeter diagram is a 'Compound' of two pentagrams (10/2) = 2(5), both for the decagon and the decagram, representing the 5Ds=5Dt symmetry.*

**Nº 11: 2d @-5D** Monad:

Which completes the inner and outer duality of the triangular tetraktys, starting fully its integration as a whole point, number 1 of an ∆+1, since the pentagon and heptagon were NOT transcendental numbers but the numbers that disaggregated the system within the plane.

Following the growing complexity of inner forms, which shows the transcendental arrow of 5D metric, Sp x Tƒ = k, the mind number has the maximal number of self-similar inner regular forms generated by connecting its vertices with an ST line – 4… which simply means that points, once born, start to 'communicate' through those Non-E flows, so the 11th form is all about emerging into a higher density plane of information.

**Nº 12**: *3d, ∆ @.* The Dodecahedron, with 12 vertices, is the 5D mind dimension of a volume.

We have thus completed the symmetry between time ages, worldcycles and dimensions and the spatial symmetries of polygons. Not surprisingly, the rest of the polygons have really no meaning at all but merely represent social groups, whose study in the series from 10 to 100 is less relevant and more complex, as *now we must consider them as internal numbers in the more complex series started by the Tetraktys, where internal angles, inner faces and connections do matter.*

Only one number is left to complete the 5 dimensions of 2D and 3D, the number of 3D-ecay, the last platonic solid:

**Nº 20:** *3d, ∫∂.* The Icosahedron, with 20 points, being the 4th Dimension of entropy and decay.

To notice also that in any dimension the first 'forms to appear' are the lineal form and the perceptive form, which then combine to form the ST-system, according to the ternary structure of the generator in non-Æ logic:

*Lineal past x Informative Future = Organic Present.*

This is indeed the simplest of all the games of evolution of social forms. And those rules of space-time geometry are NOT only mathematical but are followed by all species. Consider what we just said: in life species, topological evolution, studied in the section on biology, follows the same order, with the lineal top predator form appearing first:

In the graph, morphological space-time features described in the simplest possible symmetry of space forms and time ages are carried Disomorphically to the more complex biological forms, with its 1D lineal limbs/I horizon of evolution, 3D particle-heads, and its reproductive mature forms. All has changed to remain the same, as indeed the fundamental laws of mathematics, as the most synoptic mirror of the timespace isomorphisms, emerge in every new scale.

**The 10th to 20th filling scale.**

So if we now restrict our analysis of numbers as forms to a bidimensional plane, past the ten scale, we shall continue the 'filling' of the vital energy contained between the singularity point and the circle, and the obvious question is: how does that filling proceed? The answer should be obvious: through the dual connections (2nd Non-E Postulate) of fractal points (1st Non-E postulate) that create regular inner mirror images of the external polygon, towards more complex layered 3D networks:

In the graph, we see that constant growth of more complex layers penetrating into regular images of the external membrane towards the creation of a central singularity.

So back to the graph, it is clear that the maximal numbers of internal networks are produced by the 11th (4), 13th (5), 17th (7) and 19th (8).

While even numbers have far fewer, showing their natural correspondence with strong membranes (and so they are indeed the dominant numbers in all covering, atomic or molecular configurations): the 12th has only 1, the 14th only 2, the 16th 3, the 18th 2 and the 20th 3; the same goes for the 15th (3), a multiple of 5…
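The counts above can be verified with a short sketch, assuming (our reading of the graph) that the 'internal networks' of the n-th polygon are the regular star polygons {n/k}, i.e. the step values 2 ≤ k < n/2 coprime to n:

```python
from math import gcd

def internal_stars(n):
    """Regular star polygons {n/k}: steps k with 2 <= k < n/2 and gcd(n, k) == 1."""
    return [k for k in range(2, (n + 1) // 2) if gcd(n, k) == 1]

for n in range(11, 21):
    print(n, len(internal_stars(n)), internal_stars(n))
# primes 11, 13, 17, 19 yield 4, 5, 7, 8 stars; even n far fewer (12 -> 1, 18 -> 2)
```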

Ah… yes, you realise? 11, 13, 17 & 19, the penetrating polygrams in search of a dense vital space towards its singularity, are all part of our next classic theme of Number theory… which we shall now enlighten with ∆@s≈t…

**ORGANIC PRIME NUMBERS**

Prime numbers have always fascinated humanity. Why is another case of the blind guidance of the Universe towards its 'milestones' in the construction of reality, which intuitively we feel 'matter'. Prime numbers matter in the static form of polytopes, as they 'self-generate' T.œs departing from its membrane structure, inwards and outwards, as the forms with maximal regular growth of networks within themselves, converging into a singularity.

Let us then explore this theme, which explains the relative abundance of prime numbers, only in a flat 2D world. When we move to 3D, the equivalent organic forms are the Platonic solids. And as there are no more than 3 dimensions of space, doubling in time motion in a single ∆-plane, no more weird forms are needed, just inflationary imaginations of the humind.

Prime Numbers are essential to the game of existence, as each prime number can be considered the polygonal number of an infinite pi sphere, which therefore will enact a different game; and in the same manner, numbers can be considered to fill a bidimensional plane or a 3-dimensional volume, defining further 'games, which are forms', as different numbers will play different networks of social evolution and hence slight varieties of different biological palingenetic games of creation.

To reveal the whole complexity of how numbers become vital forms is well beyond the hope of any species studying the Universe, let alone a single man.

But numbers do have, as the tetraktys shows, different values, and this was understood early, in the times of the Pythagorean school – to reveal that hidden value is of course a bit more difficult, as it requires knowing the 'game of time/space dimensions and symmetries' played around the ±1-o-2-3 simplest numerical games.

In that sense, number theory allowed to run to infinity misses the true density of real information, which happens around the numbers 2 and 3 and their ratios, such as − 3 (e) and + 3 (π), the main ratio numbers of ∆-scales (e) and Spe<≈>Tiƒ events.

Because prime numbers are infinite and relatively abundant according to the π(n) ≈ n/log n rule, the wholeness structure of a system is remarkably complex, as new 'whole numbers', primes, can reinforce the structure with stable networks. A prime number indeed is by definition more stable as a configuration of forms, as it cannot easily split into equal parts. It is though a more sophisticated approach to prime numbers to study their *unequal configurations in 2 poles of Spe<=>Tiƒ exchange and 3 poles of a stable Spe<ST>Tiƒ configuration.*

For example, 10 gives us a 1-1 (dual system at ∆º+1) and 3 x 3 ∆-1 Spe<st>Tiƒ configuration, making it perhaps the commonest of all forms.

It is in that sense remarkable to notice that 10 does not really exist as 10 and 11 are the dual ∆º1 double role of the 10th number, which emerges as a whole new ‘1’ of a new 1 to 9+0 configuration.

Prime numbers thus tend to create a simultaneous circular polygon of π-numbers. Those p(r)ime numbers are thus definitions of an infinite variety of superorganisms, in the n/log n proportion.

Prime numbers are very numerous, even if we think intuitively that, being unique and unable to break into smaller parts, they should be rare among big numbers. And yet they are there, and often aggregate into couples, as all vital systems do, and ternary groups, separated by mere 2 or 4 or 6 places. The ultimate analysis of all those properties of numbers in vital terms thus defines a game of 'space-time beings', which encodes all other possible universes and scales, likely in terms of 'efficiency and survival', which explains why dual numbers are so close:

Dual prime numbers then might be considered likely possible S=T ‘gender games’ of efficient survival, on configurations that are slightly different in their overlapping. Yet obviously we cannot enter into such disquisitions of the ‘whole information’ about the vital Universe and all its possible games of dual and individual existence.
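The pairing of primes at distances 2, 4 or 6 is easy to list. A minimal sketch with a standard sieve (the function names are ours, for illustration):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

def prime_pairs(limit, gap):
    """Consecutive primes separated by exactly `gap` (2: twin, 4: cousin, 6: sexy)."""
    ps = primes_up_to(limit)
    return [(a, b) for a, b in zip(ps, ps[1:]) if b - a == gap]

print(prime_pairs(100, 2))  # twins below 100: (3,5), (5,7), (11,13), (17,19), (29,31), (41,43), (59,61), (71,73)
```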

In general though, the ratio of total numbers to 'primes' herded by their polygonal forms in 2 or 3 dimensions fluctuates from 2.5 in the first 10, through 4 in the essential 100-elements game, to 25 for the larger 10¹¹ scales. So prime numbers decay to a 10% of what they are in the essential tetraktys game, where they can be studied as the vortex points of a tetraktys of 10 elements, at 2, 3 (head numbers), 5 and 7 (body numbers), to which the 'limbs' of entropy, the 8 and 9 lineal limbs/field lines, are attached.
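The density figures just quoted (2.5, 4, ~25) are n/π(n), the inverse prime density, which the prime number theorem estimates as ln n. A minimal check:

```python
import math

def prime_count(n):
    """Exact pi(n) via a sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(sieve)

print(10 / prime_count(10))          # 2.5 : primes 2, 3, 5, 7
print(100 / prime_count(100))        # 4.0 : 25 primes below 100
print(round(math.log(10 ** 11), 1))  # 25.3 : the ratio ln(n) predicts at the 10^11 scale
```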

The next scale of beings is thus the social scale of filling the vital energy space-time between the singularity and the membrane that conform the @-mind, and as such the immediate process brings us to the 10-100 scale, as is obvious just by counting the 'vertices' created by the sum of the four hendecagrams:

In the graph, the filling of the vital space in the 10-100 scale in a single 2D-holographic plane figure implies the *connection of all the points of the membrane with each other, filling together the vital space till creating a mirror singularity (11/5) of minimal size. This is the essence of the creation of a T.œ departing from a membrane, which becomes maximally interconnected through 'flows of t<ST>§', CREATING an inner network which reaches its maximal connectivity in the singularity mirror of the membrane.*

For example, cells first enclosed a vital space with protein membranes with holes between them; then, in coordination with the DNA nucleus, mirror of the membrane, they invaginated the vital space with Golgi reticulum channels connecting both, and from there the vital space-time can be ordered. And we assume similar physical processes between particles of an atom, molecules and even galaxies in the hyper universe.

Now the surprising, not so (: result comes when adding all the points generated by the intersection of 2 'antipodal – not geometrically but in orientation' flows departing from their father-mother points on the membrane, where a new vertex fractal point is born:

The result is 11 x 11: 11+11 in the first 15/2 + 11+11 in the second 15/3 (as the membrane has been counted) + 11 x 3 + 11 x 4 = 121; from which we must subtract the 11 belonging to the membrane and the 11 of the singularity, for a grand total of 99 = 33x3 = 11x3x3…

Oh, yes, the vital space is filled to the brim of the 10 x 10 scaling, to 99, in the perfect mind of the hendecagram, which, as it rotates through 11 hendecagram states in the volume dimension, will again bring us to a 10.000 being; *the Unit T.œ of the perfect symmetric game of vital numbers, which recalls the words of Lao-Tse:*

‘The universe is a game of opposite space-yang-entropy and time-yin-formation that combine in waves of 10.000 beings’

How did he know?… more than all our pedantic tree-searchers of details, the 'thoughts of God' which Einstein sought in vain?

So prime numbers are a 'solid' thing indeed, since they are the true fillers of space… and this rule comes constantly into being.

So while the geometric question of the possibility of constructing a regular n-polygon with ruler and compass turns out to depend on the arithmetic nature of the number n, favouring even numbers, *which are the top predators of the membrane only, when we add the internal vital space, prime top predators have the best odds to survive.*

**The 20th scale.**

So if we consider the next fundamental sub-scale, that of 20, we find for its largest prime:

19 ∆º x 19 ∆º = 361º (− 38 from the singularity and membrane = @) = 323…

And so here we have another curious number, the 361 'degrees' of angular rotation, *(as we know, geometry is always approached in its figures by discrete numbers above or below ±1), which give us the full perspective-control of the membrane joined with the singularity over all its herds of vital cells/points within the circle, which man intuitively understood as an ST-subconscious self that encodes in its fractal mind all the meanings of the whole made to its image and likeness:*

This 10 to 20 scale thus forms a mirror herd very 'fecund' for reproductive purposes of the 10D basic scaling, as it shows in the game of life, where 20 is the §œT of amino acids that your body cannot live without… as they replicate everything else in the upper scales.

So the obvious next question is, as it seems ONLY prime numbers ±1 (to equate dynamically open, closing gaps within the polygon, as per the pi analysis of its ±π closing-opening mouth spiral) define FULL, efficient FILLING: how many prime numbers – configurations of efficient geometrical §œTS – are out there? Surprisingly a great deal, as this 'celeberrimus' question of Number theory discovered.

**3D primes**

The 11³ 3-Dimensional hendecagram, in its finite perception of s=t harmonics, is likely the volume point of the 10.000 scale; yet any prime number can create a relative world-universe, such that p³ will be its inner and outer (membrain) volume of points.

**Classic prime number theory.**

Let us then return to the method of illuminating classic maths on this issue, once introduced its vital meaning.

A prime number is any integer (greater than one) that has only two positive integer divisors, one and itself. One is not considered a prime number since it does not have two different positive divisors.

Thus the prime numbers are: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29…

And we see that the decametric and the 10-20 scales are each buttressed with 4 of them, almost 1/2, which *explains further why they are really the 20 numbers that matter in almost every system of reality in terms of vital energy – which you might extend to the low 23 range… they are the 'efficient numbers'.*

So in the table of elements, where the strength of the system is a mixture of its 'even configuration in 2 and 4 pairs' and the odd volume of inner layers, the range around 22-26 (iron) gives us the strongest top predator atoms of entropy/energy (the roles of the membrane-vital energy) of the system.

Further on, in the symmetry with time frequency, we find that the first age of life in a biological human being IS the first 23 or so years: 1-24 youth, 24-48 maturity, 48-72 third age, 72-80 death. And this will indeed be the fundamental scaling for all timespace life-death symmetries (as we show in the cycles of history and economics, which fall completely within those scales and their decametric 5D scaling).

**Organic Prime Numbers extended to reality.**

Of course this is ‘vital mathematics’. In classic maths, Prime numbers play a fundamental role in the theory of numbers because of a basic theorem:

Every integer n > 1 may be represented as the product of prime numbers (with possible repetition of factors), i.e., in the form

n = p1^a1 · p2^a2 · … · pk^ak,

where p1 < p2 < . . . < pk are primes and a1, a2, . . ., ak are integers not less than one; furthermore, the representation of n in this form is unique.
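The theorem can be exercised with a short trial-division sketch returning the unique ordered factorization (a minimal illustration, not an efficient algorithm):

```python
def factorize(n):
    """Unique prime factorization: [(p1, a1), (p2, a2), ...] with p1 < p2 < ..."""
    factors = []
    p = 2
    while p * p <= n:
        if n % p == 0:
            a = 0
            while n % p == 0:
                n //= p
                a += 1
            factors.append((p, a))
        p += 1
    if n > 1:
        factors.append((n, 1))  # the leftover n is itself prime
    return factors

print(factorize(360))  # [(2, 3), (3, 2), (5, 1)] : 360 = 2^3 * 3^2 * 5
print(factorize(19))   # [(19, 1)] : a prime is its own factorization
```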

Oh, yeah! What this means is obvious: *Only prime numbers or their social combinations DO survive=exist. Everything else is a virtual society of prime numbers; and the strongest social groups are those under 26, stuffed with primes and still with a solid even membrane.*

**The asymmetry of prime numbers.**

It is important to fully grasp what all languages are about: Space-time systems relating through a(nti)symmetries, which means that, departing from two 'different species', there is an approach or event in which the two forms will come together in a positive social way, described with a positive operator in maths (+, x, ∫, xª); or inversely they will suffer an entropic process of negative destruction (-, /, ∂, √). And both will balance each other.

It is then *remarkable to notice the ‘top predator’ nature of prime numbers in relationship to the product/division cycle: as they cannot be divided, they are asymmetric with respect to division and can only evolve socially by multiplication into bigger numbers. This means there is a ‘type of scale of numbers’, the Prime Number scale, which predates all other scales, as it can generate them all and has no need for any other. It is important to fix this concept of ‘generator’.*

The ultimate top predator ‘function of reality’, which we call the function of existence, can generate all other functions and forms: S@<≈>∆ð, and thus is the essential truth of reality.

In geometry, the 3 topologies can generate all other forms, and can in fact be reduced to two geometries, the line and the cycle; hence from conics all can be created, from | and O, as in binary code.

In the decametric ‘social scale’ (see the last paragraph on scales), the prime number scale *is the primeval generator, asymmetric to division into smaller entropic parts; hence determining the fundamental nature of social numbers, their ‘social growth’ and the dominance of the eusocial arrow of the Universe.*

It is also interesting to consider, in this formula, the communication between the parts of a number in the product, which is not a superposition sum, but hints at the first realisation of a lower scale within the number, its parts – and at the applications of the second and fourth Non-Euclidean Postulates that communicate those inner parts, as the product *is the sum of all communications between two sets. I.e. two neurons connected each to 6 axons give us 12 communications, and so on.*
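This ‘product counts communications’ reading can be sketched in a couple of lines of Python (the names are purely illustrative):

```python
from itertools import product

# Sketch: the product m·n counts every pairwise link ('communication')
# between a set of m elements and a set of n elements.
neurons = ['n1', 'n2']                          # 2 neurons (hypothetical labels)
axons = ['a1', 'a2', 'a3', 'a4', 'a5', 'a6']    # 6 axons
links = list(product(neurons, axons))           # all pairwise communications
print(len(links))  # 2 x 6 = 12
```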

So prime numbers as ‘parts’ of larger numbers give us a few interesting theorems on the whole, not treated in this introduction.

And further on we can extend the theory of prime numbers as generators of T.œs to the analysis of biological cells, social networks and all kinds of physical structures.

**The closure of prime numbers.**

Now, on this ‘flow of thought’ we can consider how prime numbers expand next beyond the natural line. Since a *rational number reduces to a natural number unless a prime remains in its denominator, we can define rational numbers as generated by prime numbers divided by products of primes: R = P / (p1^a1 · p2^a2 · …)*

**The size of the ∆-1 scale is bigger than the ∆-scale.**

But we can see rational numbers in another manner, as a natural number plus a remainder which belongs to the 0-1 unit sphere, the ∆-1 fundamental scale of the Universe; and as there are more rational numbers than natural numbers, *we can deduce that the 0-1 unit sphere is more dense in numerical information than the natural line. A fundamental proof for 5D metric, where the 0-1 sphere is the ∆-1 element and the 1-∞ the whole, with less information than its parts.*
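As a numeric illustration of this density claim (an illustration, not a proof): counting the reduced fractions p/q strictly inside (0,1) with denominator up to N, against the N naturals up to the same bound:

```python
from math import gcd

# Sketch: reduced fractions p/q in (0,1) with q <= N grow roughly as
# 3N²/π², far faster than the N natural numbers up to the same bound.
def count_unit_rationals(N):
    """Count reduced fractions p/q with 0 < p/q < 1 and q <= N."""
    return sum(1 for q in range(2, N + 1)
                 for p in range(1, q)
                 if gcd(p, q) == 1)

for N in (10, 100):
    print(N, "naturals vs", count_unit_rationals(N), "unit-sphere rationals")
```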

**Infinity has always a limit.**

Incidentally this brings to memory what we thoroughly reject – the messy game of Cantor’s cardinality of infinite sets. In our view, the whole of Cantor’s paradoxes of infinity are bogus, because his definition of how to ‘measure’ cardinality is an error in itself: to put a number in correspondence with other numbers is NOT to measure it any better than counting an infinity, which would take an infinite time and hence is not possible – you have to continue the correspondence also to infinity, so there is no way to count infinity. And when you cannot count or measure something, it does NOT exist. Infinity in fact DOES NOT exist in ∆-theory, since there is always a limit for a T.Œ BEING enclosed in a membrane.

**Do real numbers exist?**

All this of course brings us to a more complex question. Do real numbers exist? We have already commented on it at the beginning and will return to it… Since real numbers are truly infinite compared to the sum of Z and N, the question is very relevant to assess the discontinuity or continuity of the Universe. The answer in ∆st differs from that of the axiomatic method. They do NOT exist for a given virtual mind-set measuring them, because they have infinite decimals and the number of scales we perceive is limited – so there is a dark ‘space’ in all networks that the mind pegs into an artificial continuity. But they do EXIST for the whole T.œ, as its scales are infinite, even if any @-mind perception of them is limited.

**Prime number as the key T.œ-membrane.**

It is then when we can affirm that:

*‘A Prime number and its internal networks is the fundamental geometric T.œ of the Universe’.*

But for mathematicians, if any ever read this blog (: blinking their eyes every time I bring vital maths as experimental proof of their sweated arguments, this is nonsense (: So we shall just argue the connection of that formula with Fermat’s last theorem, so important in ∆st to prove the non-existence of a fourth dimension in a single space plane. (We cannot superpose cubes in a 4th dimension as we can superpose holographic 2-manifolds in the 3rd dimension; hence x²+y²=z² does have solutions but x³+y³≠z³.)

So one of the problems of number theory consists of finding those natural numbers that can be decomposed into the sum of the squares of two integers (not necessarily different from zero).

The rule for the sequence of numbers that are the sum of two squares is not immediately clear.

From 1 to 50, for example, it consists of the numbers 1, 2, 4, 5, 8, 9, 10, 13, 16, 17, 18, 20, 25, 26, 29, 32, 34, 36, 37, 40, 41, 45, 49, 50… a sequence which seems quite erratic.
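The sequence can be regenerated programmatically; a minimal sketch:

```python
from math import isqrt

# Sketch: integers from 1 to 50 that are the sum of two integer squares
# (zero allowed), reproducing the 'erratic' sequence above.
def is_sum_of_two_squares(n):
    for a in range(isqrt(n) + 1):
        b2 = n - a * a
        if isqrt(b2) ** 2 == b2:
            return True
    return False

seq = [n for n in range(1, 51) if is_sum_of_two_squares(n)]
print(seq)
```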

Fermat himself noticed that here everything depends on how the number can be represented as the product of primes.

First, prime numbers other than p = 2 are odd, so that division by 4 gives a remainder equal to 1 (for a prime number of the form 4n + 1) or to 3 (for a prime number of the form 4n + 3).

So we have 3 possibilities:

1. A prime number p is the sum of two squares if and only if p = 4n + 1.

4n + 3 cannot be expressed as the sum of two squares because the sum of the squares of two even numbers is divisible by 4, the sum of the squares of two odd numbers gives a remainder of 2 when divided by 4, and the sum of the squares of an even and an odd number, when divided by 4, gives a remainder of 1. (The full proof is longer; we just state the result here.)

2. Next, the decomposition of an arbitrary integer into the sum of two squares. It is easy to establish the identity (a² + b²)(c² + d²) = (ac − bd)² + (ad + bc)². This identity shows that the product of two integers that are each the sum of two squares is again the sum of two squares. Hence the product of any powers of prime numbers of the form 4n + 1 (or equal to 2) is the sum of two squares. Since multiplying the sum of two squares by a square gives the sum of two squares, any number in which the prime factors of the form 4n + 3 occur in even powers is the sum of two squares.

3. If a prime number of the form 4n + 3 enters into a number in an odd power, the number cannot be expressed as the sum of two squares.
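Fermat’s criterion in possibility 1 can be verified by brute force over a small range; a hedged Python sketch:

```python
from math import isqrt

# Sketch checking Fermat's criterion: an odd prime p is a sum of two
# squares exactly when it leaves remainder 1 on division by 4.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, isqrt(n) + 1))

def is_sum_of_two_squares(n):
    return any(isqrt(n - a * a) ** 2 == n - a * a
               for a in range(isqrt(n) + 1))

for p in (p for p in range(3, 200, 2) if is_prime(p)):
    assert is_sum_of_two_squares(p) == (p % 4 == 1)
print("criterion holds for all odd primes below 200")
```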

A complex number of the form a + bi, where a and b are ordinary integers, is called a complex integer. If an integer N is the sum of two squares, N = a² + b², then:

N = a² + b² = (a + bi)(a − bi) = α · ᾱ

(where ᾱ denotes the complex conjugate of the number α = a + bi), i.e., N is factored in the domain of complex integers into complex conjugate factors.

In this domain of complex integers we may construct a theory of divisibility completely analogous to the theory of divisibility in the domain of ordinary integers. We will say that the complex integer α is divisible by the complex integer β if α/β is again a complex integer. There exist only four complex integers which divide 1, namely 1, −1, i, and −i. We will say that a complex integer α is a prime if it does not have any divisors other than 1, −1, i, −i, α, −α, iα, −iα.

So our problem takes on a different meaning; it will now turn out that the numbers of the form 4n + 1 (or equal to 2), which in the previous case were prime, cease to be prime in the domain of complex integers, while it is easy to prove that primes of the form 4n + 3 remain prime.

Since prime numbers of the form 4n + 3 are not the sum of two squares, any proper divisor α of such a p would have to satisfy α × ᾱ = 1; thus α can only be ±1 or ±i, so that p has no divisors other than the obvious ones.

For complex integers the theorem on the unique decomposition into prime factors still holds. Uniqueness here means, of course, that the order of multiplication is ignored and also all factors of the form 1, − 1, i, − i.

Let N be the sum of two squares, N = α × ᾱ. Let p be a prime number of the form 4n + 3, and let us calculate what power of p appears in the number N. Since p remains a prime in the complex domain, it is sufficient to calculate what power of p appears in α and in ᾱ. But these powers are equal, so that p necessarily appears in N to an even power, which proves the proposition.
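The divisibility rule just described reduces, for α = a+bi divided by β = c+di, to checking that both (ac+bd) and (bc−ad) are divisible by the norm c²+d²; a small integer-arithmetic sketch:

```python
# Sketch of divisibility in the domain of complex integers:
# (a+bi)/(c+di) = [(ac+bd) + (bc-ad)i] / (c²+d²), so c+di divides a+bi
# exactly when the norm c²+d² divides both integer components.
def gauss_divides(c, d, a, b):
    """True if c + di divides a + bi in the complex (Gaussian) integers."""
    norm = c * c + d * d
    return (a * c + b * d) % norm == 0 and (b * c - a * d) % norm == 0

print(gauss_divides(2, 1, 5, 0))   # 5 = (2+i)(2-i): True, 5 ceases to be prime
print(gauss_divides(2, 1, 7, 0))   # 7 = 4n+3 remains prime: False
```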

The discovery that a rich theory of divisibility is possible elsewhere than in the domain of whole rational numbers greatly extended the field of vision of 19th-century mathematicians, and as we shall use those concepts heavily in the 4th line of mathematical physics, we have brought them in at this stage – as well as the Rashomon effect of complex numbers and its 4 interpretations – and later, when we write those articles on physics, we will come back to complete their ‘vital, experimental meaning’.

*Mersenne primes*.

We can then consider a bit of the Rashomon effect on Mersenne primes, which are so abundant. WHY?

The answer can be made dual.

**S≠T:** On one side we could merely state at this stage that 4n+1 numbers, as the Mersenne primes 2^n±1, seem NOT to be primes after all, following our discovery that arithmetic numbers differ from geometric figures – which started this article – by a unit. So Mersenne numbers might just be the ‘small difference’ between a geometry, S, and a sequence of T-numbers.

**∆ð:** But there is a nicer ‘reading’ in terms of vital mathematics: *4 is an excellent configuration in space, with perfect symmetry, and if we add the ‘one’ as its singularity, we obtain what a strong membrane of four lacks, its mind-centre. So such a system should be quite stable in its membrane and self-centred.*

WHILE the 4n+3 ‘ternary’ primes, which add ‘ternary generators’, look like ‘solid efficient primes’. So we don’t consider the hypothesis of a possible error – the 3 is the perfect self-centred circle/triangle, the minimal bidimensional soul of mathematical systems, which switches between its triangular and cyclical states.


Of more interest to quantum physics will be the conjugate product that gives us a unit (the reason why the Copenhagen interpretation in terms of probabilities is so fond of it, as probabilities cannot be larger than 1). But as we reject this concept of probability for the sounder pilot-wave theory, what we mean here is that *the conjugate product is an ST product that gives us a whole ‘full ST-bidimensional being’; in the quantum case, the whole particle-wave entity, sum of both states.*

Let us then close the post with a fast review of the Rashomon effect on Theory of Numbers.

**RASHOMON TRUTH.**

In this line of thought we can then consider the 4 canonical methods of study of the Theory of Numbers (mimetic, how not, of the Rashomon Truth) – the elementary (T), analytic (∆), algebraic (S=T), and geometric (S).

**T.** In the elementary theory of numbers, which studies the properties of integers without calling on other mathematical disciplines, we find from Lagrange’s four-square theorem (built on Euler’s identity) that every integer N > 0 may be expressed as the sum of the squares of four integers: N = x²+y²+z²+u², which beyond its classic proof has the obvious meaning that *not only are all numbers combinations of primes but they are combinations of our canonical 4 Holographic basic combinations of ST dimensions: SS, ST, TS, TT:*

**∆∞: ∑ði-1≈∏Si:**

**5D: ∫ 3D: @ 1D: l 2D: ∑ 4D: ∂**

**5D: Γ 3D: • 1D: t 2D: ∏ 4D: Xª**

As NUMBERS are social forms, but also *the simplest expression of holographic combinations of space and time.*
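The four-square theorem above can be checked numerically; a brute-force Python sketch (illustrative only, far from the efficient classical algorithms):

```python
from math import isqrt

# Sketch of the four-square theorem: every N > 0 admits a decomposition
# N = x² + y² + z² + u². Brute force over ordered triples x <= y <= z.
def four_squares(N):
    r = isqrt(N)
    for x in range(r + 1):
        for y in range(x, r + 1):
            for z in range(y, r + 1):
                u2 = N - x*x - y*y - z*z
                if u2 >= 0 and isqrt(u2) ** 2 == u2:
                    return (x, y, z, isqrt(u2))

print(four_squares(310))  # one valid decomposition of 310
```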

**S.** We have just done a bit of ∆st geometric number theory in more detail, as geometry is my favourite branch of maths.

In classic maths, the basic objects of study in the geometric theory of numbers are ‘space lattices’ – which for 2D are those polygons – that is, systems consisting entirely of ‘integral’ points, all of whose coordinates in a given rectilinear coordinate system, rectangular or oblique, are integers.

Space lattices have great significance in geometry, in crystallography and in any atomic theory. They are intimately connected with the arithmetic theory of quadratic forms, with integer coefficients and integer variables, *obviously due to the holographic principle of bidimensional SS, ST, TS, TT manifolds.*

**S=T **means the use of algebraic methods to solve number theory problems.

The basic concept of the algebraic theory of numbers is the concept of an algebraic number, i.e., a root of the equation:

a0xⁿ + a1xⁿ⁻¹ + … + an = 0

where a0, a1, a2, …, an are integers. So it is properly more an algebraic question. Yet besides this, in ∆st it also means relating *numerical configurations with time events, as we have just done, considering the ‘vital moving configurations of 20-scales, time ages and atomic elements’.*

**∆.** So finally the last and most complex kaleidoscopic view is that of analysis, closely connected with the theory of functions of a complex variable and also with the theory of series, the theory of probability, and other branches of mathematics.

Particularly noteworthy is the problem of counting the number of integral points in a given domain, a problem which is important in physics and to which we have also given a glimpse from the ∆st perspective with our analysis of the ‘volume of ST-energy’ created by prime numbers’ membranes.

So in this chapter we are considering only certain selected questions in the theory of numbers mostly from the S, S=T and T perspectives.

**∞ dwindling prime numbers.**

Is this sequence of primes infinite?

The fact that any integer can be represented in the form of a product of primes does not yet solve the problem, since the exponents a1, …, ak may take on an infinite set of values.

Euclid, though, already proved that the number of primes cannot be equal to any finite integer k.

Let p1, p2,. . ., pk be primes; then the number:

m = p1p2…pk + 1

since it is an integer greater than one, it is either itself a prime or has a prime factor. But m is not divisible by any one of the primes p1, p2, …, pk since, if it were, the difference m − p1p2…pk would also be divisible by this prime; which is impossible, since this difference is equal to one. Thus, either m itself is a prime or it is divisible by some prime pk+1 different from p1, …, pk.

So the set of primes cannot be finite.
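Euclid’s argument can be run numerically; note that m itself need not be prime — the classic example is 2·3·5·7·11·13 + 1 = 30031 = 59 × 509, whose factors are new primes outside the list:

```python
from math import prod

# Sketch of Euclid's argument: m = p1·p2·…·pk + 1 is divisible by no
# listed prime, so either m is prime or its prime factors are new primes.
primes = [2, 3, 5, 7, 11, 13]
m = prod(primes) + 1                     # 30031
print(all(m % p != 0 for p in primes))   # True: no listed prime divides m
print(m == 59 * 509)                     # True: m is composite, with new prime factors
```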

And this, like the irregular infinity of *pi decimals* treated elsewhere, makes us consider that *both in space and in scales, hence in time, the Universe is infinite… and so besides the 11¹¹ scale between planes, planes might be just gathering in 9+2-decametric scales, the observable ∆º±4 of the humind (which sees therefore an open Universe, NOT its singularity 10-11 dimensional borders), as ‘fractal points’ of a mega ¹¹11 infinite scalar game.*

But as they dwindle (as per Euler’s and Riemann’s functions) the ‘wholes’ that each larger prime signifies, become rarefied – still there are surprisingly too many. For the limiting scales of most planes:

In the graph, the number of primes holds remarkably dense in the basic 100 scale of most timespace symmetries, 1/4th of the classic 2 x 2 Dst, which can encode most information of beings… clearly showing its structural role in all social systems, as one ‘dimensional leading’ element of the whole in its different configurations and knots and flows of communication.

The ratio then dwindles to 1/6th out of 1,000 and 1/8th for 10,000 in the next key interval of social groups – still fairly strong, if we take into account that 10% is a very strong ‘leading’ elite, proper of most organisms – and it still holds in the 1 million social group (1/12th).
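The quoted ratios (about 1/4 at 100, 1/6 at 1,000, 1/8 at 10,000) can be reproduced with a sieve of Eratosthenes; a sketch:

```python
from math import isqrt

# Sketch: sieve of Eratosthenes reproducing the dwindling prime ratios
# quoted above (π(100)=25, π(1000)=168, π(10000)=1229).
def count_primes(n):
    """pi(n): number of primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b'\x00\x00'
    for i in range(2, isqrt(n) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(range(i*i, n + 1, i)))
    return sum(sieve)

for n in (100, 1000, 10000):
    print(n, count_primes(n), round(n / count_primes(n), 1))
```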

In fact, in our billionaire society we are controlled by the famous 1%, and the more infamous 0.01% of owners of corporations, our ‘stock-rats’, modern aristocrats of the capitalist society.

Yet in the prime list for 10¹º, the approximate population of mankind, the prime elite is still 1/20th of the population of social numbers. So the Universe is truly strongly connected and stable in its hierarchy of prime numbers – even numbers 1/2 and the remaining odd singularities – our societies being far less balanced.

This slow decrease can be seen intuitively in the table, as each next power of 10 means one more digit in the right column, so we just need to look at the leading figures from bottom to top to feel the decrease: 3.7→4→4.5→5→5.7→6.6→7.8→9.5→1.2→1.6→2.5→4…

Now we shall not provoke the reader with the Rashomon effect, except to consider the @-perspective on this convergence of the function: are we *just not perceiving the mass of non-prime numbers when we get very far from our @nalytical centre of reference? Are we compressing our sequence of existing numbers as we move towards infinity?*

So clearly the tendency is towards a stabilisation after the initial sinking under the canonical 10,000 beings… a well-known but difficult to obtain (hence no proofs here) result of Number theory (left).

Another matter, though, is what kind of social structure is a world with a prime number of 2^74207281−1 elements – or are we into an inflationary fiction concept of prime numbers?

As the largest known prime has almost always been a Mersenne prime, based on the factorizations of either N+1 or N−1 (and for Mersennes the factorization of N+1 is as trivial as possible, a power of two), the question remains – knowing that prime numbers are the reflection of geometrical forms, and geometrical forms are NOT equivalent to numbers, as we said in the opening remarks of this post, having always a ±1 deficit.

So we boldly affirm that Mersenne primes obtained from 2^n±1 should show, at a certain level of size of the ‘polygon’ of points, the ±1 error between the measure of a perfect cycle and an exponential function 2^n that can only approach it from above or below with one point more or less – as, even more clearly than in the introductory remarks, an exponential function dives into ∆§ planes and does NOT dwell in a single plane as the polygonal primes do.
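For reference, the classical reason record primes are Mersenne numbers is computational: the Lucas–Lehmer test makes the primality of 2^p − 1 cheap to check. A minimal Python sketch:

```python
# Sketch of the Lucas-Lehmer test: for an odd prime p, the Mersenne
# number 2^p - 1 is prime iff s = 0 after p-2 iterations of s -> s²-2 (mod 2^p - 1),
# starting from s = 4.
def lucas_lehmer(p):
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in (3, 5, 7, 11, 13, 17, 19, 23) if lucas_lehmer(p)])
# [3, 5, 7, 13, 17, 19] — 2^11-1 = 2047 = 23·89 fails, as does 2^23-1
```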

All in all, if there were, not in inflationary maths but in reality, a final prime number – meaning a final efficient polygonal enclosure of a bidimensional surface of vital energy – that would be the enclosure membrane of the @-mind of the Universe, the number of God… And with this final mystical thought we shall close the post, somehow in a cyclical manner, with the same theme we started with – the non-equality of numbers and points – as all things must be closed in a circular enclosure.

# II. S@

# PLANES & NUMBERS: CLOSURE

**Foreword. The multiple representation of reality in numbers.**

So far we have studied only an ideal form of a number as a point, a knot or a lattice, which is where in reality most atomic numbers are placed. Numbers as forms do have a structure that makes them both space and time, S≈ð. But numbers as mirrors of the Universe have – like words in verbal languages, pixels of colour in images, curved lines in geometry, 5D smells in noses – multiple relationships with that kaleidoscopic 5D reality. And so we must advance on the multiplicity of Numbers under the 5D ‘Rashomon effect’, both in their structures – numbers within numbers – and in their solitude, numbers as social groups alone.

The commonest way to classify all the ways in which numbers reflect the Universe was established in the classic age of mathematics with the concept of closure: how we can represent, in different planes and forms of numbers, all the solutions of a given algebraic equation of temporal variables that represent a function of momentum (lineal or cyclical, | or O) and energy (Ø) and its motions. This has, when we translate the 3 conserved quantities of physics into the ternary topologies of a T.œ, a meaningful sense in ∆st, as they also close the ‘existence’ of beings in a single ‘relative present’ plane.

Numbers though are *at their best when they are indistinguishable from the @-mind that perceives them, that is, without the addition of a human interposed plane. Then they do obey all their ideal properties. And we lose a certain tyrannical attachment that happened in the axiomatic method of modern mathematical theory between number theory and the filling of the real line, which is NOT the meaning of numbers.*

We then deal with numbers as social groups more than as mere functions closely related to those @-mind plane and line representations, each one associated with a family of numbers.

And to that aim we shall first put in correspondence different type of numbers with the different five dimensions of the Universe.

As we explain in @nalytic geometry, the line of numbers is evidently related to the 2nd dimension, the origin of coordinates to the 1st Dimension, the 0-1 finitesimal scale to the 5th dimension of generation, and so on… Yet when we depart from the line correspondence with the family of numbers referential of those planes, in a more complex manner, *we can cast multiple Rashomon effects ($@≈∆ð points of view) on each of the planes and families of numbers.*

Consider the line, which is used for all the following types of numbers: natural, negative, rational & irrational. Each of those families, when escaping the tyranny of the line, becomes filled with new meanings, and *its reference is now the 5D Rashomon effect for each of the families of numbers, which will have to represent somehow on its own, with more or less accuracy or detail, as a mirror of a mind-language, ALL the elements of the Universe (something that, when attached to the line, loses most meaning).*

**Natural numbers** then are many things.

They are ‘steps’ of a given unit, which represent a discontinuous motion of frequency ƒ and wavelength ‘1’. Steps become then discontinuous landings of the fractal unit that measures distances or motions in a lineal 2D geometry.

We can then include ratios of those steps, breaking the line into different smaller ‘fractal steps’, and that is a first meaning of a rational number. *But if we make a different unit of each rational part, we enter into another natural number scale of the fractal Universe, returning then to natural numbers, which therefore are also able to represent ratios. And so natural numbers can be any number, as the scale keeps falling down at microscopic rates.*
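This ‘descent into ever smaller fractal steps’ is literally the continued-fraction (Euclidean) expansion of a ratio, where each remainder becomes the new unit; a minimal sketch:

```python
# Sketch: expand p/q by repeatedly taking the whole-number part and
# making the remainder the new unit (the Euclidean algorithm).
def continued_fraction(p, q):
    """Continued-fraction terms of the rational p/q."""
    terms = []
    while q:
        terms.append(p // q)
        p, q = q, p % q
    return terms

print(continued_fraction(355, 113))  # [3, 7, 16]
```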

Finally, negative numbers can be seen as a mirror symmetry of the natural numbers, which, depending on which function we are measuring, will be seen looking to one or the other side of the line.

It then comes to our realisation that natural numbers, negative numbers and ratio numbers are in essence a grouping of their own, closely related and interacting, which we shall consider closely related to the 3 ∆º present dimensions of existence:

Such that the natural numbers come closer to the 2D representation; when we add to them the negative numbers, they can be used to project the two directionalities of the 1D representation; and when converted into ratios of each other, they can give birth to a 3D representation. Then it follows that the two remaining great families of numbers – *irrational numbers, which descend in scales without limit, must be closely connected to the 4th Dimension of entropy and dissolution; and the Complex numbers, which ascend into a new dimensional plane of imaginary geometries, must be naturally connected to the 5th dimension of social evolution, as the imaginary part is in fact a √ lesser quantity ‘extracted’ from the real line (when we square both the real line and the imaginary one to eliminate the cumbersome negative ‘game’).*

And this will be the case. So we write in terms of the 5D of reality the Generator equation for the 5 families of numbers as:

Γ Nº:* 1D->Z, 2D->N, 3D->Q, 4D->R, 5D->C*

An essential equation of the Universe in its linguistic representation through the language of non-Æ math.

**FAMILIES OF NUMBERS**

*In the graph, an ∆ scale can be represented through the interval of 0 to 1 by finitesimals, or the interval from 1 to ∞, which become the decimal and decametric regions of a supœrganism represented in the real line: ∆-1: 0 to 1; 1, the ∆º scale; and 1 to ∞, the external world. Thus this section will study the meaning of numbers and its scales as a representation of the supœrganism.*

**Abstract**. This section deals with number theory, which is strictly speaking part of Algebra, but as modern axiomatic maths developed too far, it has somehow become a neglected area, confused with numerology, yet at the heart of the social growth of scales, from § into ∆. So it is the ‘5th element’ of the ∆•st formalism, concerned with how ∑§³° becomes ∆.

Number theory is thus closely related to the world cycle as it transits through the different numerical systems in its motion between ∆º±1 scales of existence.

In that regard, 3 numbers could summarize it all: 1, 0 and ∞. Amazing as it seems, they can be seen as the ternary sides of the same structure: 0, the center; |, the membrane; ∞, the internal cycles between both that mirror the infinite Universe:

Thus infinity only reaches till the limits of its Whole:

The |-membrane, the • singularity & the ∞ cycles between them form an island universe, a fractal superorganism and its world cycles.

The study of the interaction of those 3 elements that compose all §upœrganisms is thus an essential part of GST, which can be inscribed in an even larger concept: *the ternary, Universal grammar of all systems, and its fractal ternary sub-divisions ad infinitum.*

Needless to say it is the formal view of the ∆± vital reality of super organisms. So it is a mathematical post, equivalent to the ∆± post which transfers this and the previous ¬Æ posts into ±∆ real super organisms (the next posts).

Finally, notice that Numbers are merely social groups; but we can talk, as Pythagoras did in those initial times of ‘square/rounded numbers’, of types of numbers in terms of the S, T, ∆ and º elements of reality, as we shall do, associating the 2 main Universal constants, or ratios, to S<>T transformations – pi and all trigonometric functions – and to ∆ growth – e and all exponential functions.

In that regard, *infinity in space breaks more easily as the ‘crowded’ numbers-social groups become disordered into entropy; so we shall find that e breaks the regularity of its decimals after the 10th – 2.718281828459… – but pi has no limit of infinite scales, according to the Poincaré Conjecture, allowing infinitesimal minds to map out infinite worlds.*
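The ‘1828’ repetition and its break can be checked by summing the series e = Σ 1/k!; a sketch using Python’s decimal module:

```python
from decimal import Decimal, getcontext

# Sketch: compute e from its factorial series to see the '1828' pattern
# in the decimals repeat twice and then break into '459...'.
getcontext().prec = 30
e, term = Decimal(0), Decimal(1)   # term holds 1/k!
for k in range(1, 40):
    e += term
    term /= k
print(str(e)[:14])  # 2.718281828459
```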

**Beyond the number. Algebra.**

Number theory evolved into Algebra, where the unit is still the social number (not the imagined set), which does have, as Pythagoras clearly understood, ‘geometric forms’ (hence it would be much better explained in a bidimensional plane – the reason why ‘bidimensional numbers’ are so useful):

*A number is the social sum of undistinguishable fractal points.*

Thus we return to reality and make the point of geometry and the number of algebra the units of mathematics, which reflect the Universe; and eliminate the Axiomatic method, which Gödel proved to be incomplete without experience, and the metalanguage of set theory as an absolute form of proving truth, which is just a ‘worldview’ deformation of the mind of ‘idealist Germanic cultures’, later studied in more detail.

It is then when we can fully understand a mind-world as a higher elliptic scale of the 5th dimension, in which a social group of ‘points’ becomes a ‘scalar number’ – a field of a lower hyperbolic ∆-1 dimension becomes a fractal network, connected to the ‘point-particle’, ‘mass’, ‘molecular being’, in a dynamic way.

In Algebra, function overcomes geometric form, but only by putting both together can we see what the existential life of ‘algebraic numbers’ is.

So we shall first consider the more ‘algebraic of all the postulates’ of geometry, the 5th postulate, which really reflects the meaning of a mind.

And then we shall consider the ‘non-Aristotelian’ ternary algebra and the generator equation of:

*Sp (past, spaces; lineal entropic limbs-fields) ST(present, iterative hyperbolic waves-bodies)>Tƒ(cyclical future, informative particles-heads)*

It is summarized in the ternary Generator Equation of all systems and its fractal flows of time. We have seen a bit of it already, but before we introduce it in full, we shall acknowledge the existence of some predecessors of the ternary motions-forms of space-time in the philosophers of science of the East, which reached their zenith in Taoism and the book of changes.

** ITS EVOLUTION. ΓST GENERATOR AND AGES.**

Mathematics divides phenomena into two broad classes, discrete or temporal and continuous or spatial, historically corresponding to the earlier division between T-arithmetic and S-geometry.

That is, numbers in space are points and numbers in time are numbers; but as ‘the stience evolved’, as usual the 2 relative elements of its ternary generator, S-points<ST>T-numbers, merged in such a form that points would be able to describe also temporal phenomena and numbers would become apt to analyse time processes.

This natural ‘evolution’ of all languages, mirrors of the Universe, which can be stretched to represent all reality with enough ‘Ptolemaic epicycles’, that is, enough complexity, is expressed in algebra with the concept of ‘Closure’. The language, though, reaches its maximal simplicity – and this is the essential rule to know when to use each language – when it best focuses reality and is most proper to it. So for example, orbits are easier to represent with the sun in the centre, but with the Earth in the centre a few epicycles will make it work. Space in that sense is easier to represent with points, and time with numbers; but as geometry and algebra evolved they finally could ‘map out all reality’.

And this process was called in algebra ‘closure’… Which means that as humankind evolved its mind-flexibility beyond the obvious ‘spatial first age of all systems and languages’ into a more flexible comprehension of ‘time events’, which could ALSO BE COUNTED AND HAD 2 ARROWS TOWARDS PAST AND FUTURE, humans came to understand negative numbers.

As they came to understand also the ‘scaling nature of the Universe in wholes and parts’, they added rational numbers. When this concept went beyond the ‘visible’ into the infinite scales, they came to understand ‘transcendental numbers’. And finally, in the last stage, when they came to use processes that involve HALF rotations of a world cycle back and forth in time-space, S<St or T<St (not yet theoretically understood, as this field has been completely guided by von Neumann’s adagio: ‘in mathematics you don’t understand things, you just get used to them’), THEY CAME TO USE LATERAL NUMBERS, WHICH THEY CALLED ‘IMAGINARY’, as they couldn’t figure out ‘why’ √-1 IS a number.

So 3 fields are of interest in that process, the first being the evolution of geometry into the topology of networks, which allowed it to merge with time motions and numbers.

The second is the closure of the systems of numbers, which grew from reflecting merely space populations (natural numbers), into reflecting ‘partitions’ of social groups (represented by those natural numbers) with the Egyptian ratio-nals; expanded further with the realisation that certain ratios did apply to ‘scaling’ without limit (as in the pi ratio of Spe-lines to O-cycles); and then came what was seemingly the hardest part for the spatially-oriented humankind. (Indeed, it always amazes me that humans, EXCEPT Gauss and his disciple Riemann, do NOT understand time cycles and their numbers, EVEN when I explain them as clearly as in this blog.) Namely: ‘negative numbers’ ARE time arrows of inverse direction to those chosen by positive numbers, and imaginary numbers ARE merely a 1/2 rotation of a time translation, such that if + numbers are taken as Spe (mostly, in our space-oriented world), then negative numbers are the – Tiƒ motion, and imaginary numbers the SxT bidimensional expression – the ‘half’ rotation in time of any process, WHICH GIVES us the bidimensional complex number.

So we can translate the ‘closure’ of all phenomena in terms of numbers, for the ∆º & ST elements of reality (broken into 2 sub-generators to simplify comprehension), such as:

*Spe (+) < St (i) > T (-); ∆±i: i-ratio-nal or transcendental numbers; ∆±1: ratio-nal numbers*

This is the ‘mapping’ of algebra into GST, which we shall explore in depth in this post dedicated to number theory, which itself has gone through 3 ages:

- Spe: Spatial Youth, Mystical, geometrical age of numbers as forms (Greek era) > Tiƒ: Algebraic age of ‘Closure’ > ΓST age, adding meaning to closure (only in my head:)

**NUMBER SYSTEMS**

**Abstract.** The evolution of maths parallels the evolution of the human mind and its machines and digital systems of numerical measure (money playing a key role, from the interest calculus of e to the concept of negative numbers). NUMBERS thus evolved in complexity by families, studied in depth through the 3 ages of maths: classic geometric Greece, where simple numbers, ratios and the geometric ‘constants’ of nature were analysed.

It is the spatial age of geometry when mathematics studies bidimensional beings (holographic principle).

Finitesimal irrational numbers would be the theme of the modern age of Calculus, which deals with waves in space-time.

And finally imaginary time numbers, with a full-blown expansion of types of coordinates and dimensions, bring the age of excessive information in mathematical numbers, parallel to the simpler growth of a new 0 | game of digital thought in computers, based on an inflexible, truth-or-dare Boolean algebra…

Discrete systems can be subdivided only so far, and they can be described in terms of whole numbers 0, 1, 2, 3, …. Continuous systems can be subdivided indefinitely, and their description requires the real numbers, numbers represented by decimal expansions such as 3.14159…, possibly going on forever. Understanding the true nature of such infinite decimals lies at the heart of analysis.

And yet, lacking the proper ∆S≈T theory, it is not yet understood.

Of those numerical properties ∆nalysis is obviously more concerned with the ∆-questions; so a number is defined in ceteris paribus ∆nalysis as a *social scale* of @identical beings.

(The reader must abandon his single causal logic and accept always a ternary+0 model of knowledge, which does NOT forbid partial ceteris paribus single-causality as long AS WE KNOW its limits.)

NEXT COMES the question of how to ‘consider scales’, which tend to be decametric, good! One of the few things that works right in the human mind and does not have to be adapted to the Universal mind, from d•st to ∆ûst.

Shall we study them downwards, through ‘finitesimal decimal scales’ or upwards, through decametric, growing ones?

Answer, an essential law of Absolute relativity goes as follows:

‘The study of decametric, §+ scales (10§≈10•^{10} ∆ ≈ ∆+1) is symmetric to the study of the inverse, decimal ∆>∆-1 scale’.

Or in its most reduced ‘formula’: **(∞ = 1 = 0): (∞-1) ≈ (1-0)**

**Whereas** ∞ is the perception of the whole ‘upwards’ in the domain of 1, the minimal quanta, up to the relative ∞ of the ∆+1 scale; 1 is the relative infinite of a system observed downwards, such that the ∆+1 whole (1) is composed of a number of ‘finitesimal parts’ whose minimal quanta is 0.

So in absolute relativity the ∆-1 world goes from 1 to 0, and the ∆+1 equivalent concept goes from 1 to ∞. And so now we can also extract from the ‘infinitorum thought receptacle’ (: a key difference between both mathematical techniques:

*A conceptual analysis upwards has a defined lower point-quanta, 1, and an undefined upper ∞ limit; while a downwards analysis has a defined upper whole limit, 1, and an undefined ‘finitesimal’ minimum, +0.*
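The claimed symmetry between the 1→∞ decametric ladder and the 1→0 decimal ladder can be sketched numerically; a minimal sketch, assuming the inversion x → 1/x is the map the text intends between both domains (the function name `invert` is ours, not the text’s):

```python
# Sketch: the map x -> 1/x sends the 'upward' decametric scale (1..∞)
# onto the 'downward' decimal scale (1..0), pairing each whole with a finitesimal.
def invert(x: float) -> float:
    """Mirror a point of the 1..∞ domain into the 1..0 domain."""
    return 1.0 / x

upward = [10 ** n for n in range(6)]      # 1, 10, 100, ... (the §+, ∆+1 direction)
downward = [invert(x) for x in upward]    # 1, 0.1, 0.01, ... (the ∆-1 direction)

# The inversion is an involution (applying it twice restores the scale),
# and each mirrored pair multiplies back to the '1' that centers both domains.
assert all(abs(x * y - 1.0) < 1e-12 for x, y in zip(upward, downward))
assert invert(invert(7.0)) == 7.0
```

Note how the upward list has a defined lower bound (1) and no upper bound, while its mirror has a defined upper bound (1) and approaches, without reaching, the +0 limit, exactly the duality stated above.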

So the smart reader will notice that this absolutely relative duality of ∆±1, where ∆@ is the ‘observer’, implies relativity of knowledge, always with a self-centered element to define it, and the relative definition of finite infinities, or ∆+1 limits (ab. ∞), and finite infinitesimals (+0).

This brings an essential isomorphism of absolute relativity (do NOT confuse ∆-equality with S-ynchronicity, Ti-somorphism and @dentity; we ‘repeat’ because, if any human ever gets to read these texts, there is TOO much upgrading, and so, not to get dizzy, we DO repeat essential truths).

I-somorphism is the concept of equality in time-information, and it is at the heart of the POSSIBILITY to do mathematical PROOFS in different, seemingly non-identical space-time domains.

When we apply the identity of ∞|⁰ (here written in inverse fashion), as in the title of this post on ‘number theory’, poised to complete what my fellow countryman Fermat started, we *understand the why of numbers and its techniques, so far only made explicit, as most science is, in how-terms:*

The real numbers NOW always include an inverse infinity between each 1 and 1+1 interval. YET they can provide satisfactory models for a variety of phenomena, even though no physical quantity can be measured accurately to more than a dozen or so decimal places; as 0 is now the undetermined lower limit.

It is *not* the values of infinitely many decimal places that apply to the real world but the *deductive* structures that they embody and enable, due to the equivalence of 0≈1≈∞.

Analysis, and its inverse integral and derivative calculus, ‘drinks’ from all this.

Thus analysis came into being because many aspects of the natural world can profitably be modeled by those equivalences as being continuous, at least to an excellent degree of approximation. Again, this is a question of modeling, not of reality. Matter is not truly continuous; if matter is subdivided into sufficiently small pieces, then indivisible components, or atoms, will appear, and finally we *will find the finitesimal +0 quanta*.

But atoms are extremely small, and for most applications treating matter as though it were a continuum introduces negligible error while greatly simplifying the computations; *this holds when we work on the 1-∞ upper ∆+1 scale, that of the cosmological realm; whereas the intermediate scale is that of ∆o human thermodynamics. So we can then state in physical systems the equivalence of:*

*+0 (quantum physics) ≈ | (thermodynamics) ≈ ∞ (bound infinity: gravitational scale).*

And all that is above quantum effects, in ‘classic physics’, can be studied with a continuum modeling, which is standard engineering practice when studying the flow of fluids such as air or water, the bending of elastic materials, the distribution or flow of electric current, the flow of heat, and so on (all ∆>∆+1 physical systems).

#### Number systems.

*Closure.* In mathematics there is a basic concept about numbers: the concept of ‘closure’.

*As humans have expanded their understanding of the Universe, they have also evolved their mathematical mirror over it, which has finally become, both in its more geometrical, spatial form (points) and its more informative, temporal form (numbers), a quite perfect mirror able to describe it all.*

However humans have, as in most fields of their knowledge, merely acted in a very mechanical manner, ‘discovering’ without much understanding of the deeper level of reality which *numbers mirror: the space and time entities, symmetries, sequential events and simultaneous organisms of reality*.

Numbers have evolved through the expansion of the concept of ‘quantity’: from the natural numbers, which merely observe spatial quantities added on in social groups, the essential definition of number; into ratios between numbers, which express more complex concepts of reality and introduce the idea of functions and ‘motions in time’; advancing further with the concept of negative numbers, which are NO LONGER QUANTITIES IN SPACE BUT QUALITIES IN TIME; FINALLY REACHING THE MAXIMAL transformation from ‘space to time’, from quantity to function and ratio of exchange of the vital entities of reality, entropy, information and energy (Spe-Tiƒ-ST), in the final closure of reality expressed in the mirror-language of numbers, which is represented by the ill-understood imaginary numbers.

As usual it is NOT casual that humans do have different number families, as they are related to the 4 different ∆•ST elements of reality. In that regard, *the proper way to name them in GST is that offered by Gauss: direct=natural, positive; inverse=negative; integer numbers and LATERAL imaginary ones.*

*Of those 3 concepts, the positive and modern ‘complex’ denominations work fine, but we will call negative numbers ‘- inverse numbers’, because it is far sounder conceptually: inverse numbers are the inverse direction of a timespace dimension, often the Spe<≈>Tiƒ dual element.*

In that regard, the advance of modern Algebra and group theory has been to define properly negative numbers as the ‘inverse’ of positive numbers; a finding, however, that has not been properly translated to the ‘red scare’ of so many sciences that do not like or know how to interpret negative numbers. This happens specially in physics, both in relativity (-m, -e) and in quantum physics, in which the ‘negative’ of an imaginary number (called its conjugate) multiplies in many equations, eliminating the ‘phase’, neutralising its ‘time’ coordinates; but those conjugates have never been interpreted in a realist way, as what they really mean for the quantum world.

Here we must consider a coordinate system of ‘square, bidimensional’ space and time elements, where the x² coordinates correspond to space, and i²=-1 represents the inverse, mostly time, scales.

These main number systems are related to ∆•st as follows:

**§: ∑S, ∑T.** The natural, ‘direct’ numbers ℕ. These numbers are the positive (and zero) whole numbers 0, 1, 2, 3, 4, 5, …. If two such numbers are added or multiplied, the result is again a natural number; WHICH means *it is possible to create all intermediate societies departing from a unit quanta, between the relative 1 and ∞ value within which the domain of the function or event is meaningful.* Their use is for the most simple, intuitive *games of social evolution of identical groups, through the § scale between ∆ and ∆+1.*

**S+(-T).** The integers ℤ, which add the ‘Inverse’ Spe<≈>Tiƒ numbers, which will normally be related to the inverse entropy vs. information functions of all systems of reality, hence to the inverse ‘directions’ of one of the ‘dimensions’ of space-time (S, T or ∆). As Gauss intuitively understood, this removes absurd hang-ups of scientists afraid of negative ‘values’, specially in physics, where c-speed is a limit because they don’t understand ‘inverse mass – expansive entropic dark energy’ in relativity equations. These numbers are normally called the positive and negative whole numbers …, −5, −4, −3, −2, −1, 0, 1, 2, 3, 4, 5, …. If two such numbers are added, subtracted, or multiplied, the result is again an integer; which again means it is possible to add up all intermediate combinations of entropy and information, or similar inverse directions in a dimension of ∆•st, departing from a unit quanta, between the relative 1 and ∞ value within which the domain of the function or event is meaningful; and so is its Spe<≈>Tiƒ function.

**Sx(t=1/ƒ)=K (5D Metric).** The rational numbers ℚ. These numbers are the +direct and -inverse fractions *p/q*, where *p* and *q* are integers and *q* ≠ 0. They are required for all equations based on the metric inverted properties of space and time, whose product in terms of time duration, or division in terms of cyclical frequency of information, remains constant (it is still a rational number). Indeed, if two such numbers are added, subtracted, multiplied, or divided (except by 0, which has by definition no inverse, as it is undefined in those ‘full scales’, and so must be truly understood, as the first mathematicians who ignored its existence did, as the ‘outside limit below the unit quanta’), the result is again a rational number.

**∆-1.** The real numbers ℝ. As explained above, these numbers are the positive and negative relative infinitesimals in decimal scales, which study the ∆<∆-1 world of downward scales till the relative infinite ∆-1 quanta, normally according to the standard scaling on the 10¹¹ level. So normally *a decimal number and Universal numerical constant ‘breaks’ (becomes disordered and meaningless) before reaching the 11th decimal, the limit of calculations in GST.* And according to the ‘wholeness of the Universe’ within each of those fractal island-universes, if two such numbers are added, subtracted, multiplied, or divided (except by 0), the result is again a real number.
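The ‘closure’ ladder of those families can be sketched directly on small samples; a minimal sketch, where the sample lists and the `closed` flag are our illustration, not the text’s notation:

```python
from fractions import Fraction

# Sketch: 'closure' checked on small samples of each number family.
naturals = [0, 1, 2, 3]
rationals = [Fraction(p, q) for p in range(-2, 3) for q in range(1, 4)]

# N is closed under + and x: social addition never leaves the family ...
assert all(a + b >= 0 and a * b >= 0 for a in naturals for b in naturals)
# ... but its inverse operation escapes into Z: subtraction creates 'inverse' numbers,
assert any(a - b < 0 for a in naturals for b in naturals)
# and Z in turn escapes into Q under division, the S x T = K ratios of the text.
# Q itself is closed under +, -, x and / (except by 0):
closed = all(isinstance(a / b, Fraction) for a in rationals for b in rationals if b != 0)
assert closed
```

Each ‘escape’ is precisely what forced the historical expansion of the number concept described above: the inverse of an operation pushes the family into the next, larger closure.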

**•-Minds: Still mappings and its languages of information ≈ Perception. Eyes of time.**

All that we have done till now is fair enough, but external in its description. It lacks the ultimate evasive question – the will, which creates the ‘world as a representation’, biased by the entity to ‘break’ reality into the outer and inner world, and to protect the survival of the inner world with those actions.

In other words, we need a mind able to perceive the whole as a whole separated from the outer world in which it interacts. And this was the understanding that started modern philosophy of science in the work of Descartes.

*Descartes*, a contemporary of Galileo and the true founder of modern science with his ‘Method of Reasoning’ and the frame of reference of the mathematical mind, *said that all that exists was made of:*

*– Open space, which he called ‘res extensa’.*

*– Closed, cyclical times, which he called ‘vortices’*.

And then he added a 3rd element, the final stroke of a genius unparalleled since the times of Aristotle, after realizing that the only proof he had of the existence of those vortices and res extensa was the fact that he perceived them: *Cogito Ergo Sum, ‘I think therefore I am’.*

*– The mind or 0-point, the relative frame of reference that mapped the ∞ cycles of time of the Universe, reducing them to a ‘World’,* to fit them into the infinitesimal volume of the brain.

*º-mind x ∞-cycles = World: equation of the Linguistic Monad: ∞ Mind-Worlds in 1 Universe.*

The mind though believes itself to be the center of the Universe in the ‘ego paradox’, as it sees everything turning around its infinitesimal point, which hosts inside all the linguistic perception of reality, or ‘world’, WHICH IT CONFUSES WITH THE UNIVERSE. So the mind is a fractal point•, but believes itself to be it all.

Descartes was not a simpleton. He was fully aware that what he had in the mind was not the whole Universe, so he expressly stated the fact, differentiating the ‘world’ of a human mind from the infinite other worlds that exist outside, establishing this difference in his little-known book, *The World*, and affirming that his ‘Cartesian frame of reference’ was only that of the human mind.

So he affirmed that the Universe was the sum of infinite mind-worlds, which did not necessarily speak the same languages, nor create the same mappings of reality we humans create.

In the graph we can see the difference between Newton, who THOUGHT the mind of man WAS the entire Universe, hence converting the Cartesian mind graph into the ABSOLUTE BACKGROUND SPACE-TIME that still lingers in science, and Descartes, who thought it was just one of many minds, possible with different geometries.

## C & I

What was then the next big number to be discovered, after the irrational ratios pi and e? The imaginary number, closely related to e, which would be the most perfect expression of the dynamic, cyclical, ever-shrinking wave, and hence of the more sophisticated ways to understand the motion of information, which *shrinks a sphere, the only form deformable to infinity without tear and change of topology (Poincaré Conjecture), into a point-mind; and what Poincaré’s conjecture expresses for any dimensionality is perfectly shown by the imaginary rotating sphere in terms of e and pi and i, all together now, to express the whole Mind-mapping of reality:*

*The Rashomon effect on complex numbers.*

So now we have arrived at the heart of the matter: the complex numbers, which are the best to represent the social evolution of forms into new scales that emerge as envelopes of wholes, with a fleeting transcendental form that seems to us evanescent, not even there, when we look at it from below, in our denser, lower existence.

More generally, we can apply the Rashomon effect to complex numbers and consider them from 4 different perspectives:

**ð:** As inverse, TEMPORAL numbers.

**$:** As bidimensional REAL numbers (Bolyai) with their own rule of multiplication, (a,b) x (c,d) = (ac-bd, ad+bc) ≈ as a relationship between quantities represented in the real line, belonging to ‘Space’, and in the imaginary line, belonging to ‘Time’.

**@:** As a self-centred polar frame of reference (r, α).

**∆:** And finally, when squaring the coordinates, as an ∆§-frame of reference. As this is the new solution found by ∆st theory, and the most fruitful in the analysis of its true representation of 5D geometries, we shall consider it in more detail.
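The ‘bidimensional real’ rule of multiplication quoted above can be verified against ordinary complex arithmetic; a minimal sketch (the helper `pair_mul` is our name for the pair rule):

```python
# Sketch: the pair rule (a,b) x (c,d) = (ac-bd, ad+bc), checked against
# Python's built-in complex multiplication.
def pair_mul(p, q):
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

p, q = (3.0, 2.0), (1.0, 4.0)
z = complex(*p) * complex(*q)
assert pair_mul(p, q) == (z.real, z.imag)     # both give (-5.0, 14.0)

# The pair (0,1) is the 'lateral' unit itself: squared, it yields (-1, 0).
assert pair_mul((0.0, 1.0), (0.0, 1.0)) == (-1.0, 0.0)
```

This is the sense in which the ‘$’ view needs no √-1 at all: the lateral unit is simply the pair (0,1) with that rule of multiplication.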

Of all the views of Complex numbers, 2 though are specifically important and clearly related, both in ∆st and number theory, as they correspond to the ∆ð composite element of reality, more complex in itself, as we should know by now (:, than the s@, simplex, spatial, lineal @ristotelian mind view, in which we can fit the 3 simplex systems of natural, rational and real numbers.

Complex numbers then are something ‘special’, *far more connected to the real Universe outside the humind light space-time that we can conceive; hence the difficulty to understand them from the ego paradox of the humind, which confuses its lineal visual world with the Universe, missing the scales that the complex numbers so well describe.*

In that regard, the key to understanding complex numbers is always in the ‘research’ of reality, its simplex, first historic discoveries: those which explain the Euler identities between its temporal expression (as sine and cosine) and how this cyclical, temporal expression *raises the number, often a physical potential or ∆-1 plane, into the exponential, upper or lower, eˆ±iz plane*.
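The Euler identity linking the temporal (sine/cosine) and exponential expressions can be checked numerically; a minimal sketch:

```python
import cmath
import math

# Euler's identity: e^{iz} = cos z + i sin z, the bridge between the
# cyclical (temporal) and exponential expressions of the same number.
for z in (0.0, 1.0, math.pi / 3, 2.5):
    lhs = cmath.exp(1j * z)
    rhs = complex(math.cos(z), math.sin(z))
    assert abs(lhs - rhs) < 1e-12

# The half-turn of a cycle closes on the real line: e^{iπ} + 1 = 0.
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-12
```

The half-turn case e^{iπ} = -1 is exactly the ‘1/2 rotation of a world cycle’ reading of negative numbers given earlier in this post.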

The transformation of a world cycle, even in its smaller zero-sum actions, is represented by the sinusoidal functions, which bring about, ultimately, by the sum of temporal cycles converted into spatial tails of populations, the emergence of the entity into its ∆+1 form: from the simplest representations of potentials in the ∆-1 scale that collapse together into the ∆+1 imaginary part, to the most complex boson experiences of the creation of a social God (no need here to use mathematical languages; verbal wor(l)ds should suffice). This opens up, finally, the proper whys of a formalism which is still considered magic in the world of mathematical physics, where its use has been most fruitful.

Let us then introduce the complex numbers under the Rashomon effect first, to concentrate then on their true form as the numbers of the ∆ð universe, as opposed to the S@ line of the simplex humind.

**t>T<ð**

**Complex numbers as expressions of the 3 states: potential past <present wave> future particle**

AS WE JUST said, complex space-time is ideal for studying ∏imespace world cycles. Since in a complex space the ‘negative’ arrow, inverse to the positive one (Spe<=>Tiƒ), *is kept separated as* lateral i-numbers, by the convention (initially a fortuitous error of interpretation) that the minus of √-1 *cannot be taken out of the √.*

*It can; but as this was not understood, mathematicians maintained the Tiƒ arrow, represented by a negative root, as a different coordinate. And the result of* its combinations with real numbers gave origin to the complex numbers ℂ, which *are the best numbers to represent the 2 inverse arrows of world cycles of existence.*

Thus the complex plane IS, when used *with the sinusoidal functions, a good* representation of a world cycle (tumbled, as X+ is 2D: $pe, X- is 1D: ðiƒ, and its 3D ∑∏-mature age is the i-lateral numbers):

Thanks to the ‘serendipitous’ error of earlier mathematicians, who could not conceive negative≈inverse motions, the real line was broken into perpendicular coordinates, which are perfect to represent world cycles: the real line represents Spe (max. size, X-coordinates), and the imaginary root axis represents information, which has min. size and is negative, inasmuch as its development subtracts ‘entropy’ from the real numbers.

This scientists still don’t know, and it is a wonder of the ‘deterministic’ Universe that they DO use ℂ-numbers to study worldcycles without knowing what they are and what they study (: i=√-1 is the root of an inverse number, and represents the information of a bidimensional system, as Gauss understood when he called them lateral units.

It is then fascinating to observe how slow the realisation was of imaginary numbers as rotary numbers related to time cycles.
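That ‘rotary number’ reading is directly visible in the arithmetic: multiplying by i is a quarter-turn of the plane, so four applications close a full cycle. A minimal sketch:

```python
# Sketch: multiplying by i rotates the plane by 90 degrees, so i, i², i³, i⁴
# trace a closed four-step cycle -- the rotary reading of the lateral unit.
z = 1 + 0j
orbit = []
for _ in range(4):
    z *= 1j
    orbit.append(z)

assert orbit == [1j, -1 + 0j, -1j, 1 + 0j]   # quarter-turns back to the start
assert (1j) ** 2 == -1                        # two quarter-turns = the inverse direction
```

Two quarter-turns land on -1, which is why i² = -1 reads naturally as ‘half a world cycle’: the inverse direction of the positive arrow, in line with the half-rotation interpretation of negative numbers given above.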

In fact the general concept of a time worldcycle does not exist in science, so the imaginary number is related to specific time worldcycles in physical waves of *∆-3 scales, which in 5D metric (Min. S (max. -i) = Max. ð) have a maximal speed of time cycles.* Hence they can complete a time cycle in a very brief span, making it possible to study them as a recurrent cyclical function of i, which can be further integrated into an eˆ±iz function that will emerge into the ∆+¡ plane.

Consider the most famous example of the use of complex numbers in mathematical physics, *not as a calculus device but as an integral part of its form*:

It is the Schrödinger non-time-dependent wave of quantum physics. As it is not time-dependent, it means it is deterministic: it represents a complete worldcycle of past, present and future, fully ‘integrated in the equation’ and related to the ℏ constant of angular momentum (the closed world cycle ‘spin’ of the wave or particle), which cannot and should not be broken apart.

And as such it becomes the operator of angular momentum, of ‘time worldcycles’ of waves and particles in quantum physics, even if quantum physicists don’t know it (:

So we can study the 3 arrows of time of the quantum wave, conceding the imaginary number in this case to the angular momentum, or membrane of the wave, which will then surface often in connection with the particle envelope (itself the origin of all other extensions of quantum physics, hence the importance of the example). So we write in 5D:

∆ø: Vψ = (ℏ²/2m) ∇²ψ + iℏ ∂ψ/∂t

The ‘wave-state’ is a present state; the PARTICLE, or angular momentum that envelops the wave, is the function which has the i-number (right side). And the way *to properly write the equation is therefore moving the 2 ℏ factors to the right, to form a nice complex number equivalent to the potential that generates both the particle and its kinetic energy.*

So on the right side we now have the kinetic≈entropic energy of the particle or singularity parameter and its membrane.

While on the other side we have the quantum wave itself, its vital energy, which is the potential energy of the wave, Vψ.

And both are balanced. In this manner we obtain *a third interpretation, besides the expression of Schrödinger and Bohm’s interpretation, closely related to this one.*

So the equation understood in terms of the generator reads:

Singularity (kinetic≈entropic, moving energy) + angular momentum (represented by i-numbers) = vital energy (potential energy)

And as in any other T.œ, where the singularity and angular momentum enclose its vital energy in balance with it, so does the wave.

So in quantum physics, as physicists realise with certain surprise, imaginary numbers are not mere ‘artefacts of calculus’ but ‘REAL’ in their meaning, expressed in the graph above.
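The balance written above can be checked numerically for the simplest case, V = 0: a free plane wave ψ(x,t) = e^{i(kx-ωt)} with ω = ℏk²/2m makes the two sides cancel exactly. A minimal sketch, assuming natural units ℏ = m = 1 and finite-difference derivatives (the names `psi` and `residual` are ours):

```python
import cmath

# Sketch (natural units hbar = m = 1): a free plane wave satisfies the
# rearranged balance  V·psi = (hbar²/2m)∇²psi + i·hbar ∂psi/∂t  with V = 0.
hbar, m, k = 1.0, 1.0, 2.0
w = hbar * k * k / (2 * m)          # dispersion relation of the free wave

def psi(x, t):
    return cmath.exp(1j * (k * x - w * t))

def residual(x, t, h=1e-4):
    """Finite-difference evaluation of (hbar²/2m)∂²ψ/∂x² + i·hbar ∂ψ/∂t."""
    d2x = (psi(x + h, t) - 2 * psi(x, t) + psi(x - h, t)) / h ** 2
    dt = (psi(x, t + h) - psi(x, t - h)) / (2 * h)
    return (hbar ** 2 / (2 * m)) * d2x + 1j * hbar * dt

# The kinetic (real-side) and i-number (time-side) terms cancel: V·psi = 0.
assert abs(residual(0.7, 1.3)) < 1e-4
```

The cancellation is exactly the ‘balance’ the text describes: the i-carrying time term matches the kinetic term point by point over the whole cycle of the wave.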

We shall of course extend their ‘reality’ to explain why actually *they are also real in electromagnetism, even if the humind prefers to obtain as final result a real number, which is less real (: but appears as ‘real’ to the unreal human mind (quip intended).*

In terms of past, present and future (left side), which is a more general view of the ternary elements in imaginary equations, an i-number rotates a function of existence from a relative past/future to a relative future/past through a 90° middle angle, that is, the middle ST balanced point, in which the present becomes a bidimensional function with a REAL, + element (relative past-entropic coordinates) and an imaginary, negative element (future coordinates).

This needs a couple of clarifications: first, we move ‘backwards’ in a complex i-graph from future to past as we rotate through i-scales, inasmuch as most graphs are measuring the ‘motion’ of lineal entropic time, which is the *relative Spe function.*

So the negative side becomes the time function, something which mathematical physics makes obvious when considering an attractive, informative force-field potential negative. Ok, this needs better explanations, but I have done my ‘dose’, courtesy of the anonymous reader that entered the post. When it gets a new hit, I will work another 1/2 hour.

*End of upgrade 28-Dec. 2017, Fuerteventura sunny day (:*

# ∆: i²

**The ∆-symmetry of i-numbers.**

The second fundamental role of i-numbers must be related to scalar space-time, hence polynomial structures.

Complex numbers are one of the key elements to ‘crack’ in order to understand the laws of the Universe as mirrored by mathematics.

Complex numbers in fact were introduced into mathematics to solve the algebraic equation: **X²+1=0**

in the domain of real numbers, without realizing that x² = -1 is in itself a meaningful result for bidimensional sT=X beings.

Instead it led to the introduction of a conventional number, the imaginary unit i, defined by the equation: **i² = -1**

As it could not be operated in that form with other numbers, dual numbers of the form a + bi, where a and b are real numbers, were born and called complex numbers. *This was a fortunate circumstance, because ever since we have been able to consider them as two different ‘species’ of things, represented on different axes; and it will show to be right, as the i-element, which is a square smaller than the real element and with a negative, inverse functional characteristic, will be perfect to represent physical quantities that ‘emerge’ as an ∆+1 scale absorbing energy from a real ‘potential’ part, from quantum physics to flows of liquids.*

So this is the essential use in mathematical physics:

*‘The imaginary part of a complex number represents a parameter of an ∆+1 whole, which extracts energy from an ∆-1 disordered, more extended, ‘real’ potential field.’*

This said, with enormous repercussions for finding the whys of mathematical physics, complex numbers might never have been born if another of their interpretations had been found first: squaring the real and imaginary axes, creating another type of complex plane of special use for relativity and all those systems in which it makes more sense to maintain a c²-type parameter in the imaginary line.

We have also stressed in many paragraphs the duality and differences between polynomials and ∫∂ functions, whose ‘magic’ closeness (as polynomials can be approached by differentials through Taylor binomials) *responds to the social, ‘lineal’, herd-like nature of polynomials vs. the organic, more complex, ‘cyclic’ nature of ∫∂ systems.*

So if we combine the two concepts, we can ‘reorganise the complex plane’ in polynomial terms ‘naturally’ by *squaring it, getting rid of the √ elements and its negative complex roots, to reflect a mapping of a fundamental principle of nature.*


In mathematics, the upper half-plane H is the set of complex numbers with positive imaginary part: H = {x + iy : y > 0}.

Further on, the plane can be tumbled, switching the importance of the i-plane, now the -1 negative, to the conjugate, reverting the “upper half-plane”. This lower half-plane, now ‘positive’, defined NOT as y < 0 but y > 0, is equally good, but less used by convention. The open unit disk D (the set of all complex numbers of absolute value less than one) is equivalent by a conformal mapping to H (see “Poincaré metric”), meaning that it is usually possible to pass between H and D.

Both maps are indeed completely equivalent, which is the reason why we use only a 1/2 plane.

It also plays an important role in hyperbolic geometry, where the Poincaré half-plane model provides a way of examining hyperbolic motions. The Poincaré metric provides a hyperbolic metric on the space.

The uniformization theorem for surfaces states that the upper half-plane is the universal covering space of surfaces with constant negative Gaussian curvature.

The closed upper half-plane is the union of the upper half-plane and the real axis. It is the closure of the upper half-plane.
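The conformal equivalence between H and D cited above can be sketched with the classical Cayley transform; a minimal sketch (the function names are ours), assuming the standard map w = (z - i)/(z + i) and its inverse:

```python
import cmath

# Sketch: the Cayley transform w = (z - i)/(z + i) carries the upper
# half-plane H (Im z > 0) onto the open unit disk D (|w| < 1);
# z = i(1 + w)/(1 - w) carries D back -- the conformal pass between H and D.
def to_disk(z: complex) -> complex:
    return (z - 1j) / (z + 1j)

def to_half_plane(w: complex) -> complex:
    return 1j * (1 + w) / (1 - w)

samples = [0.3 + 0.8j, -2 + 0.1j, 5 + 3j, 1j]
for z in samples:
    w = to_disk(z)
    assert abs(w) < 1                          # every H-point lands inside D
    assert abs(to_half_plane(w) - z) < 1e-12   # the round trip restores z
```

Note how the ‘center’ of H in this map is the lateral unit itself: z = i goes to w = 0, the 0-point of the disk.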

*The holographic principle: the bidimensionality of all real forms of nature, which are, in their simplest forms, dual dimensions; so the unit of reality IS not a one-dimensional system/point but a bidimensional ST system. Ergo the proper way to use imaginary numbers is ‘squaring’ them.*

Then we obtain a ‘realist’ graph, with an X² coordinate system in the real line, which can be projected further with a negative -y and a positive +y axis, often ‘inverse’ directions or symmetries of timespace.

And since the square of a negative number is positive, the X² main axis of bidimensional space-time units IS mirrored in the positive on both sides. So the proper way to represent the graph is by tumbling it, making a half positive plane with the now ± real axis (the i and -i axis) on the X coordinates and the square axis on the Y coordinates; which is much closer to the ‘REAL universe’ of bidimensional T.œs of timespace moving around its 0-identity element in two inverse directions of time (-i²= -1; +i²=1 axis).

And suddenly we can start to understand many realist whys of complex numbers and complex spaces.

In the graph we see the immediate application to quantum physics, where ‘probabilities’ are born of the product of the ‘two conjugates’, the positive and negative sides of the imaginary axis, which now loses its √ and so becomes positive and negative ‘inverse’ arrows/functions/forms, which *combine into a bidimensional holographic real squared element. So the graph is an excellent form to represent the neutral present, ST, in the square real line, and the ± inverse S and T functions in the positive and negative conjugate axes, no longer imaginary.*

This is somehow acknowledged in many equations of physics, notably in those related to relativity, where we USE square functions (in praxis just to simplify calculus, but now we know they have ‘meaning’ in reality).

The other, even better-known case is the use of square factors to define energy and momentum in relativistic physics.

So in special relativity we write the invariant interval: s² = -(ct)² + x² + y² + z²

WHERE THE negative factor of time means AN inverse motion/form, while the other 3 positive space parameters represent a lineal distance-stretching on the lower quantum-potential=gravitational space-time where the light displaces. The negative ‘cyclical time parameter of the light wave’ is the warping by light of that space-time function as it ‘forms’ the quantum field potential below (we assume it to be the ∆-1 gravitational scale): the speed of light space that drags and warps the gravitational potential into form appears as a negative element that slows down the motion.

So on one side the light wave moves lineally in the euclidean space-time below it, and on the other hand it warps and reduces the distance by a factor which allows it to create the magnetic/electric field of energy and information, creating a new plane of ‘smaller’ space-time over the ‘larger, more entropic gravitational/quantum potential’:
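The invariance behind those ‘square factors’ can be sketched numerically; a minimal sketch, assuming the (-,+,+,+) signature used above (time negative, the 3 space parameters positive) and natural units c = 1:

```python
import math

# Sketch: the squared interval s² = -(ct)² + x² + y² + z² is unchanged by a
# Lorentz boost along x -- the 'square factor' invariance of special relativity.
def interval2(ct, x, y, z):
    return -ct ** 2 + x ** 2 + y ** 2 + z ** 2

def boost_x(ct, x, y, z, beta):
    g = 1 / math.sqrt(1 - beta ** 2)      # Lorentz gamma factor
    return g * (ct - beta * x), g * (x - beta * ct), y, z

event = (5.0, 3.0, 1.0, 2.0)
for beta in (0.1, 0.5, 0.9):
    boosted = boost_x(*event, beta)
    assert abs(interval2(*boosted) - interval2(*event)) < 1e-9
```

Only the squared, ‘bidimensional’ combination survives the change of observer; each coordinate by itself does not, which is the sense in which the square factors carry the ‘meaning’ rather than the bare coordinates.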

In the i-scale, unlike in a normal XY system, the exponential function, if we use the customary logarithmic graph, is ‘lineal’; while the sinusoidal function of the wave, which as we said is present, grows exponentially through the i-scale, as IT IS THE PRESENT WAVE, WHICH ACCUMULATES AND GROWS constantly in its value. In the normal cartesian graph, by contrast, the sinusoidal wave becomes repetitive and the exponential grows to infinity.

# ð

In the graphs a very useful function on the complex plane that mimics the passing of time in world cycles towards an internal i-point. Complex planes are thus better systems of co-ordinates for cyclical times. And its simpler polar form, shows it is indeed, the perfect plane for ∆t analysis. Notice that as X becomes a ‘whole’ of a larger scale, it becomes paradoxically smaller in ‘volume but it speeds up in time’ according to ∆-metric.

Notice also that the second graph is NOT defined when x,y=0: the singularity (spatial view of the complex plane), or final point of the world cycle, alas, its point of death (temporal view):

So we could state the following:

- Complex numbers in the cartesian plane model the whole super organism in scalar space.
- Complex numbers in the polar plane, model the world cycle of the super organism in time.

Alas, finally we know what all that fuss about imaginary numbers means and why they have so many applications in all systems of sciences. The number of insights this ‘insight’ reveals for each science is thus enormous.

**Let us then, after** this introduction to numbers, study those subjects both synchronically and diachronically, as the comprehension of those types of numbers deepened in a chronological manner: the GREEK CLASSIC AGE was the age of natural numbers, the middle age the age of rational numbers, the modern age that of irrational ones, all of them translated into digital thought; and the future will reconnect with the first age of Pythagorean insights in number theory, giving ‘vital properties’ within the world cycles and superorganisms of nature to the different key numbers of the mathematical sphere.

# (S, T)

Finally we shall find interesting properties of existential algebra, in which, as pairs of numbers with a negative product, complex numbers can represent existential actions of bidimensional, holographic space-time fields.

Numbers as forms, in the Pythagorean sense, thus become real for simpler spatial systems of less than 10 elements.

We can consider the following obvious ones:

– Line, (2 closed geometry),

Now, unlike in geometry, non-AE mathematics does not have much to say at this stage, in which it is a point-world in the mind of its discoverer, about fundamental new discoveries such as the 5 postulates of non-AE geometry; what it offers is a conceptual understanding of its meaning, which was always problematic due to the acceptance of infinities without clearly defining them, as they are NOT real.

The ancient Greeks expressed infinity by the word apeiron, which had connotations of being unbounded, indefinite, undefined, and formless. One of the earliest appearances of infinity in mathematics regards the ratio between the diagonal and the side of a square.

Pythagoras and his followers initially believed that any aspect of the world could be expressed by an arrangement involving just the whole numbers (0, 1, 2, 3,…), but they were surprised to discover that the diagonal and the side of a square are incommensurable—that is, their lengths cannot both be expressed as whole-number multiples of any shared unit (or measuring stick). In modern mathematics this discovery is expressed by saying that the ratio is irrational and that it is the limit of an endless, nonrepeating decimal series.

In the case of a square with sides of length 1, the diagonal is √2, written as 1.414213562…, where the ellipsis (…) indicates an endless sequence of digits with no pattern.

So what the Greeks found is simple: there are two relative ‘finites’. One is the whole, the large, the biggest possible social numbers, which they explored, notably Archimedes in his sand-reckoner papers and Ω numbers. The other is the infinitely small, with the conundrum of real numbers. What do they mean? This is a far more interesting field, as finitesimals in the 5th dimension do have more information (as all lower scales do), and what it basically gives us is a strong proof that there might be infinite 5D scales. Since ratios between lines and cycles, which is the canonical transformation Sp>Tƒ between a function of space, the diameter, and a function of time, the cycle, go down to infinitesimal lower scales. So even the smallest scale, with maximal detail, does NOT close a cycle into a perfect form, but leaves and fluctuates by exhaustion between an upper bound as a spiral and a lower bound as an open cycle. Or in feeding terms, the cycle opens and closes and absorbs energy or information through those glimpses, mouths, eyes.

The same goes for triangular fields and diagonals, which can be modeled as a moving, reproductive wave that advances through space, reproducing the inner space and outer perimeter of information of the triangular wave. And again here we observe that the diagonal or base of the advancing wave never closes, so it can move forever, continuously absorbing through the discrete, non-closed √base of the triangle, and emitting from its inner surface the entropy for its motion, which however IS DETERMINED (the triangle’s area).

Those 3 examples (the meaning of real numbers in reality; the duality of space and time shown in Lebesgue time and Riemann space integrals and in the differentials of space volumes and time motions; and the theme of finitesimals and finities) show what this point-world does in Analysis: it studies its concepts from the perspective of the real world and enlightens the real world’s properties with the higher language of analysis; it does not innovate methods.

And this leads to the 3rd interesting branch of discrete mathematics.

Arithmetic: points as information and social numbers.

Arithmetic studies geometry as Tƒ, points of information, and so form is important to arithmetic. This lost truth, which Pythagoras fully understood with his tetraktys obsession that we share, can be summarized in a postulate of arithmetic:

‘Numbers are social groups of an informative bidimensional network-plane’.

Indeed, we defined a clear bidimensional relationship between planes of space and cycles of time:

Sp (width, length) <=> Tƒ (height, cyclical rhythm)

Now we apply it to define numbers within a Cartesian graph, with height, as bidimensional forms. In brief, numbers belong to a bidimensional plane more than to a line. And this is the main correction needed to fit arithmetic better to reality. Today we analyze numbers as points of the real line, but this is misleading.

It is better to understand numbers as social groups, whose configuration in bidimensional space defines their efficiency and hence their abundance; the simple numbers (1 for a point, 2 for a communicative wave between two points, 3 for a ternary system in space, 4 for the same ternary system with a central Œ-point, etc.) connect immediately the geometry of vital organic systems, in an isomorphic space-time of generalized coordinates (atomic space-time being the most obvious), linking geometry directly with the structure of the Universe.

Those discrete informative social points are therefore of two fundamental forms. First, points in a plane: structures found in potential fields with a clear gradient that maintains the points on its spherical field (observed at close distances as a plane of geodesics perceived as lines), which is, you guessed it, the form of this planet, and hence of great importance for human thought and human analysis of reality; not coincidentally, geometry was for long confined to the bidimensional plane, and most theorems can be proved in bidimensional planes.

Given the structure of the Universe, which adds, in space, layers of a potential field that are isomorphic in the gradient for each layer, this is the most important form of analysis of social points.

Below it, the simplified system of points as intervals of a line has little importance in the analysis of numbers as ‘entities’ per se that matter (theory of numbers), but it does work when numbers are not the ‘whole’ which matters, but mere parts of the next unit of ¬Æ, the line of time duration or spatial dimension.

Then there is the extension of bidimensional point structures to 3 dimensions of space, for which we can use points of density (scalar points, where the direction of motion does not matter), or vectorial numbers if the dimension we add, in 2- or 3-dimensional space-time analysis, is one of motion. And here again we differentiate easily between axial rotary vectors in 2 dimensions (angular momentum) or 3 dimensions (Maxwell screws, differentiated in their relative orientation by the symmetries of parity and/or chirality). Those are therefore also essential elements to understand Tƒ systems in all real space-times.

Or we can consider lineal kinetic vectors, ideal to study SE elements of a system.

Now the same generalization brings us to an isomorphic 3-dimensional space-time, which is the best to study the cosmos (whose gradient might however be fit for bidimensional analysis in galactic vortices and gravitational informative systems), and to study topological forms in perhaps the most isomorphic of all 3-dimensional spaces, the sea (hence topological life forms of cellular species, which correspond to the perfect external topology of a 2-sphere).

And finally we have the ‘crown’ of all numbers studied as discrete forms in 2 or 3 dimensions: complex numbers, whose meaning is essential to time-space theory and its physical applications, as they are points which have both a space and a time component incorporated. The proper way of writing them, though, is NOT as it is normally done, with the mystical jargon of i = √-1.

-1 is nothing more than a vector with a negative gradient, and it can mean many different physical things: a potential attractive field is written with a minus symbol, as opposed, by convention, to a repulsive field with the positive symbol; and so on. But it has a more interesting reading in the antisymmetry between space and time factors: if we take as a convention time as the dominant factor, hence with positive parameters, space-energy will carry the negative parameters.

So we shall find, for example, that the cyclical, axial spin motions are inverse in ‘left-handed particles’ (the dominant ones in the universe) to the direction of lineal motion and its parameters of lineal momentum. Left-handed particles thus show this inversion of numbers: lineal positive, and cyclical, time parameters negative. So in general we can consider:

-S <=> +T, or S+T ≈ 0 or k, as natural metrics of the inverse, virtual Universe where time and space parameters split into real presents but also cancel. A simple image of this is the particle (time arrow) / antiparticle (space arrow) duality that splits and rejoins into the vacuum with no form.

For example, in relativity we find Sx² + Sy² + Sz² – c²t² as the measure of a space-time distance in 4-dimensional space.

For 0 distance then, c²t² <=> Sx² + Sy² + Sz², which is the split of a virtual 0-point of light space-time (concepts which belong to advanced 5D physics and which we shall treat in the 4th line).
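That 0-distance split can be sampled numerically. A minimal sketch in plain Python (the function name `interval_sq` and the chosen displacements are illustrative assumptions, not part of the text):

```python
C = 299_792_458.0  # speed of light in m/s (illustrative constant)

def interval_sq(x, y, z, t):
    """Squared space-time 'distance' Sx^2 + Sy^2 + Sz^2 - (c*t)^2."""
    return x * x + y * y + z * z - (C * t) ** 2

# A light pulse travelling for one second along x covers exactly c*t metres,
# so the negative time term cancels the space terms: the 0-distance split.
assert interval_sq(C * 1.0, 0.0, 0.0, 1.0) == 0.0

# Anything slower than light leaves a negative (time-like) interval.
assert interval_sq(0.5 * C, 0.0, 0.0, 1.0) < 0.0
```

Only for light does the negative time factor exactly balance the spatial stretch.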

What matters here is to notice that all those splits and transformative similarities (we should rather always use ≈ and <=>, as we explain in non-AE algebra, since equality of points with parts is never absolute) constantly give us, when we obtain single values instead of dual points (which would be a more proper analysis since, again, the line is an abstraction and spaces are bidimensional planes), negative numbers and their square roots, the i-numbers. Good.

Calculation has resolved it, but we are here interested in meanings. So the proposal we make is rather simple. Instead of considering the complex number plane with i on the Y-height-space coordinates and X on the time coordinates, we should square both parameters, and consider the squared complex plane as more representative of reality, with a real line of X² as the time parameter and space coordinates of -Y values where before we had i = √-1 values.

This merely means that for a system, its space and time values are inverse, AND that the time function has 2 dimensions while the space function has one, AS IS THE CASE, BECAUSE TIME UNITS ARE BIDIMENSIONAL CYCLES and space units are lines.

Alas, a complex number is merely a number which shows the space and time composite parameters of a being.

And this, my friend, illuminates in an extreme form all the algebraic operations we do with both powers and complex numbers. It makes them real.

**FUNCTIONAL SPACES**

The arrival of the modern age of mathematics with analytic geometry, analysis and finally the yet-unstudied age of digital thought (computers) gives us a more complex analysis of numbers: no longer mere collections of equal forms, which is the pervading notion behind the classic Greek age, but ‘elements’ of a topology (the more sophisticated understanding of numbers as points of a network which is the form in itself), and ratios and scalar irrational constants, key points on open and closed curves which, in the 3rd age of ‘i-logic mathematics’ these texts start, will themselves be ‘fundamental elements’ of the geometrization of the space-time world cycles of existence.

Yet even more fruitful as the modern field of space-time representations that works perfectly for systems of multiple ∆-scales are

**∆: Functional spaces,** which are the ideal representation of two scales of the fifth dimension, where each point is a world in itself; the best known are the Hilbert>Banach spaces used to model Spe-Fields (the simplest functionals), and hence the lower scales of lineal entropy of quantum physics (where each point is a lineal operator).

Thus Hilbert spaces fully study ∆ST, as they can model all the elements of the 5th dimension, expanding points into vectors (field spaces), functionals (scalar spaces) and any other ‘content’ of each ∆-fractal Universe you want to represent. Hence they are essential to study both spatial quantum physics and Fourier temporal series.

And so we could truly call *function spaces ∆-Spaces.*

**∆S≈T. Complex functional spaces.**

So if we consider both ‘symmetries together’, it is evident that the most complete modelling is done with function spaces in the complex plane, as *we can represent both motions in worldcycles and across planes of the 5th dimension.*

Such space-time representations are ‘complete’ (as the functional is infinite in its parts, so are the real numbers, which in fact are so numerous that the rational ones are in real space a relative zero, showing the infinite scales of the mathematical Universe) and show properties of world cycles, allowing the ‘generation’ of smaller and larger scales (as in a Mandelbrot set).

i=√-1, is the root of an inverse number, and represents the information of a bidimensional system.

Where we must consider a coordinate system of ‘square, bidimensional’ space and time elements, where the x² coordinate responds to space, and i² = -1 represents the inverse, mostly time scales.

*This easily explains why in special relativity, concerned with the creation of a wave of light space-time, ∆-3, we subtract -(ct)² to create a ‘light space-time’ from the pure Euclidean, underlying structure of the neutrino, ∆-4 scale.*

Hence the whys of special relativity: the light space-time ‘feeds’ on and subtracts that quantity from the ∆-4 neutrino/dark-energy scale in order to ‘exist’. And for similar reasons, as in quantum physics, we use complex numbers to represent this.

In the graphs, the ‘lateral number axis’ or ‘imaginary axis’ tends to represent ∆t functions, while the real part represents the S, positive, and T, inverse functions. So they are ideal to represent world cycles, and when each complex point is a ‘function’ in itself, with ‘inner parts’, ∆±1 world cycles and their different ‘motions’ in time. It is then interesting to notice that the paths between 2 such points as world cycles can be many but, providing their duration is not infinite (Cauchy’s integral theorem), *the integral over the world cycle is the same.*
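The path-independence of such integrals (for an analytic integrand, the content of the Cauchy integral theorem) can be checked numerically. A sketch in plain Python; the two paths, the midpoint rule and the step count are illustrative choices of this edit, not from the text:

```python
import cmath

def line_integral(f, path, n=4000):
    """Midpoint-rule integral of f along a parametrised path z(t), t in [0, 1]."""
    total = 0j
    for k in range(n):
        z0, z1 = path(k / n), path((k + 1) / n)
        total += f((z0 + z1) / 2) * (z1 - z0)
    return total

def straight(t):
    return t * (1 + 1j)                       # direct segment from 0 to 1+i

def bent(t):
    # detour through the corner point 1: first along the real axis, then up
    return 2 * t + 0j if t < 0.5 else 1 + (2 * t - 1) * 1j

i1 = line_integral(cmath.exp, straight)
i2 = line_integral(cmath.exp, bent)
exact = cmath.exp(1 + 1j) - 1                 # antiderivative of e^z is e^z

assert abs(i1 - i2) < 1e-6                    # the path does not matter
assert abs(i1 - exact) < 1e-6
```

Both routes from 0 to 1+i return the same value, fixed only by the endpoints.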

The main feature of the i-scale, which is *a polynomial order smaller than the real Spe-scale, is that it has a negative ‘resting’ (subtracting) value.*

*So it mimics the ‘resting value’ of the particle/head, Tiƒ, that the real space (body-wave plane) sustains.*

*I.e., if we consider S to represent the total ‘energy value’ of an ST present plane of existence, ∆i will ‘subtract’ an √ST quantity of that energy, ‘shrinking’ its space to ‘raise’ a new dimension of form, the particle-head that ‘floats’ above the body-wave.*

**KEY CX Nº & PHYSICAL FUNCTIONS**

These numbers were manipulated like real numbers, being added and multiplied as binomials. So the basic operations of arithmetic when carried out on complex numbers produce other complex numbers.

**The geometric representation of complex numbers.**

Every complex number a + bi may be represented by a point in the Oxy plane with coordinates (a, b), or by a vector issuing from the origin to the point (a, b). So Complex numbers become pairs (a, b) of real numbers for which there are established definitions of the operations of addition and multiplication, obeying the same laws as for real numbers.

The sum of two complex numbers: **(a+bi) + (c+di) = (a+c) + (b+d)i**

is represented geometrically by the diagonal of the parallelogram constructed from the vectors representing the summands

In this way, complex numbers are added by the same law as the vector quantities found in mechanics and physics (forces, velocities and accelerations) and are used to represent actual physical quantities, *which, if we square the graph, become expressions of ST entities in the real x² coordinates and S or T entities in the Y coordinates; yet as they are ‘two different species’ they cannot be fused together*.
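The parallelogram rule can be verified in a few lines. A minimal sketch (plain Python; the sample values are an editorial illustration) showing that complex addition is exactly component-wise vector addition:

```python
a, b = 3 + 4j, 1 - 2j

# Complex addition is component-wise, exactly like adding force or velocity
# vectors in mechanics: the parallelogram rule.
s = a + b
assert s == complex(a.real + b.real, a.imag + b.imag)   # 4 + 2j

# The same pair treated as plain 2-vectors gives the same diagonal:
va, vb = (a.real, a.imag), (b.real, b.imag)
diagonal = (va[0] + vb[0], va[1] + vb[1])
assert diagonal == (s.real, s.imag)
```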

What kind of specific entities they can then represent as S<st>T mirrors is not yet fully understood, and is better treated for each specific case in the sections on physics (alternating currents, quantum waves and so on).

We will thus see then how this point of view is very successful in various problems of mathematical physics.

Enough to state that, for what we see in the graph, *both elements, represented by the i-rreal and real parts of the number, can be superposed=added; so in general complex numbers are better suited to add waves, fields and potentials than particle-states, which are not always possible to add.*

In any case, before its physical use, the introduction of complex numbers had its first successes in the discovery of the laws of algebra and analysis. The domain of real numbers, closed with respect to arithmetic operations, was seen to be not sufficiently extensive for algebra. Even such a simple equation as (1): x² + 1 = 0, does not have a root in the domain of real numbers; but for complex numbers we have the following remarkable fact, the so-called fundamental theorem of algebra: Every algebraic equation

a0 zⁿ + a1 zⁿ⁻¹ + … + an = 0, a0 ≠ 0,

with complex coefficients has n complex roots.

This theorem shows that the complex numbers form a system of numbers which is complete to represent all the operations of algebra.

It is not trivial that adjoining to the domain of real numbers a root of the equation **i² = -1** leads to the numbers a + bi, in whose domain any algebraic equation is solvable. The fundamental theorem of algebra showed that the theory of polynomials, even with real coefficients, may be given a finished form only when we consider the values of the polynomial on the whole complex plane.
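The theorem can be sampled numerically. A small sketch (plain Python stdlib; the polynomial z⁴ − 1 is an illustrative choice) showing the full set of n = 4 complex roots, of which only two are visible on the real line:

```python
import cmath

# Sampling the fundamental theorem of algebra on z**4 - 1 = 0: a degree-4
# equation has exactly 4 complex roots, the 4th roots of unity.
n = 4
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

for r in roots:
    assert abs(r ** n - 1) < 1e-12          # each is a genuine root

# Only two of the four roots lie on the real axis: +1 and -1.
real_roots = [r for r in roots if abs(r.imag) < 1e-12]
assert len(real_roots) == 2
```

Restricting attention to the real line would hide half of the solutions.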

As essentially it means that polynomials not only have multiple solutions, but some of those solutions are dual ST-bidimensional forms; that is, a seeming ‘square’ that looks like a whole is composed of two parts, which often have inverse symmetry with respect to the ‘identity element’ the particular polynomial represents in reality.

The further development of the theory of algebraic polynomials supported this point of view more and more. The properties of polynomials are discovered only by considering them as functions of a complex variable.

*So X² could mean two equal beings, SS and TT, but it was also needed to observe ST and TS mixtures of inverse ‘elements’.*

Complex numbers thus have the advantage, in the imaginary frame of reference, of not being able to mix, keeping the 2 functions they represent apart from each other, and having an inverse nature in their conjugates that makes them highly symmetric.

**THE e AND SINUSOIDAL FUNCTIONS: CONVERGING ON THE UNIT SPHERE**

ALL this said, the great importance of the complex plane at the level of numbers (at the level of planes it will be studied in analytic geometry) has to do with the conversion of the exponential e-function over the whole plane into the unit plane of the sinusoidal ∆-1 function, by virtue of the negative, repetitive nature of the i-factor, which translates ∆º scales into ∆-1 unit-circle elements, or in real nature, the scale of the individual into the cellular scale. So we shall present the classic mathematics of it, interspersing comments on its ∆st meanings.

**Power series and functions of a complex variable**

The development of analysis brought to light a series of facts showing that the introduction of complex numbers was significant not only in the theory of polynomials but also for another very important class of functions, namely those which are expandable in a power series:

**ƒ(x) = a0 + a1(x−a) + a2(x−a)² + …**

Now the first ∆s=t insight on the power series is its nature as a sum of elements focused on each ‘point value’ (a0…a1…an). As sums are the trademark of ‘steps’ in time (frequency or growth of population), we can consider each step of the power series as a step in the evolution of the function in time, with an obvious process of growth and/or a repetitive cadence (if we switch between ± factors), which will *define the nature of exponential ∆±i functions (with their polynomial growth) vs. sinusoidal ± series functions, with their back-and-forth ± change of the arrow of time, and will allow us to ‘characterize’ the interaction between the 2 main mirror functions of the minimal duality of reality:*

**Γ: $<∑∏>ð:** world cycle functions (sine and cosine, intimately related to complex numbers); **∆-1:** exponential entropic functions with their awesome speed of decay.

The development of the infinitesimal analysis required the establishment of a more precise point of view for the concept of a function and for the various possibilities of defining functions in mathematics. Without pausing here to discuss these interesting questions, we recall only that at the very beginning of the development of analysis it turned out that the most frequently encountered functions could be expanded in a power series in the neighborhood of every point in their domain of definition. For example, this property holds for all the so-called elementary functions.

The majority of the concrete problems of analysis led to functions that are expandable in power series. On the other hand, there was a desire to connect the definition of a “mathematical” function with a “mathematical” formula, and the power series represented a very inclusive kind of “mathematical” formula.

This situation even led to serious attempts to restrict analysis to the study of functions that are expandable in power series and thus are called analytic functions. The development of science showed that such a restriction is inexpedient. The problems of mathematical physics began to extend beyond the class of analytic functions, which does not even include, for example, functions represented by curves with a sharp corner. However, the class of analytic functions, in view of its remarkable properties and numerous applications, proved to be the most important of all the classes of functions studied by mathematicians.

Since the computation of each term of a power series requires only arithmetic operations, the values of a function represented by a power series may be computed also for complex values of the argument, at least for those values for which the series is convergent. When we thus extend the definition of a function of a real variable to complex arguments, we speak of the “continuation of the function into the complex domain”. Thus an analytic function, in the same way as a polynomial, may be considered not only for real values of the argument but also for complex ones. Further, we may also consider power series with complex coefficients. The properties of analytic functions, as also of polynomials, are fully revealed only when they are considered in the complex domain. To illustrate we turn now to an example.

Consider two functions of a real variable, the second of which is 1/(1 + x²). Both these functions are finite, continuous, and differentiable an arbitrary number of times on the whole axis Ox. They may be expanded in a Taylor series, for example around the origin x = 0; call the resulting series (4) and (5).

The first of the series so obtained converges for all values of x, while the second converges only for −1 < x < +1. Consideration of the function (5) for real values of the argument does not show why its Taylor series diverges for |x| ≥ 1. Passing to the complex domain allows us to clear up the situation. We consider the series (5) for complex values of the argument. Its partial sums are:

1 − z² + z⁴ − … + (−1)ⁿ⁻¹z²ⁿ⁻² = (1 − (−z²)ⁿ)/(1 + z²) → 1/(1 + z²),

since |z|²ⁿ → 0. Thus for complex z satisfying the inequality |z| < 1:

1/(1 + z²) = 1 − z² + z⁴ − z⁶ + …     (6)

The inequality |z| < 1 shows that the point z is located at a distance from the origin which is less than one. Thus the points at which the series (6) converges form a circle in the complex plane with center at the origin. On the circumference of this circle there lie two points i and −i for which the function 1/(1 + z²) becomes infinite; the presence of these points determines the restrictions on the domain of convergence of the series (6).
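A short numeric sketch of that circle of convergence (plain Python; the test points are illustrative choices): inside |z| < 1 the partial sums settle on 1/(1 + z²), while at a real point beyond 1 they blow up, because the poles at ±i fix the radius:

```python
def partial_sum(z, n):
    """Partial sum of the series 1 - z**2 + z**4 - ... for 1/(1 + z**2)."""
    return sum((-z * z) ** k for k in range(n))

inside = 0.5 + 0.5j                      # |z| < 1, inside the circle
target = 1 / (1 + inside ** 2)
assert abs(partial_sum(inside, 60) - target) < 1e-9

# At z = 1.1 (|z| > 1) the partial sums grow without bound: the poles of
# 1/(1 + z**2) at +i and -i pin the radius of convergence to exactly 1.
assert abs(partial_sum(1.1, 60)) > abs(partial_sum(1.1, 30))
```

Note that z = 1.1 is perfectly harmless on the real line; only the complex picture explains the divergence.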

**The domain of convergence of a power series.**

The domain of convergence of the power series in the complex plane is always a circle with center at the point a.

Let us prove this proposition, which is called Abel’s theorem.

First of all we note that a series whose terms are the complex numbers wn = un + ivn,

**w1 + w2 + … + wn + …     (8)**

may be considered as two series, consisting of the real parts, u1 + u2 + … (9), and of the imaginary parts, v1 + v2 + … (10). A partial sum sn of the series (8) is expressed through the partial sums σn and τn of the series (9) and (10): **sn = σn + iτn**

so that convergence of the series (8) is equivalent to convergence of both the series (9) and (10), and the sum s of the series (8) is expressed through the sums σ and τ of the series (9) and (10): **s = σ + iτ**

After these remarks the following lemma is obvious:

If the terms of the series (8) are less in absolute value than the terms of a convergent geometric progression A + Aq + Aq² + …, with positive A and q, where q < 1, then the series (8) converges.

For if |wn| < Aqⁿ, then |un| < Aqⁿ and |vn| < Aqⁿ,

so that (cf. Chapter II, §14) the series (9) and (10) converge, and thus the series (8) also converges.

We now show that if the power series (7) converges at some point z0, then it converges at all points lying inside the circle with center at a and having z0 on its boundary (figure 2). From this proposition it follows readily that the domain of convergence of the series (7)

is either the entire plane, or the single point z = a, or some circle of finite radius:

For let the series (7) converge at the point z0; then the general term of the series (7) for z = z0 converges to zero as n → ∞, and this means that all the terms of the series (7) at z0 lie inside some circle; let A be the radius of such a circle, so that for any n:

|an(z0 − a)ⁿ| < A     (11)

We now take any point z closer than z0 to a and show that at the point z the series converges.

Obviously |z − a|/|z0 − a| = q < 1. Let us estimate the general term of the series (7) at the point z:

|an(z − a)ⁿ| = |an(z0 − a)ⁿ| · qⁿ     (12)

From inequalities (11) and (12) it follows that |an(z − a)ⁿ| < Aqⁿ,

i.e., the general term of the series (7) at the point z is less than the general term of a convergent geometric progression. From the basic lemma above, the series (7) converges at the point z.

The circle in which a power series converges, and outside of which it diverges, will be called the circle of convergence; the radius of this circle is called the radius of convergence of the power series. The boundary of the circle of convergence, as may be shown, always passes through the point of the complex plane nearest to a at which the regular behavior of the function ceases to hold.

The power series (4) converges on the whole complex plane; the power series (5), as was shown above, has a radius of convergence equal to one.

**Exponential and trigonometric functions of a complex variable.**

A power series may serve to “continue” a function of a real variable into the complex domain. For example, for a complex value of z we define the function eˆz by the power series (13):

eˆz = 1 + z/1! + z²/2! + z³/3! + …

In like manner the trigonometric functions of a complex variable are introduced by:

**sin z = z/1! − z³/3! + z⁵/5! − …        cos z = 1 − z²/2! + z⁴/4! − …**

These series converge on the whole plane.
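That whole-plane convergence is easy to sample. A minimal sketch (plain Python stdlib; the test point and truncation depth are illustrative) comparing the truncated series against the `cmath` library functions:

```python
import cmath
import math

def sin_series(z, terms=25):
    """sin z = z/1! - z**3/3! + z**5/5! - ...; converges for every complex z."""
    return sum((-1) ** k * z ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

def cos_series(z, terms=25):
    """cos z = 1 - z**2/2! + z**4/4! - ...; converges for every complex z."""
    return sum((-1) ** k * z ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

z = 2 - 1j                                   # an arbitrary complex argument
assert abs(sin_series(z) - cmath.sin(z)) < 1e-12
assert abs(cos_series(z) - cmath.cos(z)) < 1e-12
```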

This fascinating result makes us reinterpret how, through *a travel to the complex plane in a lower infinitesimal series dimension, the exponential eˆiz can reconvert into i sin z + cos z, adding both of them.*

It is interesting to note the connection which occurs between the exponential and trigonometric functions when we turn to the complex domain. If in (13) we replace z by iz, we get:

eˆiz = 1 + iz/1! − z²/2! − iz³/3! + z⁴/4! + …

Grouping everywhere the terms without the multiplier i and the terms with the multiplier i, we have Euler’s formula (16):

eˆiz = cos z + i sin z

Euler’s formulas, solved for cos z and sin z, give (17):

cos z = (eˆiz + eˆ-iz)/2          sin z = (eˆiz − eˆ-iz)/2i
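These relations can be checked numerically. A sketch with the stdlib `cmath` module at an arbitrary test point (the point itself is an illustrative assumption):

```python
import cmath

z = 1.5 + 0.5j                     # arbitrary complex test point

# Euler's formulas: the circular functions recovered from the exponentials.
cos_z = (cmath.exp(1j * z) + cmath.exp(-1j * z)) / 2
sin_z = (cmath.exp(1j * z) - cmath.exp(-1j * z)) / 2j

assert abs(cos_z - cmath.cos(z)) < 1e-12
assert abs(sin_z - cmath.sin(z)) < 1e-12

# And the combination reassembles e^(iz) = cos z + i sin z:
assert abs(cmath.exp(1j * z) - (cos_z + 1j * sin_z)) < 1e-12
```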

The emphasis of ∆stience is on meaning. Here we find that *in the complex plane, the 3 fundamental expressions of the Dimensions of reality, in simple functions (Γ, the sin/cos worldcycles; eˣ, 4D exponential decay, or growth in a placental environment; and xª, the polynomial 5D series), are symmetric to each other when we use a ‘squared’ graph which ‘fits’ both 1D real numbers and 2D manifold numbers.*

Specifically, we notice that symmetry when we switch between ± exponentials representing the 4-5D inverse arrows, which can then be added or subtracted to give us the inverse Γ (sin/cos) arrows of the world cycle.

*As a world cycle can be represented in ‘its limit’ by the inversion of ∆±1 growth and decay, 0>1>5D and 2<4D – the exponential duality, and within its limit by the world cycle repeating its iterative sin/cos waves. *

So as we ‘rise’ through the ±i ∆ scales we are isomorphic to the ‘turning’ of ±sin/cos cycles.

Moreover, as the whole ±i duality represents a world-cycle zero sum, a complex function is infinitely differentiable, showing the constant repetitive generational cycle of any system of nature (as opposed to real functions, whose differentiability is limited).

*It is in terms of this complex polynomial view that we can then study the polynomial Taylor approach, which allows ∞ derivatives, beyond the 3 ‘real’ s and t derivatives of reality (volume and acceleration) in a single plane:*

We must say that Taylor’s formula resides in the complex repetitive plane.

The emphasis of science on Euler’s formulas is on their solvability, which simplifies trigonometric wave calculus. And it is very important that for complex values the simple rule of addition of exponents continues to hold (18):

eˆz1 · eˆz2 = eˆ(z1+z2)

And this in ‘stience’ comes to express again the inversion of ±i arrows of time.

Since for complex values of the argument we define the function eˆz by the series (13), formula (18) must be proved on the basis of this definition. We give the proof:

We will carry out the multiplication of series termwise. The terms obtained in this multiplication of series may be written in the form of a square table:

We now collect the terms which have the same sum of powers of z1 and z2. It is easy to see that such terms lie on the diagonals of our table. We get:

Applying the binomial formula of Newton, we get the general term in the form (z1 + z2)ⁿ/n!. So the general term of the series (19) is identical with the general term of the series for eˆ(z1+z2), which proves the multiplication rule (18).
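The diagonal-collection argument can be replayed numerically. A sketch (plain Python; the arguments and truncation depth are illustrative choices) that sums the square table by diagonals and compares with the series for eˆ(z1+z2):

```python
import math

def exp_series(z, terms=40):
    """Truncated series (13): e^z = 1 + z/1! + z**2/2! + ..."""
    return sum(z ** n / math.factorial(n) for n in range(terms))

def diagonal_product(z1, z2, terms=40):
    """Multiply the e^z1 and e^z2 series termwise and collect the square
    table along its diagonals: sum over n of sum_k z1**k/k! * z2**(n-k)/(n-k)!."""
    return sum(z1 ** k / math.factorial(k) * z2 ** (n - k) / math.factorial(n - k)
               for n in range(terms) for k in range(n + 1))

z1, z2 = 0.7 + 0.3j, -0.2 + 1.1j
# Newton's binomial collapses each diagonal to (z1 + z2)**n / n!:
assert abs(diagonal_product(z1, z2) - exp_series(z1 + z2)) < 1e-12
assert abs(exp_series(z1) * exp_series(z2) - exp_series(z1 + z2)) < 1e-12
```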

The multiplication theorem and Euler’s formula allow us to derive an expression for the function eˆz in terms of functions of real variables in finite form (without series). Thus, putting z = x + iy, we get (20):

eˆz = eˆx(cos y + i sin y)

The formula so derived is very convenient for investigating the properties of the function eˆz. We note two of its properties: (1) the function eˆz vanishes nowhere; for in fact, eˣ ≠ 0 and the functions cos y and sin y in formula (20) never vanish simultaneously; (2) the function eˆz has period 2πi, i.e.:

This last statement follows from the multiplication theorem and the equality:

The formulas (17) allow us to investigate the functions cos z and sin z in the complex domain. We leave it as an exercise for the reader to prove that in the complex domain cos z and sin z have period 2π and that the theorems about the sine and cosine of a sum continue to hold for them.
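These properties – eˆz vanishing nowhere, its period 2πi, and the 2π period and addition theorem of cos z and sin z – can be verified numerically; a hedged Python sketch using the standard cmath module:

```python
import cmath

z = 2.0 - 3.0j

# (1) e^z never vanishes: |e^z| = e^x > 0 for any finite z.
assert abs(cmath.exp(z)) > 0

# (2) e^z has period 2*pi*i.
period = 2j * cmath.pi
assert cmath.isclose(cmath.exp(z + period), cmath.exp(z))

# Exercise: in the complex domain cos z and sin z keep period 2*pi,
# and the addition theorems continue to hold.
assert cmath.isclose(cmath.cos(z + 2 * cmath.pi), cmath.cos(z))
assert cmath.isclose(cmath.sin(z + 2 * cmath.pi), cmath.sin(z))
w = -1.5 + 0.4j
assert cmath.isclose(cmath.sin(z + w),
                     cmath.sin(z) * cmath.cos(w) + cmath.cos(z) * cmath.sin(w))
```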

**CONCLUSION**

**Recap**. Number systems respond to the processes found in the ∆•st universe, with an @-point of view or frame of reference. The most important are:

The N-atural 'direct' numbers, for simple social groups; its inverse negative numbers for Spe<≈≥Tiƒ symmetries.

The Cartesian graph and the rational numbers, for Spe x Tiƒ = K metric equations and complementary systems, (whereas often the z-dimension is the reproductive combination of the other 2).

The real numbers, for inverse ∆±1 scales & the complex numbers for world cycles; and functional spaces, for ‘whole ∆, ∆±1’ space-time events.

**CODA: 2 proofs of experimental mathematics on the margin.**

We cannot really at this time go further into the study of the details of number theory, as the workload of upgrading all stiences of mankind is too heavy, so we keep doing each section by layers of incremental complexity.

But it is important to realise that the upgrading to 5D IN THE DETAILS WILL SOLVE many questions in all sciences, especially regarding their whys, which in mathematics means their proofs.

Let us then deal, as an example, with the 2 most famous mathematical theorems of the century – proved here in a margin instead of the thousands of pages and computers crunching numbers that are the trademark of XXI century science, as they were proved by Mr. Perelman and Mr. Wiles – which became the most talked-about discoveries of mathematics at the turn of the century.

As we said, mathematics is an experimental mind science which is NOT closed in its demonstrations, as Gödel proved and Lobachevski and Einstein explained – 'I don't know when mathematics is real'. Since the geometry of the Universe is real, THE PROOFS, as in all science, must be experimental, since 'sciences should not be concerned with facts which have not happened, of which there is no evidence'.

*Poincaré's conjecture proof – that only spheres can shrink with no limit and no tear – is immediate* from the definition of a mind as a temporal informative Tiƒ element: since minds store information, and the n-sphere IS the form that stores more information in lesser space, in as much as the equation of the mind, 0 x ∞ = K, defines the shrinking of the whole infinite universe into a single point, only the spherical mind can *shrink information with no limit and no tear.*

Mr. Perelman's work was based though on the axiomatic method, and so it took years to be found and… revised… yet another clear 'proof' that 'simplicity is genius': the axiomatic method only takes you to hundreds of pages, whereas in an economic, experimental, real, Ockham's Universe, simplicity is the proof of veracity. So to me the need for such long proofs only proves that the axiomatic method is the Ptolemaic epicycles of modern times.

And then, once his job was done, (or not), he disappeared from the public view, as I have done.

*Next, the proof of Fermat's theorem* – that only the square of natural=spatial numbers can add exactly into another square=spatial number – which is also immediate.

In the graph above, here again for good measure we see that information≈static mind cycles of time and entropy≈moving planes of space are bidimensional.

Bidimensional space sheets expand as entropy with motion. Bidimensional informative pages with no motion – as the one you are reading – fix time cycles in the mind. So ONLY BIDIMENSIONAL FORMS and their holographic product, *which gives us the 3 dimensions* of ST, are real (incidentally this is also a proof of one of the biggest conundrums of modern physics, the holographic principle, which states that information is bidimensional).

Thus in any process of superposition of bidimensional sheets, we can only superpose (read: add) 2-dimensional sheets:

Thus only X²+Y² exists as an exact new bidimensional form.

3-dimensional forms in natural numbers *are created by the merging of bidimensional fields, hence by multiplication, as in vector calculus:* they are NOT added to create a 3rd dimension but multiplied, to get, as in the graph, a light wave – c² = 1/(µ x ε).

Hence X³+Y³ ≠ Z³. And *since there are only 3 dimensions of time and 3 dimensions of space, there is no need to prove it for imaginary higher dimensions.*

Did Fermat have this proof? (: Well, one could say that the proof is as old as the fact that only the Pythagoras theorem works (murder forgiven:) and almost all postulates of geometry can be proved bidimensionally, but cubes do NOT add (the old Greek conundrum)… Or in other terms, a lot of math is inflationary extension into non-existent infinite dimensions, non-existent polynomial powers and absurd Cantorian infinities, *as all systems have a closure membrane, beyond which 'reality breaks into another fractal world'.*
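The claim that squares superpose while cubes do not can at least be illustrated by brute force; a small Python search of ours (the general case for all exponents is of course Wiles' proof):

```python
def triples(power, limit):
    """All (x, y, z) with x <= y and x^power + y^power == z^power, up to limit."""
    cache = {n**power: n for n in range(1, limit + 1)}
    found = []
    for x in range(1, limit + 1):
        for y in range(x, limit + 1):
            z = cache.get(x**power + y**power)
            if z:
                found.append((x, y, z))
    return found

# Squares do add: Pythagorean triples such as (3, 4, 5), (5, 12, 13), ...
print(triples(2, 20))

# Cubes never add, at least in this search range: the list is empty.
print(triples(3, 200))
```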

Yet in a 3rd age of excessive information, languages become fictions outside the realm of what it is. Something of the same process occurs with inflationary 'fantaphysical' theorems.

*Languages become inflationary and fictional with age,* abandoning the ternary ‘simplicity of the Universe’ with only 3 ages of time, 3 topologies of space, 3 scales of the 5th dimension for any ‘finite organic whole’ call it the galaxy, the human being, the mathematical language and its ternary topologies, ternary numbers, ternary dimensions, ternary disciplines…

**Conclusion of the errors of the ego in mathematical physics.**

The whole circus of modern 'mathematical physics' and quantum interpretations, big-bang explosions and the role of physicists as the high priests of knowledge is born of the same concept as religion and banking, studied in other sections of the web: the ego that thinks itself infinite in its language, the creationist cause of the Universe.

The Universe though does NOT work on ego. It works exactly on the opposite qualities: humble, cautious species do survive. And ‘simplicity is true genius’ (Leonardo) because ‘the Universe is simple and not malicious’ (Einstein).

Back to REALITY, it is made, as we said, of the dimension of life and information, which ends its world cycle of existence in the explosive dimension of entropy and death in a cyclical, fractal Universe, in which big-bang means merely the E<=>Mc² death side of Einstein equation.

But *this equation also runs the other way around, creating mass, collapsing entropy into mass. In fact Einstein first wrote it as M=E/c², in a landmark paper, explaining how gravitation could create mass out of entropy – as strangelets and black holes do, collapsing light mass into heavy mass particles, and as galaxies do, collapsing vacuum space into gravitational mass.*

Moreover, this is happening in all scales of size, none of which is more important – so there are big-bang deaths of tiny matter in atomic bombs, of larger planets and stars in nova explosions, and of galaxies in quasars, with a 20 billion year cycle; which is the origin of the 'LOCAL' measures of entropy physicists have blown up to cosmic dimensions, moving back in entropic time to collapse it all into their infinitesimal point, which 'talks to them' with imaginary points and lines…

**Recap**. Number systems respond to processes found in the ∆ºst universe, with an @-point of view or frame of reference. The most important are:

1. S±T: N, Z: The N-atural 'direct' numbers, for simple social groups; its inverse negative numbers for Spe<≈≥Tiƒ symmetries.

2. Q: SxT: The Cartesian graph and the rational numbers, for Spe x Tiƒ = K metric equations and complementary systems, (whereas often the z-dimension is the reproductive combination of the other 2).

3. ∆±1; S<≈>Tiƒ: R,C: The real numbers, for inverse ∆±1 scales & the complex numbers for worldcycles

4. ∆±1: Functional spaces, 'wholes and parts, ∆, ∆±1', in which each point of an Spe: ∆-1>∆ space-time is represented as a vector (Hilbert space).

Those types of number systems can be operated on by some or all of the standard operations of arithmetic: addition, multiplication, subtraction, and division (we study them in ¬ælgebra). Such systems have a variety of technical names (e.g., group, ring, field) that are important to grasp their properties, making group and set theory a needed 'bridge' (beyond the errors of sets regarding infinity) towards a GST definition of them.

**N & Z: 1-∞**

**The negative number question, solution and imaginary numbers.**

Next comes the question of negative solutions to those equations: what do they truly mean? As we explain in number theory, Euler's vision of them as inverse numbers is the proper meaning. *So they do exist, which has clear consequences in areas such as relativity, where negative mass means only an entropic process of expansion of mass into entropy. I.e.:*

E=mc² does not mean energy (really entropy in this case) is mass, as mass is on the other 'inverse side' of the equation.

So the real equality happens when m moves to the same side of E:

e = -mc²

Which defines negative mass as an expansive, entropic destruction of mass. And so in relativity the 2 solutions can be put as an example of the 2 roots of quadratic equations (one discarded in processes that are social and accumulative):

In the graphs, space-time distortion is a common method proposed for hyperluminal travel. Such space-time contortions would enable another staple of science fiction as well: faster-than-light travel.

Warp drive might appear to violate Einstein’s special theory of relativity. But special relativity says that you cannot outrun a light signal in a fair race in which you and the signal follow the same route. When space-time is warped, it might be possible to beat a light signal by taking a different route, a shortcut.

The contraction of space-time in front of the bubble and the expansion behind it create such a shortcut.

Within the tube, superluminal travel in one direction is possible. During the outbound journey at sublight speed, a spaceship crew would create such a tube.

Like warp bubbles, it involves negative energy, which again would just be, in the previous equations, moving E to the mc² side, increasing its density.

Almost every faster-than-light travel scheme requires negative energies at very large densities. And so there is nothing special about it.

In fact there are some examples of 'negative' energy:

– Radial electric or magnetic fields, if their tension were infinitesimally larger for a given energy density;

– Squeezed quantum states of the electromagnetic field and other squeezed quantum fields;

– Gravitationally squeezed vacuum electromagnetic zero-point energy.

In general, the local energy density in quantum field theory can be negative due to quantum coherence effects.

Other examples that have been studied are Dirac field states: the superposition of two single-particle electron states and the superposition of two multi-electron-positron states. In the former (latter), the energy densities can be negative when two single- (multi-) particle states have the same number of electrons (electrons and positrons) or when one state has one more electron (electron-positron pair) than the other.

Since the laws of quantum field theory place no strong restrictions on negative energies and fluxes, it would be possible to produce violations of the second law of thermodynamics, and time machines at a local level – which is what GST precludes.

**Fermat.**

Fermat's theorem, unresolved till recently, is a good way to start number theory. What does it mean that there is no x³+y³=z³?

Simply that you cannot superpose cubes because there is not a fourth dimension in a single space-time continuum to do so.

Consider the fact that there are only 3 Dimensions in which to superpose bidimensional space and time in a single ∆-plane of space-time, which we are merging.

Superposition is always possible for lineal functions in abstract algebra, but those functions represent a 2-manifold of space whose product with a 2-manifold of time gives us a full 3D manifold, and whose sum gives a 'thin' superposition.

Then when this system 'interacts' with a higher 5D social scale or a lower 4D entropy scale, there is a transformation to be accounted for, hence limiting the solutions, as part of the information or energy upwards or downwards is lost, since communication happens through a transformation of the 3D system through a network.

The points of the lower ∆-1 or upper ∆+1 plane do not have full motion and information on the other plane, but they are translated through Sp, and Tƒ and STI networks.

This implies we CANNOT represent reality directly beyond 3 dimensions and its 3 symmetric time dimensions.

The same works, when dimensions are polynomials: we CANNOT solve directly a 5D Dimensional polynomial by radicals and obtain its roots.

This fact was known to scientists by the XIX century, as Abel proved that fifth-degree polynomials could not be solved by radical methods; and it gave rise to group theory when Galois discovered which ones were solvable by means of a series of symmetric transformations, that is, through the analysis of the group of transformations of the polynomial.

Thus the limit of direct dimensions of a relative system is 4. New dimensions can be only added indirectly from other planes.

So we can bring again Fermat's great theorem: that there are no natural numbers for powers greater than 2 such that x³ + y³ = z³. This, simply stated, is telling us that space and time are bidimensional, NOT tridimensional. Tridimensional 'vectorial mathematics' is not real, but the 'world' in which a bidimensional force is embedded, the 'environment' – reason why once and again we will find that forces and clocks are bidimensional (gravitational forces, electronic nebulae, described as membranes with a boundary, etc.)

So the full meaning of it goes like this:

1) By Gödel’s incompleteness theorem of algebra and Lobachevski’s definition of geometry as requiring experimental proof, mathematics is an experimental science.

2) The universe is a holography of two bidimensional substances space and time such as T² + S²=ST². This is an experimental fact proved by 5D Space-time Physics.

3) Hence there is not in reality a 3-dimensional space-time system in which T³ + S³ = ST³ *is isomorphic and can be superposed; 3-dimensional systems do have a gradient, are not lineal, they are NOT INERTIAL frames of reference, and superposition is not possible in them.* In mathematical physics, this 'real theorem' of mathematics has enormous consequences for a proper, simple interpretation of Minkowski space and special relativity as opposed to General relativity.

**RATIO NUMBERS. ANALYSIS**

**Q & R: 1-0**

Pythagoras, because of his discovery of the relationship between whole number ratios of string lengths to musical notes and harmony, believed that all creation was based on whole number ratios. This idea became a cornerstone of the entire Pythagorean cosmology. The proof of the existence of geometric lengths that were not in whole number ratios dealt a death blow to the Pythagorean cosmology and physics. Yet it was really true; only that those ratios, in a fractal Universe, have a finite number of scalar parts.

**q & r: ratios of numbers and real numbers: lower infinity**

We must now consider numbers that are not numbers but ratios between S/T values, *which is a different concept.*

So we define as numbers *the decimals, which mimic the upper infinity in the lower bound:*

0.9, 0.8, 0.7, 0.6, 0.5…. 0.99, 0.98, 0.97…. etc.

Numbers, though, when not defined, are ratios, which can be exact (q) or, more interestingly, non-exact ratios, which means the relationship between the Sp/Tiƒ parameters is relative.

Here the argument is the inverse of Dedekind's famous cut to define them – an essential concept of number theory, advanced at the beginning of the 'formal era' of mathematics (1872), that combines an arithmetic formulation of the idea of continuity (WHICH IS A PARADOX, AS arithmetic NUMBERS ARE DISCRETE) to distinguish between rational and irrational numbers.

Dedekind reasoned that the real numbers include 'proper numbers', both natural and rational, and irrational numbers, which are NOT numbers (either natural numbers, aka social points, or wholes divided into parts, aka rational numbers) but fundamental dynamic ratios of the Universe, or 'feedback' numbers, which constantly vary (π, the dynamic ratio between lineal and cyclical motion-forms, which creates a cycle of time from a line of space; √2, the dynamic form that creates a new, larger line departing from two perpendicular ones).

Together they form an ordered continuum, so that any two numbers x and y must satisfy one and only one of the conditions x < y, x = y, or x > y. He postulated a cut that separates the continuum into two subsets, say X and Y, such that if x is any member of X and y is any member of Y, then x < y. If the cut is made so that X has a largest rational member or Y a least member, then the cut corresponds to a rational number. If, however, the cut is made so that X has no largest rational member and Y no least rational member, then the cut corresponds to an irrational number.
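Dedekind's cut for √2 can be sketched with exact rationals; a minimal Python illustration (names ours), showing that the lower class X has no largest member:

```python
from fractions import Fraction

def in_lower_class(q):
    """Membership test for X, the lower class of the cut defining sqrt(2)."""
    return q <= 0 or q * q < 2

# Every rational falls in exactly one class, and every member of X
# lies below every member of Y (its complement).
assert in_lower_class(Fraction(7, 5)) and not in_lower_class(Fraction(3, 2))

def bigger_in_X(q):
    """For q in X with q > 0, a strictly larger rational still in X:
    (2q+2)/(q+2) stays below sqrt(2) whenever q does."""
    return (2 * q + 2) / (q + 2)

q = Fraction(1)
for _ in range(6):
    nxt = bigger_in_X(q)
    assert nxt > q and in_lower_class(nxt)   # X has no largest member
    q = nxt
print(q, "≈", float(q))   # climbing toward sqrt(2) = 1.41421... without reaching it
```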

**Continuity and Number theory.**

Continuity is in any case a concept of geometry, not of the algebra of 5D. Continuity cannot be considered across 5D planes, which are by definition discontinuous. And this implies real numbers do NOT exist in the plane of Euclidean Cartesian coordinates, which IS PERCEIVED in our scale of reality. As we go beyond '10 decimal parts', we cross the barrier of this plane of existence and continuity breaks.

So it is absurd to talk about the continuity of a real number – π, e or √2 – beyond the 10th decimal. This is easily proved, because those ratios are normally obtained by limits in which certain infinitesimal terms are discarded, by postulating the falsity that there are infinitely small parts, so that x/∆ can be thrown out when ∆->∞. But since x/∆, the finitesimal, has a limit, the pretended exactitude does not happen.

Then we must limit the validity of any number to ±10 decimals, as beyond those 10 scales we suffer a discontinuity between planes of the 5th dimension.

For that reason the essential 'real number' between scales (the number of social growth or diminution), e, breaks after 10 digits: 2.718281828 | 4590…

So the repeating series 1828… breaks after 10 decimals.
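For the record, the classical expansion itself can be computed exactly from the series; a Python sketch (the '10-digit break' reading of the pattern 1828-1828-4590… is of course this text's 5D interpretation):

```python
from fractions import Fraction
from math import factorial

# Sum enough exact series terms to pin down the first 15 decimals of e:
# the tail after k=24 is smaller than 1e-25, far below our precision.
e_exact = sum(Fraction(1, factorial(k)) for k in range(25))

# Long division of the exact fraction to extract decimal digits one by one.
whole, rest = divmod(e_exact.numerator, e_exact.denominator)
decimals = ""
for _ in range(15):
    rest *= 10
    d, rest = divmod(rest, e_exact.denominator)
    decimals += str(d)

print(f"{whole}.{decimals}")  # 2.718281828459045
```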

This is obvious in the reality mathematics describes. Beyond the quanta which a given finitesimal represents (with its variations in reality), we are in a different world, with different units of time and space. Beyond the quanta of a cell, the properties of physiology break and we enter into biochemistry, beyond the scale of h-Planck quanta, the properties of physics break. And so on.

Now again this theme of the limits and ratios of the fundamental constants of the Universe, IS VERY EXTENSIVE, as always in the entirely new more accurate encyclopedia of knowledge that represents 5d, which unfortunately I had to research alone by lack of help from academia during decades.

THE MOST important element though is to ascribe them to the key actions: π is the gauging action of information; e the action of decay and growth; φ the action of reproduction (along with the little known 'Cordovan number' for triangular reproduction), intimately related to e; √2 the action of communication between points, related to π (as an angular action); log 10 the action of social evolution, related to e (as a growth action). So we can see how, in the same manner we depart from Sp, Tƒ elements that combine into new, more complex elements, we can also study the fundamental constants of the Universe and then consider combined ST actions, in which they will often appear in equations. And a rule of thumb is that combined actions are 'larger' scales, hence do have ratios with 'less' lower scales.

For example, an important constant, rarely treated, is the 'Mercator Number' from his Logarithmotechnia, which first related the decimal and natural logarithm, such that:

log₁₀ x = logₑ x / logₑ 10 = 0.43429 logₑ x; that is, 0.4343, hence breaking after 5 digits; √2, which also communicates 2 lines, breaks after 5 scales: 1.41421356.

What these limits mean, obviously, is that the 'depth' of connection of two elements does NOT go to the deeper levels of the being, but is normally a connection on the surface levels, mostly on the 1, 3, 5, 7, 10 'prime numbers' (10 being of course a prime number, as it is really the 1 of the next scale).

The study of functionals, that is, functions of functions, and 'ratio-nals', ratios of ratios (ok, that is my jargon, I'll escape it 🙂 is at work here. The Mercator number, or rather its inverse, 2.30…, is more meaningful: logₑ x = 2.30… log₁₀ x, as it relates the relative speed of social growth in the 5th dimension (log 10) and the relative growth in individual processes of feeding and death (e).
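Both constants are easy to verify; a quick Python check of Mercator's number and its inverse:

```python
import math

M = 1 / math.log(10)      # Mercator's constant: log10(x) = M * ln(x)
ln10 = math.log(10)       # its inverse: ln(x) = ln10 * log10(x)

print(round(M, 5), round(ln10, 5))   # 0.43429 2.30259

# The change-of-base identity holds for any positive x.
x = 137.0
assert math.isclose(math.log10(x), M * math.log(x))
assert math.isclose(math.log(x), ln10 * math.log10(x))
```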

*It shows the dominance of 5D growth, by 2.3, compared to individual growth. A ratio that tells us, in terms of pangeometry, that perception through the scales of the 5th dimension will have, downwards, a more pronounced hyperbolic geometry of expansion of space than any projective geometry on the same plane (the equivalent to hyperbolic geometry, as infinite parallels cross the horizon); but the social power of a herd in the same plane will always overrun the inner power of a whole, enclosed within the being.*

Now, for long, Number theory has been confused with numerology. Those vital analyses show that this most abstract part of mathematics is NOT numerology but realism. We just don't have time to study the whole field, but it shows some deep correlations and meanings for Number theory, of which prime number theory is the key, as is the distinction between the 3 types of numbers and their social meaning, as follows:

– Odd numbers, prime numbers and even numbers – themes treated in ¬æ. So we shall leave two questions open to the scholar who might wander into these wonders in the future.

What is the real vital meaning of the 3 types of numbers in relationship with the symmetries of the Universe, taking into account that there are more even numbers than odd and prime numbers, and that even numbers = prime numbers + odd numbers? (That's an easy one if you have understood anything about what Tƒ, Sp and ST 'are'.) So let us just write the key equation of number theory, with enormous implications for the structure of all systems of reality and the constraints it imposes on the creation of systems made of a cyclical T-enclosure, an o-point and an intermediate vital Space-Time, with a fast translation into many equations of mathematical physics:

* O-point (prime numbers: Scalar Tƒ magnitude) + closing membrane (odd numbers=∑Sp) = ST (even numbers: open ball vital space). *

*Now, since generally speaking we have written Sp x Tƒ = ST, and now we are writing Sp + Tƒ = ST, the question is: what numbers satisfy BOTH conditions, if any?*

*X x Y = Z, X + Y = Z -> xy = x + y -> xy – x – y = 0*

*Now the obvious solution is 2, which is both a prime and an even number, and makes x=y, Tƒ = Sp, the fundamental condition for a stable, maximal function of existence (max. Sp x Tƒ: Sp=Tƒ). This is indeed the most stable natural configuration of reality: the couple of equal value but different nature, the Tƒ female and the Sp male, coming together as point and cycle into the creation of a common vital ST-space.*

*And what about pi? Let us take y=π. Then πx = x+π -> x(π−1) = π -> x = π/(π−1) ≈ 1.467, which has the interesting condition that π/x = π−1, Sp/Tƒ = π−1; that is, the ratio between the perimeter membrane and the value of the scalar zero point is the membrane less a diameter. Yet since the inverse, Tƒ/Sp, is the definition of a feeding system, we can deduce from this that the scalar center constantly feeds on lengths, chords that cross through it.*
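A quick numerical check of both conditions (a Python sketch; note that the ratio which emerges is π/x = π−1 ≈ 2.14):

```python
import math

# General solution of x*y == x + y: y = x / (x - 1), for x != 1.
def partner(x):
    return x / (x - 1)

# Symmetric case x == y forces x*x == 2*x, hence x == 2.
assert partner(2) == 2

# Taking y = pi gives x = pi / (pi - 1) ~ 1.467,
# and the ratio pi / x equals pi - 1 exactly.
x = math.pi / (math.pi - 1)
assert math.isclose(x * math.pi, x + math.pi)
assert math.isclose(math.pi / x, math.pi - 1)
print(round(x, 3))  # 1.467
```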

But what is the meaning of π breaking, quite obviously, at the 5th digit: 3.1415? Let us say that we do not perceive the outside world with so much depth.

So those are just a few sights of a field which starts afresh, since my fellow countryman Fermat set it in motion in modern times: 'Vital number theory'.

It is in any case a good introduction to the next much larger field of 5D mathematics, ¬Æ

Let us then start this new section with an introduction to the general outlook of ¬æ algebra, before plunging in historical order into the subject, starting from the fascinating analysis of number theory, its decametric, logarithmic scale, its prime souls, membranes and space-times.

**III. ð.**

# Worldcycle in Nº: Constants

**Universal ratios/constants**

*In graph, the 4 fundamental mathematical constants of Nature: pi, e, 10=3×3+1 and phi.*

*We shall depart from them to explain how the Universe creates reality departing from those 'perfect ratios of energy and information'.*

*The 3 main constants of mathematics are directly related to the fundamental generator of space-time:*

- *π relates the transformation of a spatial field of entropy into a vortex of temporal information: 3 x (Spe) > ∏ Tiƒ.*
- *e, the constant of the only function which is a derivative of itself; eª represents the fundamental function of reproduction of any system at the lower 'cellular scale', as the system increases by a derivative at each step.*
- Phi, the golden ratio, is essentially the ratio of a bidimensional 'page' of information, whereas ST=φ and Tiƒ=1. Therefore it is connected to the proportion of energy needed to breed an 'informative' head. Yet both the particle and the body-wave arise from a field of entropy, and so if we consider the whole to be Spe, the field of entropy which feeds both, Spe = ST + Tiƒ gives us the golden ratio; and once more ST gives rise to the particle-head, in a series of inclusive parts. So Spe, the whole, is to ST as ST is to Tiƒ.
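The φ relation stated in the last point – Spe = ST + Tiƒ with Spe : ST = ST : Tiƒ – is exactly the defining equation of the golden ratio; a minimal Python check:

```python
import math

phi = (1 + math.sqrt(5)) / 2      # the golden ratio, ~1.618

# With Tif = 1 and ST = phi, the whole Spe = ST + Tif = phi + 1.
Tif, ST = 1.0, phi
Spe = ST + Tif

# "Spe is to ST as ST is to Tif": both ratios equal phi itself.
assert math.isclose(Spe / ST, ST / Tif)
assert math.isclose(Spe / ST, phi)
assert math.isclose(phi * phi, phi + 1)   # the defining equation of phi
```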

*To notice that both π (3.14…) and e (2.718…) are close to 3, by a small difference, as ultimately the curvature of 3 lineal elements makes a larger cycle with aperture-holes, while the reproduction of a system extracts energy from its ternary parts to reproduce an infinitesimal or transfer a quanta of energy to the larger whole, x².*

**Abstract: **Dynamically we can define any space-time being as a knot of cyclical space-time actions, ruled by the simple metric of 5D which define the parameters, limits and gradients of its exchanges of energy and information, with the different planes of the ∆-universe.

It is the detailed analysis, bit and bite by bit and bite, of the being, as it r=evolves in its world cycles, pushed by quantum motions and stops that consume and feed its energy, ±∆3, splitting the ratios of energy and information of a 'smaller being'; or ratios between the information and energy of a system, that give us its 'exi=st-ential' power, in actions that come together as universal constants. Or exchanges of information in processes of social evolution, which happen often in decametric scales, and measured as a population in space or a process of evolution in time will also show its numeral constants.

Such constants tend to be constants of actions, or ratios of feeding and perceiving energy and information, or of social evolution, and they will affect ∆±∞ species of scales. Yet from the human ∆0 perspective, when they belong to huge superorganisms (earth, galaxy, universe) they tend to be considered 'universal' and treated with the abstract reductionism of physics. They are though not different from the vital constants of local biological beings, and their functions are similar for the whole organism of the Universe. Consider for example the Lambda cosmological constant, which merely defines the 3 ages of an EFE (gravitational) space-time. If it changes from 0 to ±0, it will define one of the 3 'topological ages' of the Universe: the elliptic, hyperbolic or flat – the old, young, or steady-state age of the Universe.

Other constants represent flows between ∆±1 scales, when two species exchange energy and information as part of their social activity and creation of symbiotic space-time cycles.

Thus constants express form and function and ratios of energy and information, within the generator equation. It follows that each system of the Universe has its constants. And while we seem obsessed by physical constants of the larger Universe, those are a small part of them.

Indeed, the number of Universal constants of the larger superorganisms of mankind – the galaxy-universe and its symmetries in the lower ∆-3 scale, the atom – is far inferior to the enormous number of local constants, as the creative Universe forms resonant frequencies, synchronicities and flows of energy and information between ∆st• beings constantly.

Finally, to notice that those constants are dynamic motions and exchanges of energy and information. So while here we treat them mostly quantitatively, in the next isomorphism we develop them from a qualitative point of view.

Now, the fact that mathematical constants are RATIOS, NOT NUMBERS – HENCE IRRATIONAL NUMBERS ARE RATIOS TOO – and that they are most of the numbers of the real line, sets up the possibility of infinite mathematical games of existence encoded in the ratios of those irrational numbers. But ratios of what? Normally of topological *motions which exchange energy and information between parts or entities of the Universe. So they are ultimately the constants of the time motions of the Universe,* which change state along the ternary elements of Γ=GST and its dual or ternary 'social structures and networks'.

**The constants of motion.**

All this said, we are writing the isomorphisms of the Universe in a sequential order for a reason, as they are closely related in 'pairs'. So the defining quantitative elements of those motions are their 'constant ratios' and functions, which describe them in terms of algebraic operandi and space and time values. We can then wonder which is the fundamental equation or Universal – mathematical, physical, mental, topological or organic-biological – constant for each of those 8 motions of the Universe.

It is self-evident that those constants will be the simplest, most repeated constants of reality. So we can easily realise that:

- *The constant of the 2 motions of information is ±π,* which converts lineal into cyclical form. In the perceptive motion the constant will curve inwards the flow of energy and radially select, from its apertures, the relevant information to map out reality. In the communicating case, it will be a trigonometric π-wave of sines and cosines. But in both cases information will be preserved in the frequency and form of those π cycles. Hence for transformations of lineal entropy into cyclical information we always find a π constant.
- The constant of the 2 motions of growth and diminution is ±eª, as e = (1+1/a)ª (in the limit of large a) clearly represents the growth of a whole system by 1 finitesimal cellular part, 1/a, a times; and vice versa, its diminution along the same proportions – hence for growth and decay we always find an e-constant.
- The constant of reproduction is often related to the golden ratio and its variations on the Fibonacci series, from his famous '2 rabbits reproducing rabbits', for complex informative systems; but for pure translative motion, the λ-wavelength of the moving form is the specific 'constant of reproduction' of each waveform. And in explosive ratios of growth between scales, we shall often find pure logarithmic scales in base 3×3+1 (the perfect tetraktys) or e-numbers of ∆-1 unit growth.
- So this leaves us the 2 remaining motions, which are processes of emergence and dissolution of the being across an ∆±1 discontinuum of existence, which cannot be measured with absolute precision as information disappears or it is newly created within scales. So there is a certain degree of uncertainty, which will be the constant to use for each ‘different space scale’, in the case of physical scales, the well known constants of H-planck (quantum uncertainty), entropy (Boltzmann constant) and cosmological constant for the dark matter, beyond c-speed and below ∆0 Temperature. So there is not a clear ‘universal constant’ as each different super organism and its different scalar planes requires a specific analysis of its own. But again we find often this finitesimal quanta to be 1/n, and hence also related to the e number (1 +1/n)ˆn; as emergence and dissolution might be considered the final stage or ‘limiting barrier’ of an e-progression

So 2 mathematical constants are found everywhere in the Universe and suffice almost to describe everything:

**Pi, the constant of Spe≤≥Tiƒ transformations, and e, the constant of ∆º±1 transformations.**

**The beauty of 3. π≈3 & e+e/10≈3, the major constants.**

Now, why those pi, e, φ values exist and no others is the next layer of understanding of Universal constants, since the 3 are closely related… to 3; ah, the beauty of trinity, which only I and God seem to have understood (: and Saint Augustine, just kidding :)

Pi is indeed 3 in excess (exactly 3 in the hexagon); e is 3 in defect, as e + e/10 ≈ 3, a classic growth of one tenth for every ‘part’.

Phi is another scalar argument: the whole (trinity) is to the larger part (body) as the larger part (body) is to the smaller one (head).

So we are finding *some perfect, harmonic ratios: the first, pi (on the interval from 3, the hexagonal pi, to 3.14…, the circular pi), and the second, e, the ratio of growth of a ten-dimensional tetraktys, one by one, as 2.718… + 0.271… = 2.99…*
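These two near-3 identities can be checked numerically. The sketch below (Python, added here purely for illustration, not part of the original text) verifies that the hexagonal ratio 3 bounds pi from below and that e + e/10 falls just short of 3:

```python
# Numeric check of the two "ratios near 3" discussed above:
# the inscribed-hexagon perimeter/diameter ratio is exactly 3 (pi's lower bound),
# while e + e/10 falls just short of 3.
import math

hexagonal_pi = 3.0             # perimeter of inscribed hexagon / diameter
print(hexagonal_pi < math.pi)  # True: the circle exceeds its hexagon
print(math.e + math.e / 10)    # 2.9901100113...
```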

Now, the nature of those Universal constants as fundamental to Nature is not clearly understood in its whys. So I used to start my conferences with this little tease before I was vaporised from the industry of science for my activism against the Nuclear Industry.

And then I ended with the usual tease of explaining the 2 fundamental combinations of those constants, the probability distribution of a population in space (Gauss curve) and Euler’s formula for the world cycle (with its simpler mode of Euler’s identity) which mathematicians consider the most beautiful equations of the Universe, and are certainly the commonest, along the ‘lineal motions’ described by Hamiltonians/Lagrangians.

*Yet Euler’s formula seen in a dynamic way, and Gauss’s formula, which can be interpreted as the waveform derived from the constant creation of ‘new points’ on Euler’s cycle, ultimately define a world cycle of space-time, hence showing in a rather simple, profound way how all is about a world cycle of space-time and its synchronous form – a super organism:*

Thus, we stress again that pi, e and φ are the ratios of the basic Spe<st>Tiƒ transformations and events:

- π: 3 |spe > O-Tiƒ;
- e: ∇1 e->∆1+∆º
- φ: œ=Tif+ST
- 10: 10 §º ≈ §¹

And so those are the fundamental constants of Nature.

*Phi, the ‘minor’ universal constant: the golden ratio.*

Phi is the golden ratio of proportions between the 3 topological parts of the being, expressly the commonest ratio of the function Tiƒ (height)/Spe (length), proper of a bidimensional form of in-form-ation.

Thus the answer to why those are the constants of nature and no others is obvious: once those ratios are understood as the perfect algebraic expression of the main motions/events of any species of ternary Spe<ST>Tiƒ elements in the Universe, certainly those ratios/events are more efficient and SURVIVE BETTER, AS THEY ARE MORE HARMONIC.

A bit more complex for this layman introduction is the relationship between the Fibonacci numbers (the series of reproduction) and the phi number. The series are indeed related, and hence *represent the duality of this constant as observed in space (golden ratio) and in time (Fibonacci, the closed solution as a series of numbers, given by the formula defined by the golden ratio).*

Thus the golden ratio is a simple process of generation, as in the graph of a bidimensional ‘basic being’, where a+b is the underlying Spe-entropy field that feeds both a, the body, and b, the head, generated by a.
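The space/time duality of phi and the Fibonacci series can be sketched numerically: the ratio of consecutive terms of the ‘reproducing rabbits’ series converges to the golden ratio. A minimal Python illustration (added here, not in the original):

```python
# Consecutive Fibonacci ratios converge to phi = (1 + sqrt(5)) / 2,
# illustrating the time-series (Fibonacci) vs space-ratio (phi) duality above.
import math

phi = (1 + math.sqrt(5)) / 2     # 1.6180339887...
a, b = 1, 1
for _ in range(30):              # 30 steps of the reproduction series
    a, b = b, a + b
print(b / a)                     # ~1.6180339887
print(abs(b / a - phi) < 1e-10)  # True: the series has converged
```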

Now, pi is kind of obvious; phi is beautiful, and beauty, which is the proportional balance of Spe, Tiƒ & ST, we will find to be the key to survival (specifically its equation, S=T, harmony of ‘size and form’, ‘energy and information’, ‘body and mind’). Yet the most fascinating of all constants for me is e, because it is the constant of the ∆-dimension scalar processes. And obviously for that reason it is, along with the imaginary numbers of cyclical time that complete the fractal, cyclical algebraic structure of the Universe, the least understood of them all. So we do have to complete ‘in the future’, in our section on maths, the meaning of its ‘awesome’ properties.

**∆e, the scalar constant.**

Unlike the pi-cycle constant, which is self-evident in Nature and hence soon to be reflected in the synoptic mind – as mathematics *is an experimental science* that maps out the geometry and social numbers of the Universe, regardless of the mind-deformation of the ego, proved false by Lobachevski and Gödel, yet still unrecognised by mathematicians who like to think their proofs are absolute, above all sciences, exactly as verbal philosophers and religious wor(l)d priests think, and many legislators and dictators of laws to be ‘obeyed’ (: – it took the genius of Euler, likely with Gauss the best mathematician, to discover e.

The number e is a mathematical constant that is the base of the natural logarithm: the unique number whose natural logarithm is equal to one. It is approximately equal to 2.718281828… and is the limit of (1 + 1/n)ⁿ as n approaches infinity.

First: e *is not really irrational in the same manner as pi, which indeed, as Poincaré’s sphere, has no limit on its shrinking and growing, making the Universe infinite and allowing information, but not energy, to emerge or sink with no limit of scaling – the reason why Maxwell’s equations of waves can transmit information across scales with no limit in an infinite Universe.*

*In terms of algebra, we could state that the fourth scalar operation, called tetration, is only infinite in cyclical forms (Poincaré conjecture); but for all other ‘processes’ of scalar growth the limit is ±10-11 scales, the reason why e, the exponential growth and ‘by definition’ the ∆-scale number, ‘breaks’ its clear periodicity beyond the 10th decimal:*

2.718281828 45904523536

**WORLDCYCLES WRITTEN WITH 3 CONSTANTS: Spe: √2, Tiƒ: π, ST:e.**

Number theory talks, more than of numbers in an isolated manner, of numbers that correspond to the bidimensional holographic sheets/functions of time (closed curves described by π, and by phi as a 137° golden angle – which allows the maximal covering of a surface from an outer point of view, as a flat, filled space, so light in plants can touch most ‘leaves’, which therefore are positioned at a phi angle), and to functions of space (open curves, of which the asymptotic function, or hyperbola, is the key element, where e is the fundamental ‘number’).

The growth of reality between two scales starts with √2, a bidimensional triangular pair, which starts the growth in dimensions of information and space-extension by reproduction of the same event through a series of different forms of growth:

Let us then analyse this first event and its variations. Two lines of thought, the particles 1x and 1y, meet at the relative 0 point of an automatically created x-y coordinate system, which will give us a √2 wave with origin in 0, and an xy expansion front of space-time, reproduced by pairs of xy points along the wave-front till √2 reaches the two ‘membranes of the x-y tail of past momentum’.

At this point √2 can be measured as a sum of discontinuous wave-points, which gives us a variation of ±1, or be considered a continuous wave of energy; either way, in both descriptions we have created a bidimensional right triangle, which can now grow along different 0/0 tangents into the exponential wave of its second age of geometric growth.

**Constant ratios that leave finitesimal windows, and grow in finitesimal quantities: √2, π, e**

Finitesimals in that sense can be considered in the simplest constants of nature, which are Spe/Tiƒ ratios of different kinds, in relation to which a certain number of ‘decimals’ are real. In pi, all of them, as it is the proof of the infinity of Lineal>Cyclical transformations in the Universe. In √2, though, this might not be the case; and certainly in the fundamental constant of finitesimals, e, the classic finitesimal is 1/10 across 10 scales, such that e + e/10 ≈ 3.

So after a brief comment on √2 and pi, we shall deal with e, the key number of all finitesimal processes of growth and repetition.

Then in its 3rd age the system will undergo pi-shrinking, as a growing accelerated vortex that moves inwards in a series of logarithmic spider-like spirals, in which the interplay of pi and phi determines the curvature.

All this brings us to the practical matter of setting limits to decimal ratios between spe/tiƒ elements (ir-ratio-nal numbers).

And in general, except for the case of pi (Poincaré’s conjecture of an infinite capacity of spheres to shrink into a still mapping-mind without deformation), we shall find the 9-12 limits to be the most coherent ones, with the 3 final decimals breaking the pattern – as ∆-dimensions of the 10th element.

**e number: exponential growths and decays.**

Let us then consider, with these prolegomena, in more depth the e-number, key to all processes of finitesimal growth and full of ‘hidden patterns and regularities’, which are at the heart of all those growth processes.

To start with, it is defined by a finitesimal equation, e = lim (1 + 1/n)ⁿ as n→∞, whose decimal pattern indeed breaks after 10 decimals:

Notice that the series of e breaks its 1+8=9; 2+8=10 regularity after 10 digits (consider the first 2.7 the first dual term of the pattern). Then you get a ‘middle’ 8/2≈4 remainder as the ‘closure’ for the next decimal, clearly meaning the decay process of the system has reached its final units; as e is the constant of ‘death’ (more than of exponential growth, which can hardly be sustained with enough energy to reproduce a system, while decay is the ideal unlimited capacity to self-destroy what you *already have within you, a well-known endless process of masochist souls ):*

So a system emerges or dissolves into its ∆±1 scales after the exponential growth/decay series in the 12th ‘hour’. Or in other terms, after 12 social reorganisations of relative 1/n infinitesimals into §ocial wholes:

Indeed, how many finitesimal parts (n-values) do we need for a good approximation of the whole from its parts? A simple rule gives a fair approximation: the social scaling approaches the correct e in proportion to its digits.

That is, for §=3, n=10³=1000 will bring us a third correct decimal in the pattern, 2.717; for §=4 scaling we get to 4 digits, 2.7181; for §=5, a five-decimal pattern, 2.71827 (here we can see the first two terms, 27, repeated), and so on.
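This digits-for-scales rule can be sketched numerically: the error of (1 + 1/n)ⁿ against e shrinks roughly as e/(2n), so each extra power of ten in n buys roughly one more correct decimal. A Python illustration (added here as a check, not part of the original):

```python
# How many correct decimals of e do n = 10**k "finitesimal parts" give?
# The error of (1 + 1/n)**n is roughly e/(2n): one more decimal per power of ten.
import math

for k in range(1, 7):
    n = 10 ** k
    approx = (1 + 1 / n) ** n
    print(k, approx, abs(approx - math.e))  # error shrinks ~10x per step
```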

So an e-system of finitesimal parts will finally become, for 1/n = 12⁻¹², the absolute minimum of most systems, on the order of a trillion parts.

Man, the most complex system we know, is in fact made of 1/n-cells on the order of a trillionth.

Thus beyond the final 4590 (46) quanta (the 11th decimal) we really don’t consider e to ‘exist’ as a relevant pattern, since the limit of ‘relative infinity’ of the ∆-whole is reached; the system then will emerge into its ∆+1 world, and the pattern of growth breaks.

So for man, if we consider that 0-cell the minimum quanta of our Œ-system in ∆-1, then the whole number, 1, represents a human system of around 30 trillion cells, which is the most perfect supœrganism observed in detail, and likely the ‘limit’ of evolution of carbon life – a maximal ST structure in the Universe.

Now, the process of growth of the exponential function is fascinating for its orderly hidden ‘points’ of anchorage around 3, which, along the prime numbers encountered (another kind of anchorage – a definition of key social functions along the way), likely determine inner structures of palingenesis for all mathematical beings of the Universe – a huge field of biological inquiry, along topological morphogenesis, which I have only explored in its simpler stages.

In the next graph we see some of the symmetric elements of an exponential growth curve.

The reader can notice the existence of those 5 unit surfaces and the key element: the derivative=present state of an exponential growth equals the whole past-to-future range; hence *the exponential function, eˣ ≈ eª (we use a, as x is not available in the WordPress fonts), is the only one which is in an eternal present, pure reproductive state, maximised by this function of ‘growth’, as another remarkable property shows:*

In the graph, we can see how the growth of any system in base e is the maximal possible. e is indeed a magic number that shows the efficiency of the main reproductive event of reality and its growth to create a higher ∆-system without chaotic branching. When growth is even higher, normally we are talking of a process of chaotic, spatial loss of causality without proper balanced reproductive growth, so the system dissolves into ∆-k parts. Let us consider this ‘limit’ with the chaotic Feigenbaum constant:

In the graph, the e-number is the key number to consider *growth from an external point of view, as φ, the golden ratio, is the key number to study its organisation from an internal point of view.*

As such the key elements of the e-function are shown in the graph.

All this said we can consider the properties of e as we did with those of the other key numbers of the world cycle of the Universe.

In the first graph, the Volterra curves of reproduction of Tiƒ=top predator and Spe=prey in a closed ecosystem (with the prey, if alone, saturating its total mass) show those balances. Chaos though ensues in certain prey-predator systems that enter the bifurcations of entropy – when several solutions are equally probable, and hence the whole stops behaving as a single form and branches into partial wholes, with probabilities in time = densities in space of populations that diverge:

In the second graph we see the case of the Feigenbaum constant, explained from the study of the logistic map in the seminal paper by the biologist Robert May, who gave the following equation to model population growth of a species, given the species’ fertility (λ) and the “birth rate” (r): rₙ₊₁ = λ rₙ (1 − rₙ).

Here “birth rate” is specifically defined as a number between zero and one that represents the ratio of existing population to the maximum possible population. And λ can be any positive real number, but of particular interest are values of λ in the interval [0,4].

If λ is too low, the species dies out (i.e., rₙ₊₁→0), which is observed in the real world due to lack of a sustainable population. If λ is too high, the species dies out too, which is also observed in the real world due to high competition. For “good” values of λ, the population stabilizes after a point (i.e., rₙ₊₁→c, where c is some non-zero constant). For example, take λ=2.3, and rₙ₊₁ stabilizes at ≈0.5652. So this simple model got a lot of popularity for its ability to model real-life population sustenance of species.

But it can model more than that: it captures cycles of growth and decay of populations for certain values of λ. For example, take λ=3.2, and rₙ eventually keeps fluctuating between 2 stable points, rₙ₊₁=0.513 and rₙ₊₂=0.799. These are called (in chaos theory) bifurcations. What’s interesting is that we can get any cycle we want by choosing an appropriate λ.

But if we take the ratio of the differences in λ for subsequent bifurcations, it turns out to be a constant, 4.6692…, for any 2 subsequent pairs of λ. This is the origin of the Feigenbaum constant. After this, Mitchell Feigenbaum observed similar behavior for any quadratic map, and it became a universal constant.
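The two regimes just quoted can be reproduced with a few lines of Python (a sketch added here for illustration, using the standard logistic map rₙ₊₁ = λ rₙ (1 − rₙ)); λ=2.3 settles on a single fixed point and λ=3.2 on the 2-cycle:

```python
# Iterate the logistic map r_{n+1} = lam * r * (1 - r) to its attractor:
# lam = 2.3 gives one fixed point, lam = 3.2 a 2-cycle (a bifurcation).
def attractor(lam, r=0.5, burn=1000, keep=4):
    for _ in range(burn):          # discard the transient
        r = lam * r * (1 - r)
    values = []
    for _ in range(keep):          # sample the settled orbit
        values.append(round(r, 3))
        r = lam * r * (1 - r)
    return sorted(set(values))

print(attractor(2.3))  # [0.565]: the fixed point 1 - 1/2.3
print(attractor(3.2))  # [0.513, 0.799]: the 2-cycle quoted above
```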

So with these very brief comments on finitesimals and their constant numbers, which, time permitting, we will expand from old notebooks in the post on NUMBER THEORY, we can get into serious Analysis, starting with what truly matters first: the operandi of the final evolution of maths into an all-comprehensive ‘analysis’ of all its elements, T-numbers, S-pace, @-coordinates and S≈T symmetries.

**The boundary: 9-10-11.**

In our study of numbers as social forms, renewing with 3rd age information the 1st Pythagorean age of Social Number theory (end of that post), we explain in detail why that triad is the ‘@-membrain’ closure and 5D of the scalar Universe, using its simplest forms, polygons. Since *mathematics encodes, as the simplest formal language, the essential Disomorphisms of time and space, from where – no magic or mysticism here when you know the reality you describe – we can obtain the basic properties of space-time organisms, later becoming more complex and convoluted, keeping ONLY the vital functions and actions of the game, which disguise its regular forms.*

So by observing the properties of the geometrical 9th, 10th and 11th point-forms, such as those in the graph, we realise an essential ternary structure: $t: 9 < ST: 10 > §ð: 11

In essence, a nine-system, either a tetraktys with a void center or a square form simply connected through its self-centred 5D number, occupies the vital energy within the system; the 10th is an even=strong coupled membrane, with its pentagonal symmetry (center) between space and time dimensions; and the 11th, the hendecagram, by connecting its points through waves of energy and information, reproduces internal, ever smaller hendecagons, towards its emerging center, which transcends as the first unit of the higher ∆+1 dimension.

In praxis, the ∆º element will therefore always double as whole or part of an infinitesimal or infinite world. So it has in its singularity, ‘stretched’ along the 5th dimensional arrow of eusocial evolution, a ternary value: π × 11¹¹ ≈ π × 285 billion elements ≈ 1 trillion; the next scale or ‘outer boundary’ of the territory of the being being 12¹² ≈ 9 trillion.

So for a single or ternary, pi-complete system surfacing efficiently in a larger ∆+1 world, the trillion scale is its maximal efficient form.

What this means in practical terms is that almost every system that evolves through ever more complex social scales reaches a maximal number of 1/n finitesimals around the trillion atoms/cells/citizens, if it truly *culminates its evolution.*

For example, the system we know to be the most complex ∆±1 supœrganism, limit/summit of the evolution of life (before it transcends into metalife, aka robots, this century) – the human being – has some 11π trillion cells.

Of course, if we ‘keep adding’ scales, we might as well write 12¹² (iterated twelve times) as the limit of all infinities; but *discontinuities between ∆-scales hardly let any information transcend, making smaller beings ‘indistinguishable statistics’: information merely breaks and becomes unrecognizable, even if we can produce such quantitative numbers about, say, the number of atoms in the perceived Universe.*

So for all practical matters we shall consider the 11¹¹ number the limit of finitesimal 1/n parts of a T.œ, and hence *consider that for all practical purposes the regularity of Universal constants breaks beyond the 10-11th decimal.*

**The function ζ(s)…**

later known as the Riemann zeta function, is Euler’s:

He caught a glimpse of the future when he discovered the fundamental property of ζ(s) in his Introduction to Analysis of the Infinite (1748): the sum over the integers 1, 2, 3, 4, … equals a product over the prime numbers 2, 3, 5, 7, 11, 13, 17, …, namely: ∑ₙ 1/nˢ = ∏ₚ 1/(1 − p⁻ˢ).

This startling formula was the first intimation that analysis – the theory of the continuous – could say something about the discrete and mysterious prime numbers; for example, that there are infinitely many of them.

To see why, suppose there were only finitely many primes. Then the product for ζ(s) would have only finitely many terms and hence would have a finite value for s = 1. But for s = 1 the sum on the left would be the harmonic series, which Oresme showed to be infinite, thus producing a contradiction.
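Euler’s equality of sum and product can also be checked numerically for a convergent case such as s = 2, where both sides approach π²/6. A Python sketch (added here as an illustration, not part of the original):

```python
# Numeric check of Euler's product formula at s = 2:
# sum over n of 1/n**2 equals the product over primes p of 1/(1 - p**-2),
# both converging to pi**2 / 6.
import math

def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i :: i] = [False] * len(sieve[i*i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

s = 2
zeta_sum = sum(1 / n ** s for n in range(1, 100_000))
zeta_prod = 1.0
for p in primes_up_to(1000):
    zeta_prod *= 1 / (1 - p ** -s)

print(zeta_sum, zeta_prod, math.pi ** 2 / 6)  # all ~1.6449
```

At s = 1 the sum side becomes the divergent harmonic series, which is exactly the contradiction used in the proof below.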

Of course it was already known that there are infinitely many primes – this is a famous theorem of Euclid – but Euler’s proof gave deeper insight into the result. And the deep question I pose to the future human or robotic researcher in ∆st is this: *what is the real meaning of that equality between a sum over ‘finitesimal’ natural numbers and the peculiar product over inverse primes on the right side?*

# **IV. **

**∆§: Combinatorics**

**T-PROBABILITY ≈ S:TATISTICS**

**Introduction.** The main themes of probability and statistics.

**I. ST:** Time events and space populations

**II. ∆:** The causes of causality

Number theory achieves a complex evolution with its symmetric analysis of numbers as time-like probabilities vs. numbers as space-like statistics.

Of the many ways to express this, we can say that time probability in the domain 0-1 is equivalent to space statistics in the domain 1-11¹¹, the next scale of the Universe.

This correspondence can be used as a general example of that of the mind mirror, 0-1, observing the world, 1-∞, in which it is hosted, and of many other symmetries of scale. We can see a super organism as the memorial scale-printing of those 11¹¹ citizen-cells of the super organism.

So for all purposes a good approximation would consider the total number of ∆-1 elements of the whole super organism as the ∞ limit of the 1-∞ sphere, normally at the 11¹¹ level.

In this manner the symmetries between motion and slow, whole stillness, mind and entropy, virtual and real develop themselves in the kaleidoscope of perfect mirrors of mirrors of worlds, mirrors of universes…

So we shall close our brief view of number theory with some basic notions on that symmetry, essential to modern maths, especially with the development of mathematical computers and their capacity to run recurrent sequences of quantitative methods.

Let us then first define the Rashomon effect in its minimal duality:

**Γst:** *T-probabilities = S-populations is the algebraic duality*

**∆±1:** Time probabilities formulated in the 0-1 micro-universe sphere HAPPEN as space statistics in the ∆+1, 1-∞ self-similar Universe, to which the 0-1 mind world reduces it.

**SxT: COMBINATORICS**

The historic approach to knowledge is always good to understand, from the simplex to the complex, the natural evolution of any world cycle, including those pure mental mirror-images of reality, which are first born, as all systems, in an asymmetric mixture of time-space parameters/views, and then break into the more symmetric spatial view (in the case of combinatorics, space statistics) and the temporal view (time probabilities).

So combinatorics was the beginning of social time theory, beyond the simplest consideration of counting (numbers as wholes of identical beings) and geometric numbers (the study of numbers in their symmetry with points). Those two dualities, which we can consider the ∆§0,1 perspective (numbers as social scales) and the spatial, more static perspective (numbers as forms of space-geometries), then give way to the time perspective: dynamic numbers in *which the causal, sequential order matters, and the flow of time constantly adds up new identical beings in different positions.*

Number theory thus *reaches its highest ‘complexity’ in the symmetry of time probabilities and space statistics, as all the Rashomon perspectives are included.*

And we can talk of 3 ages, which can also be broken, according to the multidimensional i-logic in terms of the previous graph, into an asymmetric first state, combinatorics, which then specialized into the symmetric spatial population analysis and the hierarchical time-probability point of view.

Let us then start with some insights on combinatorics. As usual we reject the unneeded formal complexities of the axiomatic method, returning to the experimental nature of maths as a mirror of T.œ and of the fractal properties of space and the cyclical nature of time. So we shall not talk of ‘combinatorial structures, binary and plane trees, categories, the twelvefold way, etc.’ – just mention that we did study them in youth and know why they are not needed. Keep it simple, if you ‘understand it’. Also we are not interested in repeating what huminds know but in exploring new insights from all the p.o.v.s of the 5 Dimotions and their vital properties.

**Ternary combinatorics. Variations and permutations.**

The first important combinatorics are the simple variations of several elements with different hierarchical positions.

Permutations are the variations of all the elements of the sœT (in the jargon of ¬Æ, the inverse expression of T.œs, which is the whole view, while sœT is the view of its parts; so as usual we slightly change, but keep, under the correspondence principle, the terminology of classic science in ‘stience’). Its formula any child knows: Pₙ = n!, whose new insights we shall study to keep some sense of order in the analysis of the awesome profundity of ‘algebraic operations’, in this case *the operandi of re=production, the product.*

Variations are then permutations of a finite NUMBER of elements of the whole sœT, whose formula is also interesting in the analysis of operandi, *as it brings now the inverse operandi of division into the mix: V(n,r) = n!/(n-r)!*

One interesting theme of the previous formula is to realize that, paradoxically, *the smaller the number of elements we take, the smaller the number of variations we obtain (as the divisor (n-r)! grows).* It seems in principle counterintuitive, as one would imagine there would be many more small combinations; for example, if we have abcde, we can make ab, cd, ac, ad, ae, bc, bd, be, ce… etc., but if we take all 5 elements, abcde, the whole set of 5! orderings happens.

But as we know, wholes are always more powerful than parts; so what this simple formula implies is that the inner structure of wholes – what we call its synchronicities and simultaneities – is complex enough to render a larger number of combinations than smaller sets with minimal variability, reinforcing the experimental, Darwinian evidence of the power of wholes.
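The whole-vs-part counting claim above is easy to verify: V(n,r) = n!/(n-r)! grows with r, so the full permutation of the whole set dominates any partial selection. A minimal Python check (added here for illustration):

```python
# Variations V(n, r) = n!/(n-r)! grow with r: the whole set of abcde
# admits far more orderings than any selection of its parts.
from math import factorial, perm

def V(n, r):
    return factorial(n) // factorial(n - r)

print([V(5, r) for r in range(1, 6)])  # [5, 20, 60, 120, 120]
print(V(5, 5) == factorial(5))         # True: permutations are V(n, n)
assert all(V(5, r) == perm(5, r) for r in range(6))  # matches math.perm
```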

So essentially the simplest duality of permutations (wholes) vs. variations (parts of wholes) and its combinatorics *strikes the very essence of the structure of social organisms – hence its ginormous importance in all sciences, especially in physics, which tends to study massive amounts of identical beings. This leads us to the third classic form of combinatorics: variations and permutations in which a same ‘identical’ element can be repeated. This of course is the commonest case in Nature,* as numbers, we say, are sets of identical elements, and most societies are of identical beings.

Incidentally, in the entangled Universe that the ¬Æ=i-logic mirror tries to reflect, *all those elements of combinatorics, and the branching into probability and statistics, belong ‘i-logically’ to the key 4th Non-Euclidean postulate of ‘congruence’… as the degree of identity of beings implies a parallelism and a capacity to understand each other’s information and evolve socially.*

Again, beyond its analysis in terms of the 3rd level of social, positive growth operandi (+, ×, xª), which corresponds properly to Algebra, what also strikes the mind immediately is that *the number of variations and permutations with repetition grows hugely when we increase the number of identical beings – again stressing clearly the power of ‘identity’ over ‘variation’, a theme that will run across all the analysis of reality. Since we write:*

V'(n,1)=n; V'(n,2)=n²; V'(n,3)=n³…

This however is not the case when the congruence happens within different groups of the total sœT (variations). Then the formula is greatly reduced: Pₙ(a,b,…) = n!/(a!·b!·…)

Which we could interpret simply with the concept of ‘divide and win’. The usual interpretation, of course, which will weigh heavily in physical ensembles, is that indistinguishable orderings *are the same and must be subtracted.*

What is then indistinguishable becomes important, as it is so far an external judgement of the observer; but all forms are internally distinguishable, as total identity only happens in shallow external views of a surface, without considering the content. And here of course lies a huge philosophical and ethical part of the Universe, and the dualities between the internal and external view of beings, *which dominates reality, as the importance of Bose statistics, entropic ensembles and partitions shows*.
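The two repetition formulas above pull in opposite directions: allowing repeats inflates the count to nᵏ, while indistinguishable elements deflate it to the multinomial n!/(a!·b!·…). A small Python sketch (added here for illustration):

```python
# Repetition cuts both ways: repeats inflate the count to n**k, while
# indistinguishable elements deflate it to the multinomial n!/(a!*b!*...).
from math import factorial, prod

def v_rep(n, k):                 # variations with repetition: n**k
    return n ** k

def perm_multiset(*counts):      # permutations of a multiset, e.g. AAABB
    n = sum(counts)
    return factorial(n) // prod(factorial(c) for c in counts)

print(v_rep(2, 10))              # 1024 binary strings of length 10
print(perm_multiset(3, 2))       # 10 orderings of AAABB: 5!/(3!*2!)
```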

But why does a larger number of variations or permutations – a larger cardinal – matter? If we were to cast this number in terms of spatial populations or sequential time series, it obviously means more configurations, more types of exist¡ences…

**Differences based on the numbers of the §œt.**

More interesting perhaps is to consider – as we are writing about number theory and have studied the value of each number in its different social and geometric meanings – what each variation, permutation (with or without repetitions) and partition tells us about the game of T.œs:

- If we have a single element, ‘A’, the order can only be 1. And so the ONE is immutable in hierarchical order. *The ego always comes first.*
- When we have two, AB or BA are the 2 only hierarchical combinations. And the order is obvious, as the one that comes first is hierarchically the most important, with more experience and more information (till its 3rd age of decline)… So we should write A in capitals, b in small letters. But here, *in a larger context, we come to the first ‘divergence’ of ¬Æ: as it matters more that A and B are actually NOT* in a hierarchical position, but sharing with each other to converge into A=B and re-produce a being. And here is where the first sight of the ‘power of identity’ comes into being: identity reproduces and increases a social group; it is the essence of the game of social evolution, growth and reorganization. And so indeed, of all possible combinatorics, the largest numbers are achieved with permutations of an identity number.
- Then we have 3, the holy number; trinity indeed, which, if you have come this far, you must by now know is the game. And this gets more interesting:

How many variations do we have of 3? As it happens, ‘mathematical pros’ write them in pairs, so for abc they write ab, ac, ba, bc, ca, cb… 6…

Since it is obvious that *for 3 elements the number of variations of 2 elements and the number of permutations of the whole set are the same* (V(3,2) = 3!/1! = 6 = 3! = P₃).

In reality this has huge mirror-implications.

Indeed, we shall later study the 6th Ðisomorphism of species, which combines the ternary networks of the system in 6 basic different phyla, with applications in all stiences, according to the hierarchy of the 3 physiological networks: the entropic/digestive/limb system, the reproductive body-wave system and the informative particle-head system.

Let us stress now the interesting fact that we can hide the 3rd element, the digestive, entropic, predatory world on which the 2 physiological networks that matter most, the particle-head and wave-body system, prey. The 3rd element is thus the lower class of a system, which is almost invariably spent…

Another interesting duality is that along the | vs. O topology. All we have explained so far are hierarchical sets – lineal sets with a preferential order – but the rules of combinatorics for sœts apply also to cyclical orders, with interesting results: the number of variations increases dramatically as the hierarchy is dissolved, forming the fundamental property of cyclical membranes: to be ‘democratic’, ‘entropic’, as no order matters; so all possible orders happen, such that for n elements n! will be the possible permutations… a ginormous number even for small digits beyond the 10 decametric scale, which shows how easily we can, by iteration and hierarchy, multiply the complexity of the Universe.

The importance of those simple relationships of order will again be explored in more depth in algebra, as indeed modern algebra started with Galois’s discovery that the solvability of polynomials depended on the permutations of their roots.

But as always the biggest insights we shall provide are metaphysical, as mathematics is a reflection of the Universe in its simplest spatial and scalar relationships, whose units are the point and the number (for pure temporal flows logic gives better results).

**Leibniz and Gell-Mann’s Totalitarian principle.**

‘All entities and events that are not forbidden are compulsory.’ Totalitarian Principle, Gell-Mann.

So what huge metaphysical insight can we obtain from those different ‘sums of elements’? *The existence of a much vaster extension of space than of time.*

Since if all that can exist does exist, a spatial, present, conserved cyclical form, perceived in the simultaneity of its non-sequential order (which defines a slice of space), has infinitely more possibilities than a temporal, sequential, lineal order. And this means, as all possible combinations do exist in some regions of the infinite Universe in time and space, that spatial present extension-combinations and cyclical forms are much more important than finite hierarchical planes with a more lineal sequential order, which end easily, with increasing information, in the explosion of death.

So indeed, all lineal motions are in fact parts of a cyclical form. And all existences will be repeated, as their combinations and worldcycles across the fifth dimension are limited, but the number of places in space in which they can exist is far larger.

**Combinations.**

Let us then consider, after introducing cyclical permutations, the next concept solved in combinatorics, combinations proper, where *the order of the elements does not matter*, coming closer to the real non-Æ logic of ‘convergence’, ‘simultaneity’ and ‘synchronicity’ of the Universe.

If the set has *n* elements, the number of *k*-combinations is equal to the binomial coefficient:

C(n, k) = n! / (k! (n–k)!)

We can see then the drastic diminution of elements brought about by combinations. *But nature, especially in time, does care for sequential, hierarchical order, unlike space, which is more democratic – hence we see in the relationship of permutations/variations with time and of combinations with space, once more, that time-like elements are far more abundant than space-like ones.*
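The ‘drastic diminution’ can be quantified by comparing ordered variations, n!/(n−k)!, against unordered combinations, n!/(k!(n−k)!); a minimal sketch:

```python
from math import comb, perm

n, k = 10, 3
ordered = perm(n, k)    # variations of k out of n: order matters
unordered = comb(n, k)  # combinations: order ignored

print(ordered, unordered)   # 720 120
print(ordered // unordered) # 6, i.e. k! orderings collapse into one combination
```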

**∆st: **

**TIME PROBABILITIES≈SPACE STATISTICS**

**Probability symmetry with statistics: 5D generation (unit circle)≈ 123D existence (1-∞ plane)**

A fascinating symmetry (theory of measure) is the identity between probability in the o-1 circle of *fast temporal palingenetic generation of the being, and the 1-∞ statistical world cycle of existence in the single plane of ∆º emergence. *

It solves, to start with, the conundrum of quantum physics, which is equivalent to thermodynamic statistics, only that, being in the smaller o-1, ∆-1 plane, we use the formalism of the o-1 circle of temporal flows; because, also according to 5D metric, *the o-1 circle, which has lost spatial size (S x ð=K), has much faster clocks of time, hence it is far more convenient to analyse it in terms of probabilities of events instead of long-lasting configurations of populations.*

**SYMMETRY OF PROBABILITY=UNIT CIRCLE AND POPULATION=1-∞.**

The symmetry o≤1≥∞ is the fundamental graph of the fifth dimension. Let us see how, with a simple example of Disomorphisms between certain points of the unit circle and the 1-∞ complex plane:

Now the mirror symmetries between the 0-1 universe and the 1-∞ are interesting as they set two different ‘limits’, an upper uncertain bound for the 1-∞ universe, in which the 1-world, ∆º exists, and a lower uncertain bound for the 0-1 Universe, where the 1 does not see the limit of its lower bound. Are those unbounded limits truly infinite?

This is the next question where we can consider the homology of both the microscopic and macroscopic worlds.

Of course the axiomatic method ‘believes’ in infinity – we deal with the absurdities of Cantorian transinfinities in the articles on numbers. But as we consider maths, after Lobachevski, Gödel and Einstein, an experimental science, we are more interested in the homologies of ∆±1. For one thing: while 0 can be approached by infinite infinitesimal ‘decimals’, so it seems it can never be reached, we know since the ‘ultraviolet catastrophe’ that the infinitesimal is a ‘quanta’, a ‘minimum’, a ‘limit’. And so we return to Leibniz’s rightful concept of a 1/n minimal part of ‘n’, the whole ‘1’.

This implies by symmetry that *on the upper bound, the world-universe in which the 1 is inscribed will have also a limit, a discontinuity with ∆+2, which sets up all infinities in the upper bound also as finite quanta, ‘wholes of wholes’.*

ONE OF THE most important S=t symmetries of the mathematical Universe is the one between time probabilities in the 0-1 unit circle and the 1-∞ plane of statistical populations (space-points), as it is both *a symmetry between the ∆-1 scale of finitesimals (the unit circle, where a finitesimal 1/n is defined as the inverse of a real number of the 1-∞ plane), and one between time-cycles and space-planes (topological symmetry).*

We stated that the conjunction of Euclidean geometry with a focus in the c@rtesian graph world represents the best way to consider ∆-scales, as the most beautiful of those scalings *is already incorporated in it, giving us the key ∆-symmetry between the 0-1 unit circle and the 1-∞ larger ∆+1 scale, which have a natural correspondence – almost all themes of maths that work in the 0-1 unit circle work on the 1-∞ scale, with 3 differences:*

- *The unit circle has a defined membrane, but its minimal infinitesimal ‘decimal’ is not defined; inversely, the 1-∞ scale has the minimal infinitesimal, 1, defined, but the ∞ element is not. Those are the elements for the best choice of ‘mental geometry’ to study a problem, depending on which element we do know: the singularity (0-undefined, 1-defined) and the membrane (1-closed, ∞-open).*
- A unit circle is a ð-cyclical/polar geometry; the 1-∞ scale is an open, unconstrained one – another choice for the solution of problems depending on their characteristics.
- A unit circle is ∆-1 and the 1-∞ scale is ∆+1, with the 1-membrain as the open or closed border between them.

The unit circle, where 1 is ∞ and 0 the unreachable ‘quanta’ of the ∆-1 scale, is the best mirror symmetry between a self-centred ‘polar’ mind and its universe (the 1-∞ region). In maths, in fact, most processes that matter can be proved in geometry by a bidimensional informative graph (holographic principle), and most key processes of space-time world cycles can be observed to happen in the first 4 digits, around the key number 3 (e+e/10≈3, π minus its apertures≈3, and so on).

When we extend the domain of the unit circle to the complex plane with those bidimensional holographic numbers, the concept of a ‘world’, represented now by the higher dimensionality of the Riemann sphere (the sphere that closes the complex plane), is even clearer, as every point of the sphere is communicated with one point of the complex plane, *and the 0 point of the sphere becomes the inverse of the ∞ point of the ‘complex plane’. In this manner, as in projective geometry, the o-mind point is the residence of the ∞ of the real Universe:*

O-point mind x ∞ Universe = constant world.

Perhaps it was Leibniz, with his concept of the monad – which does not communicate, hence is the simplest fractal point-mind possible of the Universe, yet as such very well described – who most closely understood that ‘each point is a world in itself’: a mirror of the Universe.

In mathematical number theory, we can say that:

The interval o-1 holds the same ∆-1 infinity as the interval 1-∞. Cartesian points in that sense were, for the pioneers of science, more than a mathematical artefact: the mathematical mind in itself looking at the time-space Universe.
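A minimal numeric illustration of that homology, assuming nothing more than the inversion x → 1/x, which maps every point of the open interval (0, 1) onto one point of (1, ∞) and vice versa:

```python
# The inversion x -> 1/x is a one-to-one mirror between (0, 1) and (1, oo):
# each inner point corresponds to exactly one outer point, and as x -> 0
# the image runs off towards infinity, as the 0 of the unit interval
# mirrors the oo of the outer scale.
for x in (0.5, 0.1, 0.001):
    print(x, "<->", 1 / x)
```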

In the mathematical sections we study that mind-language mirror reflection, and the many homologies between numbers and space-time evolution. To renormalise a being from 1 to ∞ by reducing it to a function in the 0-1 range mimics, in that sense, the fact that the whole in each scale of diminishing size holds the same quantity of information; and indeed, the 0-1 and 1-∞ limits make sense as the mind’s o-1 representation of the whole world, 1-∞, outside of it.

Now, there are 2 other ‘spaces’ besides the 3 topological spaces (Sp-cylindrical, ST-cartesian, tiƒ-polar) worth noticing to explain the whole fractal space-time complex world.

Let us remember a fundamental principle of ∆st theory, THE RASHOMON EFFECT:

**‘Every system that exists in 5D² space-time has 3±∆ Disomorphic functions=forms.’**

This means that to exist you must have a function-form connected to each of the 3 present ‘space-time dimensions’ explained in the present fractal generator: Γ: §t<ST>§ð; and the two scalar dimensions, ∆±i of the Universe.

In practice this means we can often reduce the 5 roles of the being to the Γº∆±1 dual roles which will give us the bare minimum *function and form of the system. And so when we study any system the best way to start its description is to consider its functions in present ternary space-time and in scalar 4D/5D inverse entropy/social space-time.*

And so we must find for complex numbers two fundamental roles as ‘numbers able’ to describe the 3 states of the generator, $t<ST>§ð and the ∆ ‘polynomial’ or ∫∂ functions between planes of existence.

The third great field of number theory, probability and statistics, focuses on the analysis of nature’s space-time events from both points of view: the spatial, statistical one in the 1-∞ graph and the temporal, probabilistic one in the 0-1 sphere. So we find 2 parallel sub-disciplines, S-tatistics and T-probabilities, as *an S∆º≈T∆-1 mirror*, similar to the one @nalytic geometry performs between topological and algebraic solutions. Those dualities are thus of special interest, as they can bring together all the elements of ST reality and allow us to observe their differences by considering in what they differ; even if the *fundamental theorem of the modern theory of measure is that to each theorem of probability in the o-1 cycle there corresponds an equal theorem on population measure in the 1-∞ plane, giving weight to the concept of a fractal Universe which performs the same events and forms in all its ∆-planes*.

At first sight that similarity is less obvious because, from the representation of probability as the standard value of the frequency ƒ = m/n, where 0≤m≤n , and thus 0≤ƒ≤1, it follows that the probability P(A) of any event A must be assumed to lie between zero and one:

**0≤P (A) ≤1**

**The inverse mirrors of the past and the future, the event and the non-existence.**

So we must not only translate time sequences of frequencies to simultaneous populations, its *memorial tail of past results, but* also ‘expand’ the 0-1 unit circle into the 1-∞ scale to observe isomorphic laws between probability and statistics.

THAT IS, between Time and Space: between the future occurrences of probability time and the past persistence of memorial populations; and how they are identical in their asymmetry, as one of them, populations, fades and dies away, while the other, the possible paths of future, converge and collapse, leaving finally only a trace of past, only an occurrence of future, in the collapse of memories that disappear, in the collapse of future dreams that are blown away.

All this of course requires a completely new outlook on the meaning of probabilities and populations. We might carry it through the fourth line before my tail of time disappears, but don’t count on it. This will always be an unfinished job, which might be aborted even before huminds notice its existence, by the disappearance not only of its author but of the humind itself (see the section on history and economics).

**Bridging the 5D theory of probability-statistics and the present state of the subject. **

But being more about the thoughts of God, the philosophical comments are the meaningful ones. Ultimately, what the probability sphere shows is how the palingenesis of the repetitive sum of partial events/cells/beings in time accumulates towards the ‘project’ of completing an emerging ∆+1 whole from its ‘pieces, bits and bites’, which first must be reproduced in enough numbers (statistical accuracy requires an N→∞ frequency of events), and then must collapse along the laws of aggregation described in probability and measure theory.

So the a priori condition for the laws of probability to happen is a massive reproduction of events; the a priori condition for a distribution of populations to become exact in the standard bell curve and its deviations is a massive reproduction of populations. Only then does the organic game of existence become efficient and exact, while when we deal with minimal numbers of social events/populations, the structures are inefficient, freer, less deterministic.


**Its mathematical expression. **

We can do so by considering the ∆-1 plane as that of probabilities, which *surface into the population 1-∞ plane only when the event has ‘happened’ with probability 1 and hence becomes a 1-unit of the population plane.*

Indeed, in the 0-1 sphere, two events are said to be mutually exclusive if they cannot both occur (under the complex of conditions S).

For example, in throwing a die, the occurrence of an even number of spots and of a three are mutually exclusive. An event A is called the union of events A1 and A2 if it consists of the occurrence of at least one of the events A1, A2. For example, in throwing a die, the event A, consisting of rolling 1, 2, or 3, is the union of the events A1 and A2, where A1 consists of rolling 1 or 2 and A2 consists of rolling 2 or 3. It is easy to see that for the number of occurrences m1, m2, and m of two mutually exclusive events A1 and A2 and their union A = A1 ∪ A2, we have the equation m = m1 + m2, or for the corresponding frequencies ƒ = ƒ1 + ƒ2.

This leads naturally to the following axiom for the addition of probabilities:

**2. P (A1 U A2) = P(A1) + P(A2)…**

if the events A1 and A2 are mutually exclusive and A1 ∪ A2 denotes their union.

Further, for an event U which is certain, we naturally take:

**3. P (U) = 1**

The whole mathematical theory of probability is constructed on the basis of these 3 axioms.

Once the event has happened, however, we are in the realm of counting populations in the 1-∞ sphere, and the correspondence holds between the probability distribution on the 0-1 plane and the population distribution on the 1-∞ plane.

In that sense the formal structure of the mathematical apparatus of the theory of probability is simple:

- Events are called mutually exclusive if their intersection is empty, i.e., if AB = N, where N is the symbol for an impossible event.
- The basic axiom of the elementary theory of probability consists of the requirement (cf. §2) that under the condition AB = N we have the equation:

**P (A U B) = P(A) + P(B)**

- The basic concepts of the theory of probability, namely random events and their probabilities, are completely analogous in their properties to plane figures and their areas. It is sufficient to understand by AB the intersection (common part) of two figures, by A ∪ B their union, by N the conventional “empty” figure, and by P(A) the area of the figure A, whereupon the analogy is complete.

The same remarks apply to the volumes of 3D figures.

The most general theory of entities of such a type, which contains as special cases the theory of volume and area, is now usually called measure theory.

It remains only to note that in the theory of probability, in comparison with the general theory of measure or in particular with the theory of area and volume, there is a certain special feature: A probability is never greater than one. This maximal probability holds for a necessary event U: **P (U) =1**

In the graph, the distribution of social groups as probabilities in time becomes more accurate as their indistinguishability increases. Models of the 1-plane, such as Fourier or probability models, show the forms of reproduction of a complex plane in lower @-geometries of existence.

The analogy is by no means superficial. It turns out that the whole mathematical theory of probability from the formal point of view may be constructed as a theory of measure, making the special assumption that the measure of “the entire space” U is equal to one.

Such an approach to the matter has produced complete clarity in the formal construction of the mathematical theory of probability and has also led to concrete progress not only in this theory itself but in other theories closely related to it in their formal structure. In the theory of probability success has been achieved by refined methods developed in the metric theory of functions of a real variable and at the same time probabilistic methods have proved to be applicable to questions in neighboring domains of mathematics not “by analogy,” but by a formal and strict transfer of them to the new domain. Wherever we can show that the axioms of the theory of probability are satisfied, the results of these axioms are applicable, even though the given domain has nothing to do with randomness in the actual world.

The existence of an axiomatized theory of probability preserves us from the temptation to define probability by methods that claim to construct a strict, purely formal mathematical theory on the basis of features of probability that are immediately suggested by the natural sciences. Such definitions roughly correspond to the “definition” in geometry of a point as the result of trimming down a physical body an infinite number of times, each time decreasing its diameter by a factor of 2.

With definitions of this sort, probability is taken to be the limit of the frequency as the number of experiments increases beyond all bounds. The very assumption that the experiments are probabilistic, i.e., that the frequencies tend to cluster around a constant value, will remain valid (and the same is true for the “randomness” of any particular event) only if certain conditions are kept fixed for an unlimited time and with absolute exactness. Thus the exact passage to the limit:

µ/n → p

cannot have any objective meaning. Formulation of the principle of stability of the frequencies in such a limit process demands that we define the allowable methods of setting up an infinite sequence of experiments, and this can only be done by a mathematical fiction. This whole conglomeration of concepts might deserve serious consideration if the final result were a theory of such distinctive nature that no other means existed of putting it on a rigorous basis. But, as was stated earlier, the mathematical theory of probability may be based on the theory of measure, in its presentday form, by simply adding the condition

P (U) =1

In general, for any practical analysis of the concept of probability, there is no need to refer to its formal definition. It is obvious that concerning the purely formal side of probability, we can only say the following: The probability P(A/S) is a number around which, under conditions S determining the allowable manner of setting up the experiments, the frequencies have a tendency to be grouped, and that this tendency will occur with greater and greater exactness as the experiments, always conducted in such a way as to preserve the original conditions, become more numerous, and finally that the tendency will reach a satisfactory degree of reliability and exactness during the course of a practicable number of experiments.

In fact, the problem of importance, in practice, is not to give a formally precise definition of randomness but to clarify as widely as possible the conditions under which randomness of the cited type will occur. One must clearly understand that, in reality, hypotheses concerning the probabilistic character of any phenomenon are very rarely based on immediate statistical verification. Only in the first stage of the penetration of probabilistic methods into a new domain of science has the work consisted of purely empirical observation of the constancy of frequencies.

We see that statistical verification of the constancy of the frequencies with an exactness ε requires a series of experiments, each consisting of n = 1/ε² trials.
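Tabulating that rule of thumb, n = 1/ε², for a few precisions (a minimal sketch):

```python
# Trials per series needed for an exactness epsilon, using the quoted
# rule n = 1 / epsilon^2.
for eps in (0.1, 0.02, 0.01):
    n = round(1 / eps**2)
    print(f"epsilon = {eps}: n = {n} trials")
# epsilon = 0.01 already demands 10,000 trials per series.
```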

**Classic Probability. **

The properties of probability, expressed by formulas (1), (2), and (3), serve as a sufficient basis for the construction of what is called the elementary theory of probability:

The union of any given number of events A1, A2, ···, An is defined as the event A consisting of the occurrence of at least one of these events. From the axiom of addition, we easily obtain for any number of pairwise mutually exclusive events A1, A2, ···, An and their union A,

P (A) = P(A1) + P(A2) + ··· + P(An)

(the so-called theorem of the addition of probabilities).

If the union of these events is an event that is certain (i.e., under the complex of conditions S one of the events Ak must occur), then:

**P(A1) + P(A2)+….+P(An) = 1**

In this case the system of events A1, ···, An is called a complete system of events.

We now consider two events A and B which, generally speaking, are not mutually exclusive. The event C is the intersection of the events A and B, written C = AB, if the event C consists of the occurrence of both A and B.

For example, if the event A consists of obtaining an even number in the throw of a die and B consists of obtaining a multiple of three, then the event C consists of obtaining a six.

In a large number n of repeated trials, let the event A occur m times and the event B occur l times, in k of which B occurs together with the event A. The quotient k/m is called the conditional frequency of the event B under the condition A. The frequencies k/m, m/n, and k/n are connected by the formula:

**k/m = (k/n) : (m/n)**

which naturally gives rise to the following definition:

The conditional probability P(B/A) of the event B under the condition A is the quotient

**P (B/A) = P (AB)/ P(A)**

Here it is assumed, of course, that P(A) ≠ 0.
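As a concrete check of that definition, take the die events used above: A = even number, B = multiple of three, so AB = {6}; a minimal sketch:

```python
from fractions import Fraction

outcomes = range(1, 7)                   # faces of a fair die
A = {x for x in outcomes if x % 2 == 0}  # even: {2, 4, 6}
B = {x for x in outcomes if x % 3 == 0}  # multiple of 3: {3, 6}
AB = A & B                               # intersection: {6}

P = lambda E: Fraction(len(E), 6)        # equiprobable faces

cond = P(AB) / P(A)                      # P(B/A) = (1/6) / (1/2)
print(cond)                              # 1/3
```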

If the events A and B are in no way essentially connected with each other, then it is natural to assume that event B will not appear more often, or less often, when A has occurred than when A has not occurred, i.e., that approximately k/m ≈ l/n or

**k/n = k/m • m/n ≈ l/n • m/n**

In this last approximate equation m/n = ƒA is the frequency of the event A, and l/n = ƒB is the frequency of the event B and finally k/n = ƒAB is the frequency of the intersection of the events A and B.

We see that these frequencies are connected by the relation:

ƒab ≈ ƒa x ƒb

For the probabilities of the events A, B and AB, it is therefore natural to accept the corresponding exact equation

**4. P (AB) = P(A) • P(B)**

Equation (4) serves to define the independence of two events A and B.

Similarly, we may define the independence of any number of events. Also, we may give a definition of the independence of any number of experiments, which means, roughly speaking, that the outcomes of any part of the experiments do not depend on the outcomes of the rest.
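Equation (4) can be verified on the same die: A = even number (P = 1/2) and B = multiple of three (P = 1/3) turn out to be independent, since the intersection, rolling a six, has probability exactly 1/6; a minimal sketch:

```python
from fractions import Fraction

outcomes = range(1, 7)                   # faces of a fair die
A = {x for x in outcomes if x % 2 == 0}  # even number
B = {x for x in outcomes if x % 3 == 0}  # multiple of three
P = lambda E: Fraction(len(E), 6)

# Equation (4): independence means P(AB) = P(A) * P(B).
print(P(A & B), P(A) * P(B))             # 1/6 1/6
```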

We now compute the probability Pk of precisely k occurrences of a certain event A in n independent tests, in each one of which the probability p of the occurrence of this event is the same. We denote by Ā the event that event A does not occur. It is obvious that

P (Ā) = 1 – P (A) = 1-p

From the definition of the independence of experiments it is easy to see that the probability of any specific sequence consisting of k occurrences of A and n – k nonoccurrences of A is equal to:

**5. pˆk (1 – p)ˆn–k**

Thus, for example, for n = 5 and k = 2 the probability of getting the sequence AĀAĀĀ will be p(1 – p)p(1 – p)(1 – p) = p²(1 – p)³

By the theorem on the addition of probabilities, Pk will be equal to the sum of the probabilities of all sequences with k occurrences and n – k nonoccurrences of the event A, i.e., Pk will be equal, from (5), to the product of the number of such sequences by pˆk(1 – p)ˆn–k. The number of such sequences is obviously equal to the number of combinations of n things taken k at a time, since the k positive outcomes may occupy any k places in the sequence of n trials.

Finally we get the binomial distribution:

**6. Pk = C(n, k) pˆk (1 – p)ˆn–k, where C(n, k) = n!/(k!(n – k)!)**
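The binomial probabilities just derived are direct to compute; a minimal sketch using the n = 5, k = 2 example with p = 1/2:

```python
from math import comb

def binomial_pk(n: int, k: int, p: float) -> float:
    """Probability of exactly k occurrences of A in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(binomial_pk(5, 2, 0.5))                         # 0.3125, i.e. 10/32

# Sanity check: summed over all k, the probabilities give a certain event.
print(sum(binomial_pk(5, k, 0.5) for k in range(6)))  # 1.0
```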

Direct examination of the mass of observations makes clear only the very simplest statistical laws; it uncovers only a few of the basic probabilities involved. But then, by means of the laws of the theory of probability, we use these simplest probabilities to compute the probabilities of more complicated occurrences and deduce the statistical laws that govern them.

Sometimes we succeed in completely avoiding massive statistical material, since the probabilities may be defined by sufficiently convincing considerations of symmetry. For example, the traditional conclusion that a die, i.e., a cube made of a homogeneous material, will fall, when thrown to a sufficient height, with equal probability on each of its faces was reached long before there was any systematic accumulation of data to verify it by observation. Systematic experiments of this kind have been carried out in the last three centuries, chiefly by authors of textbooks in the theory of probability, at a time when the theory of probability was already a well-developed science. The results of these experiments were satisfactory, but the question of extending them to analogous cases scarcely arouses interest. For example, as far as we know, no one has carried out sufficiently extensive experiments in tossing homogeneous dice with twelve sides. But there is no doubt that if we were to make 12,000 such tosses, the twelve-sided die would show each of its faces approximately a thousand times.

The basic probabilities derived from arguments of symmetry or homogeneity also play a large role in many serious scientific problems, for example in all problems of collision or near approach of molecules in random motion in a gas; another case where the successes have been equally great is the motion of stars in a galaxy. Of course, in these more delicate cases we prefer to check our theoretical assumptions by comparison with observation or experiment.

** The Law of Large Numbers and Limit Theorems**

It is completely natural to wish for greater quantitative precision in the proposition that in a “long” series of tests the frequency of an occurrence comes “close” to its probability. But here we must form a clear notion of the delicate nature of the problem. In the most typical cases in the theory of probability, the situation is such that in an arbitrarily long series of tests it remains theoretically possible that we may obtain either of the two extremes for the value of the frequency µ/n: zero or one.

Thus, whatever may be the number of tests n, it is impossible to assert with complete certainty that we will have, say, the inequality |µ/n – p| < ε, for some preassigned ε.

For example, if the event A is the rolling of a six with a die, then in n trials the probability that we will turn up a six on all n trials is (1/6)ˆn > 0; in other words, with probability (1/6)ˆn we will obtain a frequency of rolling a six equal to one; and with probability (5/6)ˆn > 0 a six will not come up at all, i.e., the frequency of rolling a six will be equal to zero.
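Both vanishing-but-positive extremes can be tabulated; a minimal sketch:

```python
# Probability that n throws of a fair die are all sixes (frequency 1)
# or contain no six at all (frequency 0).
for n in (1, 10, 100):
    all_sixes = (1 / 6) ** n
    no_sixes = (5 / 6) ** n
    print(n, all_sixes, no_sixes)
# Both remain strictly positive for every finite n, so no nontrivial bound
# on the frequency can ever be asserted with complete certainty.
```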

In all similar problems any nontrivial estimate of the closeness of the frequency to the probability cannot be made with complete certainty, but only with some probability less than one.

For example, it may be shown that in independent tests, with constant probability p of the occurrence of an event in each test the inequality

7. |µ/n – p| < 0.02

for the frequency μ/n will be satisfied, for n = 10,000 (and any p), with probability

8. P>0.9999

Here we wish first of all to emphasize that in this formulation the quantitative estimate of the closeness of the frequency μ/n to the probability p involves the introduction of a new probability P.

The practical meaning of the estimate (8) is this: If we carry out N sets of n tests each, and count the M sets in which inequality (7) is satisfied, then for sufficiently large N we will have approximately

9. M/N≈P>0.9999

But if we wish to define the relation (9) more precisely, either with respect to the degree of closeness of M/N to P, or with respect to the confidence with which we may assert that (9) will be verified, then we must have recourse to general considerations of the kind introduced previously in discussing what is meant by the closeness of μ/n and p. Such considerations may be repeated as often as we like, but it is clear that this procedure will never allow us to be free of the necessity, at the last stage, of referring to probabilities in the primitive imprecise sense of this term.
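The practical reading of (8)-(9) can be simulated: run N series of n = 10,000 Bernoulli trials each and count the fraction M/N of series satisfying inequality (7). A minimal sketch (p = 0.5 and N = 200 are arbitrary choices, kept small for speed):

```python
import random

random.seed(0)                # reproducible run
p, n, N = 0.5, 10_000, 200    # assumed parameters; (7)-(8) hold for any p

M = 0
for _ in range(N):
    mu = sum(random.random() < p for _ in range(n))  # occurrences of A
    if abs(mu / n - p) < 0.02:                       # inequality (7)
        M += 1

print(M / N)   # close to 1, in line with P > 0.9999 in (8)
```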

**Further Remarks on the Basic Concepts of the Theory of Probability**

In speaking of random events, which have the property that their frequencies tend to become stable, i.e., in a long sequence of experiments repeated under fixed conditions, their frequencies are grouped around some standard level, called their probability P(A/S), we were guilty, in §1, of a certain vagueness in our formulations, in two respects. In the first place, we did not indicate how long the sequence of experiments nr must be in order to exhibit beyond all doubt the existence of the supposed stability; in other words, we did not say what deviations of the frequencies μr/nr, from one another or from their standard level p were allowable for sequences of trials n1, n2, ···, ns of given length. This inexactness in the first stage of formulating the concepts of a new science is unavoidable. It is no greater than the well-known vagueness surrounding the simplest geometric concepts of point and straight line and their physical meaning. More fundamental, however, is the second lack of clearness concealed in our formulations; it concerns the manner of forming the sequences of trials in which we are to examine the stability of the frequency of occurrence of the event A.

As stated earlier, we are led to statistical and probabilistic methods of investigation in those cases in which an exact specific prediction of the course of events is impossible. But if we wish to create in some artificial way a sequence of events that will be, as far as possible, purely random, then we must take special care that there shall be no methods available for determining in advance those cases in which A is likely to occur with more than normal frequency.

Such precautions are taken, for example, in the organization of government lotteries. If in a given lottery there are to be M winning tickets in a drawing of N tickets, then the probability of winning for an individual ticket is equal to p = M/N. This means that in whatever manner we select, in advance of the drawing, a sufficiently large set of n tickets, we can be practically certain that the ratio μ/n of the number μ of winning tickets in the chosen set to the whole number n of tickets in this set will be close to p. For example, people who prefer tickets labeled with an even number will not have any systematic advantage over those who prefer tickets labeled with odd numbers, and in exactly the same way there will be no advantage in proceeding on the principle, say, that it is always better to buy tickets with numbers having exactly three prime factors, or tickets whose numbers are close to those that were winners in the preceding lottery, etc.

Similarly, when we are firing a well-constructed gun of a given type, with a well-trained crew and with shells that have been subjected to a standard quality control, the deviation from the mean position of the points of impact of the shells will be less than the previously determined probable deviation B in approximately half the cases. This fraction remains the same in a series of successive trials, and also in case we count separately the number of deviations that are less than B for even-numbered shots (in the order of firing) or for odd-numbered. But it is completely possible that if we were to make a selection of particularly homogeneous shells (with respect to weight, etc.), the scattering would be considerably decreased, i.e., we would have a sequence of firings for which the fraction of the deviations which are greater than the standard B would be considerably less than a half.

Thus, to say that an event A is “random” or “stochastic” and to assign it a definite probability

p = P(A/S)

is possible only when we have already determined the class of allowable ways of setting up the series of experiments. The nature of this class will be assumed to be included in the conditions S.

For given conditions S the properties of the event A of being random and of having the probability p = P(A/S) express the objective character of the connection between the condition S and the event A. In other words, there exists no event which is absolutely random; an event is random or is predetermined depending on the connection in which it is considered, but under specific conditions an event may be random in a completely nonsubjective sense, i.e., independently of the state of knowledge of any observer. If we imagine an observer who can master all the detailed distinctive properties and particular circumstances of the flight of shells, and can thus predict for each one of them the deviation from the mean trajectory, his presence would still not prevent the shells from scattering in accordance with the laws of the theory of probability, provided, of course, that the shooting was done in the usual manner, and not according to instructions from our imaginary observer.

In this connection we note that the formation of a series of the kind discussed earlier, in which there is a tendency for the frequencies to become constant in the sense of being grouped around a normal value, namely the probability, proceeds in the actual world in a manner completely independent of our intervention. For example, it is precisely by virtue of the random character of the motion of the molecules in a gas that the number of molecules which, even in a very small interval of time, strike an arbitrarily preassigned small section of the wall of the container (or of the surface of bodies situated in the gas) proves to be proportional with very great exactness to the area of this small piece of the wall and to the length of the interval of time. Deviations from this proportionality in cases where the number of hits is not large also follow the laws of the theory of probability and produce phenomena of the type of Brownian motion, of which more will be said later.
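The proportionality of wall hits to the size of the section can be illustrated by a minimal model (a Python sketch, not from the original text; uniform random hit positions stand in for molecular impacts):

```python
import random

rng = random.Random(1)
N = 200_000                              # hits on a unit wall in some time interval
hits = [rng.random() for _ in range(N)]  # each hit lands at a uniform position

def count_in(a, b):
    """Number of hits striking the section [a, b) of the wall."""
    return sum(a <= x < b for x in hits)

c1 = count_in(0.10, 0.20)  # section of length 0.10
c2 = count_in(0.55, 0.75)  # section twice as long
ratio = c2 / c1            # close to 2, with ~1/sqrt(n) relative fluctuations
```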

We turn now to the objective meaning of the concept of independence. We recall that the conditional probability of an event A under the condition B is defined by the formula

P(A/B) = P(AB) / P(B)

We also recall that events A and B are called independent if, as in (4),

P (AB) = P (A) P (B)

From the independence of the events A and B and the fact that P(B) > 0 it follows that

P (A/B) = P (A)

All the theorems of the mathematical theory of probability that deal with independent events apply to any events satisfying the condition (4), or to its generalization to the case of the mutual independence of several events. These theorems will be of little interest, however, if this definition bears no relation to the properties of objective events which are independent in the causal sense.
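Condition (4) and the conditional-probability formula can be verified exactly on a toy sample space (a Python sketch under the classical equally-likely model; the events chosen here are illustrative):

```python
from fractions import Fraction

# Sample space: two fair dice, 36 equally likely outcomes.
omega = [(a, b) for a in range(1, 7) for b in range(1, 7)]

def P(event):
    """Classical probability: favorable outcomes over all outcomes."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: w[0] % 2 == 0   # first die shows an even number
B = lambda w: w[1] >= 5       # second die shows 5 or 6
AB = lambda w: A(w) and B(w)

# Independence in the sense of (4): P(AB) = P(A) P(B),
# and therefore P(A/B) = P(AB)/P(B) = P(A).
assert P(AB) == P(A) * P(B)
assert P(AB) / P(B) == P(A)
```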

Naturally, in dealing with the concept of independence, we must not proceed in too absolute a fashion. For example, from the law of universal gravitation, it is an undoubted fact that the motions of the moons of Jupiter have a certain effect, say, on the flight of an artillery shell. But it is also obvious that in practice this influence may be ignored. From the philosophical point of view, we may perhaps, in a given concrete situation, speak more properly not of the independence but of the insignificance of the dependence of certain events. However that may be, the independence of events in the cited concrete and relative sense of this term in no way contradicts the principle of the universal interconnection of all phenomena; it serves only as a necessary supplement to this principle.

The computation of probabilities from formulas derived by assuming the independence of certain events is still of practical interest in cases where the events were originally independent but became interdependent as a result of the events themselves. For example, one may compute probabilities for the collision of particles of cosmic radiation with particles of the medium penetrated by the radiation, on the assumption that the motion of the particles of the medium, up to the time of the appearance near them of a rapidly moving particle of cosmic radiation, proceeds independently of the motion of the cosmic particle. One may compute the probability that a hostile bullet will strike the blade of a rotating propeller, on the assumption that the position of the blade with respect to the axis of rotation does not depend on the trajectory of the bullet, a supposition that will of course be wrong with respect to the bullets of the aviator himself, since they are fired between the blades of the rotating propeller. The number of such examples may be extended without limit.

It may even be said that wherever probabilistic laws turn up in any clear-cut way we are dealing with the influence of a large number of factors that, if not entirely independent of one another, are interconnected only in some weak sense.

This does not at all mean that we should uncritically introduce assumptions of independence. On the contrary, it leads us, in the first place, to be particularly careful in the choice of criteria for testing hypotheses of independence, and second, to be very careful in investigating the borderline cases where dependence between the facts must be assumed but is of such a kind as to introduce complications into the relevant laws of probability. We noted earlier that the classical Russian school of the theory of probability has carried out far-reaching investigations in this direction.

To bring to an end our discussion of the concept of independence, we note that, just as with the definition of independence of two events given in formula (4), the formal definition of the independence of several random variables is considerably broader than the concept of independence in the practical world, i.e., the absence of causal connection.

For example, in order to establish that in a given concrete problem the probability is defined with an exactness of 0.0001, it is necessary to carry out a series of experiments containing approximately 100,000,000 trials in each.

The hypothesis of probabilistic randomness is much more often introduced from considerations of symmetry or of successive series of events, with subsequent verification of the hypothesis in some indirect way. For example, since the number of molecules in a finite volume of gas is of the order of 10²⁰ or more, the number √n, corresponding to the probabilistic deductions made in the kinetic theory of gases, is very large, so that many of these deductions are verified with great exactness. Thus, the pressures on the opposite sides of a plate suspended in still air, even if the plate is of microscopic dimensions, turn out exactly the same, although an excess of pressure on one side of the order of a thousandth of one per cent can be detected in a properly arranged experiment.
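The 1/√n accuracy behind both remarks, that an exactness of 0.0001 needs on the order of 100,000,000 trials, and that enormous molecule counts make the kinetic deductions extremely exact, can be seen in a small experiment (a Python sketch; the sizes are illustrative):

```python
import random

def mean_abs_dev(n, trials=200, p=0.5, seed=2):
    """Average deviation |mu/n - p| over many repeated series of n trials."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        mu = sum(rng.random() < p for _ in range(n))
        total += abs(mu / n - p)
    return total / trials

d_small = mean_abs_dev(100)     # typical error ~ 0.04
d_large = mean_abs_dev(10_000)  # 100x the trials buys only 10x the accuracy
```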

§5. Deterministic and Random Processes

The principle of causal relation among phenomena finds its simplest mathematical expression in the study of physical processes by means of differential equations.

Let the state of the system under study be defined at the instant of time t by n parameters:

**x₁, x₂, …, xₙ**

The rates of change of these parameters are expressed by their derivatives with respect to time:

dx₁/dt, dx₂/dt, …, dxₙ/dt

If it is assumed that these rates are functions of the values of the parameters, then we get a system of differential equations:

dxᵢ/dt = fᵢ(x₁, x₂, …, xₙ),   i = 1, 2, …, n

The greater part of the laws of nature discovered at the time of the birth of mathematical physics, beginning with Galileo’s law for falling bodies, are expressed in just such a manner. Galileo could not express his discovery in this standard form, since in his time the corresponding mathematical concepts had not yet been developed, and this was first done by Newton.

In mechanics and in any other fields of physics, it is customary to express these laws by differential equations of the second order.

Given the values z₀ of the height and υ₀ of the velocity of a falling body at the initial instant t₀, the values of z and υ for all further instants t are computed uniquely, up to the time that the falling body hits the surface of the earth.
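The determinism of such schemes is easy to exhibit numerically: integrating dz/dt = υ, dυ/dt = −g from the initial data (z₀, υ₀) reproduces the unique trajectory (a Python sketch using Euler steps; the constants are illustrative):

```python
def fall(z0, v0, g=9.8, dt=1e-4, t_end=1.0):
    """Integrate dz/dt = v, dv/dt = -g step by step from the state (z0, v0)."""
    z, v = z0, v0
    for _ in range(int(t_end / dt)):
        z += v * dt
        v -= g * dt
    return z, v

# Given the initial data, the later state is computed uniquely:
z, v = fall(z0=100.0, v0=0.0)
exact_z = 100.0 - 0.5 * 9.8 * 1.0**2  # closed-form solution z0 - g t^2 / 2
exact_v = -9.8                        # closed-form solution v0 - g t
```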

The proponents of mechanistic materialism assumed that such a formulation is an exact and direct expression of the deterministic character of the actual phenomena, of the physical principle of causation. According to Laplace, the state of the world at a given instant is defined by an infinite number of parameters, subject to an infinite number of differential equations. If some “universal mind” could write down all these equations and integrate them, it could then predict with complete exactness, according to Laplace, the entire evolution of the world in the infinite future.

But in fact this quantitative mathematical infinity is extremely coarse in comparison with the qualitatively inexhaustible character of the real world. Neither the introduction of an infinite number of parameters nor the description of the state of continuous media by functions of a point in space is adequate to represent the infinite complexity of actual events.

The study of actual events does not always proceed in the direction of increasing the number of parameters introduced into the problem; in general, it is far from expedient to complicate the point ω which describes the separate “states” of the system in our mathematical scheme. The art of the investigation consists rather in finding a very simple space Ω (i.e., a set of values of ω or in other words, of different possible states of the system),* such that if we replace the actual process by varying the point ω in a determinate way over this space, we can include all the essential aspects of the actual process.

But if from an actual process we abstract its essential aspects, we are left with a certain residue which we must consider to be random. The neglected random factors always exercise a certain influence on the course of the process. Very few of the phenomena that admit mathematical investigation fail, when theory is compared with observation, to show the influence of ignored random factors. This is more or less the state of affairs in the theory of planetary motion under the force of gravity: The distance between planets is so large in comparison with their size that the idealized representation of them as material points is almost perfectly satisfactory; the space in which they are moving is filled with such dispersed material that its resistance to their motion is vanishingly small; the masses of the planets are so large that the pressure of light plays almost no role in their motions.

These exceptional circumstances explain the fact that the mathematical solution for the motion of a system of n material points, whose “states” are described by 6n parameters* which take into account only the force of gravity, agrees so astonishingly well with observation of the motion of the planets.

Somewhat similar to the case of planetary motion is the flight of an artillery shell under gravity and resistance of the air. This is also one of the classical regions in which mathematical methods of investigation were comparatively easy and quickly produced great success. But here the role of the perturbing random factors is significantly larger and the scattering of the shells, i.e., their deviation from the theoretical trajectory reaches tens of meters, or for long ranges even hundreds of meters. These deviations are caused partly by random deviations in the initial direction and velocity, partly by random deviations in the mass and the coefficient of resistance of the shell, and partly by gusts and other irregularities in the wind and the other random factors governing the extraordinarily complicated and changing conditions in the actual atmosphere of the earth.

The scattering of shells is studied in detail by the methods of the theory of probability, and the results of this study are essential for the practice of gunnery.

But what does it mean, properly speaking, to study random events? It would seem that, when the random “residue” for a given formulation of a phenomenon proves to be so large that it can not be neglected, then the only possible way to proceed is to describe the phenomenon more accurately by introducing new parameters and to make a more detailed study by the same method as before.

But in many cases such a procedure is not realizable in practice. For example, in studying the fall of a material body in the atmosphere, with account taken of an irregular and gusty (or, as one usually says, turbulent) wind flow, we would be required to introduce, in place of the two parameters z and υ, an altogether unwieldy mathematical apparatus to describe this structure completely.

But in fact this complicated procedure is necessary only in those cases where for some reason we must determine the influence of these residual “random” factors in all detail and separately for each individual factor. Fortunately, our practical requirements are usually quite different; we need only estimate the total effect exerted by the random factors for a long interval of time or for a large number of repetitions of the process under study.

As an example, let us consider the shifting of sand in the bed of a river, or in a hydroelectric construction. Usually this shifting occurs in such a way that the greater part of the sand remains undisturbed, while only now and then a particularly strong turbulence near the bottom picks up individual grains and carries them to a considerable distance, where they are suddenly deposited in a new position. The purely theoretical motion of each grain may be computed individually by the laws of hydrodynamics, but for this it would be necessary to determine the initial state of the bottom and of the flow in every detail and to compute the flow step by step, noting those instants when the pressure on any particular grain of sand becomes sufficient to set it in motion, and tracing this motion until it suddenly comes to an end. The absurdity of setting up such a problem for actual scientific study is obvious. Nevertheless the average laws or, as they are usually called, the statistical laws of shifting of sand over river bottoms are completely amenable to investigation.

Examples of this sort, where the effect of a large number of random factors leads to a completely clear-cut statistical law, could easily be multiplied. One of the best known and at the same time most fascinating of these, in view of the breadth of its applications, is the kinetic theory of gases, which shows how the joint influence of random collisions of molecules gives rise to exact laws governing the pressure of a gas on the wall, the diffusion of one gas through another, and so forth.

**§6. Random Processes of Markov Type**

To A. A. Markov is due the construction of a probabilistic scheme which is an immediate generalization of the deterministic scheme of §5, described by an equation of the form ω = F(t₀, ω₀; t). It is true that Markov considered only the case where the phase space of the system consists of a finite number of states Ω = (ω₁, ω₂, ···, ωₙ) and studied the change of state of the system only for changes of time t in discrete steps. But in this extremely schematic model he succeeded in establishing a series of fundamental laws.

Instead of a function F, uniquely defining the state ω at time t > t₀ corresponding to the state ω₀ at time t₀, Markov introduced the probabilities:

P(t₀, ωᵢ; t, ωⱼ)

of obtaining the state ωⱼ at time t under the condition that at time t₀ we had the state ωᵢ. These probabilities are connected for any three instants of time **t₀ < t₁ < t₂** by a relation, introduced by Markov, which may be called the basic equation for a Markov process:

P(t₀, ωᵢ; t₂, ωⱼ) = Σₖ P(t₀, ωᵢ; t₁, ωₖ) P(t₁, ωₖ; t₂, ωⱼ)
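For a finite phase space this basic relation is just matrix multiplication of transition probabilities, summed over the intermediate state. A minimal numerical check (a Python sketch with an illustrative 3-state chain, assuming the one-step transition probabilities do not depend on time):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# One-step transition probabilities of a 3-state chain: row i, column j
# is the probability of passing from state i to state j.
P = [[0.8, 0.1, 0.1],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]

# Markov's basic equation: the two-step probability from i to j is the sum
# over every intermediate state k of P(i -> k) * P(k -> j).
P2 = mat_mul(P, P)
direct = sum(P[0][k] * P[k][2] for k in range(3))
assert abs(P2[0][2] - direct) < 1e-12
```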

When the phase space is a continuous manifold, the most typical case is that a probability density p(t₀, ω₀; t, ω) exists for passing from the state ω₀ to the state ω in the interval of time (t₀, t). In this case the probability of passing from the state ω₀ to any of the states ω belonging to a domain G in the phase space Ω is written in the form:

P(t₀, ω₀; t, G) = ∫_G p(t₀, ω₀; t, ω) dω   (35)

Equation (35) is usually difficult to solve, but under known restrictions we may deduce from it certain partial differential equations that are easy to investigate. Some of these equations were derived from nonrigorous physical considerations by the physicists Fokker and Planck. In its complete form this theory of so-called stochastic differential equations was constructed by Soviet authors, S. N. Bernšteĭn, A. N. Kolmogorov, I. G. Petrovskiĭ, A. Ja. Hinčin, and others.

We will not give these equations here.

The method of stochastic differential equations allows us, for example, to solve without difficulty the problem of the motion in still air of a very small body, for which the mean velocity c of its fall is significantly less than the velocity of the “Brownian motion” arising from the fact that, because of the smallness of the particle, its collisions with the molecules of the air are not in perfect balance on its various sides.
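The qualitative content of that statement, diffusion dominating a small drift, already shows up in the crudest model of Brownian motion, a symmetric random walk (a Python sketch, not part of the original text; step and walker counts are illustrative):

```python
import random

def mean_square_displacement(steps, walkers=2000, seed=3):
    """Average x^2 over many independent symmetric +-1 random walks."""
    rng = random.Random(seed)
    total = 0
    for _ in range(walkers):
        x = sum(rng.choice((-1, 1)) for _ in range(steps))
        total += x * x
    return total / walkers

# For an unbiased walk the mean square displacement grows like the number
# of steps, so the random wandering outruns any sufficiently small drift.
m100 = mean_square_displacement(100)
m400 = mean_square_displacement(400)
```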

**S≈T HOW TIME EVENTS BECOME STATISTICAL POPULATIONS.**

**The 1-11¹¹ infinity.**

The KEY question to formalize probability as population is how to ‘translate scales’, which tend to be decametric, into populations… good!

One of the few things that work right on the human mind and do not have to be adapted to the Universal mind, from d•st to ∆ûst.

Shall we study them downwards, through ‘finitesimal decimal scales’ or upwards, through decametric, growing ones?

Answer, an essential law of Absolute relativity goes as follows:

‘The study of decametric, §+ scales (10§≈10•^{10} ∆ ≈ ∆+1) is symmetric to the study of the inverse, decimal ∆>∆-1 scale’.

Or in its most reduced ‘formula’: **(∞ = (1) = 0): (∞-1) ≈ (1-0)**

**Whereas** ∞ is the perception of the whole ‘upwards’ in the domain of 1, the minimal quanta to the relative ∞ of the ∆+1 scale. While 1 is the relative infinite of a system observed downwards, such as ∆+1 (1) is composed of a number of ‘finitesimal parts’ whose minimal quanta is 0.

So in absolute relativity the ∆-1 world goes from 1 to 0, and the ∆+1 equivalent concept goes from 1 to ∞. And so now we can also extract of the ‘infinitorum thought receptacle’ a key difference between both mathematical techniques:

*A conceptual analysis upwards has a defined lower point-quanta, 1, and an undefined upper ∞ limit. While a downwards analysis has an upper defined whole limit, 1, and an undefined finitesimal minimum, +0.*

So the smart reader will notice this absolute relative duality of ∆±1, where ∆@ is the ‘observer’, implies relativity of knowledge, always with a self-centered element to define it, and the relative definition of finite infinities, or ∆+1 limit (ab. ∞), and finite infinitesimals (+0).

This brings an essential isomorphism of absolute relativity (do NOT confuse ∆-equality with S-ynchronicity, Ti-somorphism and @dentity; we ‘repeat’, as I know, if any human ever gets to read those texts, there is TOO much upgrading, and not to get dizzy we DO repeat essential truths).

I-somorphism is the concept of equality in time-information, and it is at the heart of the POSSIBILITIES to do mathematical PROOFS in different, seemingly non-identical space-time domains.

When we apply the identity of ∞|^{0 }(here written in inverse fashion) as in the title of this post on ‘number theory’, poised to complete what my fellow countryman, Fermat, started, we *understand the why of numbers and its techniques, so far only made explicit as most science is on how-terms:*

The real numbers NOW always include an inverse infinity between each 1+1 interval. YET they can provide satisfactory models for a variety of phenomena, even though no physical quantity can be measured accurately to more than a dozen or so decimal places; as 0 now is the undetermined lower limit.

It is *not* the values of infinitely many decimal places that apply to the real world but the *deductive* structures that they embody and enable due to the equivalence of 0≈1≈∞.

Analysis and its inverse integral and derivative calculus ‘drinks’ on all this.

Thus it came into being because many aspects of the natural world can profitably be modeled by those equivalences, as being continuous—at least, to an excellent degree of approximation. Again, this is a question of modeling, not of reality. Matter is not truly continuous; if matter is subdivided into sufficiently small pieces, then indivisible components, or atoms, will appear and finally we *will find the finitesimal +0 quanta*.

But atoms are extremely small, and, for most applications, treating matter as though it were a continuum introduces negligible error while greatly simplifying the computations; *when we work on the 1-∞ upper ∆+1 scale, that of the cosmological realm; whereas the intermediate scale is that of ∆o human thermodynamics. So we can then state in physical systems the equivalence of:*

*+0 (quantum physics) ≈ |-thermodynamics ≈ ∞ (bound infinity): gravitational scale.*

And all that is above quantum effects, in ‘classic physics’, can be studied in a continuum modeling, which is standard engineering practice when studying the flow of fluids such as air or water, the bending of elastic materials, the distribution or flow of electric current, the flow of heat, and so on (all ∆>∆+1 physical systems).

**The interchangeability of probabilities and populations.**

So bits of frequencies of time vs. quanta of populations of space. As frequencies of time become ‘populations’ of space *once they have been born in time, and ‘settled’ in space.*

So, we shall talk of the symmetries between frequencies of future time that become space populations accumulated in the past, till they both recede in size as we keep growing in scales, and become indistinguishable, continuous, and quiet. And then we are studying pie space ‘surfaces’, through the topological definitions of adjacency, equality, perpendicularity and parallelism; now with a vital sense-meaning, which we shall evolve in our upgrading of the ‘Universal grammar=syntax’ of those languages, when in the 3rd line we start in earnest to construct the different rhythms of time space-change.

Nature though always has a goal: iterate a body-wave of energy, with those parameters.

Since *at the end of the journey we shall see that what nature cares about is to reproduce its fractal st assemblies of creative patterns of space-time. As all is reproduction. The ultimate substance of reality is motion with form, and motion is the reproduction of form along an ∆-1 disordered region of quanta of space that a whole ∆-being will mold and ‘rise’.*

And again those processes will be described mathematically with some basic operandi, which describe the union of a surface of smaller continuous simultaneous quanta of space from the past larger, less informed scales, and of quantum frequencies of time.

So the obvious rule, as time and space planes are perpendicular, is that a function of st is a multiplicative, reproductive wave function; one of ± superpositions, one of fields and particles; with the potentiation being an integral form of expressing the different scales in one parameter, which grows decametrically through those scales, or exponentially.

What then is the eˣ function? Obviously that of death and decay translations; the 10-scale that of reproduction; and the body and head, again reproductive and entropic data-decay, merging all those meanings together.

*So patterns in Nature are just frequencies of bits of time and populations of quanta in space.*

And so the subtle differences between both concepts and *when to use them, for entities which often science confuses due to lack of perception (as in quantum physics where often time processes are considered spatial), will be essential for the streamlining made by stiences.*

In general we talk of time cycles which create spatial populations, with different ‘degrees’ of persistence into the relative past. So motions in space (locomotions) have hardly any persistence in time (we do not leave a trace of parallel forms) except in the simplest beings (waves of light and so on); but reproductions in time leave a persistence of populations. And so the time bit becomes a space quanta. And this again is an important phenomenon in the simplest forms (waves and particles) of the ∆-3, 4 scales.

**RANDOMNESS OF MARKOVIAN PROCESSES IN SPACE VS. CAUSALITY OF DEEP TIME SCALES.**

The key fundamental element of probability and statistics, as humans conceive it, is its randomness, *which implies in most cases the study of an entropic future, of equal opportunities for all the ‘dimensions’ of existence of the system.*

Yet even random phenomena become ordered. Why? Because even in the most ‘memoryless’ process, the chances of the future will be reduced to a minimal set of ‘dual and ternary’ processes of branching, and the only character of pure randomness is that *all of them will have a similar relative chance. I.e. a die has three spatial dimensions (3 double faces), which we can consider each to be the ± dual inverse direction (opposite ones). So we are in a 3 x 2 system. And all directions are of equal value, because they are ‘spatial orientations’ without any ‘topological bias’ (i.e. not Spe, ST, or Tiƒ different ‘faces’, which could trick the die, as for example, a region with ‘heavier Tiƒ-mass’, a Tiƒ centre of gravity, displaced, and so on).*

So in principle, random phenomena have a predictable nature shown in the regularity of the ‘grand numbers law’, and this brings us also a reflection on ‘time events’, since as they repeat, they *always tend to close a conservative energy-zero sum world cycle, which increases the probability and determinism of a process. As in the long term all world cycles become closed, and the Universe becomes in its relative infinite duration always a balanced world cycle, with all the dimensions of scale, topology and time coming to a zero-sum.*

Thus those are the insights of GST in the mysteries of ‘probability: zero sum world cycles imply the law of grand numbers (regularity and equal probability) as long as the process is ‘spatial’ (memoryless, simultaneous, with no preferred direction, etc.) So as in geometry we say that from ‘long spatial distances’ all lines become geodesics (as when you come out in scale of the flat earth), in time we say that all processes find equilibrium among all the possible events of the system.
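The ‘grand numbers law’ regularity for such an unbiased 3 x 2 system can be checked directly (a minimal Python sketch, not part of the text; the roll count is illustrative):

```python
import random

rng = random.Random(4)
n = 60_000
counts = [0] * 6
for _ in range(n):
    counts[rng.randrange(6)] += 1  # one roll of an unbiased die

# With no privileged face, every frequency settles near 1/6.
freqs = [c / n for c in counts]
```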

**Classic laws of probability**

**ST**: Thus as a mathematical foundation for statistics, probability theory is essential to many human activities that involve quantitative analysis of large sets of data. Probability theory is thus concerned with the analysis of random phenomena: random variables, stochastic processes, and events.

**∆º:** Further on, methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state, as in statistical mechanics. Here we observe the essential duality between the human observer, which introduces uncertainty, and a possible certain reality; which humans often confuse (quantum paradoxes). But, *and this will be the difference, the results of human uncertainty can be ‘clarified’ and considered certain if the outcome IS NOT PURELY RANDOM, with the usual bell curves etc. proper of true stochastic systems. Then we must conclude that the only randomness is that of the human observer.*

Hence, the great discovery of twentieth century physics, which was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics, can be judged to be ‘an error of the human observer’, in as much as its results are clearly ‘quantised’, fairly determined, NOT purely statistical. (And the solution to this conundrum is the deterministic pilot-wave theory.)

**How temporal events become space populations.**

So in the next graph, we observe how truly stochastic systems look, both *in time, as a ‘cumulative’ population of events, and in space, as a distribution with a mean of maximal probability. The erf (error function, as mathematicians call it) is really the time symmetry of the distribution function; hence its sigmoid, cumulative form.*

In the graphs, distributions of frequencies in time equal populations in space, both defined by the same ‘magic formula’, which we shall study in detail. It is one of the fundamental symmetries of fractal space (measured in populations of identical entities/‘points’) and cyclical time (measured in frequency of time events/numbers); and it is ultimately one of the clearest expressions of S≈T and the equality of the 2 ‘real units’ of spatial geometry (points) and temporal algebra (numbers); as the total of events becomes 1, and so does the total volume of the curve, but *as this is the essential difference between space and time: space is symmetric, both left and right, but the curve that represents time is NOT; it has directionality, and it has no simultaneity.*

Thus stochastic populations in space tend to a simultaneous distribution around the mean ‘0-line’ position considered the central ‘point-line’ of the ‘phase space’, while the temporal cumulative arrow tends towards a ternary distribution in 3 clear ages of different growth, with an exponential growth in the ‘mature’ age of the system (middle region), and a beginning and an end, which are ‘reversed in form’ (first young age of the event, and third final age of the event).
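The two shapes, symmetric bell in ‘space’ versus directional S-curve in ‘time’, correspond to the normal density and its cumulative erf form, which can be computed directly (a Python sketch using only the standard library):

```python
import math

def normal_pdf(x):
    """The bell-shaped density: symmetric about the mean 0-line."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def normal_cdf(x):
    """The cumulative S-curve, expressed through the error function erf."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# The density is symmetric: pdf(-x) == pdf(x).
# The cumulative curve is directional: cdf(-x) == 1 - cdf(x), rising from 0 to 1.
assert abs(normal_pdf(-1.3) - normal_pdf(1.3)) < 1e-15
assert abs(normal_cdf(-1.3) - (1 - normal_cdf(1.3))) < 1e-12
```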

In the graph, the beauty of pure stochastic processes, and its bell curve and logistic curve, which finally saturates the ‘field’, resides in its universality and capacity to describe the fundamental events of reality.

Now there are many themes that connect the law of probability distribution with those of GST parameters. Consider the 3 ages law with the middle age as the most ‘abundant’. As we have described the 3 ages of the logistic curve of growth, we can now consider what should be obvious to the GST professional (me, I and myself:). The present ST age between the points of inflection should be worth the fundamental ‘harmonics’ of GST, bidimensional and ternary functions, either 1/2 or 2/3rds of the total value of the function, and within the ±3 iconic value, as it happens with eˣ we shall find all the values worth (emergence, first age, 50% of maturity, second age and age of extinction). And so we do find indeed the commonest normal distributions of ±2/3rds at sigma 1 and over 99% at sigma 3, which is therefore *the whole existence in time and space of any system of the Universe.* While the 50% comes at sigma 0.6745, a beautiful number, whose secret shall remain (with me, I and myself:).
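The quoted figures, roughly 2/3rds of the mass inside 1 sigma, over 99% inside 3 sigma, and exactly half inside 0.6745 sigma, follow from the error function (a short Python check using the standard library):

```python
import math

def mass_within(k):
    """Fraction of a normal distribution lying within k standard deviations."""
    return math.erf(k / math.sqrt(2))

p1 = mass_within(1)           # ~0.6827: about 2/3rds at sigma 1
p3 = mass_within(3)           # ~0.9973: over 99% at sigma 3
p_half = mass_within(0.6745)  # ~0.5000: the 50% "probable error" point
```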

Now in depth the big question of how the 0-1 time probability form becomes a 1-∞ space ∆+1 scale is more important than what it seems, as it shows a certain determinism of great numbers that is part of the game of existence, which in many ways the distribution function signifies.

Populations around an ST center with a vital energy population and two tails, which one might think should be the parametrisation of the singularity and membrane of EQUAL VALUE as a working hypothesis, show however the uncertainty of which point to make the cut; and this is provided by the probability density at the normal, which gives us a huge difference over the median, more according with the properties of ∆st: over 2/3rds of the population is vital space energy, and the rest goes to the singularity and the membrane. Though in real measures the higher density of both reduces further their presence in space.

So ultimately we can consider that the *deterministic function of construction of a super organism, both in the 1 cycles and the 1-∞ isomorphic ∆+1 scale define ‘why the laws of great numbers’ bring us the equations of distribution.*

How we get from one to the other can then be ‘deduced’ axiomatically, for further ‘insights’ into the process of organic structure in space and time frequencies that build a 1-being from its finitesimal occurrences.

But the spatial symmetry gives us also another kind of information, as the regrouping of those frequencies into probabilities of populations completely changes their role in the St symmetry.

Now, the question fundamental to the theme of probabilities and statistics, besides this S≈T symmetry, is the concept of indistinguishability of beings-events. Indeed, to get a 6 is equal to creating an indistinguishable particle. The type of particle thus is equivalent to the set of events in time, and as such they accumulate into the networks that construct the system. A minimal distribution of only 3 elements, the identity present element and the inverted ones, seems to be by far the commonest, with the point of σ as the place where the symmetry reaches a breaking point between both ‘asymptotic areas.’

**The key distribution. Mathematical insights.**

Of course the axiomatic method arrives at that place from mere deductions on the evolution of numbers into probabilities through the binomial formula:

The binomial formula

$$P_m = C_n^m\, p^m (1-p)^{n-m}$$

defines the probability of getting exactly m positive results in n independent trials, in each one of which a positive outcome has probability p. Let us consider, by means of this formula, the question raised at the beginning of this section concerning the probability

$$P\left\{\left|\frac{\mu}{n} - p\right| \le \varepsilon\right\}, \qquad (11)$$

where μ is the actual number of positive results. Obviously, this probability may be written as the sum of those Pm for which m satisfies the inequality

$$\left|\frac{m}{n} - p\right| \le \varepsilon, \qquad (12)$$

i.e., in the form

$$P = \sum_{m=m_1}^{m_2} P_m, \qquad (13)$$

where m1 is the smallest of the values of m satisfying the inequality (12), and m2 is the largest.
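For concreteness, formula (13) can be evaluated directly as a finite sum of binomial terms; a minimal sketch (the helper names `binom_p` and `prob_within` are illustrative, not standard):

```python
# Sketch of formula (13): P{|mu/n - p| <= eps} as a sum of binomial terms
# P_m over the m between m1 and m2. Helper names are illustrative.
import math

def binom_p(n, m, p):
    """P_m: probability of exactly m positive results in n trials."""
    return math.comb(n, m) * p**m * (1 - p)**(n - m)

def prob_within(n, p, eps):
    """Sum of P_m over all m with |m/n - p| <= eps."""
    return sum(binom_p(n, m, p) for m in range(n + 1)
               if abs(m / n - p) <= eps)

print(prob_within(100, 0.5, 0.1))  # most of the mass lies within 10% of p
```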

Formula (13) for fairly large n is hardly convenient for immediate calculation, a fact which explains the great importance of the asymptotic formula discovered by de Moivre for p=1/2 and by Laplace for general p.

This formula allows us to find Pm very simply and to study its behavior for large n. The formula in question is

$$P_m \approx \frac{1}{\sqrt{2\pi n p(1-p)}}\; e^{-\frac{(m-np)^2}{2np(1-p)}} \qquad (14)$$

If p is not too close to zero or one, it is sufficiently exact even for n of the order of 100. If we set

$$t = \frac{m - np}{\sqrt{np(1-p)}}, \qquad (15)$$

then formula (14) takes the form

$$P_m \approx \frac{1}{\sqrt{2\pi np(1-p)}}\; e^{-t^2/2}. \qquad (16)$$

From (13) and (16) one may derive an approximate representation of the probability (11):

$$P\left\{\left|\frac{\mu}{n} - p\right| \le \varepsilon\right\} \approx F(T), \qquad (17)$$

where

$$F(T) = \frac{1}{\sqrt{2\pi}} \int_{-T}^{T} e^{-t^2/2}\, dt \quad\text{and}\quad T = \varepsilon\sqrt{\frac{n}{p(1-p)}}. \qquad (18)$$

The difference between the left and right sides of (17) for fixed p, different from zero or one, approaches zero uniformly with respect to ε, as n → ∞. For the function F(T) detailed tables have been constructed.

For T → ∞ the values of the function F(T) converge to one.
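The claim that the asymptotic formula (14) is sufficiently exact even for n of the order of 100 can be spot-checked numerically; a sketch comparing the exact binomial term with its de Moivre–Laplace approximation (function names are ours):

```python
# Sketch: the exact binomial term vs. the de Moivre-Laplace asymptotic
# formula (14), for n = 100, p = 1/2. Function names are ours.
import math

def exact(n, m, p):
    """Exact binomial probability of m positives in n trials."""
    return math.comb(n, m) * p**m * (1 - p)**(n - m)

def laplace(n, m, p):
    """De Moivre-Laplace normal approximation to the same term."""
    npq = n * p * (1 - p)
    return math.exp(-(m - n * p)**2 / (2 * npq)) / math.sqrt(2 * math.pi * npq)

for m in (40, 50, 60):
    # the two columns agree closely already at n = 100
    print(m, exact(100, m, 0.5), laplace(100, m, 0.5))
```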

From formula (17) we derive an estimate of the probability

$$P = P\left\{\left|\frac{\mu}{n} - p\right| \le 0.02\right\}$$

for n = 10,000 trials.

Since the function F(T) is monotonically increasing in T, it follows that for an estimate of P independent of p we must take the smallest possible (over the various p) value of T. Such a smallest value occurs for p = 1/2 and is equal to 4. Thus, approximately

P ≥ F(4) = 0.99993.     (19)

In estimate (19) no account is taken of the error arising from the approximate character of formula (17). By estimating the error involved here, we may show that in any case P > 0.9999.
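The value F(4) quoted above can be recomputed from the error function, assuming F(T) is the symmetric normal integral (1/√(2π)) ∫₋T..T e^(−t²/2) dt, which equals erf(T/√2); a minimal check:

```python
# Minimal check of the bound P >= F(4): here F(T) is taken as the normal
# integral from -T to T, which equals erf(T / sqrt(2)).
import math

def F(T):
    """Probability mass of a standard normal variable between -T and T."""
    return math.erf(T / math.sqrt(2))

print(F(4))  # ~0.99994; the text truncates to 0.99993
```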

In connection with this example of the application of formula (17), one should note that the estimates of the remainder term in formula (17) given in theoretical works on the theory of probability were for a long time unsatisfactory.

Thus the applications of (17) and similar formulas to calculations based on small values of n, or with probabilities p very close to 0 or 1 (such probabilities are frequently of particular importance) were often based on experimental verification only of results of this kind for a restricted number of examples, and not on any valid estimates of the possible error. Also, it was shown by more detailed investigation that in many important practical cases the asymptotic formulas introduced previously require not only an estimate of the remainder term but also certain further refinements (without which the remainder term would be too large). In both directions the most complete results are due to S. N. Bernšteĭn.

Relations (11), (17), and (18) may be rewritten in the form:

$$P\left\{\left|\frac{\mu}{n} - p\right| \le t\sqrt{\frac{p(1-p)}{n}}\right\} \approx F(t). \qquad (20)$$

For sufficiently large t the right side of formula (20), which does not contain n, is arbitrarily close to one, i.e., to the value of the probability which gives complete certainty. We see, in this way, that, as a rule, the deviation of the frequency μ/n from the probability p is of order 1/√n. Such a proportionality between the exactness of a law of probability and the square root of the number of observations is typical for many other questions. Sometimes it is even said in popular simplifications that “the law of the square root of n” is the basic law of the theory of probability. Complete precision concerning this idea was attained through the introduction and systematic use by the great Russian mathematician P. L. Čebyšev of the concepts of “mathematical expectation” and “variance” for sums and arithmetic means of “random variables.”
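The “law of the square root of n” is easy to see in a pure-stdlib Monte Carlo: the spread of the frequency μ/n around p shrinks like 1/√n, so quadrupling n roughly halves it. A sketch (the helper `freq_deviation` is our name; the seed just fixes the run):

```python
# Sketch of the "law of the square root of n": the standard deviation of
# the frequency mu/n over repeated runs shrinks like 1/sqrt(n).
import random
import statistics

random.seed(1)  # fixed seed so the run is reproducible

def freq_deviation(n, p=0.5, trials=2000):
    """Spread of mu/n across `trials` repetitions of n Bernoulli trials."""
    freqs = [sum(random.random() < p for _ in range(n)) / n
             for _ in range(trials)]
    return statistics.pstdev(freqs)

for n in (100, 400, 1600):
    print(n, freq_deviation(n))  # quadrupling n roughly halves the spread
```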

A random variable is the name given to a quantity which under given conditions S may take various values with specific probabilities. For us it is sufficient to consider random variables that may take on only a finite number of different values. To give the probability distribution, as it is called, of such a random variable ξ, it is sufficient to state its possible values x1, x2, ···, xn, and the probabilities

$$p_r = P\{\xi = x_r\}, \qquad r = 1, 2, \cdots, n.$$

The sum of these probabilities over all possible values of the variable ξ is always equal to one:

$$\sum_{r=1}^{n} p_r = 1.$$

The number μ of positive outcomes in n experiments, investigated above, may serve as an example of a random variable.

The mathematical expectation of the variable ξ is the expression

$$M(\xi) = \sum_{r=1}^{n} x_r p_r, \qquad (21)$$

and the variance of ξ is the mathematical expectation of the square of the deviation ξ − M(ξ), i.e., the expression

$$D(\xi) = \sum_{r=1}^{n} \left(x_r - M(\xi)\right)^2 p_r. \qquad (22)$$

The square root of the variance,

$$\sigma_\xi = \sqrt{D(\xi)},$$

is called the standard deviation (of the variable from its mathematical expectation M(ξ)).
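As a worked example of these definitions, the expectation, variance and standard deviation of a fair die (values 1…6, each with probability 1/6) can be computed directly:

```python
# Worked example: expectation, variance and standard deviation of a fair
# die, computed straight from the definitions in the text.
import math

values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6                                       # each p_r = 1/6

M = sum(x * p for x, p in zip(values, probs))             # expectation
D = sum((x - M)**2 * p for x, p in zip(values, probs))    # variance
sigma = math.sqrt(D)                                      # standard deviation

print(M, D, sigma)  # ~3.5, ~2.9167, ~1.7078
```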

At the basis of the simplest applications of variance and standard deviation lies the famous inequality of Čebyšev

$$P\{|\xi - M(\xi)| \le t\sigma_\xi\} \ge 1 - \frac{1}{t^2}.$$

It shows that deviations of ξ from M(ξ) significantly greater than σξ are rare.
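Chebyshev’s inequality, in the form P{|ξ − M(ξ)| ≤ tσ} ≥ 1 − 1/t², can be verified exactly (no sampling) for a fair die; a minimal sketch (the helper `within` is our name):

```python
# Exact check of Chebyshev's inequality on a fair die: the probability of
# staying within t*sigma of the mean always meets the 1 - 1/t^2 bound.
import math

values = [1, 2, 3, 4, 5, 6]
M = sum(values) / 6
sigma = math.sqrt(sum((x - M)**2 for x in values) / 6)

def within(t):
    """Exact probability that the die deviates from M by at most t*sigma."""
    return sum(1 for x in values if abs(x - M) <= t * sigma) / 6

for t in (1.2, 1.5, 2.0):
    assert within(t) >= 1 - 1 / t**2   # Chebyshev's bound holds
    print(t, within(t), 1 - 1 / t**2)
```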

As for the sum of random variables

$$\zeta = \xi^{(1)} + \xi^{(2)} + \cdots + \xi^{(n)},$$

its mathematical expectation is always the sum of the expectations of the terms, while the analogous equation for the variance,

$$D(\zeta) = D(\xi^{(1)}) + D(\xi^{(2)}) + \cdots + D(\xi^{(n)}), \qquad (23)$$

is true only under certain restrictions. For the validity of equation (23) it is sufficient, for example, that the variables ξ⁽ⁱ⁾ and ξ⁽ʲ⁾ with different indices not be “correlated” with one another, i.e., that for i ≠ j the equation

$$M\left\{\left(\xi^{(i)} - M(\xi^{(i)})\right)\left(\xi^{(j)} - M(\xi^{(j)})\right)\right\} = 0 \qquad (24)$$

be satisfied.

In particular, equation (24) holds if the variables ξ⁽ⁱ⁾ and ξ⁽ʲ⁾ are independent of each other.† Consequently, for mutually independent terms equation (23) always holds. For the arithmetic mean

$$\zeta = \frac{1}{n}\left(\xi^{(1)} + \xi^{(2)} + \cdots + \xi^{(n)}\right) \qquad (25)$$

it follows that

$$D(\zeta) = \frac{1}{n^2}\left(D(\xi^{(1)}) + \cdots + D(\xi^{(n)})\right).$$

We now assume that for each of these terms the variance does not exceed a certain constant, D(ξ⁽ⁱ⁾) ≤ C. Then D(ζ) ≤ C/n, and by Čebyšev’s inequality

$$P\left\{|\zeta - M(\zeta)| \le t\sqrt{\frac{C}{n}}\right\} \ge 1 - \frac{1}{t^2}. \qquad (26)$$

Inequality (26) expresses what is called the law of large numbers, in the form established by Čebyšev: If the variables ξ(i) are mutually independent and have bounded variance, then for increasing n the arithmetic mean ζ will deviate more and more rarely from the mathematical expectation M(ζ).
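Čebyšev’s law of large numbers is directly observable with dice: the arithmetic mean of n independent rolls concentrates around the expectation 3.5 as n grows. A seeded stdlib sketch (`mean_of_rolls` is our name):

```python
# Sketch of the law of large numbers with dice: the arithmetic mean of n
# rolls drifts toward the expectation 3.5 as n grows.
import random
import statistics

random.seed(7)  # reproducible run

def mean_of_rolls(n):
    """Arithmetic mean of n independent fair-die rolls."""
    return statistics.fmean(random.randint(1, 6) for _ in range(n))

for n in (10, 1000, 100000):
    print(n, mean_of_rolls(n))  # deviations from 3.5 shrink with n
```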

**THE DIFFERENT CAUSALITIES THAT GIVE BIRTH TO PROBABILITIES.**

It is not possible to predict precisely the results of random events, though we have anticipated and proved, properly interpreted, that the curve of distribution favours the present state of 1 sigma, with 2/3rds, and makes parallel the inverse phases of 1st and third age, emergence and death, and its symmetries in space (dominance of the body-mass over the head and limbs). Though here the analysis would require a much more detailed study, not to be done for the time being.

Instead, we would like to consider the essential laws of probabilities and statistics in ‘classic science’.

When a sequence of individual events, such as the mentioned roll of dice, is influenced by other factors, such as an imbalance of the gravity center, it will exhibit patterns distorted from a symmetric spatial distribution, which can also be predicted.

Two representative mathematical results describing such patterns are the law of large numbers and the central limit theorem.

Both are essentially the same, though the central limit theorem might be considered a composite ∆§ result, where each ‘distribution’ is a partial sum of events, and the law of large numbers is defined for single events/cells/forms:

In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.
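The LLN’s companion, the central limit theorem, can likewise be sketched with stdlib dice: the sum of many rolls, measured against its mean and standard deviation, behaves like a normal variable, landing within ±1 standard deviation about 68% of the time (the constants and names below are illustrative):

```python
# Sketch of the central limit theorem: the sum of N_DICE dice, compared
# against its mean and standard deviation, behaves like a normal variable.
import math
import random

random.seed(3)  # reproducible run

N_DICE, TRIALS = 30, 5000
mu = 3.5 * N_DICE                      # mean of the sum of the dice
sd = math.sqrt(35 / 12 * N_DICE)       # standard deviation of the sum

inside = sum(abs(sum(random.randint(1, 6) for _ in range(N_DICE)) - mu) <= sd
             for _ in range(TRIALS))
print(inside / TRIALS)  # close to the normal 1-sigma share of ~0.68
```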

The LLN is important because it “guarantees” stable long-term results for the averages of some random events, such as ∆-1>∆, whatever this means. And so goes for the central limit theorem, with an intermediate §cale of partitions… ∆-1>…∆§…>∆

And so when we ‘plug into’ the existential game, both theorems tell us that as § grows towards the largest numbers of a finite ‘Universe/world’, we obtain a ‘median distribution’, which means in the Universe, a superorganism. Indeed, as systems multiply their populations, the ‘partitions of organs’ (which can be modelled as partial distributions of the central limit theorem at the ∆-1 scale) will converge to a given form – for entropic systems, the Gaussian distribution; but extrapolated to the whole reality, to the PROGRAM OF CREATION of superorganisms.

*They are thus generalised for ‘systems’ not of stochastic nature but with causal patterns and predictable future results, a determinism of cause->effect essential to reality.*

So this allows us to introduce the second, more creative analysis of both sciences, from the perspective of ∆>∆+1, in causal effects of which a minimal number of patterns and probable outcomes are understood; which expands the concept of statistical probabilities to the use of those general laws for causal events and their determinism… an entire new field, which could go to other sections of the blog, but we prefer to introduce it here as a corollary, which we shall only enunciate and maybe some other day develop further.

**∆±1 ST causality and determinism.**

What is the probability of the game of existence? In an infinite Universe, absolute. And for any *event determined by the game, ever smoother as we grow in detail of analysis (∆º view), in numbers of events in time (Tiƒ view), in populations in space (Spe); and so, as we deem the Universe infinite, all systems tend, as s->∞, t->∞, ∆->∞ and ∆º->∞ (number of pixels of the mental mapping, ever finer in detail), to become deterministic, perfect, enacting the GST game of existence and its **world cycles, of which there are indeed infinite proofs in all systems:*

- Orbits become more regular as the mass of the ‘comet’->planet grows.
- Dice throws become more regular, with 1/6th probability for each number.
- Superorganisms become more perfect as we move towards the 1 trillion mark of cells.
- Crystals become less amorphous, and we imagine the ones at the centre of planets to be perfect ‘diamonds’, perfect iron crystals (ours) and so on.
- Societies become better organised, with less friction, as China shows.
- And the same happens as populations radiate or time passes by in evolution: the perfect superorganism, the perfect platonic form, is reached.

But, as this is the limit of the game – in all those cases we should previously define the limiting domain, the distribution normalised to 1, the proper range – it would be interesting, just for the sake of theory, to consider what would be, if that is the case, the limits of those theorems. So a question is posed, even if it looks far-reaching: will the system keep smoothing ad infinitum, or is there a limit also in time, to match what is obviously the limit in space of ‘carrying populations’? Even though it seems counter-intuitive and will likely never be tested experimentally, I believe somewhere around the quadrillion mark the system will break and enter a chaotic, no longer smooth distribution and move towards death (4 being the dimensions of a system in a single sheet of space-time).

**THE 2 GREAT FIELDS OF STATISTICS**

The classic field of probability IS concerned ONLY with the study of stochastic, random processes; *hence those belonging to the arrow of entropy.*

*So the fundamental new field introduced by GST is the study of CAUSAL PROBABILITY, as all points of space-time have 3 possible dimensional motions, and the choice among them establishes a clear connection between probability, statistics and topological evolution. Stochastic processes thus lose in GST a great deal of chaos, and become far more ordered.*

So the next and simplest question on that ternary probability is *which of the 3 arrows is most probable? And the answer is obviously the present, as it is conserved and its value is the product, superposition or Pythagorean sum of the other two (the commonest combinations/operandi of s<st>t metric). What then about the entropic and informative arrows? Their value must remain the same, but they are imbalanced:*

Max. Spe x Min. Tiƒ (entropy arrow) vs. Max. Tiƒ x Min. Spe (information arrow).

So sometimes they are not easily comparable, and most times they are inverted in their parameters.

Further on, as in all fields of reality, we can reorder what we know of stochastic processes of ‘entropy’ – memoryless ‘Markovian processes’ – according to the ∆º±ST five-dimensional element of all systems (10D for the full model). So those are the themes of this post.

**MAIN THEMES OF GST-atistics & Probability.**

One of the most fascinating sub-disciplines of GST is the study of the mathematical dualities that represent space and time and combine both as mirror symmetries.

Of them the most important is that between a population in space, which distributes in the same fashion as its original time-event probability. As *beings are born in time and become memorial space, both are similar concepts ruled by similar equations.*

So the definition of both *stiences* is immediate:

“Probability deals with future time-space, statistics with past space-time”.

And entropy and the theory of information, in the way they are conceived by *classic science*, deal with their ‘present state’.

So departing from such a definition, as usual we can study the disciplines with the **(∆º±1)S≈T** 5 main perspectives (∆º, ∆±1, S, T, S≈T) of any system, whose themes are essential to probability models in time and populations in space; and attach to each perspective the main themes of the classic science, reordering and enlightening them with whys and new insights, born of our ‘advanced’ structural understanding of the organic, fractal, ternary, symmetric Universe. As the subjects deal directly with ST symmetries, and are foundational to one of the main mind mirrors of mankind (the other two being verbal thought and art), the subject is truly rich and we will only touch a few concepts:

**S≈T:** The duality of sequential future events in time (probability) and synchronous past populations of events in space (statistics).

**∆º:** The epistemological consequences of the present deviation, in a world of ‘space motions’ and ‘mechanical thought’, of knowledge NOT as the study of internal causal processes that lead to the future deterministic creation of events (classic concept of epistemology of science), *but of knowledge as the external, non-thinking process of recollecting data with machines and putting it into a probabilistic mapping to determine the ‘likely’ event, considering this to be all that matters to knowledge, as ‘evident truth’. It is part of the tendency to substitute spatial evidence for time logic, regressing in the evolution of the scientific method.*

**T:** And hence the proper understanding, in the path of the Russian school of probability (Kolmogorov, Markov), of the true inner causality of events in time, in 3 great areas according to the ternary principle: disjointed pure entropic events (Spe-events) with minimal causality (whose stochastic repetitions are independent); events of full causality, which are strictly determined by a guiding *informative pole, ‘soul’ or singularity* (Tiƒ events); and events of mixed ternary causality, both in time (strongly influenced by past results which condition the future) and space (diverging freely into the 3 possible paths of all events: an event with more entropy, more information or more iterative space-time repetitions).

**∆±i:** The confusion, as a consequence of all this, of spatial populations and temporal events, *in those scales of reality of which we have little evidence and capacity to distinguish both (quantum, ∆-max.i levels), since the time clocks of those scales run so fast we see their full world cycles as if they were forms of space.* In that regard, a key rule for distinguishing them is the fact that time events are 3-sequential elements and space events are bidimensional holographic forms in simultaneity. Let us consider first this aspect of probability.

**S:** And finally, studied in our article on thermodynamics, the spatial entropy of systems and its choices of future paths, according to the ‘partition’ of the present population, which divides itself in branching ternary forms till filling up all the possible variations of the system, with different populations according to probability. I.e., in the galaxy there will be a series of different fractal histories of mankind, according to the possible human vs. mechanical vs. Gaia vs. extinction futures possible to history (3±death paths of future).

So most planets will be extinguished by nukes into black holes and strangelets (max. probability of future), some will evolve into a metal-earth global organism of robots, some will never give birth to life intelligence of the human type, and finally the fewest of them will make humans fully understand and respect the laws of the organic Universe – and evolve, avatar-like, into the final stage of Gaia, as collective mind of the Earth. So indeed, the theme is extensive and applies to all kinds of questions. This 5th perspective being studied elsewhere, we shall deal with the other 4.

**The same laws work for time events and population distribution. **

Before we get into it, let us consider the main error in the understanding of probability and statistics: knowing, in the small, fast scales of the Universe, what is a time event and what is a probability-population one.

The whole confusion happening in quantum physics about the way the Universe works departs from that confusion, which, added to the idealist age of baroque mathematics (when Hilbert says that he ‘imagines’ points and lines), the *computer age of approximations and other ‘epicycles’ of calculus and measure*, and the increasingly visual age of thought when causality matters not, gives us the most prominent of those errors, which plagues the physics of the quantum world: the confusion of ternary time events and branching between Spe, ST and Tiƒ paths – of future time choices and probabilities – with spatial populations and fixed space symmetries, which tend to be bidimensional (holographic principle).

**Bohm vs. Bohr**

In that regard, it is remarkable of the times we live in that the absolutely obvious truth of quantum physics, Bohm-de Broglie’s deterministic model, has been obliterated by the absurdity of the probabilistic interpretations, just because idealist platonic physicists do not care to understand the whys of mathematics, but, *unlike von Neumann, don’t even acknowledge this fact.*

Indeed, scientists tend to confuse those 2 symmetries of reality at small scales and vice versa: for very long they thought, and many still do, that galaxies of slow evolutionary ages were ‘different spatial species’, NOT species in different moments of time evolution.

And so we shall open the post with an example, solving the interpretative errors of Copenhagen’s wannabe genius Mr. Bohr, putting him at true face value, as the guy who ‘stole’ the complementarity principle from de Broglie, threw his dobermans – Pauli, Heisenberg et al. – in a power-play slaughter of the PROPER Einstein->Broglie->Schrodinger interpretation of quantum physics, and told us, because *he didn’t understand the duality between time probabilities specular to space populations, that reality* was a mathematical probability, that is, an entelechy.

Indeed, all this chit-chat of ego-trips of humans ‘inventing reality’ against sound models comes to a simple error of interpretation of a quantum formula, ψ x ψ*, in which we multiply a wave in space by its time conjugate, based on that symmetry.

Then a mass of errors *proper of physics happens in the uncertain quantum realm with no practical evidence, hence with model-theory, which errs because of its misunderstanding of cyclical time processes that imprint their forms in factual space. So often physicists confuse a time parameter with a space symmetry (as when they confuse the electro-weak time flow of informative change with a spatial force, or the 3 ages of quarks with 3 forces in space), as THEY HAVE NOT UNDERSTOOD THE EXISTENCE OF ‘TIME FORCES’, WHICH TRANS-FORM REALITY besides its space forces. So we shall end up with that theme. But obviously, as we study all stiences here, we shall put other examples.*

As usual we will try to simplify all maths ad maximal, as the maths are always right; the conceptual misinterpretation is what we seek to repair.

**V. THE ÐISOMORPHIC SCALE FROM THE 1 WHOLE TO THE 10 PLANE.**

The 10 Dimensional isomorphisms (ab. Ði), applied to the study of each 10Di (5s, 5t) timespace organism, increase formal depth and experimental detail in our study of all the systems made of ∆@s≈t of space-time, defined by its fundamental 5 elements/Dimensions: @minds, ∆-§cales, $pace, ðime and its s≈t symmetries.

So *as a fractal organism can be subdivided in its analysis through its dualities of t-motions and s-forms, into 10 Dimensions, we study 10 similar properties of those dimensions in the dynamic existence of any being and call them the 10 Ðisomorphisms (Ð for dimensions and cyclical time, as it is the capital for eth, ð).*

We could say paraphrasing taoism and Greek Philosophy (Parmenides, Heraclitus), or modern quantum probability that from the ‘o-1’ unit sphere of ‘existence’, comes 2, the duality of space-time, expressed in its kaleidoscopic jargons by human beings – the wave and the particle in physics, the conjugate product in mathematics, the genetic code and the protein in biology…. the yin and the yang, the man and the woman; the lineal and angular momentum… you name it and move it…

And from 2 comes 3, the lineal, angular momentum and vital energy sum and product of both, the wo=man and its offspring, the particle, wave and potential… *the being in itself, which can be properly described in ternary S≈T, elements, in a single plane of existence. *

But the Universe is not THAT SHALLOW, so regardless of our humind geniuses with its ‘space-time continuum’ in a single sheet, *we need the whole and the parts, the upper and lower scales, the points that make the topological beings, the cells and the ecosystems, the finitesimals within and the world outside, the Kantian thoughts in his brain, the starring lights on the sky…*

And so we have the 3±1 dimensions that bring together 3 planes of existence, ∆±¡, **into the being**; 3 non-euclidean networks that form the topological organism; and the 3 languages most used in this blog to describe them: verbal thought, with its awesome metaphorical capacity to show in a language all humans can understand the Disomorphisms of Nature; the mirror of mathematics, which tunes to fine detail and extracts more properties, as it is likely the language of the atomic mind, alpha and omega, atom and galaxy of those ∞ scales; and the visual language of graphs and intuitive forms.

Wor(l)ds, numbers and images are thus our 3 elementary mirrors to build ‘knowledge’, because the 1=one truth in itself is only carried by the being, who holds all its information, and the being within the world to which it connects; so humans need many mirrors and we have settled for the 3 we have ‘expanded’ with non-euclidean geometry, non-aristotelian i-logic, for maths and words, non-Æ for its visual forms.

*So by studying the 5 Dimensions of the being with dualities of space-time and* ‘ternary isomorphisms’ of scales, ages and topologies, all of them properties derived of its 5D scalar structure in space and time, we achieve a complete description of any ‘Dust of space-time’, ∆@S=T.