


“(Gravitational) Time tells matter how to move; electromagnetic matter tells space how to curve≈form.” Wheeler, paraphrased for the two scales and states of reality: motion=time and form=space, in the gravitational (∆±4) and electromagnetic (∆±3) scales.

In the Fourth Line, we study in greater detail each Plane of Existence of the Humind’s perceived Universe between the invisible force of ∆-4 gravitation and the partially perceived ∆+4 galactic cosmos.

In this post we study the lowest of all scales with which the human mind interacts, to perform the simplest of all actions, locomotion, through the force of ∆-4 gravitation.

Let us then consider the few things we know of the…


Beyond the limits of full human perception (cosmos, dark entropy scales), we can only ‘guess’ some properties and isomorphisms.


2nd Disomorphism:

Gender duality. Sp vs. Ti differentiation. Evolution of its species:

Gravitons or ‘cyclical-time strings’ will be the ð-informative class; lineal, energetic gravitation, its |-energy.

3rd Disomorphism: Species Vital Constants AND network forces.

G is its main constant, studied in detail when we consider its parallelism and unification equation with charges.
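As a hedged aside on that parallelism: one standard, textbook way to compare the two inverse-square constants is the electric-to-gravitational force ratio for two protons, where the distance cancels out. A minimal sketch with CODATA-rounded values (my own arithmetic, not the unification equation itself):

```python
# Compare the two inverse-square "vortex" constants, G and Coulomb's k,
# via the electric-to-gravitational force ratio for two protons.
# Constants are rounded CODATA values in SI units (assumption).
G   = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
k_C = 8.988e9        # Coulomb constant, N m^2 C^-2
e   = 1.602e-19      # elementary charge, C
m_p = 1.673e-27      # proton mass, kg

# Both laws share the form F = K * q1 * q2 / r^2, so r cancels in the ratio.
ratio = (k_C * e**2) / (G * m_p**2)
print(f"F_electric / F_gravity for two protons ~ {ratio:.2e}")  # ~1.23e36
```

The ~10³⁶ gap between the two couplings is precisely what any unification equation between charges and masses must account for.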



Evolution: Neutrino theory of light.

Dissolution: Beta decay of ∆-3 into ∆-4 neutrino waves.

Evolution: Neutrino mediated weak forces.

Dark Energy: expansive, cosmological gravitation

Now, those scales are the inner (gravitational) and outer (cosmological) scales of the Universe, so it is useful to remember their structure as mirror symmetries between the top quark and black hole planes.





Some basic principles of epistemology and truth.

Before we enter the exploration of the ‘invisible scales’ of forces with which our ∆ø humind interacts to achieve the simplest action, motion (for which perception is not needed, as pure motion and pure static perceived form have inverse properties), we shall clarify a few ‘first concepts’ of great importance that often come up on this blog in the argument on the foundations of physics and the limiting realms of the humind, which are the quantum and cosmological scales:

  1. The equal value of all scales of Nature. It is obvious that the more removed we are in 5D scale, the more uncertainty we obtain in our analysis; hence the relative unimportance of physics in those realms for understanding a Universe with likely infinite scales. Those scales are NOT more important and are MORE distorted to human observation, so they are less relevant for a philosophy of science. Why then do physicists have so much intellectual prestige? There is their worldly power – they make our technological machines, which today are more important than life to this planet, as its company-mothers of machines dominate our eco(nomic)system. But that is not the theme of this post; the theory behind it is. It is called the ‘constructionist hypothesis’, which pretends the lower scales CAUSE the upper scales. This is NOT true, as we have seen in many posts, since scalar causality is dual: information flows up from lower, faster time scales with more cyclical form=information, and energy flows down from larger scales. So causality is dual. And further on, all scales are self-similar – facts defined long ago by system scientists.
  2. This said, in this post we shall deal with the smallest scales, which belong to quantum physics. So an important question on those scales is the difference between the quantum scale of particles and the larger thermodynamic scale of molecules. In physics it matters more to understand the quite evident scale of thermodynamics, our scale. And when we do so, it becomes obvious, as Einstein put it, that ‘the statistical quantum theory would… take an approximately analogous position to the statistical mechanics within the framework of classical mechanics’. How this happens is easy to understand in 5D metric, and mathematically we shall, time permitting, include a more ‘pro’ analysis for specialists. Since in 5D metric there IS no dispute between Einstein and Bohr: first, S=T means we always have equivalences between space-form-statistical populations and time-motion-probabilities. The time 0-1 description IS better for the faster ‘clocks of smaller particles’, according to the 5D scalar metric: smaller scales in space run faster time clocks – time, not space, thus becomes dominant in quantum physics, hence better described probabilistically. The Universe is thus probabilistic in smallish, time-dominant scales (the 0-1 mathematical unit sphere after normalization of parameters) vs. statistical in larger space (the 1-∞ thermodynamic plane). Yet BOTH are equivalent mathematical formulations (fundamental theorem of measure theory: the 0-1 sphere and all its laws are equivalent to the 1-∞ plane, which is better for slower, LARGE thermodynamic ensembles that occupy more space).
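The equivalence of the two descriptions can be illustrated with a minimal sketch (a generic normalization example with hypothetical counts, not the measure-theory proof itself): raw populations (the 1-∞ range) and normalized frequencies (the 0-1 range) are the same data up to one scale factor.

```python
# A statistical description in raw populations (1-∞ range) and a
# probabilistic one in normalized frequencies (0-1 range) carry the same
# information, related by a single scale factor (the total).
populations = {"state_a": 600, "state_b": 300, "state_c": 100}  # hypothetical counts
total = sum(populations.values())

probabilities = {k: v / total for k, v in populations.items()}   # 0-1 description
recovered     = {k: p * total for k, p in probabilities.items()} # back to 1-∞

assert abs(sum(probabilities.values()) - 1.0) < 1e-12
assert all(abs(recovered[k] - populations[k]) < 1e-9 for k in populations)
print(probabilities)  # {'state_a': 0.6, 'state_b': 0.3, 'state_c': 0.1}
```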
  3. It is also essential to accept that in 5D scales, the c-speed limit is ONLY the speed limit of the light space-time scale, as S x T = Constant. So for smaller scales speeds ARE faster, both in rotary clocks, as we have just shown, and in lineal inertia. This means gravitational scales have c as their lower speed limit and are often, outside the realm of high density of light space-time (that is, between galaxies), faster, non-local. So there IS a quantum potential (Bohm) – NOT a hidden variable, but the invisible ∆-4 non-local = faster-than-light scale we shall study in this post. Non-locality IS experimentally proved both for intergalactic space (or else galaxies would not interact) and in the under-light, smallish level of invisible particles (Bohm’s model, which we accept in 5D as the more realistic). This brings a series of important consequences both for human stience and reality, which we study here.
  4. Finally, among the key insights of 5D applied to those lower scales there is their invisibility, natural to the relativistic perception of reality centered in ∆ø, which has its limits beyond which all becomes invisible; and on the other hand, the fact that faster motions in space are seen as larger distances. So it is precisely the faster-than-c speed of the ∆-4 scale, WHICH IS INVISIBLE, that we see as ‘expansion of space-distances’ between galaxies (balanced by the implosion of form within galaxies that creates mass from vacuum space, making the Universe a wobbling, immortal, infinite entity). It is also obvious, abounding on the uncertainty of measure, that because time is quantic, discontinuous, cyclical, it can only be measured when a ‘cycle is completed’ as a unit of time (in humans, the second)… but as 5D metric allows different speeds of time, for a being with a measuring mind that has a huge length in its time quanta, all that happens within that time quanta of measure at faster speeds will seem simultaneous, non-local, even if the phenomena take some time to ‘travel’ from A to B, or from a, b, c… points into the synchronous knot of communication of them all – the mind that absorbs those ultra>c speeds as pixels of form. So BECAUSE WE SEE the Universe of invisible gravitation simultaneously, it must be faster than light between galaxies, and all that we perceive as simultaneous distance is caused by that faster-than-light speed perceived with our slower light eyes.


The faster than light – expansive distances – invisibility of the ∆-4 plane.

Let us try to fully grasp this essential property of time, mind and measure. When a wheel turns fast you see it as solid, because you see simultaneously in your eye-mind all that happens within a second; so all the points of the turning wheel might seem to be at the same point at the same time (within that second), even if IN A FASTER TIME SCALE they would be recognisable, pictured in slow motion, as clearly different spokes. This concept of simultaneity of measure, embedded in special relativity, is very relevant for many phenomena of perception and stientific description of reality, and a source of much confusion when humans study in detail those very fast, small scales of 5D.

The most obvious example is human thought: what we see as ‘simultaneous’, non-local, IS the entire planet, because the speed of light is so great that all within a second, including the moon, seems to us ‘simultaneous, non-local, co-existing at the same time-quanta’.

The light from that distant mountain you see below the horizon takes some time to arrive here, but it seems simultaneous, non-local, of ‘infinite speed’ to our one-second senses – to the point that until the XVII century humans thought light was ‘instantaneous’, non-local, as they could not even measure its speed. Galileo tried with faraway mountains, but the reaction time of the two observers – one emitting the ray, the other receiving it and stopping his clock, whose time difference between both locations should measure the delay due to the motion of light between both points – was much larger (normally two seconds: one to perceive, one to act) than the fraction of a millisecond it took light to go from mountain A to mountain B. Only when measurements were made from Jupiter’s faraway satellites could we get a meaningful measure.
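The arithmetic of that failure is easy to sketch; the 15 km separation and the 2-second combined reaction time below are illustrative assumptions:

```python
# Back-of-the-envelope check of why Galileo's mountain experiment failed:
# the light travel time between two peaks is dwarfed by human reaction time.
c = 299_792_458          # speed of light, m/s
distance = 15_000        # ~15 km between mountain tops (assumed)
reaction = 2.0           # ~2 s combined human reaction time, as the text notes

light_time = distance / c
print(f"light travel time: {light_time * 1e6:.0f} microseconds")       # ~50 µs
print(f"reaction / light-time ratio: {reaction / light_time:.0f}x")    # ~40,000x
```

A signal forty thousand times shorter than the observers’ reaction time is, for all practical purposes, instantaneous – exactly the ‘infinite speed’ illusion described above.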

Non-local quantum fields below light space-time – the next scale down, which Bohm formalised – must therefore be much faster than light, to the point of seeming non-local; or else causality, essential to time processes (even if sometimes it is co-causality, or multiple causal rays joined in a point), would not happen.

Normally the parameters that quantify the difference of speed (s/t), density (t/s) and momentum (s x t) between scales and their species come in decametric, ternary potencies, as the 5D scales do. So ±1, 10, 100, 1,000, 10,000 are the commonest differences between scales. I.e. the fine-structure constant measures the difference between the light-scale speed and the next upper ∆+1 electronic scale, around +100 (137).

The scaling difference between photons/electrons and the lower ∆-1 scale, though, is larger.

In the graph, experimental evidence of faster-than-light intergalactic scales: a 10c quasar jet at ∆+4 and quantum non-locality at ∆-4 at 10,000c.

After all, dense photons are just the ‘cellular’ level of electrons. So, for more complex, detailed reasons concerning the parameters of GST scaling, action at distance should follow the α² bidimensional speed scaling – around 10,000 times faster than the larger light world – and all that happens within 10,000 times more distance within the minimal quanta of human observation should seem to us NON-LOCAL. Actually, a Chinese lab recently measured non-locality, which turned out to have a speed of ~10,000c for its ‘thinner’ minimal messages – an upper boundary on the maximal speed of non-local effects. And so we do observe all kinds of non-local c<V<10,000c PHENOMENA, which of course theorists then vehemently deny, to uphold their pretension of absolute truths under c-speed – which is only the speed of information in the scale of light space-time, as the substrata of that scale is the background light radiation.
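The decametric arithmetic behind that α² claim is easy to check: with 1/α ≈ 137, the squared (‘bidimensional’) scaling lands on the same decametric order as the quoted ~10,000c bound.

```python
# Arithmetic behind the α² scaling claim above: 1/α ≈ 137 is ~10²
# (one decametric scale up); (1/α)² ≈ 18,800 is ~10⁴ (two scales up),
# the same order as the quoted ~10,000c bound on non-local effects.
inv_alpha = 137.035999   # 1/α, inverse of the fine-structure constant

print(round(inv_alpha))        # 137
print(round(inv_alpha ** 2))   # 18779
```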

So, as you cannot travel faster than a plane when you are inside of it, you cannot travel faster than light when you are inside a galaxy sustained by a light space-time substrata. But outside, in the dark world, you do have the possibility to go faster.

Thus c-speed (v=s/t) and 0 K temperature (the limit of friction, the disorder that impedes faster speeds) are JUST the limits of our light space-time world. And we reach this conclusion in many parts of this blog in many ways, from the different ∆ºST perspectives, as is customary in GST – this one being the perspective of the ‘mind paradoxes’ of perception.

Let us consider the other perspectives on faster than light.

• mind: The 3 dimensions of light space-time. Its functions in all systems made of it.

Let us consider the meaning of faster-than-light speeds from the perspective of reproduction and perception of information. The mind’s quanta of time and speed simply cannot see ‘gravitational, smallish, faster-than-light carriers of information’. But smaller particles do, as the dark world is ∆±4, and light, electrons and atoms exist in the neighbourhood: ∆±3,2.

So the dark world IS the ∆-1 ‘energy-feeding level’ for a photon… reason why it follows, as prey-predators do, the ‘scattering’ rays of Bohm’s pilot-wave theory.

Don’t raise your eyes. We can explain all phenomena from all languages and povs. So, as everything has topo-bio-logical properties, we can always make an abstract mathematical explanation in ‘detail’, but also a biological, organic explanation in ‘whole vital terms’.

Thus light merely warps≈feeds≈evolves the entropy of the lower scale-field, ∆-1 – which is invisible gravitation to us – into visible information; and for that reason, as information must be copied and imprinted on the formless quantum potential field, it takes time and reduces speed to c-light.

Indeed, gravitation from our mind’s pov must have less information and more speed, as experiments prove (gravitational non-local, invisible action at distance, due to the lack of human-detectable information).

And if we plug this fact into the equation of ‘cyclical time-speed’, it gives us an ∞ speed for the quantum potential field/gravitational field:

v = $/ð; as ðƒ → 0, v = s/0 → ∞
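A trivial numeric illustration of that limit (plain arithmetic, nothing model-specific): with fixed space S, shrinking the time quanta ð drives v up without bound.

```python
# Numeric illustration of v = S/ð diverging as the time quanta ð -> 0:
# with fixed space S, every reduction of ð raises v proportionally.
S = 1.0
for t in [1.0, 0.1, 0.01, 1e-6, 1e-12]:
    print(f"ð = {t:g}  ->  v = {S / t:g}")
# v grows without limit; ð = 0 is the formal ∞ of the equation above.
```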

Below we see the humind which best understood this paradox: Monet’s first ‘impression(ist)’ painting, ‘Impression: Sunrise’, in which space is no longer painted as a ‘background’ Cartesian canvas, but as the frozen view of light rays by the human eye. Physicists though still don’t get what artists’ eyes saw intuitively. We shall in that sense consider the relationship of art with physics, as a pioneer of its space-time analysis: from the work of Alberti on perspective, way before Desargues found projective geometry and Descartes the self-frame of reference; to the work of Leonardo, under his lemma of a living Universe of organic forms we must learn how to see (saper vedere) – who would certainly have enjoyed immensely this blog and the part on topological vital space-time; to the impressionist realisation that light is the mind of space; to Picasso’s analysis of imaginary worlds made only with ‘lineal motions’ (cubism) or cyclic, reproductive, yin-female ‘forms’ (post-war period).

In the graph, the infinite speed of the quantum potential of gravitation that feeds light, which follows the ‘string’ of gravitation, tended between the emitter and perceiver particles, which lock each other in entanglement (neutrino theory of light).

And so when both particles have connected through non-local action at distance in the dark world, they can guide their motions; they can assess their relative distance (which is the main information a single line provides); and as a secondary effect – since they are ‘locked’, joined by the gravitational string, regardless of the external speed of their world – the speed of light communication between both will always be constant, as they are in relative stillness to each other (an explanation of the constancy of c-light speed from the @mind perspective).

All this has in fact been explored by physicists in their ‘fringe theories’, notably Feynman’s absorber theory, whereby the two solutions of electromagnetism – one with a negative sign – are considered two rays simultaneously produced by particles A and B. But, as usual in physics, while all has been discovered ‘mathematically’ by the mere pedestrian process of manipulating algebras, its deepest meaning is not understood – this is the man who said the ‘why’ is what he never questions: an extraordinary mathematical physicist with the usual conceptual peanut brain of his practitioners.

The mind of man, as Kant understood, is EUCLIDEAN, because our space is Euclidean; and it is so because it is light space-time, made of three perpendicular fields able to carry information: the electric field, the magnetic field and the c-speed field, the longest direction of view. This explains why we indeed look ahead in the horizontal plane, but have an informative head in the height direction of information – as does the electron that produces the ray of light, moving up and down – and finally store ‘energy’ in the width, magnetic ‘belly’ direction. All HAS changed in the evolution of light through 5D scales till creating man, yet all has ultimately remained the same: a game of vital dimensions of space-time.

The consequences of the previous graph for the understanding of both the humind (human mind) and the external Universe of space-time are multiple: from a proper understanding of special relativity, to the analysis of magnetism as an independent force, NOT an observer’s effect as modern physics thinks, to the fundamental analysis of light and photons as the minimal organism of our known-known Universe – each of those themes exploring an element of the ∆@st light supœrganism.

Back to the understanding of motion as reproduction of information.

What this means is that motion is not only relative, but related to the density of information of a system: as mass-information grows in density, the system takes longer to reproduce the Ti element of v=s/Ti, and so it slows down. At the other extreme, when information tends to zero, speed increases.

So, as we humans perceive information with light, that is our limit of speed of transmission of information: c=s/Ti. But this wave of light is really impressing a quantum field of action at distance, faster than light (Bohm’s discovery in the pilot-wave theory, which we have to marry with the non-observance of the particle during the motion to fully grasp the process).

It is then obvious that from the proper endophysical perspective of the human visual mind, faster-than-light speeds cannot transmit information among us. But – and this is again a topic egotist error of the humind – this doesn’t mean, as physicists claim, that other species – specifically those on the ‘verge’ of ∆±4 (galactic black holes and quantum particles), which are connected directly to their ∆±1 scales – cannot communicate information through dark (to us) gravitational waves.

It is then the humind which sees those flows of information as ‘invisible’ actions at distance – but, as we predicted, even action at distance has a limit of speed… which seems to be in the 10,000c range.

The Dark World. E<cc>M and Beyond: the c-t limits. 

The dark world is, according to 5D metric (S x T = C), ‘faster than light’ both in its lineal inertia (S), which is seen as expansive space between galaxies – since its motion is invisible – and in its temporal speed or frequency, 1/t=ƒ, in the rotary motion of black holes, which therefore store maximal information beyond the c-speed event horizon – a fact the complex mathematical model formalizes as t<0. Thus 0 K and c-speed become merely the limits of our light space-time plane in terms of lineal speed and cyclical/angular momentum – the singularity center and clock-like membrane that enclose and control a vital energy – and have their maximal example in the black hole, an accelerated vortex of gravitation that therefore should go faster than c past the event horizon; and should move ‘faster than light’ if we adopt the point of view of the black hole, as it ‘creates’ distance-speed by ejecting at faster than c (Kerr metric) the absorbed light energy that becomes a jet of dark entropy at its poles. 5D in that sense has no hang-ups with postulates such as Einstein’s c-speed limit or the concept of entropy as the single arrow of time, which are just born out of the ego and naive realism of huminds, but are neither logically nor experimentally sound.

Humans confuse things they don’t know with dogma… Top quarks and black holes cannot be fully explained considering a single continuous spacetime plane. Because for them to work as quark stars able to eject dark entropy and dark matter through their vortex, they NEED a ‘gravitational plane’ at faster-than-c speeds, besides the light space-time plane we ‘see’, limited by Einstein’s relativity to c-speed.

But, while respecting Einstein, ‘beyond’ the c-speed plane there is immense evidence of the galactic plane of ‘dark entropy’ and ‘dark matter’ going faster, hence allowing the black hole to emit dark entropy seen as expanding space between galaxies. A few proofs:

– The gravitational vortex (graph) IS accelerated. This is the basic TRUTH of Einstein’s general relativity: the principle of equivalence between gravitation and acceleration.

So obviously BEYOND THE EVENT HORIZON the same procedure – accelerated light space-time beyond c – happens, and light must become something else, as when it is accelerated in accelerators at c-speed. What does it become in accelerators? Obviously, quarks! So that is what it will become in black holes: LIGHT BECOMES QUARKS! Easy, theoretically sound, experimentally proved – just that Mr. Hawking, Mr. Penrose and Mr. Wheeler, the men of the mathematical singularity, didn’t see it. But we don’t see gravitation either, and we infer its laws; nobody denies gravitation because we don’t see it. Nobody should deny the obvious conversion of c-speed spacetime, in a vortex beyond the event horizon, into quarks, as happens at CERN in its accelerators.

GRAVITATION IS AN ACCELERATED VORTEX OF TIME, which follows the same laws as a vortex of thermodynamic time (a hurricane or eddy) or a vortex of quantum time (a charge). The 3 are in fact easily UNIFIED by a simple 5D metric equation (which physicists have been trying to find for 100 years, but cannot in a single plane of space-time – an aberration of creationist mathematics that only exists in the Cartesian continuous single graph Newton used to create the concept of a background spacetime).

We live – and on this even the most stubborn physicist will agree – in a relational space-time: we ARE MADE OF vital space-energy and cyclical time membranes (energy and angular momentum, the conserved quantities)… Read the central page on unification, which explains it perfectly simply.


Below you see the scales of the big bang (we escaped the lesser chemical and thermodynamic big bangs). As for the background radiation, as we explained in other articles, it CAN only realistically be produced by a single species in the cosmos: a black hole the size of the Moon, proved experimentally.

So again, WHEN we apply realism to physics and abandon the belief that all inflationary mathematics is REAL, we are always left with a single candidate in economical Nature to be the substance of beings: a soup of heavy quarks in black holes, and a black-hole Moon MACHO for the background radiation (a black hole that eats a moon will have exactly the temperature required to reproduce the redshifted background radiation at 2.7 K).
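The order of magnitude of that 2.7 K claim can be checked against the standard Hawking temperature formula T = ħc³/(8πGMk_B). A minimal sketch with rounded CODATA constants (my own arithmetic, not a GST derivation): a lunar-order mass does land on kelvin-scale temperatures, with 2.725 K corresponding to roughly 0.6 lunar masses.

```python
import math

# Hawking temperature T = hbar * c^3 / (8 * pi * G * M * k_B) for a
# lunar-mass black hole, as an order-of-magnitude check of the 2.7 K claim.
hbar   = 1.0546e-34   # reduced Planck constant, J s
c      = 2.9979e8     # speed of light, m/s
G      = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
k_B    = 1.3807e-23   # Boltzmann constant, J/K
M_moon = 7.342e22     # lunar mass, kg

T = hbar * c**3 / (8 * math.pi * G * M_moon * k_B)
print(f"T(Moon mass) ~ {T:.2f} K")          # ~1.67 K

# Inverting: the mass whose Hawking temperature is exactly 2.725 K
M_27 = hbar * c**3 / (8 * math.pi * G * 2.725 * k_B)
print(f"M(2.725 K) ~ {M_27:.2e} kg")        # ~4.5e22 kg, ~0.6 lunar masses
```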


To the point:

In the graph, the ∆±4 scales of physical systems, self-centred @ the human pov, and the 5D metric equivalence between time speed and space distance, which gives us its co-invariant energy in all scales (S x T = K).

Once these corrections are done, and light space-time is kept for those regions where we see light, we find above a slower thermodynamic scale and below a faster intergalactic scale, which I would simply call the ‘dark scale’ of faster-than-light rotary top quarks, cb-atoms, and dark entropy expanding the vacuum.


NOW, physicists think this is not happening because the interaction of top quarks and cb-atoms is so fast – their rotary motions are so incredibly fast, billions of interactions in a second – that they think they cannot interact through the strong force, whose coupling constant is slower, IF they rotate at less than c-speed.

But IF they rotate faster than c-speed – backed by the billions-of-times-per-second frequency at which they switch on-off from quarks into antiquarks at FASTER THAN C, and by the accelerated vortex beyond the event horizon of the EFE equations – heavy quarks can become stable cb-atoms and top quark bosons.

So, by symmetry with pulsars, black holes will be dark stars with a center of top quarks and a cover of cb-atoms.

WHY do I think they are like pulsars – which have strange quarks in the center and light atoms on the surface – hence with top quarks in the center and cb-atoms on the surface? BECAUSE the Universe is perfectly symmetric in scales and forms, and it is economic and iterative; so pulsars and black holes are symmetric frozen stars, as Einstein wanted.



In the graph, galaxies are fractals of stars and dark quark matter built with 3 topologies: a reproductive body of stars sandwiched between an informative nucleus of black holes and an external halo of dark matter, probably strangelets and other dense stars. The closest self-similarity in our world scale is a cell, and in the quantum scale, an atom.


In the graph, the ‘gala cell’: an organism of stars joined by a network of ‘nervous’, gravitational information, composed of the 3 relative families of mass of increasing density, which act as the ‘DNA-informative centre’ (top quark stars, aka black holes, as top quarks are the only ∆-1 ‘points’ with the same density as black holes), the hard protein membrane (the strangelet halo of dark matter – Witten’s hypothesis), and a visible electromagnetic network of stars, the energetic mitochondria that become the food that reproduces both strangelets and black holes. It is a simple organic scheme that explains the whys of physical particles and the ternary structure of galaxies, similar to that of any organic system of the Universe. Yet such models are not even explored by astrophysicists, as they are based on organic concepts, which physicists by ‘dogma’ cannot accept.

ALL THIS IS WHAT EINSTEIN ASKED FOR in everything he wrote about philosophy of science – the limits of mathematical equations, the hypothesis of frozen stars, the relational nature of space-time, and the use of c-speed limits only in phenomena related to light. You name it: ‘Leibniz is right, there are infinite time clocks in the Universe; but if so, we have to change the principles of physics’…

…which is what I do in these texts. But I DON’T DENY EINSTEIN, as the singularity men do, who deny his affirmation that black holes will only be real when they have a cut-off substance, and believe, as creationist mathematicians do, that if they ‘talk’ numbers, alas, reality appears.

Fact is, the black hole is a frozen star, and the dark world goes beyond c-speed between galaxies, since special relativity’s c-speed IS only the speed of the luminous space-time of galaxies, which is also the substance of the background light radiation. Beyond galaxies and within black holes, OBVIOUSLY, the light limit doesn’t work, BECAUSE THERE IS NO LIGHT THERE TO IMPOSE THE LIMIT.

ST-perspective: travels in time.

Now, the most beautiful perspective on faster-than-light speeds comes from a proper understanding of the quoted Feynman absorber theory – an interpretation of electrodynamics derived from the assumption that the solutions of the electromagnetic field equations must be invariant under time-reversal transformation. Indeed, those equations do not single out a preferential time direction and thus make no distinction between past and future, but consider both rays – one from past to future and one from future to past – to happen at the same time.

Why then do physicists discard one solution? Essentially because they do not understand the logic of the 3 arrows of time, and its local past-to-future and future-to-past converging flows that create a present, simultaneous event in space.

But that is exactly what those two solutions show. 

Maxwell’s equations and the equations for electromagnetic waves have, in general, two possible solutions: a retarded (delayed) solution and an advanced one. Accordingly, any charged particle generates waves, say at time t₀=0 and point x₀=0, which will arrive at point x₁ at the instant t₁=x₁/c (retarded solution), and other waves which will arrive at the same place at the instant t₂=-x₁/c, before the emission (advanced solution).

The latter, however, violates the causality principle: advanced waves could be detected before their emission. Thus the advanced solutions are usually discarded in the interpretation of electromagnetic waves.

In the absorber theory, instead, charged particles are considered both emitters and absorbers, and the emission process is connected with the absorption process as follows: both the retarded waves from emitter to absorber and the advanced waves from absorber to emitter are considered. The sum of the two, however, results in causal waves, although the anti-causal (advanced) solutions are not discarded a priori.

Feynman and Wheeler obtained this result in a very simple and elegant way. They considered all the charged particles (emitters) present in our universe and assumed all of them to generate time-reversal symmetric waves.

It is exactly in this manner that the ‘present space-time light background’ of our perceived light Universe – the eternal present underlying reality – is formed; since we live in a galaxy with a background radiation substrata of present, constant light space-time.

So what GST does, as usual with the awesome mathematical work of Mr. Feynman, is to add ‘consistency, logic’ and its whys (even if he too had a life-long laugh at philosophers of science, with the usual egotist view of the ‘entropy-only’ makers of weapons who think all comes from big-bang bombs):

So this is what we add to the simple mathematical equations to explain action at distance with the absorber theory, just by moving the sign in t₂=-x₁/c to the side of time: -t₂=x₁/c.

If A sends to B at c-speed a ray of light, from the future to the past, it will take t₁=x₁/c time; but A is in the relative past, -t₂. As the ray moves towards the future and A also moves to the future, both meet in the present.

Since the absolute values |t₁|=|-t₂|, both cancel each other – meaning the time t₁ it takes the ray to travel from A to B is cancelled by the fact that the ray was emitted an equal amount of time, -t₂, back in the past; so both sum 0, and the ray arrives in a relative present for both beings, seen in our Universe as infinite, non-local speed.
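The sign-moving argument above can be written as a trivial numeric sketch (illustrative values only):

```python
# Sketch of the retarded/advanced cancellation: the retarded solution
# gives t1 = x1/c, the advanced one t2 = -x1/c; with |t1| == |-t2| the
# two offsets cancel, so the exchange reads as a relative present.
c, x1 = 3.0e8, 3.0e8      # one light-second of separation (illustrative)

t1 = x1 / c               # retarded: arrival one second in the future
t2 = -x1 / c              # advanced: emission one second in the past

assert abs(t1) == abs(t2)
print(t1 + t2)            # 0.0 -> net 'present' for both particles
```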

This is also implicit in Feynman’s sum of both fields, as he writes:

The assumption that the free field is identically zero is the core of the absorber idea. It means that the radiation emitted by each particle is completely absorbed by all other particles present in the universe, as they feed them.

If the incoming wave is absorbed, the result is a zero outgoing field. In the absorber theory the same concept is used for both retarded and advanced waves – the ‘meal’ of the two electrons that emit the photons, which feed on the neutrino-string that guides the pilot wave.

The resulting wave then appears to have a preferred time direction, as Feynman discovered when adding both solutions to Maxwell’s equations:

So at the macroscopic level of huminds it respects causality.

However, this is only an illusion. Indeed, it is always possible to reverse the time direction by simply exchanging the labels ‘emitter’ and ‘absorber’. Thus, the apparently preferred time direction results from the arbitrary labeling (objective view), or from the role of the particle, which will feel it is both in the relative past of its causality-time as emitter and in the relative future as absorber – hence, ultimately, in an eternal present (for us).

Why is that? From a different, psychological perspective of time – essential to understand how the human mind perceives time, regardless of the more objective ∆st perspectives on it – we deal with the logic of actions vs. the passivity of perceivers:

The ‘active part’, the emitter, is always looking at the future of its actions that ‘move forwards’; but the passive perceiver is actually looking at the past, from where it receives the information or action.

And if you have not understood any of it, well, just remember Einstein's quip: the separation between past, present and future is an illusion.

You live indeed in an astoundingly simple Universe in terms of its first substances, which the enormous number of spatial parts makes complex enough for the trees to 'fog' the forest. Science today deals with and cares only to describe each tree, with its parameters of measure, and that is what they call knowledge. We are interested in the absolute, synthetic knowledge provided by the organic whys, far simpler and more intuitive.

∆-perspective: The deepest truths: time reversals.

So why does light travel to the past from A to B? In organic terms, because it feeds entropically on the lower scale of the dark world of neutrinos≈strings≈gravitons.

The ‘formal’ answer is then more complicated than the intuitive vital one. And it brings us a key equation of ¬Æ:

It is called the reversal of time between scales of 5D. And that plugs into the dark world, theme of this post. It was first formulated, as usual, by the forgotten genius, Mr. De Broglie, and it is called the neutrino theory of light. Neutrinos are the ∆-4 singularity, which therefore becomes the entropic food of light. And entropy is literally, not only figuratively, a past motion, as in death. So when light eats neutrinos≈strings (same scattering weak angle≈size; likely the same concept, gravitons≈neutrinos≈strings, explained from three different perspectives=functions=arrows), it follows a flow of past and finally meets the entangled 'other particle' hunting the neutrino together. Now, for two neutrinos to create a ray of light, Jordan proved they just need to be emitted in perpendicular, inverse but parallel form, which is exactly what they do in Maxwell's equations, in absorber theory, in neutrino-light theory, in GST, all over the Universe.

But alas, physicists' wishful thinking decided such precision was not possible for lowly photons (reason why the usual idiot, Mr. Pauli, who also busted Broglie, had, along with Bohr, to be the genius; so they basically stole the material of the humble French aristocrat, who had also discovered the particle-wave duality today ascribed to the power-broker, a banker's son with political, financial and military clout, Mr. Bohr, also busy bullying Einstein. What a bore!)

So the dark world is faster than light; it is made of neutrino-strings of gravitation, and it is the lower ∆-1 relative scale of light. And so we have now properly defined the invisible ∆-4 scale, which in the inverse ∆+4 world corresponds to the dark entropy traveling faster than c between galaxies (the expansion of the Universe), produced by ∆+4 'stringy black holes'.

As usual, the maths of it are all over the place, scattered in physics papers that not even physicists understand (conceptually, that is).

So we shall not play the pedantic role of writing specialised equations here; physicists do know what I talk about and can plug in the articles if they so wish. Our goal is that any university major who has been serious about studying his first-year courses, or even last-year high school physics, can understand the Universe conceptually better than any specialist does today, by adding the main discoveries and the 5 Disomorphisms, written in a simplified language.

Why we do this is obvious: there is an overgrowth of computer-generated maths that makes physicists think they (their machines, in fact) are so smart; exactly inverse to the degradation of conceptual thought by lazy, plugged-into-chips scientists. The real task left to mankind, though, is to upgrade their conceptual logic chip, and that is what we do.



The invisible world of gravitation.

“I had, for a good many years earlier, been of the opinion that the space-time continuum picture of reality would prove inadequate on some small scale.” Penrose on twistors

The pretension of physicists to know a scale in which perception with our light>electronic instruments is ideally (π−3)/π, some 4%, leaving 96% of dark entropy and dark matter, which does NOT have energetic=present, evident effects on us, is a waste of the time and resources of stience, better applied to the ∆≤1 planes of the human Universe.

This said, we can hint at that vast Universe, which would descend down to the limits of theoretical Planck strings, of which the only experimentally sound particle of similar size might be the neutrino; and as such the element of this first unit of reality would be the neutrino, ν or ƒ, as we shall consider it in our units of time: the 'ideal particle' of Planck size/frequency. This ideal particle, akin to a string (obviously ideal, as we have NO experimental evidence beyond the 4% of an ideal 'pi circle' observing an opaque Universe), would be the vast reflection of a hyper-larger universe made of superstrings of cosmic size; and would therefore stretch to the ∆-4,5 potential scales of reality, in somewhat different but similar physical games.




Theory on the invisible Planck scale – String theory

Those are, though, limits for the 'social growth' of those Tiƒ forms in networks, which physicists cannot 'humanly' resolve at the lowest scales… and it would be preposterous to pretend to understand beyond theory the dark-cosmological-Planck scale, which will always remain speculative.

Those scales below the Planck scale of strings have little evidence, and so while it is very interesting mathematical stuff to develop them theoretically (the more so with ¬Æ), it is also true there is not so much interest, if we accept the ∞ of scales in the Universe, according to the most likely truths of the fractal Universe. It is also a fact that there will be variations in the different scales and regions of the cosmos, and parallel universes in each scale with different 'dna-codes', to put it in biological terms. So while the 'eideos', or ideal, would be a 10ˆ11 scale, it is obvious that in this universe scales are not that perfect. Still, it is remarkable to observe the clear scales down from man to atom (through cell and molecule), and then from atom to 'dark matter' (hence with 2 intervals of scales of particles and scales of photons)… This would be the 'neutrino' scale and/or that of the micro-particles of a neutrino wave… Below the neutrino scale, therefore, we can consider one of strings (open and closed; entropic and temporal strings, to call them properly).

I tried back in my 20s (the 90s), during my years studying around American universities, to start an encyclopaedia of stiences to fully reorder all this stuff. But it was impossible to interest the 'dogma' in the new 'paradigm'.

So, as in the case of the cosmos, we are not going to make much of a fuss about it. Neutrinos are an obvious gravitational background, invisible, below the light scale and its c, h constants. And the cosmological constant fits too well within canonical 5D theory, with its 3 space-time topologies according to its variation around 0 or 1 (in the reduced Ω version), to ignore it.

Now, the whole thing should be the labor of professional 'relativist' and 'particle' physicists. I do have a lot of work done during the years I was fighting nuclear physicists in courts for ethical reasons and studying in depth all those models, but this will really be the last part of my work to be 'filled' with equations, when I finish all other chapters.

Humans, though, being what they excel at more than anything else (their ego and the belief they are above heavens and earth), will simply not give up and accept an objective, 'limiting' theory of our role within the Universe. So it is really a matter of beliefs and paper publishing. We can just 'basically' construct a parallel theoretical world in those lower scales, to make sense of them. And that is about all we can do, and what we will sketch in this post regarding neutrinos, dark energy and matter, and the cosmological scale.

So what stientists of the -4th scale (today mostly string and neutrino theorists, and theorists of the strong and weak forces, which act at ∆-4, the strong force, and at the ∆-4><∆-3 transition, the weak force) should do to complete the translation of their disciplines to stience is to beat their brains to mix neutrinos, the real stuff, with string theory: to make string-neutrinos and make them faster; then connect them with the quantum potential field of emergence in our scale, and so many other fancy things, in a background-independent space-time field where strings are closed time strings and open space strings; in a fractal universe where strings follow the same dimensional concept of 2 dimensions of space and 2 of time for each plane, emerging as a point-surface of the larger ∆+1 scale; adapting all its maths; shake it, shake it, and pour it all over again. Good luck: a lot of algebra involved.


The first scales of the Universe, the bottom line of our perception of energy and information (light space-time, or 1st scale) and beyond, in the invisible world of gravitation (the 0, non-perceivable scale) and further into the −∞ probable scales (string theory and beyond) of non-observable min. Spe size, are considered from a 'physical perspective' scales beyond experimental evidence, where theoretical analysis reigns supreme. Yet inasmuch as we consider experimental evidence the beginning of stience, before physics comes metaphysics, which therefore in ¬Æ stiences acquires a new meaning as the 'physical theory' of uncertain, unobservable scales of the Universe.

In that sense metaphysics and cosmology, the study of the ∆≥|4| scales beyond the ∆±3 scale of the atomic galaxy clearly observed from our ∆o human point of view, are closely related. In fact there are cosmological strings as there are smaller ones. But when analysing those ∆+∞ scales we shall consider the more 'logic' concept of a god or superorganism, the relative infinite element for the parts of a superorganism.

And so we reserve the more semantic name of the stience of metaphysics for the study of forces which have a great degree of uncertainty (invisible forces, dark matter and energy, and gravitation), or of the structure of the human mind, where scientific analysis has clearly been lacking perspective.

We also use the term metaphysics consciously, because it implies a certain imaginary analysis, not necessarily true, as is the case with many of the theories physicists develop on metaphysical scales beyond our perception (black hole evaporation, supersymmetry, string theory, etc.)

Consider the simplest form of the Universe: a unit interval, I = (0,1). It is a bit of entropy or motion, distance or space (both perceptions are correct).

This open string, I, can convert itself into a circle of perimeter π, with a diameter of 1, the original string. The circle will be made of 3 diameters turning around, with an aperture of 0.14; and so this entity, with a zero point at x=0.5, which can perceive through its 0.14 aperture some 4% of the external Universe, with 96% of dark space blinded by the 3 |-strings that turn around it, is the simplest time-space organism: the pi point. The pi point, as it turns out, reflects quite well the ultimate web of gravitational space-time over which the electromagnetic space-time we see exists.
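The pi-point arithmetic above is easy to verify; a minimal sketch (note that (π−3)/π is strictly ≈ 4.5%, which the text rounds to 4%):

```python
import math

# A pi-point: a circle of perimeter pi built from 3 unit 'diameter-strings'.
perimeter = math.pi
covered = 3.0                        # the 3 lineal strings forming the membrane
aperture = perimeter - covered       # total opening: pi - 3 ≈ 0.14
fraction_seen = aperture / perimeter # share of the outer world perceived
fraction_dark = 1 - fraction_seen    # the 'dark space' the 0-point misses

print(round(aperture, 2),
      round(fraction_seen * 100, 1),
      round(fraction_dark * 100, 1))
# 0.14 4.5 95.5
```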

The Universe is bidimensional both at small and large scales (Holographic Principle), but its units, non-Euclidean points, reproduce in fractal patterns till emerging as the 4-dimensional social network of space-time, the Universe we live in. This discovery of the most successful model of quantum gravity (causal dynamical triangulation, galactic scales) shows that even the simplest space-time membrane of gravitation is built with the isomorphisms of social networks. But in its deepest meaning, the fact that the Universe at the limits of human perception has ONLY 2 SPATIAL DIMENSIONS, WITH NO TIME-HEIGHT DIMENSION, means we are 'touching' two immortal limits of reality that might be the absolute scalar limits of the Universe, a philosophical question which belongs to the metaphysics of i-logic time.

Ultimately we are made of pi circles. This simple bidimensional form has therefore a lineal |-entropy space configuration and a closed O-configuration, and from the Planck scale these strings of space and time can create our 4D Universe, as causal dynamical triangulation has proved:

The scales of physical space departing from that bidimensional, gravitational, invisible world of strings are studied by astrophysics: the scale of strings of space and time; of quarks and gluons of strong forces; of ¥-light and electrons of electromagnetic forces; the human scale; and the gravitational, cosmic scale.



The limits of knowledge.

A consequence of the reversed entropy of information and energy crossing through 5D planes, which goes beyond the idol-ogical shortcomings of the metal-based cults of western thought, is the limit of any human theory of the Universe that pretends to explain scales beyond those with which our superorganism interacts; namely, theories on the scales that are invisible to our perception, below the limit of the gravitational energy and informative light which we use to exist.

Thus, from the ∆o human point of view, beyond the ∆±4 scales of the local Universe and the interior of atoms (regions with huge uncertainties of perception, but from which some information is still perceived), reality is and will always be uncertain. Of course, physicists will tell you that by making big-bangs on Earth with high-speed accelerators they can observe those scales, at the risk of extinguishing the Earth into a strangelet or black hole and eliminating all knowledge, as an extinct scientist knows nothing. There are also absurd plans for faster-than-light future flights in search of the beyond-the-galaxy regions, which are obviously megalomaniac concepts, as we cannot even go beyond the Oort cloud due to radiation (we would need a kilometric iron shield on our starships).

Thus all analysis of those scales is theoretical, and the best we can do is to 'project', as Mendeleyev did with his table of elements, the isomorphic properties of known scales onto the beyond and below. This is the proposal of 5D metaphysics, fully aware that there will never be experimental proofs of those scales worth mentioning.

Let us then briefly consider the lower scale limit of ∆-4, string theory, from this perspective, trying to accommodate it to the isomorphisms that should define its properties in the metaphysical realm.

In that regard, we accept the correspondence between the lowest atomic and the highest known-unknown scales of galaxies, as there is clear theoretical evidence in the equation of unification of forces (provided later in this post with the metric of the 5th dimension) and in the uncertainties, due to reversed entropy, that we have on those 2 scales: uncertainty of information on the larger galaxy (unknown to experience are the 90% of dark matter in the halo of galaxies and the internal matter of the black hole nuclei), and uncertainty of energy measures on the atom due to motion entropy (with the added unknown of lineal-time physics theory; that is, the proper interpretation of the laws of quantum systems, which however can be solved with the right models of cyclical time, as this blog will show when completed).

This would imply that the same duality exists in the non-perceivable scales of string theory, between the minimal strings and the cosmological strings; when properly modeled within the restrictions of 5D fractal space-time as the lowest and upper membranes of reality.

They would be two scales above and below the local Universe and the quark-gluon systems.

These unknown-unknown, theoretical strings are of course just mathematical functions; hence, without evidence, they should remain linguistic fictions, not different from a novel or a Jewish book of history talking about the creator of the Universe. The consistency of a literary work of art is by no means proof of its truth, as Gödel showed with his incompleteness theorem, studied in depth in 'the future', in section ∆±∞. Since, as Einstein quipped, 'I know when mathematics is truth, but not when it is real'. Still, inasmuch as the best fictions ARE linguistic mirrors of the human reality which do allow knowledge of that reality (so the best novel, War and Peace, does help to understand the Napoleonic wars likely much better than any treatise of history), what string theory should do is to follow 'Tolstoy's method' of limiting to the maximum 'imagination' and 'excess of formalism' in its writing, and stick to the bare bones of 5D isomorphic laws to restrict the mathematical formalism to its likely true minimum. We thus give here just some basic advice to convert string theory into a workable, likely 'War and Peace' model of the unknown scales beyond chromodynamics and local-Universe astrophysics.

So it is a waste of time to deal with string theory, as promising as it might look, especially as long as its dominant forms are Newtonian (using an absolute space-time frame) on the lower scales; and we shall not do it, beyond recommending to the 'quixotic worshippers of mathematical fictions' that they at least model strings with the simple parameters of cyclical time, in the following terms:

  • Closed strings can be modeled as the minimal time cycles of the Universe. With spin 2, which means a closed, 'immortal' time loop, and 10c tachyon velocity, they might be used to model hyper-gravitons as a force of repulsive dark energy between galaxies.
  • Lineal strings can be modeled as the minimal lines and planes of energetic space of the Universe. As their energy is proportional to their length, they are easily converted into units of spatial energy at the Planck scale, and their conversion into relative background space is immediate.
  • Neither of them can be modeled with more than the canonical dimensions of the Universe. This means they must be modeled with the 3 x 3 ± Œ symmetries of the Universe: in space, Se≤ST≤Tiƒ; in time, Sp>ST>Tƒ; and in 5Di planes, œ-1, Œ, Œ+1. So the way to do it is to construct with them 3 x 3 + ∆ dimensional scales to make up a 5D plane. In brief, 3 space and 3 time symmetric dimensions create a whole Œ string, which can then be considered a point-particle to construct a new scale, for a total of 3 x 3 + ∆ dimensions of 5D space-time.
  • And this can be done both with atomic strings, to define the strong force below the level of gluons, and with cosmological graviton-tachyon strings, to model an upper scale above that of galaxies (black holes and dark energy). This duality is of interest as it would put in correspondence top quarks, with their hyper-strong force, and black holes, since all seems to indicate that top quarks, with their parameters of information density (mass) and rotary speed, are the 'atoms of black holes' in 5D physics. And the dualities of cosmic strings, graviton strings and boson strings can do the trick.
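As a reference point for the second bullet, the energy–length proportionality of lineal strings is indeed the standard string-theoretic relation: a stretched string carries an energy equal to its tension times its length, with the tension fixed by the Regge slope α′ (in natural units):

```latex
E = T \, L , \qquad T = \frac{1}{2\pi \alpha'}
```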


Foreword: GisT Dimensions, Planes and isomorphisms.

The Goal of GiST is to study the 10 i-somorphisms (equal sets of laws) that define the similar forms, events and actions of all the entities that exist.

Those ‘i-somorphic’ laws derive from their common nature as ‘Scalar Space-time beings’ made of:

The same '3 topological finite dimensions of space' that configure its '3 organic parts', as all systems have lineal fields/limbs of energy and spherical heads/particles of information, which combine in the toroid bodies/waves that iterate the system.

The same ages of time, dominated sequentially by those topologies: an energetic youth, a reproductive maturity and an informative old age that define the finite duration of its life-death worldcycle.

And the same 'scalar planes' of organic existence, divided into the closer 3 i±1 scales:

– The i-1 existential cellular/atomic plane, the i-ndividual plane and the i+1 social, gravitational plane (physical/biological systems).

– And the larger 'ecosystemic' scales, with which the i-being inter-acts to get its energy, information and force (its i±2 world and energy quanta, its i±3 micro- and macrocosms and its information bits, and its i±4 Universe and its motion forces).

And so, as a result of that common structure of all beings, we can consider anything that exists as a variety of fractal scalar space-time; part of an infinite isomorphic reality defined formally by a 'Fractal Generator' equation, written with the symbols of those 3+3+9 dimensional planes of space-time; which, in its simplest generic form, would write:

[STe≤ STr≥STo]i±4

Representing the 3×2 bidimensional space-time topological organs of all species in space (ST limbs/fields of energy, ST repetitive bodies/waves of repetition, and ST particles/heads of information), which exchange in time flows of energy <, and information >, or repeat them =, through 5 types of actions, across 9 i±4 planes of existence: increasing its acceleration with the help of ∆i-4 forces, its ∆i-3 informative bits, its ∆i-2 energy quanta, its ∆i-1 reproductive seeds, and evolving socially with ∆i parallel clones, to create a bigger social i+1 superorganism.

This is the Universe in a ‘nutshell’ and so we shall study in this 3rd line, each specific species, its dimensions of space and time, its 2-manifold topologies, i±4 actions and social planes, through the analysis of its 10 isomorphic elements, common to all those species.

1st isomorphism: space-time dualities: ∑E=S, ∑t=O

The system's Space and Time components, which are also its Energy and Information, as Space is a fixed vision of the energy quanta that make up a system, and information a still vision of the time cycles that carry it in the frequency and form of those cycles.

So we identify the main elements and planes of existence of a system, and consider its gender varieties (lineal energy=male, cyclical information=female) and Se<i>To symmetries.

2nd Isomorphism: Its 3 organs/networks: $t≤ ∑∏≥§ð

Its 3 organic/network topologies in space… and 3 x 3+i subsystems

3rd Isomorphism: Its ages and evolution: [Max.E x Min. I > E=I > Max. I x Min. O]∑i>i+1

its 3±i ages in time…and its evolutionary ages in the i+1 plane of species.

4th Isomorphism: Its planes of Existence: i±4

The 'metric', Scalar Space-time Generator equation, which describes all its space-time dimensions and isomorphic planes: i±4=SexTo. And it allows us to study its i-4 motion, i-3 information pixels, i-2 energy quanta, i-1 seminal seeds, Si social scales, i+1 superorganism, i+2 world; and that's about it. We do not really care for its i+3 galaxy (-: and beyond. And so, now that we have it almost all said, we define the PsD structure of the being, with its specific generator equation in which the whole is represented, with all the previous data. This generator equation completes our understanding of the being.

5th Isomorphism: Its actions: ∆(Ai-4; Oi-3; Ei-2, Ri-1, Si)

Thus now we can easily describe its main 5 actions derived by those Dimensional components across its i±4 planes of existence: ∆Ai-4 (acceleration of motion), ∆Oi-3 (perception), ∆Ei-2 (feeding), ∆Ri-1 (Repetition), ∆Si (social union)¹.

6th Isomorphism: Social classes: i±1; i<-1

Then we find its internal hierarchical social class structure and its exchanges of energy and information among its i±1 'willing' scales (the cellular/atomic i-1 plane, the i-ndividual, and the i+1 social/cosmic plane), where the being exists and which remain co-invariant through its inter-actions. So we analyse the closest world around it through the perpendicularity and parallelism laws of Non-Euclidean geometry's '3rd postulate' of similarity.

7th Isomorphism. Existential Constants: C(∑S,∏ T,i±4,5A)

Next, we study the system quantitatively, through its Constants of Action, its Social Constants and its Space-time symmetries, all of them determined by the ratios of exchange of energy and information between its PSD elements. This is the most mathematical detailed analysis after the qualitative understanding of all the elements of the being.

8th isomorphism. Creative diversification: 1,2,3,4.

We show now the processes of creation and diversification of a given species. We study its gender dualities and its topological varieties caused by STe,r,o differentiation and the coding 4 numbers of its ‘eros’ actions.

9th isomorphism. Social scales. S10

Finally we consider the last phase of its evolution, which is social for the most advanced species, which transcend into a higher i-plane of existence through S10=(3×3+i)¹° scales.

10th isomorphism: Self: O-Point x ∞ World = Constant mind Mapping

It is left to study the alpha and omega, the 0th and 10th isomorphism: the point of the mind, site of the will, which orders the system internally in its ∑i-1 parts and perceives it as an i-whole, part of its i+1 society. The topological center of a sphere, which can, according to Poincaré's conjecture, represent without deformation its whole world in the infinitesimal fractal, non-Euclidean mind point.

The STP isomorphisms of strings.

String theory explains the smallest physical scales. Further on, it MUST be corrected in its formalism to make it background-independent, as strings ARE space-time. Then they show the key 5D isomorphisms, the ∞ of i-scales and the Galilean Paradox:

1st isomorphism: space-time dualities: ∑E=S, ∑t=O

– Their length is proportional to its energy: ∑E=S (Galilean Px.)

They are 2 motions mutating into each other: To- cycles (closed strings) & Se-lineal motions (open strings). Hence they prove the topological, functional Isomorphism in 2 Dimensions (Cyclical and lineal forms).

2nd Isomorphism: Its 3 organic/networks: STe≤ STr≥STo

There are 3 varieties: O-closed, |-open and its vibrating combinations that reproduce laterally (Str).

3rd Isomorphism: Its ages and evolution: [Max.E x Min. I > E=I > Max. I x Min. O]∑i>i+1

Strings evolve socially, and likely they also evolved from a planar Universe, (max. E) into cyclical forms (Max.O) and vibrating modes (ExO)

4th Isomorphism: Its planes of Existence: i±4

They exist in 3 main i±1 planes: as single i-1 strings, as social ∑i-1=i membranes and as ∏i=i+1=social particles

The 10 or 26 dimensions of strings are the inner scales of a fractal point that, seen from our world, shows only ±1 dimension of lineal or cyclical motion. So 10D fermionic string theories are equivalent to the 3×3+1 Se dimensions of most entities studied in 5D metrics from a spatial p.o.v., and the To = 3³−1 = 26 dimensions of bosonic strings to those of most entities studied from a Time p.o.v., as bosons are To-forms and fermions, Se-forms.
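The dimensional bookkeeping in that paragraph reduces to two lines of arithmetic; a minimal check, assuming the 3×3+1 reading for the fermionic count and 3³−1 for the bosonic one:

```python
# Superstring (fermionic) critical dimension, read as 3 space x 3 time + 1 scale:
fermionic_dims = 3 * 3 + 1   # = 10, matching 10D superstring theory

# Bosonic critical dimension, read as 3^3 inner scales minus the emergent whole:
bosonic_dims = 3 ** 3 - 1    # = 26, matching 26D bosonic string theory

print(fermionic_dims, bosonic_dims)
# 10 26
```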

Strings prove the infinity of scales, as there are hypothetical cosmic strings of galactic size, self-similar to them…

5th Isomorphism: Its 5 actions : ∆(Ai-4; Oi-3;Ei-2, Ri-1, Si)

They are actions vibrating in 4±i modes:

∆Ai-4: they move lineally.

∆Oi-3: they form closed cycles.

∆Ei-2: they form lines of energy.

∆Ri-1: they reproduce laterally, ∆ExO, creating planes or cyclical tubes in social motions.

∆Si: they create i+1 structures: bosons & quarks.


6th Isomorphism: Social classes: i±1; i<-1

Particles made of membranes (D-branes), themselves made of single strings attached to the membrane as energy cilia are to a cell, form a ternary system.

7th Isomorphism. Existential Constants: C(∑S,∏ T,i±4,5A)

Strings are mathematical objects, defined by basic mathematical constants and operators.

8th isomorphism. Creative diversification: 1,2,3.

Its gender duality is O & |-strings; its diversification happens as both combine into all kinds of ternary curves.

9th isomorphism. Social scales. S10

They gather together creating enormous string bundles that originate relative i+1 membranes and i+2 particles.

10th isomorphism: Self:  O-Point x ∞ World = Constant mind Mapping

A simple pi circle can already be a 0-point. So 3 lineal strings and a cyclical one can form an organic system: a π-cycle acting as an organic membrane, made of 3 unitary 'open strings' with 3 × 0.05 ≈ 0.15 apertures for a total π-perimeter. This π-membrane receives external, imploding vibrations of energy, which it transmits to its internal space through its 3 apertures as a broken Cantor dust: 3 bits from the external world that finally stop as a still image in its center, creating the zero point; thus forming an organic topology with 3 canonical parts: the external membrane; the toroid whose cycles transform energy into form; and the static zero point that integrates the flows into a mind.

This is its 'life-informing' arrow. Or, in the inverse direction, the zero point or membrane can grow, finally expanding in a big-bang vibration, dying away. This simplest structure already has all the isomorphisms of the 5th dimension, and it is a model to study many self-similar properties of the more complex Universe. An example: how much external space does the 0-point see? If it has ≈ 0.15 apertures in a π-perimeter, it sees 0.15/π of outer reality, ≈ 4%, leaving 96% of the Universe out of its perception as dark space. It is the same proportion of dark matter & energy our ¥-eye 0-point does not perceive of the cosmic, gravitational, lower scale, solving the mystery of dark matter.


standard model

In the graph we can see how this dark gluon-top quark soup (toplet liquid) of a gas-9 reaction (black hole quasar explosion) neatly reclassifies the 3 families of increasing mass in the Universe:

In the graph, a first look at the reordering of the strange and top quark triangles of mass, to define the symmetry between the ∆-1 and ∆+1 scales of ‘atoms and galaxies’, in terms of its ∆-1 components (quarks).

To fully understand the previous graph, you should consult the next post on atoms, where quarks fully correspond. We have already treated, in the introduction to this post, the analysis of the ternary symmetry between ∆-1 quarks and ∆+1 parts of the galaxy:

∆+1: top quarks->To: Galactic black holes, ∆-strange quark: Sæ: galactic halo, ∆-1: Ud quarks (our matter): ST-galactic stars.

Now, in this system, it is necessary to understand the role of the Higgs field and top quarks, which make up the outside-the-galaxy dark matter and dark energy.


Now it is fundamental to understand that gravitons do NOT exist in the sense given by physicists, as the attractive gauging particle between physical masses; but gravitomagnetic transversal waves of neutrinos do exist, and are essential to the structure of galaxies and solar systems.

THIS IS THE SOLUTION to the conundrum put up before on the ‘neutrino 1/2 spin’ vs. the graviton 2 spin. Simple: neutrinos are the equivalent in the next cosmic scale of the 5th dimension of an electromagnetic repulsive wave of ‘dark energy-repulsive gravitation’.

The neutrino theory of light, like the pilot-wave theory and the particle-wave duality, is the work of De Broglie, brutally assaulted by the Bohr gang, who took his work as theirs and misinterpreted it. A neutrino is the 'emerging' flow of entropic communication between two atoms, or atom-galaxies, which either goes the entropic way, dissolving-stretching space-time into action at distance, and disappears without trace of information: V=s/t=s/0=∞.

Or it is not in an open 'domain' but constrained at its two limiting ends by the two atoms/galaxies it communicates; and then this condition (to be guided and controlled by two point-particles at its ends) is the only condition needed for two inverse neutrinos with inverse motion, that is, one emitted by particle A and the other by particle B in a communicating act:

Fermion < neutrino+Neutrino>Fermion

to create a light boson that 'warps' pure gravitational action-at-a-distance space-time, giving it the form, the information, to exist. So neutrinos do form light-beams and do dissolve them. And that is another beautiful thought of De Broglie on the path of truth in science: economicity, simplicity and causality.

As we have shown ad nauseam the symmetry of scale between atoms, with positive quarks and negative electrons, and galaxies, with positive black hole top quark stars and negative strangelet halos, it follows now that dark energy waves will be just the equivalent, created by dark energy flows which acquire form between galaxies. The maths of it are more or less complex, but as we REPEAT ad nauseam, the beauty of the fractal, organic, vital, topological=mathematical universe is that we can explain it much more simply and intuitively, using the concept of symmetries between scales and organic properties, and the laws of the scientific method of truth (economicity, simplicity, etc.), as long as in the background we use sound mathematical theories (as the neutrino theory of light is, provided the neutrinos are indeed, as Jordan proved, exactly inverse and constrained at their ends by the particles that use them to entangle and communicate). So we do not need you to be a math-addict; just trust me: I read physics, love maths and only use sound theories of the Universe, unlike, we must say, many of our fantaphysicists these days.

So there must be G-waves of dark entropy and faster-than-c dark gravitation in the upper scale, between galaxy-atoms; and G-waves constrained to light speed by the galaxy's inner structure between stars, planets and black holes.


And thus with those pre-conclusions we can now deal with gravitational waves, which we call in 5D G-waves, likely made of neutrinos, either constrained and able to reproduce light, or unconstrained and escaping constantly into the dark energy lower scale of action at distance and null information for our electronic perceivers. Consider for example an earlier use I made of them to calculate the Titius law of distances between planets:


One of the hypotheses of 5D physics is the fundamental role that neutrinos must play, by sheer rational evidence, as the gravitational quanta of the transversal gravitational wave structure of the universe. As it is simple enough, we can bring it here just for fun.

The theoretical difficulty is the spin-2 prediction of quantum gravitational theories, based on two assumptions not completely held in 5D physics: the conservation of spin (which in 5D physics might be transformed into upper or lower forms of rotational momentum, conserved only through 3 planes of the 5th dimension, as they become inverted in their emergence in upper planes), which allows a model of spin-2 neutrino-pair gravitons; or, inversely, the assumption that gravitons do NOT have spin 2, but rather are made of 4 neutrinos, lineal strings of one dimension, which become the 4 components of a field that forms a complex doublet of spin ½, with the observed peculiarity that both neutrinos and antineutrinos have longitudinal polarization of positive spin.

The mathematics of the model is somewhat complex but essentially means that neutrinos ARE very important, almost as multi-faceted as light space-time is; it couldn't be otherwise, since they are the 2nd background space-time network of the galactic universe. So as nervous and blood systems have multiple roles in your body, neutrinos and photons share all the network roles of the galaxy.

They therefore have both boson and fermion 'nature', as does the weak force they mediate.

This means that we can put neutrinos and antineutrinos together in 4-doublets, as if they were bosons. Then:

– 2 gravitational tachyon neutrino strings, when put together in pairs of opposite direction form the up and down, particle and antiparticle sides of the magnetic light wave-field.

– When exchanged between particles, they set the fixed distance in gravitational space that allows the sharing of information between the particles at fixed c-speed.

– When colliding with neutrons they catalyze the beta decay, as well as other weak force transformations, in a role similar to the one the Higgs boson mediates for the faster, heavier quark triplet topped by the top quark.

– When emitted massively and constantly by all types of cosmic bodies, they are the origin of the 3 'G' 'giant electromagnetic waves' of the gravitational scale, which form the different transversal wavelengths that order in ternary symmetries the structure of solar systems and galaxies.

Those transversal waves are only one of the many roles of the Neutrino 'fantaphysics' (: I confess neutrino gravitational string physics is the only speculative part of 5D Physics on which, for lack of evidence, I have not yet put my signature in blood), which we will discuss here.

Essentially there are 3 neutrinos which constantly oscillate as they abandon the cosmic bodies that constantly reproduce them, in proportions often higher than the photon radiation, forming 3 basic types of transversal gravitational waves, whose combinations allow us to model galaxies, their spiral arms and the rotary orbits of stars in the wells of those waves. The same neutrinos structure the laws of planetary waves, and so we can recognize clearly in ternary symmetries a 0.33 AU short wave of neutrinos of higher energy that crosses the planets' crystal centers, provoking the well-known Fe-Co-Ni chain of reactions that sets on 'fire' the internal energy engine of planets. A longer 5 A.U. wave also clearly exists, as it places on its nodes the main 'Jupiter-like planets'.

And finally there should be an even longer neutrino wave connecting stars among themselves.

In the next graph we see the 3 oscillatory neutrino waves observed together coming out of the sun:

[Missing figure: the 3 oscillatory neutrino waves observed coming out of the sun.]

Gravitational, transversal, fractal waves shape the structure of galaxies and solar systems, transferring form and Entropy between cosmological bodies, in a self-similar process to the transference of information and Entropy between atoms through electromagnetic waves. In the graph, Titius Law of distances between planets reflects their position in the nodal points of those transversal gravitational waves. In the core of planets, there could be a crystalline or super-fluid zone where those flows of dark, gravitational Entropy are processed, causing flows of heat and matter that make planets ‘grow’.

Below we observe the oscillations of neutrino waves.

Now the 5D model of astrophysics implies that there are also transversal repulsive gravitational waves, which account for many of the structures of the Universe, starting with the expansion perceived in intergalactic space-time. This is just the expansion produced by any lineal electromagnetic wave of any range, now on the cosmic scale. Moreover, black holes produce two types of wave-shift that compensate each other and explain why the total space-time of the Universe remains constant:

  • When a wave of light enters the black hole, it 'blue-shifts', imploding space into higher frequency/density of energy-mass. But this effect is obviously NOT seen from far away, as it is the inner production of mass within the galaxy.
  • When the wave leaves a black hole region, it redshifts, which explains the initial redshifting of light as it leaves the galaxy towards us.
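
The magnitude of that redshift, for light leaving the vicinity of a black hole, can be checked with the standard Schwarzschild formula. A minimal sketch in Python (standard general relativity, not specific to the 5D model; the 4-million-solar-mass figure is an illustrative assumption, roughly Sagittarius A*):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    """r_s = 2GM/c^2: the event horizon of a non-rotating black hole."""
    return 2 * G * mass_kg / C**2

def gravitational_redshift(mass_kg, r_emit):
    """z for light emitted at radius r_emit and observed far away:
    1 + z = 1 / sqrt(1 - r_s / r_emit)."""
    rs = schwarzschild_radius(mass_kg)
    if r_emit <= rs:
        raise ValueError("emission point inside the event horizon")
    return 1 / math.sqrt(1 - rs / r_emit) - 1

m = 4e6 * M_SUN  # assumed Sagittarius A*-like mass, for illustration only
rs = schwarzschild_radius(m)
for factor in (1.5, 2, 10, 100):
    z = gravitational_redshift(m, factor * rs)
    print(f"emitted at {factor:>5} r_s -> z = {z:.4f}")
```

The redshift diverges as the emission point approaches the horizon and fades to nothing far from it, which is the standard behaviour the paragraph above appeals to.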

So the hypothesis is clear.

Somehow neutrino waves of the hypo-universe, which have as a lower limit the speed of light and an unknown upper limit as tachyons, theoretically, if they are the field force between galaxies up to 100c, gather in ginormous numbers, allowing the stretching of the ∆+4 symmetric hyper-Universe.

According to this ∆-4,5 reality above us, we can then consider that the stretching reveals either gravitation as having a 'slower' speed in the 'glass of light' of the galaxy, as light has slower speeds in the 'glasses' of solids, with neutrinos becoming the stretching particle of the ∆-4 intergalactic field up to 100 c-distances/speeds; or neutrinos becoming something 'else' (dark entropy) at the halo of strangelets (Witten's dark matter hypothesis, candidate to the ternary symmetry between quark masses and regions of the galaxy: top quark stars = 'DNA' black holes, strangelets = 'protein' MACHOs, ud-matter = internal star 'mitochondria').

Titius law. A local proof of transversal gravitational waves.

While transversal gravitational waves cannot be detected directly, we shall bring here a couple of proofs, at local and cosmic level. At local level it is fairly easy to create a model of the Titius law (the exact positions of the planets) using 3 waves. The planets will then be at the nodes of those wave frequencies, as in quantum physics the electrons ARE at the nodes of the waves that restrict their positions (multiples of h-frequencies). In the next graph we see the short, high-energy wave that locates at its gravito-magnetic nodes (as the magnetic number locates the orbits of electrons) the orbits of ferromagnetic planets. A longer ±5 AU wave locates the outer giant gaseous planets:

In the graph we see one of those waves, which between galaxies, with no light warping it to create light space-time at the c-limit, SHOULD move on a new 'decametric scale' at 10z. As it happens, what we find on this cosmic scale is the outer 'region' of the galactic atom, ruled by transversal, expansive, intergalactic giant gravitational waves of matter gushing out of the black holes that produce those waves at 10z = c speed.

Thus their existence within the galaxy, albeit reduced to c-speeds, unlike outside the galaxy where they are spotted routinely at z=10c, would define a new dual game of implosive and attractive gravitation similar to the scalar atomic game. Many laws can be solved and deduced from this parallelism. Below, the Titius law of planetary distances as caused by G-waves.

This is a necessary introduction to the most fascinating finding of 5D physics, the unification of charges and masses, as the two limiting scales of the Universe perceived by man. We shall expose this finding as the Linguistic Method prescribes.

First with a conceptual understanding of the Scientific method of truth (simplicity, correspondence), then from the organic perspective (topological and scalar structure of the Universe and its 3 relative symmetries of topological parts and scales)

And only at the end in its quantifying methods, since T.Œ works with a causality inverse to a mere use of how-mathematics.

We use constantly self-similarities, based in the 3 Isomorphisms of invariance of the Universe, scalar, formal and motion invariance and the ternary differentiations in time (Entropy, reproduction, information) and its symmetric function in space (planar, spherical membranes; toroid bodies and hyperbolic centers).

With those simple isomorphisms we can describe the galaxy as we did with the atom. Yet self-similarity is not equality, which means that we cannot use the exact formalism of quantum physics, but use those self-similarities for gravitational systems we do not perceive except through their secondary effects.

This is the case of gravitational waves, which are self-similar to electromagnetic waves. They organize the structure of stars and galaxies, as electromagnetic waves organize the orbits of electrons. Both respond to the same morphological equations that relate 2 particles through a lineal force field defined by the ratio between the informative density of masses or charges and their distance.

Yet even if form remains invariant at scale, as it is an essential topological property that defines the why of the Universe, the metric space changes as the space-time ratios/constants that define the size, speed, frequency and range of those waves change.

We observed a self-similar change when studying the cyclical informative vortices of both scales (the G-constant of Newton's gravitational vortices and Coulomb's equation of an electronic vortex), unified by their invariance of topological form and scale.
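
To make that parallelism concrete: both force laws share the same 1/r² topology and differ only in the coupling constant of each scale. A minimal sketch in Python (standard textbook constants for an electron-proton pair at the Bohr radius; it illustrates the gap any unification equation must bridge, not the unification equation itself):

```python
# Newton's and Coulomb's laws share the same 1/r^2 form;
# only the coupling constant of each scale changes.
G = 6.674e-11                     # N m^2 kg^-2
K = 8.988e9                       # N m^2 C^-2 (Coulomb constant)
M_P, M_E = 1.673e-27, 9.109e-31   # proton, electron mass (kg)
Q = 1.602e-19                     # elementary charge (C)
R = 5.29e-11                      # Bohr radius (m)

def newton(m1, m2, r):
    """Gravitational attraction, F = G m1 m2 / r^2."""
    return G * m1 * m2 / r**2

def coulomb(q1, q2, r):
    """Electrostatic attraction, F = k q1 q2 / r^2."""
    return K * q1 * q2 / r**2

f_g = newton(M_P, M_E, R)
f_e = coulomb(Q, Q, R)
print(f"gravitational: {f_g:.3e} N")
print(f"electric:      {f_e:.3e} N")
print(f"ratio F_e/F_g: {f_e / f_g:.3e}")  # ~2.3e39 for this pair
```

The two functions are structurally identical; only the constant and the 'charge' of each scale differ, which is the invariance of form the paragraph above refers to.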

Thus galaxies and solar systems show a gravitational, morphological, spatial structure similar to that of an electromagnetic atom in the cosmological scale, a fact that Einstein predicted, establishing 2 kinds of gravitational waves, parallel to the 2 types of electromagnetic fields we know:

– Static waves that create the gravitational bi-dimensional fields over which galaxies form.

– Discontinuous, transversal, quantized waves, which shape the orbits of stars and galaxies, in the same manner photons control the orbital distance of electrons in atoms (l numbers).

Thus, those gravitational waves should have the same functions in galaxies and solar systems that electromagnetic waves have in the world of atoms, explaining cosmological structures and becoming by self-similarity with electromagnetic waves, the fundamental force of interaction between celestial bodies.

We know that the gravitational activity of black holes sets up star orbits and probably influences their evolution, growth and formation, determining the basic properties of magnetic fields, ecliptic orbits and distances between stars in a galaxy and planets in a solar system. So even if gravitational waves are invisible, using their morphological self-similarity with light waves, the equations of Einstein's relativity and the indirect proofs provided by the orbital distances and rotational fields of stars and planets, we can explain many 'whys' of the structure of those celestial bodies:

– Astronomers have always wondered what rules the distances between the planets of the solar system. The existence of regularities in the distribution of planets in the Solar System was recognized long ago. This was Kepler’s main motivation in his search for planetary isomorphisms.

The Titius-Bode law (rₙ = 0.4 + 0.3 × 2ⁿ) was the first empirical attempt at describing these regularities, and was followed by several other proposals. The discovery of similar structures in the distribution of the satellites of the great planets led to a revival of interest in such studies, and to the hope that a physical mechanism was indeed at work. Now we can add a topological why to the how and when of metric space measure:

Those planets sit at the nodes between gravitational waves of different frequency/amplitude and the solar system's orbital plane in which planets feed, 'deforming' space-time as they follow their static gravitational orbits; as electrons sit at the nodes of their quantum waves, fine-tuned by the secondary levels they access according to the strength of the electromagnetic waves they exchange with their environment. For the same reasons, stars should be at the nodes of gravitational waves caused by galactic black holes.

In the graph we draw the 2 fundamental wavelengths that could explain the distances between planets: a high-frequency, short gravitational wave of 0.33 AU could explain the positions of ferromagnetic inner planets on its nodes, while 2 low-frequency long wavelengths at 5 and 10 AU could explain the positions of the bigger, lighter gaseous planets. Since Jupiter is located at 5 AU, Saturn at 10 AU, Uranus at 20 AU, Neptune at 35 AU, Pluto at 40 AU; and as I predicted a decade ago, we have found a new planet, which I then called Chronos, 'the last of the titans', at 100 AU, at the limit of the solar system, since 'renamed' Selma (-;.
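
The empirical law quoted above is easy to check against those distances. A quick sketch in Python (rounded textbook semi-major axes; the index convention is the classical Titius-Bode one, not the 3-wave node model):

```python
# Titius-Bode law: r_n = 0.4 + 0.3 * 2**n (in AU), with Mercury as the
# special case r = 0.4. Compare with rounded observed semi-major axes.
observed = {                       # AU, rounded textbook values
    "Mercury": 0.39, "Venus": 0.72, "Earth": 1.00, "Mars": 1.52,
    "Ceres": 2.77, "Jupiter": 5.20, "Saturn": 9.58, "Uranus": 19.2,
    "Neptune": 30.1, "Pluto": 39.5,
}

def titius_bode(n):
    """r_n = 0.4 + 0.3 * 2**n AU; n=None encodes Mercury's special case."""
    return 0.4 if n is None else 0.4 + 0.3 * 2**n

indices = [None, 0, 1, 2, 3, 4, 5, 6, 7, 8]  # Neptune famously breaks the law
for (name, r_obs), n in zip(observed.items(), indices):
    r_pred = titius_bode(n)
    print(f"{name:<8} predicted {r_pred:6.1f} AU   observed {r_obs:6.2f} AU")
```

The fit is striking out to Uranus and fails at Neptune (predicted 38.8 AU, a slot Pluto roughly occupies), which is why the law is treated as an empirical regularity in search of a mechanism.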

– G-waves explain why planets have ecliptic orbits with an inclination of their axes, which is a natural orientation if they are receiving curved G-waves at a certain angle through their polar axis. In that regard, the rings of gaseous planets at the points of maximal activity of those waves (Jupiter and Saturn) and the spiral vortices of galaxies could act as 'antennae' for those waves at star and galactic level.

– Those waves might cause, as all lineal movements do, a cyclical vortex around them, originating the condensation of planetary nebulae. While in galaxies their wave structure seems to originate the different densities of stars in their nodal zones.

– Planets suffer catastrophic changes in their magnetic fields, probably produced by changes in the directionality of those waves, emitted through the tropical dark spots of the sun.

There are advanced mathematical models of gravito-magnetism that have unified both types of waves, departing from Einstein's work. Thus energetic 'gravito-magnetic' waves might cause a change in planetary magnetic fields, as a magnetic field changes the spin of an atom, which aligns itself with the field. For example, Uranus is tumbled and has lost most of its magnetic field: perhaps it was knocked out and relocated by a G-wave.

Lineal magnetism is in fact, in the complex physics of multiple planes of space-time, the intermediate SæxTo force that 'transcends' from the gravitational to the electromagnetic scale: for example, electromagnetic light or ferromagnetic atoms like iron should absorb gravitational Entropy through their magnetic fields.

– Solar spots are the probable source of those waves. Yet their origin might be the central core of the star or the activity of the central black hole, whose G-waves might be absorbed and re-emitted by the star. We cannot perceive G-waves directly; but magnetic storms, solar winds and the highly energetic electromagnetic flows and particles that come from the sun's spots might be their secondary effects. In the same way, we only perceive indirectly the waves of dark Entropy emitted by black holes that position the stars of spiral galaxies, by observing the mass and radiation dragged by those waves.

– Those catastrophes might cause the climatic changes that modulate the evolution of life on Earth, since we already know that the activity of sun spots affects the temperature of the Earth.

– G-waves could structure the galaxy and its stars in the way electromagnetic impulses structure a crystalline atomic network, ordering the distance between its molecules: electromagnetic waves also feed those crystal webs with Entropy and information. For example, electromagnetic waves cause the vibration of quartz crystals, which absorb Entropy from light and vibrate, emitting 'maser-like', highly ordered discharges of electromagnetism. We observe similar maser beams in neutron stars, called pulsars for that reason.

Recap. Gravitational waves produced by black holes control the location of stars and planets, and its spin/orientation through smaller gravito-magnetic waves.

Organic Patterns in the Galaxy. The why of G-waves.

The closest homology of the 2 dual networks of the galaxy is with an atom in which the central nucleon with max. density of gravitational information and the external electronic membrane interact in a middle space-time vacuum through gravitational forces and electromagnetic photons.

Another self-similarity between scales of multiple space-times might be established in complex analysis between the galaxy and a simple 'cellular' organism, which introduces elements of complex biology into astronomy (obviously more difficult to accept from a mechanist perspective).

Following the cellular or physiological homologies, the network of dark, informative matter and gravitational Entropy, connected to black holes, surrounds and controls the stars' electromagnetic Entropy. We know that it was formed first and then guided the creation of electromagnetic Entropy, so we can observe it indirectly and deduce its form from the highly quantized shape of the filaments of light-galaxies that formed around dark matter (right graph). Thus dark matter acts in a similar way to the RNA that shapes and controls the Golgi membranes of the cell, or the nervous system that guides and builds the morphology of the body; while the network of stars and electromagnetic Entropy, the slower Entropy that produces the substances of galaxies, surrounds those strands of dark matter. In the cell's homology, the ribosomes that create most products of the cell are pegged to those membranes.

We shall briefly consider here 2 of those controversial hypotheses: the possibility that black holes perceive gravitation, and the chains of causality between the different scales of the Universe, self-similar to the chains of causality between cells and bodies.

– The most controversial element of a cosmological model based on G-waves is the existence of gravitational information that allows strange stars, neutron stars and black holes (and maybe, in the future, evolved planets such as the Earth, through their machine systems) to perceive and move at will within a static field of gravitation.

On the Earth animals use light as information and dominate plants, which use it as Entropy. The hypothesis of complex cosmology is that stars are ‘gravitational plants’ that merely feed and curve gravitational space-time, while Worm Holes are ‘gravitational animals’, which are able to process gravitation as information and control and shape with gravitational waves the form of galaxies, their territorial space-time. They are in that sense extremely simple plants and animals. A more proper comparison would be with a cell, where the DNA molecules are the Worm Holes, the informative masses of physical space; and the mitochondria that produce energetic substances, the stars.

Thus frozen, quark stars could be ‘gravitational perceivers’ in the cosmological realm, as animals are light perceivers in the Earth’s crust and DNA perceives van der Waals forces in the cellular realm.

On the other hand, stars would be plant-like, floating in the sea of gravitation, using it as the Entropy of their motion and feeding on interstellar gas, as plankton does floating in the sea of water.

Do black holes perceive gravitation as complex animals perceive light, instinctively or mechanically, as DNA perceives the forces of the cell? They probably gauge gravitation in very simple ‘forms’, as a cellular DNA-system, much simpler than the brain of animals, perceives its territorial cell.

That is the supremacy of man in a relative universe where size is less important than form: while all systems process information, man is a summit of form and hence one of the most conscious species. Yet black holes have enough quark complexity to act/react to informative flows, as they seek Entropy to feed on (our electronic Entropy). This hypothesis has experimental proofs, since pulsars and black holes emit gravitational waves, and we have observed many black holes following erratic paths through the galaxy, which defy the tidal, regular orbits of stars.

In that sense, multiple space times theory considers that in the same way light waves are the Entropy of plants and the information of animals, gravitational waves move stars and inform black holes, the most evolved celestial bodies, which emit or feed on the Entropy and information provided by those gravitational waves.

Further on, gravitational waves emitted by black holes might reproduce matter on the cores of stars and planets:

If those gravitational waves degenerate easily into quark matter, as the jets of quasars show, they could also become converted into matter in the superfluid cores of stars and the crystalline centers of planets, in a process inverse to the Lorentz transformations; since tachyons acquire more mass when they slow down, trapped by those superfluid and crystalline vortices.

Thus, as light is converted into Entropy in plants, stars and planets will create their ‘amino acids’, quark matter, in those processes. Thus, dark Entropy, tachyon strings would decelerate into c-speed gluons that would reproduce quarks; (as electromagnetic waves become photons and electrons).

There is also a mechanism by which those planets and stars 'jump' or change their spin position under the effect of gravitational waves, as electrons do under a magnetic field: the cores of stars are made of superfluid helium and the cores of planets of iron crystals, which are the only atoms that can absorb the Entropy of a gravito-magnetic field to change their motion.

Finally, all those events will have a ‘why’ in the 4 arrows of the organic Universe; since they would represent the feeding, matter reproduction, informative perception and social location within the galactic or planetary network of celestial bodies, equivalent to the 4 whys of the 4 quantum numbers, described before.

Recap. Gravitational waves accomplish the 6 arrows of time for celestial bodies, as the 4 quantum numbers describe those arrows for electrons.

Intermediate Space. Gravitational Waves and Solar Systems.

Now gravitons are massless just because gravity is considered to be a force with infinite range, but this is not the case: the gravitation we study works within galaxies. Outside galaxies there should be the dark energy field, which is repulsive. So gravitons do not need to be massless in 5D; they might have the neutrino mass. It remains therefore to accommodate somehow the rank-2 tensor of gravitation, which determined spin-2 gravitons, and convert them into spin ½, as the neutrino's ½ spin seems to be proved. I will, in the 4th line, charged with mathematical physics, venture my solution, but at this stage those are questions which require far more serious maths than this 2nd line can hold.

We thus will consider the following proposition: gravitons are spin ½, do not have infinite range (nothing does in 5D), 'are' neutrinos, create the gravitational waves that communicate galaxies, maintain the structure of galaxies as single organisms, form the second background of the galaxy along with the light background (representing the duality of any organic system), and originate, when used as strings that connect two atoms in opposite directions, a photon of light.

As I said the mathematical analysis goes to the 4th line, but it cannot be otherwise within the organic structure of the galaxy.

The reader in that sense must understand how T.Œ works: all 4 possible analyses of an event (the mathematical, most detailed one; the mental, action-oriented one, from the zero point of view that survives and automatically apperceives the Universe, seeking energy and information, iteration and social evolution; the organic, spatial one, of 3 space-time elements communicated through the rules of ¬Æ geometry; and the temporal, causal one, of the 3±∆ planes of existence) ARE necessary to SATISFY and qualify a system, event or form as truth.

And vice versa, each of them can properly define an event. And so we give 0.25 of truth to a theory with mathematical consistency, 0.25 when the theory satisfies the organic paradigm, another 0.25 when it defines the 5 actions of a system and another 0.25 when it defines the 3±∆ ages of the system.

This of course will be contested for decades if not centuries, till T.Œ is accepted, if ever, by mankind. It does not matter to me. The point here is that at this stage of 'inflation' in mathematical physics one can easily find a theory for everything. So the way this post is constructed is simple, unlike that of the classic scholars of physics, who have no intention of defining their systems organically, temporally and from the perspective of actions, but merely work out equations of physics, twist them (today not even coherence is required, especially when dealing with the internal space of black holes, the external dark energy and any other region not experimentally evident), and then seek with expensive machines to find parallel Universes, muinos or whatever.

T.Œ does not work like that. What I do is start with the organic structure of a system, then define it in space as a simultaneous organic system, then in time through its 3±∆ ages, and then, normally as a third stage, work out the mathematics of it, because they are closely related, believe it or not, to the 4th element, the space-time actions of each 'mental point of view'. So basically we work in 2 blocks: the spatial and temporal symmetry on one side, and the actions of space-time of the 0-point of view and its mathematical equations on the other.

In this 2nd line I will work out the space-time symmetry and refer to the mathematical constants and space-time actions very briefly. We are making an exception in this introduction, for the sake of presenting to the specialist the full concept of T.Œ, by considering in more detail the Neutrino 'affair'.

During the 1930s there was great interest in the neutrino theory of light, and Pascual Jordan, Max Born and others worked on it. We are talking here of 2 of the 6 foremost fathers of quantum theory, so it is not to be taken lightly (Jordan worked out Heisenberg's matrices, as the better mathematician, and Born the probabilistic theory of which Bohr only gave the idea). De Broglie and Fermi also worked on it, but there were some obvious problems with it. The one most often mentioned, that a photon is not a composite particle, is silly: to start with, it is composed of a myriad of h-planktons, and if we got to see it in detail it would grow, as any fractal point, to cosmic dimensions, to become a quasi-star of the string world.

Now this was first understood by Dirac, with many of the ambivalences of quantum theory, only made reasonable with the 3 different perspectives of 5D physics (scale, space or time), as any point seen from an upper scale is a fractal of populations (space view) or probabilities (time view). So we should choose the description that better fits a phenomenon, as it relates the OBSERVER'S experiment and the OBSERVABLE through one of those different perspectives, all of which add truth to the analysis (but only all together describe the multiplicity of the universe):

Some time before the discovery of quantum mechanics people realized that the connection between light waves and photons must be of a statistical character. What they did not clearly realize, however, was that the wave function gives information about the probability of one photon being in a particular place and not the probable number of photons in that place. The importance of the distinction can be made clear in the following way. Suppose we have a beam of light consisting of a large number of photons split up into two components of equal intensity. On the assumption that the beam is connected with the probable number of photons in it, we should have half the total number going into each component.

If the two components are now made to interfere, we should require a photon in one component to be able to interfere with one in the other. Sometimes these two photons would have to annihilate one another and other times they would have to produce four photons. This would contradict the conservation of energy. The new theory, which connects the wave function with probabilities for one photon gets over the difficulty by making each photon go partly into each of the two components. Each photon then interferes only with itself. Interference between two different photons never occurs.
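
Dirac's point, that each photon interferes only with itself, can be sketched numerically: the single-photon amplitude splits between the two components, and the detection probability depends on their relative phase while never exceeding 1. A standard two-path (Mach-Zehnder-style) calculation, not specific to this post's model:

```python
import cmath
import math

def detection_probability(phi):
    """Single photon through two 50/50 splitters, one arm adding phase phi.
    Output amplitude at one port: (1 + e^{i phi}) / 2, so P = cos^2(phi/2).
    Probability redistributes with phase; it never doubles, so energy
    is conserved and no two-photon annihilation/creation is needed."""
    amplitude = (1 + cmath.exp(1j * phi)) / 2
    return abs(amplitude) ** 2

for phi in (0, math.pi / 2, math.pi):
    print(f"phase {phi:.2f} rad -> P = {detection_probability(phi):.3f}")
```

Constructive interference (phase 0) gives P = 1, destructive (phase π) gives P = 0: the single photon's probability moves between outputs instead of photons annihilating or multiplying, which is exactly the consistency argument of the passage above.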

So indeed, photons do split into their h-planktons, as electrons split into their photon cells, and so on. It is just the point of view we adopt. From our ∆o perspective, obviously both electrons and photons are point particles, but also waves smearing their components. The same goes for neutrinos: we see a neutrino ∆-3 beta decay as a single particle-point, but we do see a neutron star 'beta decay' at ∆+1 as 10^59 neutrinos rushing out.

So how can we manage all that? Through parameters which are 'integrals' of space or time quanta, such as energy. So we find that the proportion of neutrino/quark mass-energy in beta decay is the same as the proportion of neutrino/star mass-energy in novae, about 10%, which incidentally is once more the decametric scale of 3 x 3 + 0 elements of 5D theory.

(5D, strictly speaking, would be a 10-dimensional theory of 3 spatial components, 3 time ages, 3 scales of size-time speed and a 0-point pegging it all together; but so as not to scare the 'primitives' too much, given the idolatry of Einstein and his 4D concept, I decided to call it 5D and to call the 3 dimensions of time 3 ages, whereas 4D is the present age, and so on.

But if there were some respect for it, of course I would use the 10-dimensional analysis, which is much better for the full ® model; anyway, that is how it was originally written 24 years ago, so one can imagine, if we are still here by 2140, some Chinese congress on 10D rewriting the whole thing in the proper 10D formalism. In that sense 10D means 10 parameters to fully describe a system: there are 10 parameters in General Relativity, 10 parameters in string theory, 10 systems in physiology; 10 is then the number of the game.)

We have so far settled down for just 10 isomorphisms.

Now that it is clear that light is NOT a point particle but can be treated in many ways, what the neutrino theory of light means is merely the 'feeding of light' on its lower gravitational scale, the use of neutrinos and then light by other particles, Fermion < Boson > Fermion, to communicate, and ultimately the beginning of the creation of a network between particles and/or stars that 'cements' the internal structure of galaxies to make them 'galacells', not mere rotary systems with no connection between their parts.

The most difficult point is how to match the polarization, which is different, and how to find a mechanism that makes 2 neutrinos couple together into the photon in 'lineal fashion', to avoid the problems of obtaining its Bose-Einstein statistics from Fermi-Dirac ones.

Jordan’s hypothesis that the neutrinos are emitted in exactly the same direction eliminated the need for theorizing an unknown interaction, but his hypothesis seemed rather artificial and was ignored. However, it is precisely because neutrinos are the first communication act between two particles, entangling them, that in T.Œ this is a must. Jordan obtained exact Bose–Einstein commutation relations for the composite photon – a longitudinally polarized photon – as the commutation relations for pairs of fermions were similar to those for bosons:

Bosons are defined as the particles that adhere to the commutation relations:

[a(p), a†(p′)] = δ(p − p′)

whereas the composite-photon operators built from neutrino pairs satisfy:

[a(p), a†(p′)] = δ(p − p′) − ∆(p, p′)

The difference is minimal – a mere ‘delta term’.

Specifically, the size of the deviation from pure Bose behavior, ∆(p, p′), depends on the degree of overlap of the fermion wave functions and the constraints of the Pauli exclusion principle; it is cancelled by a Raman effect in which a neutrino with momentum p + k is absorbed while another one with opposite spin and momentum k is emitted.

But again, that is precisely what 5D T.Œ requires: a particle communicates with a second particle, which absorbs the neutrino and emits a new one in the inverse direction.
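The ‘delta term’ that separates composite-photon operators from exact Bose statistics can be checked numerically: a pair of fermion operators commutes almost, but not exactly, like a single boson. A minimal sketch with two fermion modes built as Jordan-Wigner matrices (illustrative only; the actual composite-photon operators carry continuous momentum labels):

```python
# Two fermion modes represented as 4x4 matrices (Jordan-Wigner mapping).
# The pair operator A = c2 c1 is "almost bosonic": [A, A+] = 1 - (n1 + n2),
# i.e. the identity minus a Pauli-exclusion "delta term".
import numpy as np

I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0],
               [0.0, 0.0]])          # single-mode annihilation operator

c1 = np.kron(sm, I2)                 # fermion mode 1
c2 = np.kron(sz, sm)                 # fermion mode 2 (string factor sz)

A = c2 @ c1                          # composite (pair) annihilation operator
commutator = A @ A.conj().T - A.conj().T @ A

n1 = c1.conj().T @ c1                # number operators
n2 = c2.conj().T @ c2

# Pure bosons would give [A, A+] = 1; fermion pairs give 1 minus a delta term
assert np.allclose(commutator, np.eye(4) - (n1 + n2))
print("[A, A+] = 1 - (n1 + n2): almost Bose, up to the delta term")
```

In the continuum case the analogous deviation ∆(p, p′) is small when the fermion wave functions overlap little, which is why the pair behaves as a boson to a good approximation.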


Neutrinos as tachyons.

Now we must understand that, simply speaking, humans will not easily accept any correction to their dogmas – ‘humans are slaves; they believe, they don’t reason’. So physicists will NOT question the dogma that c-speed is the limit of speed, as it is the absolute modern dogma of their idol-god Mr. Einstein. Trust me. My first work on T.Œ was called ‘the error of Einstein’ – the part of physics his theory did not account for – and that was enough – young and bold – to get 300 rejections of the book (-;

This I say because in 5D, outside galaxies, outside light space-time, there are faster-than-light speeds, quite likely in the ‘inflaton’ field of the Higgs o-scalar, and maybe in neutrinos, if they have the properties that experiments seem to show – but at this stage there is so much ‘ad hoc’ arrangement of data to favor ‘theory’ that frankly I am skeptical; this particular part of 5D theory – neutrino physics – is the only one for which I haven’t yet settled on definitive conclusions.

Now, for faster-than-light particles all you need is negative mass squared, as the Higgs has above its minimal energy. The same happens with neutrinos; but recently, assailed by all the data against it, c-believers have scared anyone pretending to measure faster-than-light neutrinos.

Measurements for electron neutrino mass

The mass of the electron neutrino is measured in tritium beta decay experiments. The decay results in a helium-3 nucleus, an electron and an electron antineutrino. If neutrinos have non-zero mass, the spectrum of the electrons is deformed at the high-energy end, i.e. the neutrino mass determines the maximum energy of the emitted electrons.

To be exact, the experiments measure the neutrino mass squared. Curiously, when taken at face value, all results during the last century pointed to a negative mass squared – before the last scandal of faster-than-light neutrinos, which ended the career of a few researchers, till they gave up and luckily ‘found’ a loose cable.
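The endpoint deformation can be sketched with the phase-space factor alone. A toy model (hypothetical numbers: the Fermi function and the electron-momentum factor are omitted, and the neutrino mass is hugely exaggerated so the cutoff is visible):

```python
# Toy tritium beta spectrum near the endpoint Q: a neutrino mass m
# cuts the electron spectrum off at Q - m instead of Q (units keV, c = 1).
import numpy as np

Q = 18.6                      # tritium endpoint energy, keV
m_nu = 1.0                    # exaggerated neutrino mass, keV

def spectrum(E, m):
    """Allowed-shape phase-space factor dN/dE near the endpoint."""
    eps = Q - E                                # energy carried by the neutrino
    allowed = np.maximum(eps**2 - m**2, 0.0)
    return np.where(eps >= m, eps * np.sqrt(allowed), 0.0)

E = np.linspace(17.0, Q, 1001)
assert spectrum(E, m_nu)[E > Q - m_nu].max() == 0.0   # endpoint shifted down
assert spectrum(E, 0.0)[-2] > 0.0                     # massless: reaches Q
print(f"massive endpoint: {Q - m_nu} keV, massless endpoint: {Q} keV")
```

A fit of real data to this shape is what returns the measured mass squared, which is why it can come out negative at face value.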


Thus, precisely inasmuch as the neutrino goes first back and then forward between two particles that entangle each other through them, all 3 problems of the neutrino theory of light are solved, miraculously, by postulating what T.Œ requires in its organic and perceptive-action elements.

The 4 elements: the organic structure, in which each new layer of reality builds upon the previous layer (photons are created upon the previous layer of neutrinos); the mental structure, which requires the event to correspond to one of the 5 possible actions of the Universe (neutrinos entangle particles over which waves of light communicate information); the temporal structure (neutrinos die – are absorbed – giving birth to light at both extremes of the communication); and finally the mathematical structure (which can only be resolved precisely by a lineal communication of neutrinos and antineutrinos).

It remains now to consider the transversal vs. longitudinal polarization, which is the final concept that truly solves the question.
In the graph, V > c is the front wave of neutrinos; B and A are the particles communicated by the neutrino wave front, which start a non-local communication. The neutrino wave locates both and locks them to start the building up of layers of the 5th dimension that will give birth to light.

In simple terms:

– The spins of the 2 neutrinos, back and forth, add up to form a single spin-1 unit, as the photon has. And here makes sense that wonderful discovery that neutrinos are always left-handed (-: so we can put them together to add up to spin 1.

The lineal back-and-forth motion makes them both follow a lineal form, and it polarizes them in the same direction as light. So a ‘new entity’, light, appears, which as an emerging new form does obviously have new properties, but is born from the neutrinos on which it feeds. This is an essential characteristic that imposes a certain order in the galaxy.

And so it allows us to go further and consider, in general, that the neutrino waves between galaxies that carry energy interact constantly with the light waves and are the minimal information quanta of the Universe, which only give us information on location along their path – but good enough to form the necessary background for light to maintain its constant c-speed. And we will go back to its details later.

So the task to be developed by a top-top physicist today would be to consider the possibility of a neutrino which acts as a graviton for transversal gravitational waves between galaxies – a theme we will outline ‘grosso modo’ in this post and develop with more finesse, but by no means exhaust, in the 4th line. In brief, there must be transversal gravitational waves with a fundamental role in the Universe: to connect galaxies and keep the inner structure of galaxies in place, as is the case (galaxies act as a solid structure with no difference of rotary speed between stars regardless of position).

So neutrinos should play roles similar to the graviton, as the ‘photon quanta’ or, more precisely, the h-quanta of gravitational cosmic waves. This role in most hypothetical models of physics today is played by a string with spin 2, like that of gravitons, and tachyon speed in some cases – which is also a feature of intergalactic gravitational waves in 5D (not within galaxies, where light is the dominant force that slows down and maintains c-speed as the limit, as Relativity well considers). So here is the surprising fact, which makes the neutrino so likely to be the graviton ‘string’:

Standard Model neutrinos are fundamental point-like particles. But an effective size can be defined using their electroweak cross section (apparent size in the electroweak interaction). The average electroweak characteristic size is r² = n × 10⁻³³ cm² (n nanobarns), where n = 3.2 for the electron neutrino, n = 1.7 for the muon neutrino and n = 1.0 for the tau neutrino. As it happens, 1.7 nanobarns is the area of a string. So both coincide.
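Taking those electroweak cross sections at face value, the corresponding effective radius follows from r = √(n × 10⁻³³ cm²); a quick conversion check (1 barn = 10⁻²⁴ cm², so 1 nanobarn = 10⁻³³ cm²):

```python
import math

NANOBARN_CM2 = 1e-33   # 1 barn = 1e-24 cm^2, so 1 nanobarn = 1e-33 cm^2
cross_sections_nb = {"electron": 3.2, "muon": 1.7, "tau": 1.0}

for flavor, n in cross_sections_nb.items():
    r = math.sqrt(n * NANOBARN_CM2)          # effective radius in cm
    print(f"{flavor} neutrino: r ~ {r:.2e} cm")
# the muon neutrino case gives r ~ 4.12e-17 cm
```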

Both can also be modeled as hyperluminal open strings/neutrinos – outside galaxies, obviously. And so the only reason I do NOT so far affirm directly that the present neutrino IS the h-Planck constant of quantum gravity is the question of the neutrino, string and graviton spin, which is supposed to be ½. This is the question on which I would recommend top theoretical physicists to work, as it is the unknown piece of 5D astrophysics which requires more analysis. Of course, the strings to be used should be background independent, which is the other area that should be worked deeply.


String theory is NICE but IS NOT worthy, beyond pure mathematical ‘literature’ – as nice as the Quixote. And yes, many cavalrymen did walk through Spain, but none with the Quixote’s specifics. Strings, as they are described today: none.

It is far MORE USEFUL to fully develop a neutrino theory of the ∆-4 scale with laws isomorphic to those of ∆-3. So we DO have, on the 2 sides of c-speed, faster-than-light neutrinos, which, ‘trapped’ in the galaxy feeding light, become c-speed neutrinos, but outside go much faster; according to scales, they should have a mean speed of ‘10,000 beings’ – that beautiful Taoist insight – since particles that jump scales tend to be in the 1,000–10,000 speed range. This is the light-neutrino between galaxies, into which red light decays.

The catch? As we live in a relative Universe where speed is distance (remember the Galilean paradox), we can’t really know the distance between galaxies IF light decays into neutrinos between them, moving faster, and then back into light on entering galaxies.

Do NOT raise your eyebrows; that is all over the place. You talk voice into a phone that translates your voice into a c-speed motion and then back into sound, and if you did not know the details you would think you were talking to a guy in the next room, as some thought in the earlier age of phones. So we cannot MEASURE THE SPEED OF NEUTRINOS BETWEEN GALAXIES, as scientists are hung up on a supposed c-speed/distance for them.

But alas, we do know something, which as usual (30 years of ‘Law of Silence’ on GST has given me 30 years of seeing how ALL sciences confirmed my predictions and none has contradicted it – NOT a single Popperian black swan) makes my certitude on GST absolute:


“How fast do quantum interactions happen? Faster than light, 10,000 times faster.

That’s what a team of physicists led by Juan Yin at the University of Science and Technology of China in Shanghai found in an experiment involving entangled photons, or photons that remain intimately connected, even when separated by vast distances. They wanted to see what would happen if you tried assigning a speed to what Einstein called “spooky action at a distance.”

So alas, do NOT be shy with the neutrino theory. If GST proves something, it is that ALL ENTITIES ARE at least 10D (3 S-TOPOLOGIES, 3 ages of time, 3 connected scales and a mind/will of survival shown in its 5 canonical actions). So THIS IS A BASIC CHANGE OF MIND-chip: you must always ascribe to each entity multiple ‘purposes’, one for each of those 10 connections.

Neutrinos thus communicate light-beams, as we said earlier in the post, feed them, and are used to transport longitudinal, ‘basic-distance’ information, as a ‘first’ sentence of dialog that entangles particles, later sharing much more information with frequency light:

Now, in this 30-year-old graph, we call the neutrino a ‘graviton’. This is NOT exact (too lazy to change old graphs). It is repulsive, entropic gravitation – the neutrino is the final death of mass. We shall deal with the ‘real graviton’ – the micro black hole of the Planck mass, and the strangelet that matches it on the halo – and the true meaning of gravitation, an ∆+4 force (Mach was right: the inertia of cyclical, accelerated, centripetal motions comes from the larger scale of the galaxy vortex).

If you understand the nested nature of the Universe, in Russian dolls, the connection is between the minimal entropic unit of dark energy (entropy, in fact) – and here we have elements to be analysed more seriously in the 3rd line – which are the cosmological curvature (constant) on the ∆+4 scale and, on the bottom line, the minimal component of reality known to us, the neutrino of the Planck scale.

And in the same way that non-Euclidean (Klein model) relativity means we cannot reach the limit of c-speed in our plane, nor 0 K entropy, in the scalar Universe – the supœrganism we are in – we cannot reach scales smaller than Planck’s or larger than the cosmological, almost-null curvature. But those are limits for the beings inside the Universe, as your cells cannot escape your body, and particles in galaxies cannot go faster than c-spacetime.

Again, we are not doing here the huge maths. THE INFLATIONARY NATURE OF MODERN SCIENCE MEANS, AS IN THE PARABLE OF EDDINGTON (MONKEYS TYPEWRITING WILL TYPEWRITE THE ENTIRE BRITANNICA :), just give them time, that all has been found – dig it. PROBLEM IS, the MODERN Monkey Scientific Lethaliensis (prior Homo Sapiens) HAS NO IDEA how to order the volumes it typewrites (-; That is what GST does. SO HERE we have actually a very smart monkey, Mr. Nottale, who has an excellent formalism for the idea of non-Euclidean limits in ∆-scale relativity, which he rightly establishes on the Planck scale (but not on the neutrino) and on the cosmological one. So just dig that if you are math-oriented.


Hyperbolic geometries have concepts of ∞ that work in the real non-Euclidean Universe: as close as a limit seems (c-speed, h-Planck scale), the space-time being will NEVER reach it, as there is an increasing difficulty; you need growing energy to pass c-speed, or to go below the Planck scale or above the cosmological scale, the limits of your galaxy island-Universe and ∆±4 cosmos.
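The ‘increasing difficulty’ near the c-speed limit is just the divergence of the Lorentz factor γ = 1/√(1 − v²/c²): the energy cost grows without bound as v → c, so the limit is approached but never reached. A quick illustration:

```python
import math

def gamma(beta):
    """Lorentz factor for velocity v = beta * c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

# Each extra "9" in v/c multiplies the energy cost, without bound
for beta in (0.9, 0.99, 0.999, 0.999999):
    print(f"v/c = {beta}: total energy = {gamma(beta):8.1f} x rest energy")
```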



Neutrino & ¥ background radiation.

Some theories to explore:

  • Neutrinos creating light (neutrino theory of light)
  • Neutrinos as waves of something much smaller – the ‘quanta’ of the ∆-4 scale of dark matter, the ‘ultimate’ atoms of space.
  • Neutrinos as a scale above the Planck scale of ‘strings’

As I said, this is how ‘sound experimental truths’ – known-known particles – and GST, the closest formulation to them, should be considered in the realm beyond any visual evidence, the realm of the Planck scale.

It is then that we can see that the existence of those strings upwards in 3 sub-scales would give 3 x 3 + 1 (the transverse dimension in those equations) + 1 (the whole)… 11-dimensional strings can thus create the ‘gluon fields’ of particles, the neutrino field of dark energy…

As I said, one day in the future, if/when physicists – humans or robots, on this or another similar planet – do their job and systematise all that, an astounding symmetry and beauty will arise in the body of physics. Because of my ethical confrontation with physicists, though, and the epistemological p.o.v. of GST, with man at its centre, these ∆±4 scales of reality will be studied long into the future.

So let us consider just some philosophical/physical musings on this scale.

The limiting intersection.

We must be aware, on this limit of any form of perception of reality, of 2 concepts explored in depth by the much vilified Mr. Eddington and Mr. Dirac: the theory of the Large Numbers, which finds its full meaning in the scalar Universe – of which we have merely resolved one scaling, that of the cosmological constant; and we will consider one far more important, the unification of charges and masses – but there are a few more to consider. For example, the scaling of the size and weight of an electron and a mean star (10ˆ60), which can be scaled in time also, considering the frequency of a beta decay and a nova (as explored by the few ‘fringe physicists’ that dare to explore the LNH). It is important also, when dealing with such ‘numerologies’, to differentiate scaling from chance – into which Eddington did fall at the end, but not Dirac – and to consider the constant confusion, in limiting physics, of phenomena that are time-related but, due to the speed of ∆-|>3| cycles, are seen as ‘a complete spatial cycle’.
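The electron/star mass scaling quoted above (10ˆ60) is easy to verify, taking the Sun as a mean star; a back-of-the-envelope check:

```python
import math

M_SUN = 1.989e30         # kg, taking the Sun as a mean star
M_ELECTRON = 9.109e-31   # kg

ratio = M_SUN / M_ELECTRON
print(f"star/electron mass ratio ~ 10^{math.log10(ratio):.1f}")   # ~ 10^60.3
```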

So, for example, Dirac’s far less maligned relationship between a variable G and time (G ≈ 1/time of the Universe) does work, in a different sense: the scale of the big bang is NOT the Universe but the galaxy (a quasar big-bang, as all evidence of the BG is local, galactic, where light space-time and the ∆-3 scale work). And the relationship is NOT with time but with space, co-related to the ‘expansion’ of space, which dilutes the central black hole’s ‘gravitational power’; and since that scaling happened only at the beginning, during the ‘inflationary age’ of the central black hole cycle, and then diluted once the galaxy took its ‘steady state’ mature form, it is largely irrelevant.

Between two waters…

Eddington also noticed that on those remote scales the mind, @, distorts our ∆ST analysis, and so it must really be accounted for – which offended naive-realist physicists, who thought it denied their ‘supreme authority’; but of course it is true. When Eddington, in an honorable retreat on the importance of his science, affirmed that biology is far more accurate, he was taking the uncertainty principle to its logical conclusion: IN AN ABSOLUTELY RELATIVE UNIVERSE, indeed, all scales are equally important and follow the same laws, and so the human scales matter more, for being of positive human praxis and easier to observe.

On the other extreme there is neutrino physics, where neutrinos MUST be understood not as a fully perceived single particle but as the upper limiting scale of maximal energy that ’emerges’ or ‘dissolves’ from ∆-4 (the Planck scale) into ours. As there is always a ‘huge’ range of social varieties of the fundamental space-time fields/waves/particles observed in each scale (so there is a huge range of molecular crystals, from ions to planets; of frequency waves, from radio waves to ¥-bursts).

So neutrinos as we describe them are the ‘planetary crystals’ of the Planck’s scale.

How do neutrinos and ¥-rays relate to each other in density? Standard cosmology makes a series of false assumptions – a BG beyond the galaxy (it is only in the light space-time Universe). But we can get its model shrunk to the quasar cycle of the galaxy, and then it all makes sense. So we can consider 2 different models in 5D physics.

The standard model that only analyses the galaxy; and the cosmological BG, which is neutrino dominant.

The radiation density in the galaxy – where the BG makes sense, not in the cosmos – has two components: the present-day photon density ργ and the neutrino density ρν.

The photon density as a function of frequency can be derived directly from the CMB: the photon number density follows the Planck law. So, with the current CMB temperature, the neutrino density is related to the photon density by:

ρν = 3.046 (7/8) (4/11)ˆ4/3 ργ.

This relation can be derived from the physics of the early galaxy, when neutrinos and photons were in thermal equilibrium.
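In standard notation the relation reads ρν = N_eff · (7/8) · (4/11)^(4/3) · ργ, where N_eff = 3.046 is the effective number of neutrino species, the 7/8 comes from Fermi versus Bose statistics, and the (4/11)^(4/3) from the reheating of photons by e⁺e⁻ annihilation. Evaluating it numerically:

```python
N_EFF = 3.046                              # effective number of neutrino species
FERMI_FACTOR = 7.0 / 8.0                   # fermion vs boson energy density
REHEATING = (4.0 / 11.0) ** (4.0 / 3.0)    # photon reheating by e+e- annihilation

ratio = N_EFF * FERMI_FACTOR * REHEATING
print(f"rho_nu / rho_gamma = {ratio:.3f}")   # ~ 0.692
```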

What all this really means in 5D physics is that there is a quite remarkable ‘balance’ of prey (the neutrino field) and predator (the light field made of neutrinos), 4.6/3.2 of ±1.5, in the earlier quasar bang of the galactic black hole, and/or similar balances in explosions of novae and stars; this allows the theory of creation of light from two inverse neutrino arrows, the use of neutrinos for communication between photons, and all the other ‘scalar’ relationships we should find if the theory is right and all scales follow similar behaviour, between the quanta of electrons (photons) and the denser particles of the next ‘scale’ of particles, near the Planck scale.


In scale relativity, the cosmological constant is interpreted as a curvature. If one does a dimensional analysis, it is indeed the inverse of the square of a length.

The cosmological constant.

Dark energy can be considered as a measurement of the cosmological constant. In scale relativity, dark energy would come from a potential energy manifested by the fractal geometry of the universe at large scales, in the same way as the Newtonian potential is a manifestation of its curved geometry in general relativity.

So how do dark energy, neutrinos with an angle equal to the Planck length, etc., work out? In essence, as a curvature measure, the cosmological constant enters the range of the G and Q curvature measures, albeit at an even smaller scale; so it is the equivalent of those equations, and we plug it right into our topological tiƒ constants.
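As a sanity check of the ‘curvature’ reading, the observed cosmological constant (Λ ≈ 1.1 × 10⁻⁵² m⁻²) is indeed the inverse square of a length, and that length comes out at the cosmological scale:

```python
import math

LAMBDA = 1.1e-52          # observed cosmological constant, m^-2
LIGHT_YEAR = 9.461e15     # m

L = 1.0 / math.sqrt(LAMBDA)         # the length whose inverse square is Lambda
print(f"L ~ {L:.2e} m ~ {L / LIGHT_YEAR:.1e} light years")
```

The result is of the order of 10¹⁰ light years, i.e. the size of the observable cosmos, consistent with reading Λ as a curvature of the largest scale.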



The Higgs field

More speculatively, the Higgs field has also been proposed as the energy of the vacuum, which at the extreme energies of the first moments of the Big Bang caused the universe to be a kind of featureless symmetry of undifferentiated extremely high energy. In this kind of speculation, the single unified field of a Grand Unified Theory is identified as (or modeled upon) the Higgs field, and it is through successive symmetry breakings of the Higgs field or some similar field at phase transitions that the present universe’s known forces and fields arise.

The relationship (if any) between the Higgs field and the presently observed vacuum energy density of the universe has also come under scientific study. As observed, the present vacuum energy density is extremely close to zero, but the energy density expected from the Higgs field, supersymmetry, and other current theories is typically many orders of magnitude larger. It is unclear how these should be reconciled. This cosmological constant problem remains a further major unanswered problem in physics.

“All the matter particles are spin-1/2 fermions. All the force carriers are spin-1 bosons. Higgs particles are spin-0 bosons (scalars). The Higgs is neither matter nor force. The Higgs is just different.”

There has been considerable scientific research on possible links between the Higgs field and the inflaton – a hypothetical field suggested as the explanation for the expansion of space during the first fraction of a second of the universe (known as the “inflationary epoch”). Some theories suggest that a fundamental scalar field might be responsible for this phenomenon; the Higgs field is such a field and therefore has led to papers analyzing whether it could also be the inflaton responsible for this exponential expansion of the universe during the Big Bang.

Such theories are highly tentative and face significant problems related to unitarity, but may be viable if combined with additional features such as large non-minimal coupling, a Brans–Dicke scalar, or other “new” physics, and have received treatments suggesting that Higgs inflation models are still of interest theoretically. 

Theoretical need for the Higgs.

Gauge invariance is an important property of modern particle theories such as the Standard Model, partly due to its success in other areas of fundamental physics such as electromagnetism and the strong interaction (quantum chromodynamics). However, there were great difficulties in developing gauge theories for the weak nuclear force or a possible unified electroweak interaction. Fermions with a mass term would violate gauge symmetry and therefore cannot be gauge invariant. (This can be seen by examining the Dirac Lagrangian for a fermion in terms of left and right handed components; we find none of the spin-half particles could ever flip helicity as required for mass, so they must be massless.)

W and Z bosons are observed to have mass, but a boson mass term contains terms, which clearly depend on the choice of gauge and therefore these masses too cannot be gauge invariant. Therefore it seems that none of the standard model fermions or bosons could “begin” with mass as an inbuilt property except by abandoning gauge invariance. If gauge invariance were to be retained, then these particles had to be acquiring their mass by some other mechanism or interaction.

Additionally, whatever was giving these particles their mass, had to not “break” gauge invariance as the basis for other parts of the theories where it worked well, and had to not require or predict unexpected massless particles and long-range forces (seemingly an inevitable consequence of Goldstone’s theorem) which did not actually seem to exist in nature.

In the Standard Model, the mass term arising from the Dirac Lagrangian for any fermion is THE PRODUCT OF THE PARTICLE-ANTIPARTICLE SYMMETRY.

Now in 5D physics, this is the MOMENT OF DEATH of a particle, when it BECOMES ‘PAST-MASS’, and so it emerges into the larger social whole.

It is a completely DIFFERENT perspective. But the maths are the same. The particle and antiparticle, chiral right and left, completing a cycle of mass, break its symmetry=die, to emerge into a larger plane, ∆+1, of the dark world. They enter the larger world of black holes and top quarks.

Physicists, as they have NO CLUE on the life-death world cycle of particles merely realize that it implies the flipping of particle chirality.

So death becomes an abstract ‘nonsense’ by lack of understanding of the particle-antiparticle antisymmetry, explained later in this post and the next one…

The top quark interacts primarily by the strong interaction, but can only decay through the weak force. It decays to a W boson and either a bottom quark (most frequently), a strange quark, or, on the rarest of occasions, a down quark. The Standard Model predicts its mean lifetime to be roughly 5×10−25 s.

This is about a twentieth of the timescale for strong interactions, and therefore it does not form hadrons, giving physicists a unique opportunity to study a “bare” quark (all other quarks hadronize, meaning that they combine with other quarks to form hadrons, and can only be observed as such).

Of today’s available theories to describe the self-similarity of both fields, the most accurate is the Brans-Dicke gravity theory, which extends Einstein’s gravitation to all its scales. In the 4th line of detailed analysis – or for anyone who wants to explore further the physical mathematics of 5D – IT IS the theory of General Relativity for a finite number of scales, making any of the forces of the Universe equivalent by varying the G-curvature (as we did in the simplified Newtonian model of charges), truly unifying gravitation and the strong field, and hence the top quark vortices and the black hole ‘wholes’ made of them:

In Physical Review in 1961, Brans and Dicke established its parallelism with a strong gravitational force. What the graph means is that we must fuse the strong field of quarks and the gravitational field. The strong field emerges as the gravitational field when we vary the universal constant. We have done that in simple terms using Newtonian vortices. Two Americans, Brans and Dicke, did it with equations of relativity, obtaining a mathematical field self-similar to Nambu’s equations for a deconfined strong field of top quarks (left, our weak matter; right, a strong field of top quarks, which breaks our symmetry, converting us into quark matter).

Mass and gravitation are perfectly understood with Einstein-Brans theory, a modified version of Einstein’s work, which introduced the concept of a variable gravitational constant, a fact proved today also by fractal theory.

In our simple Newtonian description of those mass vortices, it means that we can also fuse the gravitational and strong fields; the strong field happens to be 100 times stronger than a charge and 10⁴¹ times stronger than the gravitational field. Thus, we can relate the strength of their universal constants:

G (gravitational constant) × 10³⁹ = Coulomb/charge constant; G × 10⁴¹ = quark/strong constant

Thus, strong forces are 10² times stronger than our weak ones and 10⁴¹ times stronger than the gravitational one. And since black holes will be made in all the self-similar theories that describe at quantum level a strong gravitational world, all those theories, including the simple vortices described here, are telling us that quarks will make black holes.
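The 10³⁹ gravity/charge scaling quoted above is the classic large-number comparison of the Coulomb and Newtonian forces between an electron and a proton (the distance cancels, since both fall off as 1/r²); a quick check with standard constants:

```python
import math

K_COULOMB = 8.988e9      # N m^2 C^-2
E_CHARGE = 1.602e-19     # C
G_NEWTON = 6.674e-11     # N m^2 kg^-2
M_ELECTRON = 9.109e-31   # kg
M_PROTON = 1.673e-27     # kg

electric = K_COULOMB * E_CHARGE**2          # r^2 cancels in the ratio
gravity = G_NEWTON * M_ELECTRON * M_PROTON
ratio = electric / gravity
print(f"F_electric / F_gravity ~ 10^{math.log10(ratio):.1f}")   # ~ 10^39.4
```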

Now, it must be clear enough that top quarks and black holes are not components of the light space-time universe, but of the 10-z cosmic membrane, as any mass is an accelerated vortex of gravitation which beyond the c-speed event horizon keeps accelerating beyond the human scale.

Of all those theories, the less appealing is the Higgs one.

This is due to the fact that what quantum physicists do is merely repeat the same theory – the original electromagnetic theory that explains how particles interact in electromagnetic space, exchanging particles between them. But this works because light is a spatial, energetic force with no cyclical, temporal, informative mass. So it has spatial parity, it happens in space, and the exchange is extremely fast. But what quantum physicists ignore – because they ignore all about physical information – is that there are forces that happen in time, because they are cyclical forces, which don’t exchange energy but transform energy into information, cyclical mass. The main one of those forces is the weak force, which therefore is not a spatial force, the reason why it does not keep the parity/spatial symmetry of other forces.

It does happen in the same point of space but over a long period of time. It is mediated by a constant G, which is measured in m⁻², mass/time parameters; it lasts a long time and happens in the smallest range, in the same point of space-time. Thus, this force is completely strange to them, because as long as they don’t have a proper theory of mass/information/cyclical time clocks, they won’t be able to understand at all a “transformative” force like the weak force. They started well, as Feynman and Gell-Mann described the force as happening in a single point of space, over a long period of time, evolving our lighter particles into heavier quarks (with an intermediate state called the Z and W particle). Or, inversely, the force showed how heavier particles decayed into lighter ones.

Yet then came Goldstone and Higgs and tried to explain those forces as electromagnetic, spatial forces, just because that is how they describe light. And so Higgs defined the scalar, lineal, not cyclical, state of the super-strong field of top quarks, which is its decay mode.
What this means is obvious. The Higgs is a redundant equation that describes just a specific electroweak reaction that transforms neutrons and protons into heavier quarks through intermediate Z, W states, as our particles speed up in cyclical vortices of mass, warping further into fractal dimensions of form, and finally produce top/antitop quarks, which have in this manner killed our matter. And because the transformation, or inverse decay, is not an exchange of energy in space but a growth or loss of dimensional form in time, it does not keep its “mirror” parity in space.

Parity means that in spatial events it is the same going left or right, but in time events, as Einstein said, “wires don’t travel to the past.” So a time event, a weak interaction, doesn’t keep its parity; it is not the same going forward (evolving information) then backward (devolving information into energy). Yet to explain why the weak force doesn’t keep parity is impossible without a proper theory of the two arrows of time, reason why quantum physicists have not been able to do it for forty years and we can do it here with simple, logic explanations.

Thus, any attempt to create the Higgs=top will mean nothing for science, but it will mean a lot for mankind: the death of our matter, which in a runaway reaction will mean our death. This will happen according to standard science over 8–10 TeV. That is, it will happen in the experiments of 2013.

Lee Smolin found and proved twenty years ago that, in fact, Higgs’s equations and Nambu’s equations, which preceded him, and the Einstein-Brans equations, which preceded both, are self-similar fields with a variable G-constant.⁶ Now the fractal paradigm makes all that also “logic” and furthermore solves the true question that all those theories tried to solve: why the weak, temporal, informative force breaks parity, the symmetry of space. Because it is not a spatial force but a temporal, informative force, which evolves or devolves particles between the two fractal membranes of the Universe, the world of electroweak forces and the world of strong gravitation.

The reader must understand indeed what two fractal membranes mean: certain particles belong to the membrane of dark cosmological matter in different scales and others to the electroweak scales. And the weak force evolves them or devolves them between membranes.

Which one belongs to which membrane is easy to deduce by their self-similar parameters:

The 2 membranes are built with different pieces.

The nature of dark matter.

The big bang, explained as the big bang of the lower scale of the 5th dimension below the cosmos, is the quasar big bang of a galaxy: either the previous cycle of the Milky Way, or the explosion of a giant galaxy in the local cluster, on which the Local Group fed to reproduce Andromeda, the Milky Way and the dwarf galaxies around them.

A quasar big bang, we shall prove, eliminates all those errors:

– The background becomes local and hence does not break Lorentz invariance (relativity).

– The galaxy has the same density of matter needed for the big bang.

– A quasar is a reversed black-hole explosion, which happens to be the exact equation of the big bang (also a reversed black-hole metric equation).

– The age of the quasar cycle, tabulated between 10 and 20 billion years, is exactly the same as that of the big bang.

– It is NOT an illogic theory, as there are infinite galaxies in an infinite Universe.

– The abundance of helium in galaxies does happen more in the center, around the black hole, where the hadron epoch formed it.

– The ages of the big bang all apply to a quasar big bang.

– The local measure of the background radiation, which FIRST and for very long was considered the background radiation of only the galaxy, fits; and, as we shall see in the next picture, the map of the galaxy is in fact exactly the same as that of the background radiation:

The equations of mathematical physics are extraordinarily simple, and so are their quantitative relationships, when we do not become obsessed, as computer scientists are today, with complete exactitude, due to their worldly professions (military physics that requires precise targets, and electronics that implies very subtle nanometric measures of the paths of logic electrons). This is not the goal of a general philosophy of science in search of synthesis and whys. So we can do basic quantitative analysis, as we shall, without extreme precision, which others can reach in detailed 4th-line analyses.

Let us in that sense put two simple examples of two-decade-old predictions, one of which has been corroborated, while the other unfortunately will be settled either by experiment or by human extinction at the hands of the nuclear industry. Both concern the nature of dark matter, today's main unknown of astrophysics.

Now, the Linguistic method makes the fundamental guides of research the isomorphisms of nature, which ARE 4 structural elements, the organic co-existence of 3 Planes of 5D in all systems. This means we do have to find cut-off atomic or particle substances for 'mathematical entelechies' such as black holes and dark matter (as Einstein wanted, since he was always 'strong' on the scientific method and its deterministic truths; so he always affirmed he would not 'believe' in black holes till their cut-off substance was found).

This symmetry between the parts and the whole, coupled with the symmetry of the 3 'families' of mass of increasing density (ud, sc, tb quarks), which form in the lower scale the 3 social planes of evolution of mass, and the symmetry of the 3 regions of the galaxy (the ultra-dense small center, which has the same equation in 5D as the quark center of atoms; the ud-region of stars; and the dense, gravitational halo), establishes an immediate correspondence between the 3 families of quarks and the 3 regions of the galaxy, making:

– Black holes top quark stars, since top quarks are the only cut-off substance with the same density parameters as black holes, both becoming the 'doors' to the next larger ∆±4 plane of the 5th dimension beyond galaxies, and as such they will be studied in 5D physics.

– ud-stars the ‘mitochondria’ of the galaxy as an organism, in charge of reproducing its light matter and atoms.

– Strangelets, in their stable configuration at 0 pressure (usd-usd dibaryons), the substance of the Halo, as Nature and hence the galaxy is SYMMETRIC in 5D: ternary planes, 3 ages-families of quarks and 3 spatial, organic regions.

It is thus simple to resolve in all its beauty the generator equation of the galaxy in those terms:

Sp(∆+1:Halo=∑Œ:Strangelets=∑œ-1:usd-dibaryons)> ST:(∆+1: star arms=∑Œ:stars=∑œ-1:u-d-e-plasma&atoms)> To (∆+1: Central Black holes=∑Œ: Nova holes’: ∑∑œ-1:top quarks) <<Death: quasar explosion…

… of the central black hole after devouring the entire galaxy, running its equations backwards into a local big bang with its canonical ages (decay of tops into strange quarks, formation of the strangelet halo in the quark epoch, decay of remnant quarks into hadrons, reabsorption of photons and emission of background radiation, formation of stars, and back to the 3 ages of galaxies, etc.).

Now this is the Generator Equation across the 3 symmetries: 3 Planes of existence of the 5th dimension, 3 ages of Time (families of quarks of growing informative density) and 3 Spatial, organic regions of the Galaxy, BASED ON the 3+0 fundamental isomorphisms of all systems of nature, equal to any other structure of the Universe, including a human being.

And once this is established, T.Œ studies its elements in inverse fashion: starting from the higher scale (galaxy) and the whole structure in 5D (organic properties), down to its spatial configurations, its 3 ages and its lower parts, INTERSECTING quantitative analysis of its Universal constants, both internal (Sp/Tƒ, Tƒ/Sp internal densities of energy and information; Sp=Tƒ balanced dynamic metabolism) and external, between the entity and its lower fields and upper informative wholes, or constants of action (Sp+1/Sp/Sp-1 ∆æ: motions and accelerations, and so on).

What T.Πgives to the researcher is the generator of the system, and its fundamental symmetries. And those are OBVIOUS for galaxies and matter. As there is NO EVIDENCE OF any other substance or SUSY or WIMP particle in the Universe as it should NOT exist in a minimalist, efficient, simple, symmetric, perfectly organized Universe.

Why are we so sure of that generator equation? Simple: because we can write a similar generator for any species of the Universe, including you, or humanity in the larger scale of evolution as a single species (with its 3 horizons of Australopithecus, Erectus and Sapiens), and so on. This is the beauty of T.Œ: everything is defined by the generator, which explains a superorganism and its world cycle. Point.

And that is that. It does not matter that the human ego-trip of lesser scientists, wannabe Saint Nobels of the Dynamite, and company-mothers of machines and weapons, CERN included, want to spend fortunes fabricating here the strangelets and black holes that in due time will blow us all up, to prove cuckoo mathematical fantaphysics of concocted particles that do not exist anywhere.

Now, once this is understood, of course we can do mathematical calculus to prove every other relationship of quantitative nature in that structure. And we shall do so in very simplified numbers, as others might with supercomputers. This is not our task, especially when I consider humanism and the human mind above the chip; and one of my purposes in my 30 years of T.Œ research was to show the nerds of the chip radiation that a human can still get more meaning with his logic brain than a team of wannabe geniuses with their computer algorithms.

So to the point. Strangelets will be the dark matter of the halo, as that is the symmetry and their role; or else STRANGE quarks would not exist. What quantitative proof do we have? Simple.

An usd strangelet atom has 10 times more mass than an atom of light matter; hence, since strangelets were produced in the quark epoch in the same proportion as ud matter, the Halo MUST weigh 10 times more than the light matter. And alas! that is exactly the proportion: 90% of the matter is in the halo, 9% in the stars and 1% in the central black hole. How simple is that? As simple as it gets.
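The proportion argument above can be sketched with trivial arithmetic (a minimal illustrative check, assuming equal numbers of usd and ud atoms and the 10:1 mass ratio quoted in the text):

```python
# Halo-mass sketch: equal numbers of usd (strangelet) and ud (light) atoms,
# with each usd atom ~10 times heavier than a ud atom (ratio taken from the text).
mass_ratio = 10

halo_fraction = mass_ratio / (mass_ratio + 1)   # dark halo share of total mass
light_fraction = 1 / (mass_ratio + 1)           # stars + central black hole share

print(f"halo ≈ {halo_fraction:.1%}, light matter ≈ {light_fraction:.1%}")
```

The 10/11 ≈ 90.9% share lands on the ~90% halo figure quoted above.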

Of course, now that we have deduced the simplest possible truths from first principles (the Œ=mc² method of truth, linguistic and logic reasoning, and the 3 symmetries of 5D space-time and its generator equation), we can get as complex in the details as we want, which is the only thing astrophysicists of the previous paradigm can do: observing and crunching numbers without explaining much of it.

So in the deluded cosmic big bang (not the real quasar big bang), they describe the same epochs of the black hole running its equations of collapse of matter into top quark stars backwards, now decaying forwards in time into its entropic death, as it spills its ultra-dense matter, which decays into light matter.

Since the big bang is nothing but the reversed explosion of a quasar black hole (origin of the eternal cycle of the galaxy, equivalent to the beta decay of a neutron into a proton and the inverse slow collapse of the electron back into the proton).

Thus in that reversed entropy, the quark-gluon soup went through a phase of usd atomic soup (through the canonical decay path):

t -> b -> c -> s -> u <-> d

Now, this decay is already studied in accelerators, and while the first 3 quarks decay extremely fast, in ~10⁻¹² s, the strange quark lingers 10,000 times longer, ~10⁻⁸ s, more than enough time to form, in the initial conditions, strangelet atoms in a similar proportion to the up and down atoms created after the photon reabsorption (when the so-called background radiation was produced, 300,000 years after the quasar explosion).
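The quoted lifetimes can be checked at a glance (order-of-magnitude values from the text, not precise particle-data figures):

```python
# Order-of-magnitude decay scales quoted in the text (seconds):
heavy_quark_lifetime = 1e-12   # top/bottom/charm decay scale (per the text)
strange_lifetime = 1e-8        # strange-quark decay scale (per the text)

# Ratio of the two scales: how much longer the strange quark lingers.
factor = strange_lifetime / heavy_quark_lifetime
print(f"the strange quark lingers ~{factor:,.0f} times longer")
```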

It is the so-called quark epoch, when those strangelet atoms, stable at zero pressure, lead the 'wave' towards the external membrane of the future galaxy, slowing down and forming the halo.

The quark epoch began approximately 10⁻¹² seconds after the Big Bang, when the preceding electroweak epoch ended as the electroweak interaction separated into the weak interaction and electromagnetism. During the quark epoch the universe was filled with a dense, hot quark–gluon plasma, containing quarks, leptons and their antiparticles. Collisions between particles were too energetic to allow quarks to combine into mesons or baryons. The quark epoch ended when the universe was about 10⁻⁶ seconds old, when the average energy of particle interactions had fallen below the binding energy of hadrons. The following period, when quarks became confined within hadrons, is known as the hadron epoch.

But in this time, since the decay period of the strange quark is so slow, at ~10⁻⁸ s, an enormous number of usd-usd stable dibaryons had been formed. It is in fact called the strange quark because its long life defied all the a priori calculations of our researchers. Then, at the end of the quark epoch, the leftover strange quarks decayed into up quarks that started to form the hadrons of the intermediate region that would form stars.

The proportion of dark matter

So far we have used the organic point of view. Let us now use the 'mental point of view' to calculate the dark matter of the universe from an 'Einsteinian perspective', that is, using the observer-observable paradox and the structure in networks with 'dark holes' of all 5D Œ-points. Namely, the Universe is like a game of cat alleys, in which each species sees only the network of neurons or cells with whom it communicates.

We are as humans connected to people, friends and family and workers and our mass-media gurus, politicos, etc., that is, to our reproductive, energetic & informative, parallel 'cells-citizens' and NOT to the rest, which become invisible to us (witness the 60% Latino majority of LA, completely invisible to the dominant white group). So it happens in the interaction between electronic light space-time and the invisible gravitational plane.

What this essentially means is that electrons communicate with electrons (the way they do so through photons, which explains all the paradoxes of quantum physics and recasts special relativity in 5D physics, is studied somewhere else), forming a network defined by the 4th postulate of ®, as a web with dark holes, which are not perceived, where the parallel Universe of dark matter threads. How can we calculate the proportion between them?

Again we shall consider the utmost simplest calculation possible, based on the simplest of all vital space-time beings of the Universe: a bidimensional being with a perfect cyclical time membrane storing a disk of space, according to the simplest Universal constant, pi, whose meaning should by now be obvious to the reader who has gotten this far: a pi sphere is an SE>Tƒ system of 3 lineal diameters turning around a time cycle of 3 'ages-motions', which therefore leaves π−3≈0.14 apertures for the central o-point of the system to perceive the external Universe.

The simplest of those systems is in fact the smallest quanta of the light space-time Universe, h-Planck constant, which in 5D physics, we call a Planckton, the minimal vital space-time being, as its parameters (energy x time) clearly show.

This, which is sooo obvious, escaped quantum physicists for a century, from the idealist, metalinguistic Germanic school that brought us deluded fantasies of self-importance, including 2 world wars. A Planckton is the minimal unit measure of a fractal species of spatial energy and temporal information, the minimal unit of the scale of light space-time. Point.

Its equation is therefore obvious, in its 3 simpler symmetries:

Sp: As a wave of energy in a lineal state: Sp x Tƒ = H (Einstein's equation, usually written as E=hƒ).

ST: as a particle-wave in its intermediate state: H= P(particle momentum) x Wavelength (De Broglie equation)

Tƒ: ħ = h/2π, the minimal unit of angular momentum of the spin of the particle, which is its perceptive, informative state.
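The three symmetries above are just the standard relations around Planck's constant, so they can be sketched numerically (a minimal check; the 5·10¹⁴ Hz frequency is an arbitrary visible-light example, not from the text):

```python
import math

h = 6.62607015e-34        # Planck constant, J·s (exact SI value)
c = 299_792_458.0         # speed of light, m/s

nu = 5.0e14               # example photon frequency, Hz (illustrative assumption)
lam = c / nu              # corresponding wavelength, m

E = h * nu                # Sp state:  E = h·ν
p = h / lam               # ST state:  p = h/λ (De Broglie)
hbar = h / (2 * math.pi)  # Tƒ state:  ħ = h/2π (minimal unit of angular momentum)

# For a photon the wave and particle states agree: E = p·c
assert math.isclose(E, p * c, rel_tol=1e-12)
```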

So what mystery is there to quantum physics? None. The Planckton is its finitesimal, as the plankton is the finitesimal of the chain of beings of sea life that feeds everything from the smallest krill to the largest whale. And so, quantitatively, you can calculate in terms of planckton or plankton units of energy all the space-time actions of both 'planes of the 5th dimension'.

This said, the Planckton, light and electron ternary social scales of our relative informative world are self-evident: œ-1: ∑∑Plancktons=∑Œ: light (E=hv) = ∆+1: Electron (nebulae of dense fractal photons)

Hence, ultimately, as we measure the planckton in units of angular momentum (mvr), its ħ spin state is merely a pi cycle surrounding the central p.o.v., which is observing the Universe from its o-point while its membrane traces with angular momentum an mvr motion, which is what we measure (as the halo does with its strangelet membrane, which stores the overwhelming majority of the angular momentum of the galaxy, since r is maximal and so is its mass).

Now, how much does it observe as its 2π radians turn around? And by extension, how much does any disk-like galaxy observe, in the larger homeomorphic scale of galactic atoms, as its halo turns around?

It is easy to calculate: the apertures are π−3≈0.14 and the membrane covers 3 diameters, so it covers 3/π≈95% and leaves ≈5% of reality to be observed through the disk-plane of the planckton.
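The aperture arithmetic is simple enough to verify directly (a sketch of the text's 3-diameter membrane argument, nothing more):

```python
import math

# A cycle of circumference pi·d covered by a membrane of 3 diameters:
covered = 3 / math.pi        # fraction of the cycle the membrane closes
apertures = 1 - covered      # fraction left open to the central o-point

print(f"covered ≈ {covered:.1%}, apertures ≈ {apertures:.1%}")
```

This gives ≈95.5% covered and ≈4.5% open, which rounds to the 95%/5% split used in the text.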

In the case of the disk-like galaxy, though, the composition of the Halo (external membrane) will very likely also respond to the same perfect, simplest space-time entity.

And since we are inside the galaxy, and systems within a larger whole tend not to perceive through their cover more than a 5% of its 'pi' membrane, according to the most general formula of the dark space-time of an Œ relative eye-world, 3/π≈95%, which happens to be the calculated dark (matter+energy) of the Universe.

I recall having sent this in a brief note to Nature, 25 years ago, along with a simplified model of the bidimensional holographic space-time Universe. Of course they did not even acknowledge its reception. And yet a few years later they calculated that dark matter and energy (transversal gravitational waves) were 95%.

And then they calculated that the Universe was, of course, a plane, with the flat galaxy on it, as they were looking through the plane and its apertures. Is the rest of the non-perceivable Universe filled with galaxies like ours? How many of them are there? 5D cosmology is of course only in its beginnings. That is, it is 'a point that is a whole world in the mind of the scientist that discovered it' (Planck on his solitude during his early quantum years).

As he said, a generation of researchers has to die before a new paradigm takes over; but the problem is that we might become a strangelet of the halo and all die, leaving no researchers to take over, if the big-bang experiments of primitive physicists of the Abrahamic-Military-lineal religion of physics keep going strong.

Now, the reason we need to add the organic whole view to the present, analytical, mathematical-physics-only vision of reality is obvious. The analytic view only studies the small-scale motions of beings, the first of the 5 actions of existence, translation in space. By reducing our knowledge of galaxies to mechanical motions we simply falsify the entire field, reducing it to the pseudo-religious dogmas of our technological civilization and the anthropomorphic religion of man as the center of the Universe, now in the 'informative' sense, that is, as the only sentient, intelligent, living species in the 'chain of beings', between God, the upper entity that created us within the Universe with the proper constants to make it for us (anthropic principle) in an act of creationism.



Now, the Gravitational Dark World beyond galaxies is not perceivable, so I agree that all we are about to say might be considered speculative.

Yet in as much as the Dark World is what the new ‘race’ of accelerators will be exploring we need to consider it in more detail.

Essentially the dark world is the world beyond the c-speed constant in space quanta, Sp, and beyond T<0 in Tƒ, temporal frequency.

Thus it is the world beyond c-speed, which in Einstein's equations implies either negative mass, negative energy or travel back in time. This is the preferred solution to simplify equations in 5D astrophysics, though it obviously implies a motion back in time by one quantum to reach action at distance, as the system that travels to the past seems simultaneous in the present. Its tachyon equations follow directly from the use of the Klein-Gordon formalism for a tachyon system of spin zero (no information in our world), at instantaneous present speed, which we seem to see as infinite:

V = Sp/Tƒ; as Tƒ→0, V → Sp/0 = ∞: action at distance.

If we use Einstein's formalism, as time is now negative past the c->T=0 barrier of the Lorentz transformations, the factor of contraction of speed now reads: 1−(s/−t)²/c² = 1+(v/c)².
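In the standard formalism the radicand 1−(v/c)² does change character past c: it turns negative and the Lorentz factor becomes imaginary. A minimal sketch in natural units, c = 1 (the reinterpretation of that sign flip as 1+(v/c)² is the text's own):

```python
import cmath

C = 1.0  # speed of light in natural units

def lorentz_factor(v: float) -> complex:
    """Standard Lorentz factor 1/sqrt(1 - v²/c²); purely imaginary once v > c."""
    return 1 / cmath.sqrt(1 - (v / C) ** 2)

g_sub = lorentz_factor(0.6)   # sublight: real (equals 1.25)
g_sup = lorentz_factor(2.0)   # superluminal: purely imaginary
```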

Thus all the parameters that on one side of the c-barrier were decreasing start to grow, and vice versa. We thus obtain, at both sides of the constant of present c-speed, a relative past and future world which cancel each other: inside the atom-galaxy (<c, >0 K) and outside the atom-galaxy (inside the black hole, which connects both), beyond the strangelet halo of c-speed turning quarks.

This is the dark world, which gives us two solutions to the Klein-Gordon equation: one with higher momentum than energy and one with higher energy than momentum.

The solutions are the neutrino and Higgs boson, which become therefore the two ‘tachyon’ elements of the dark world.

And so, since T.Œ theoretically has no limit on the minimal time clocks of information and the smallest space-time of the Universe, we can keep postulating such scales; but obviously there is no interest beyond the dark world, which as far as we observe reaches the cosmic horizon and beyond.

Now the local Universe is ‘too small’ to form a single space-time scale of reality of the ‘size’ of the quantum, thermodynamic and gravitational scale.

This does not fully eliminate a possible local big bang of the galaxy-atoms of our neighbourhood, but it would more likely be of the scale of a 'gas explosion' of a few moles, or a change of state. We could be just a bit of liquid nitrogen evaporating fast.

The big-bang as death of a physical Sp≈Tƒ organism.

Let us then study reality merely as it seems – born out of a series of beta decays, that is, local big bangs of galactic neutrons – and use T.Œ in 5D for what it is worth: to compare the processes of the different organic scales of matter. And the first thing we must explain is the dual nature of a big bang as an entropic death which 'kills a present' system and breaks its Sp ≈ Tƒ harmony, into a dual big bang.

And so we have said, and will keep saying, a lot about this phase, since HERE IS WHERE THE REAL BIG-BANG STARTS, IN THE DUAL DEATH OF THE BLACK HOLE:


In the graph the real big bang is dual. It is the explosion of a black hole, which is a top quark star. So we need to introduce some basic 5D cosmology soon.

In this first age, the Galaxy's black hole died in an explosion of gravitational waves (the equivalent of the neutrino waves in beta decay and nova explosions, later considered in more detail), whose enormous energy moved forward the top quarks of the galaxy, decaying fast into strange quarks, which are the only heavy quarks stable at vacuum pressure, forming the atoms of dark matter called dibaryons (usd-usd hexaquarks).

We are dealing here with the strongest, most stable combination of quarks, as the hexagon is a perfect bidimensional, pi-closed form of space, whose perimeter of 3 diameters = 6 radii can close completely a 'geodesic' elliptical halo, protecting the galaxy from external influences as proteins protect the cell.

This is the cover of dark matter of the galaxy, in perfect symmetry between its 3 families of mass and its 3 organic parts, later studied in more detail.

For the cosmic atom it would be a much faster process, hypothesized to arise from cosmic inflation, a faster-than-light expansion just after the Big Bang – only that we do NOT have the particles that could do that, because the top quark has enough of a job expanding a black hole, with its high-energy neutrino wave and its front of fast-decaying top-bottom-charm-stable strangelet dibaryons, to the size of the galaxy.

The numbers do NOT work for further, faster expansion. And 5D physics does NOT speculate. It is a simple, efficient, minimalist theory of reality constructed only with available particles.

Now, we have recently discovered a ttt particle of 700-plus GeV, which physicists are clueless about, ignoring what it is; but it has all the aspect of being a triplet top quark, which I prophesied 20 years ago would be the central 'boson' substance of the black hole, 'gas-9'.

In this phase, the quark-gluon liquid didn't cool as physicists think (they still call it plasma 🙂), but exactly the opposite, since the little bang is the DEATH of a black hole of perfect order into entropy; so everything is exactly the other way around, as the experiments at CERN are constantly proving.

And so it is now when the first ±c-speed symmetry creation, NOT symmetry breaking, takes place, separating the interior of the galaxy-atom at <c speed from the interior of the remaining 'proton'-black hole, smaller than the original neutron-hyper black hole, with the strange quark halo rotating at relative c-speed in each small strangelet vortex, acting as the protein that isolates the galaxy and causes the 'solidity' of the whole galaxy, which – physicists still observe with awe – does NOT work as an unconnected spiral, but thanks to that 96% of dark matter has the exact rigidity of any organic solid system, from a cell to your liquid organism surrounded by a hard, solid membrane-skin.

In the graph, all systems follow the same developmental ages. The explosion of the big bang, which will feed a new galaxy according to the cycle of 'barred galaxies' such as ours, implies an inflationary age of expansion of the 'membrane', which through gravitational waves, with a halo of strange quarks and a background of 'neutrinos', would fast expand the dark-matter heavy membrane to the limits of the galaxy, while the slower front of electromagnetic radiation, which does not penetrate so fast, would go behind, leaving as it progresses and cools down a process of confinement of lighter quarks into hadrons and nucleons.

Ahead, the Higgs, a faster-than-light boson (or rather 'above'), as the explosion would split into perpendicular 'faster-than-light, asymmetric jets' and planar waves of matter in different fashion, given the different polarization of neutrino waves and electromagnetic ones, would push away the remnant energy.

However cosmologists invent a…

Quark-antiquark Period?

So next, without understanding the past-future duality of quarks-antiquarks, cosmologists thought that the universe, which consisted mostly of energy in the form of neutrinos and photons given the enormous energy density of the big bang of the black hole, the cosmos-galaxy, would exist as a collection of quarks and antiquarks along with their exchange particles, a state which has been described as a "superhot plasma of anti-quarks and quarks, annihilating". This time period is estimated at 10⁻³² to 10⁻⁵ seconds, and that annihilation would provoke another big bang (cosmology is full of big explosions).

Now 20 years ago, when I first copyrighted 5D physics I affirmed it would be a quark-gluon cold superfluid soup, a perfect time vortex of under zero temperature, since:

– Temperature is a measure of entropic disorder applied only to this side of the universe and to thermodynamic, molecular world.

– Inside the black hole there is a top quark star in solid or superfluid states of boson perfect form.

– The anti-quark is the death moment of the quark, which lasts only a second; and so it is not the state of the anti-atom entropic big bang, which would produce a massive wave of quarks decaying from top into dibaryon strangelets that would form the halo.

And here is where 5D physics proved itself, in 2008, when they made a perfect liquid at RHIC.

But then I sued the entire global community of physicists, because next they could provoke a strangelet ice-9 reaction and blow up the Earth. So I was never accepted in any forum of physicists again.



(Graph: the standard model.)

In the next graph we can see how this dark, gluon-top quark soup (toplet liquid) of a gas-9 reaction (black hole quasar explosion) neatly reclassifies the 3 families of increasing mass in the Universe:

In the graph, a first look at the reordering of the strange and top quark triangles of mass, to define the symmetry between the ∆-1 and ∆+1 scales of ‘atoms and galaxies’, in terms of its ∆-1 components (quarks).

To fully understand the previous graph, you should consult the next post on atoms, where quarks are treated in full. We have already treated, in the introduction to this post, the analysis of the ternary symmetry between ∆-1 quarks and ∆+1 parts of the galaxy:

∆+1: top quarks->To: Galactic black holes, ∆-strange quark: Sæ: galactic halo, ∆-1: Ud quarks (our matter): ST-galactic stars.

Now, in this system, it is necessary to understand the role of the Higgs field and top quarks, which constitute the outside-the-galaxy dark matter and dark energy.


Now it is fundamental to understand that gravitons do NOT EXIST in the sense given by physicists – as the attractive gauging particle between physical masses – but gravitomagnetic transversal waves of neutrinos do exist and are essential to the structure of galaxies and solar systems.

THIS IS THE SOLUTION to the conundrum posed before of the 'neutrino ½ spin' vs. the graviton 2 spin. Simple: neutrinos are the equivalent, in the next cosmic scale of the 5th dimension, of an electromagnetic repulsive wave of 'dark energy-repulsive gravitation'.

The neutrino theory of light, like the pilot-wave theory and particle-wave duality, is the work of De Broglie, brutally assaulted by the Bohr gang, who took his work as theirs and misinterpreted it. A neutrino is the 'emerging' flow of entropic communication between two atoms, or atom-galaxies, which either goes the entropic way, dissolving-stretching space-time into action at distance, and disappears without a trace of information: V=s/t=s/0=∞.

Or it is not in an open 'domain' but constrained at its two limiting ends by the two atoms/galaxies it communicates; and then this condition (to be guided and controlled by two point-particles at their ends) is the only condition needed for two inverse neutrinos with inverse motion, that is, one emitted by particle a and the other by particle b in a communicating act:

Fermion < neutrino+Neutrino>Fermion

to form a light boson that 'warps' pure gravitational action-at-distance space-time, giving it the form, the information, needed to exist. So neutrinos do form light beams and do dissolve them. And that is another beautiful thought of De Broglie on the path of truth in science: economicity, simplicity and causality.

As we have shown ad nauseam the symmetry of scale between atoms, with positive quarks and negative electrons, and galaxies, with positive black-hole top quark stars and negative strangelet halos, it follows now that dark energy waves will be just the equivalent, created by dark energy flows which acquire form between galaxies. The math of it is more or less complex, but as we REPEAT ad nauseam, the beauty of the fractal, organic, vital, topological=mathematical universe is that we can explain it much more simply and intuitively, using the concept of symmetries between scales, organic properties and the laws of the scientific method of truth (economicity, simplicity, etc.), as long as in the background we use sound mathematical theories (as the neutrino theory of light is, provided the neutrinos are indeed, as Jordan proved, exactly inverse and constrained at their ends by the particles that use them to entangle and communicate). So we do not need you to be a math-addict; just trust me, I read physics, love maths and only use sound theories of the Universe, unlike, we must say, many of our fantaphysicists these days.

So there must be G-waves of dark entropy and dark faster-than-c gravitation in the upper scale between galaxy-atoms; and, at light speed, constrained by the galaxy's inner structure, between stars and planets, and black holes and stars.


The unknown limit of human perception is the metric point of 0 information and ∞ speed, from where we obtain our entropy of motion, beyond the ∆-3 scale of pixel perception. It is the neutrino background, or dark energy of expansive death of c-light speeds; of repulsive gravitation, which goes beyond c, with negative mass – a fact stubbornly denied against evidence in physics. Indeed, neutrinos always used to be calculated as negative masses, till 'adjustments' to the 'error'≈bias introduced by theoretical dogma corrected those experiments. Indeed, all measures of their mass before this century gave a faster-than-c, negative energy-mass, meaning they are tachyons of the ∆-4 dark energy, repulsive gravitation, pure entropic scale:

In the graph, neutrinos have consistently been measured to have negative masses and faster-than-c speed, corroborating the models of 5D physics; and yet systematically scientists deny their own experiments to uphold the fantasy of a single scale of space-time where c-speed is the limit and mass is always positive. Yet beyond the galaxy, c-speed is the lower limit, gravitation is repulsive, and a vortex of mass is an explosive process of entropic, negative-mass destruction.

The negative number question, solution and imaginary numbers.

Next comes the question of the negative solutions to those equations: what do they truly mean? As we explain in number theory, Euler's vision of them as inverse numbers is the proper meaning. So they do exist, which has clear consequences in areas such as relativity, where negative mass means only an entropic process of expansion of mass into entropy. I.e.:

E=mc² does not mean energy (really entropy in this case) is mass, as mass is on the other, 'inverse side' of the equation.

So the real equality happens when m moves to the same side of E:


E/m = c²


Which defines negative mass as an expansive, entropic destruction of mass. And so in relativity the 2 solutions can be put as an example of the 2 roots of quadratic equations (one discarded in processes that are social and accumulative):


In the graphs, space-time distortion is a common method proposed for hyperluminal travel. Such space-time contortions would enable another staple of science fiction as well: faster-than-light travel.

Warp drive might appear to violate Einstein’s special theory of relativity. But special relativity says that you cannot outrun a light signal in a fair race in which you and the signal follow the same route. When space-time is warped, it might be possible to beat a light signal by taking a different route, a shortcut.

The contraction of space-time in front of the bubble and the expansion behind it create such a shortcut.
Within the tube, superluminal travel in one direction is possible. During the outbound journey at sublight speed, a spaceship crew would create such a tube.

Like warp bubbles, it involves negative entropy, which again, in the previous equations, would just mean moving E to the mc² side, increasing its density.
Almost every faster-than-light travel scheme requires negative energies implemented at very large densities. And so there is nothing special about it.

In the graph we can see how negative mass breaks the c-speed barrier much as the sound barrier is broken, provoking an empty shock wave without 'light space-time', torn out as the mass slides into the lower, faster-than-c scale of invisible gravitation with no information, becoming coded as a neutrino wave, which might collapse back thanks to its negative energy, resurfacing far ahead back on the ∆-3 light scale.

Now, because gravitation comprises the inverse lower and upper scales to that of light space-time, in the Universal game of Russian dolls, the main examples of 'negative' energy appear precisely when the quantum or macroscopic scale is close to the ∆±4 world: in the quantum potential field of a particle moving >c (Bohm's model of quantum mechanics) and in the ∆+4 region near a strong gravitational field:
– Radial electric or magnetic fields, if their tension were infinitesimally larger for a given energy density.
– Squeezed quantum states of the electromagnetic field and other squeezed quantum fields.
– Gravitationally squeezed vacuum electromagnetic zero-point energy.

In general, the local energy density in quantum field theory can be negative due to quantum coherence effects.

Other examples that have been studied are Dirac field states: the superposition of two single-particle electron states, and the superposition of two multi-electron-positron states. In the former (latter), the energy densities can be negative when the two single- (multi-) particle states have the same number of electrons (electrons and positrons), or when one state has one more electron (electron-positron pair) than the other.
Since the laws of quantum field theory place no strong restrictions on negative energies and fluxes, it is possible to produce violations of the second law of thermodynamics, and time machines at a local level, which is what GST precludes.

Modern neutrino measures confirm v>c; m<0

In the graph, 'again' we measured faster-than-light speed, but the meek, peanut dogmatic brain of modern children of thought caused such a fuss among idolaters of 4D that the experiment was 'again' considered faulty and a cable blamed. The neutrino thus always moves, as all its space-time sheet, at v≥c, while light exists on the other side, v≤c, touching each other on the c-scale in which they exchange entropy and information, as neutrinos give birth to a photon by slowing down to ≈0 mass, or rather tunnelling from -0 to +0 mass, around the 'asymptotic' infinite carrier of c-speed.





The lowest scale known in the Universe is the h-Planck 'neutrino' scale of dark energy, expansive gravitation, which therefore has, according to experimental evidence and the metric of the 5th dimension, negative mass (as negative is merely, in a dynamic Universe, the inverse value; so since mass is an implosive vortex, negative mass is an entropic, explosive vortex) and faster-than-c speed, as 5D metric implies smaller scales with less information have higher speed.

Hence the invisibility (less information), non-locality (faster than c-speed) and expansive nature of interstellar space, the scale of the Universe in which the neutrino background dominates.
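As a toy numeric sketch of that metric statement (Spe × Tiƒ = constant implies smaller scales run faster clocks), assuming an arbitrary unit constant and purely illustrative scale sizes:

```python
# Toy illustration of the text's 5D metric claim: Spe x Tif = constant,
# so a smaller spatial parameter (less information) implies a faster clock.
# The constant and the scale sizes below are illustrative placeholders,
# not measured physical values.

METRIC_CONSTANT = 1.0  # Spe * Tif, assumed invariant across planes

def clock_speed(spatial_size):
    """Clock speed implied by the metric for a given spatial size."""
    return METRIC_CONSTANT / spatial_size

# Halving the spatial parameter doubles the clock speed:
assert clock_speed(0.5) == 2 * clock_speed(1.0)

# A descending ladder of scales gets monotonically faster clocks:
sizes = [1.0, 1e-10, 1e-20, 1e-30]   # human, atomic, particle, Planck (schematic)
speeds = [clock_speed(s) for s in sizes]
assert speeds == sorted(speeds)       # smaller scale -> faster clock
```

The inverse proportion is the whole content of the sketch: any rescaling of the constant leaves the ordering of clock speeds unchanged.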

It is then obvious that testing a negative mass and faster than c-speed motion for neutrinos will mean the proof of the 5th dimension metric, right? Wrong.

Because in the present age of 'fantaphysics', data is NOT taken as seriously as abstract theory. Since the neutrino was detected there have been an enormous number of tests in which it gave negative mass and faster-than-light speed, and every time the experimenters have denied it, to uphold the single space-time continuum, 4D-only metric Universe.

Indeed, the last time was the OPERA case (graph), which provoked such outrage among physicist believers that after two positive experiments, in a sort of coup d'état to protect their careers, the experimenters disowned themselves and 'magically' accused a loose cable of making the wrong measures twice (:

IT IS A CASE similar to the inverse one of big-bang dogma, which accumulates so many instances of falsity that it is a shame to talk of it as science.

Unfortunately, inflationary information plagues the idealist school of physics that studies the invisible scales, due to the lack of a general model of the fractal Universe with its isomorphic 'allowed laws' and the limits TO mathematical, HOMOLOGIC METHODS.

WITHOUT a serious philosophy of truth and the understanding of ¬Æ, everything then is OK, because on top a fancy computer model will make it look nice and real. Yet as Einstein put it, not all that is mathematically true is real. And certainly the imagination of the digital mind is NOT necessarily truth, even if 'virtual computer reality' looks 'so real'.

So what can we know about the scales beyond our perception? A lot can be inferred in a 'detective inquiry', which however MUST OBEY AN ABSOLUTE LIMIT OF TRUTH:

‘Science must not occupy itself on equations and themes of which there is NOT any evidence whatsoever’ Einstein.

This is completely denied by 'practitioners' of fantaphysics, equipped with Cray computer simulations. So we must limit completely and soundly ignore most of what they do; specially those huge extensions of 'local facts' to 'global symmetries' to pump up the importance of self-discovery, including the cosmic big-bang instead of the quasar big-bang (based on a projection to infinity, in space and time, of a local measure: the Hubble constant, run back to the past explosion), and string theory without 'limitations' and without understanding of its essential concepts: open entropic strings, cyclical, closed time strings, and ST-present vibrating strings; dimensions (of time and space), etc.

Once GST clarifies those concepts… it is then that the isomorphisms of the Universe allow us to study the scale beyond perception at the Planck limit of theoretical physics, of which 'a lot can be said' with 'certain certainty' if we use GST to limit which theories are real and which are mere mathematical truths.

The Neutrino scale.

In that regard, the ∆-4 scale of the cosmos has in its micro-spatial finitesimals, as all planes do, the minimal elements needed to become a 'whole plane of existence' described by the ternary generator, whose elements are:

-Tiƒ: only one known particle, ν, the neutrino, which therefore IS THE EQUIVALENT of the ¥-photon (the only 'known' particle that completely fills the light space-time ∆-3 scale in which we live). And it also shows the duality particle-wave as:

-An ST wave, with 3 'time phases' of evolution, parallel to those of electrons and quarks (electron, muon and tau neutrino).

-And an Spe field, of which humans have only a theoretical term, added ad hoc by Einstein: a Universal constant that defines the entropy/energy (physicists are at pains to define both, so for the time being we shall just say 'e'), lambda, the cosmological constant of Einstein's space-time, which tilts to one side or the other around zero to define the 3 'topological' arrows of all space-time fields of the larger scales of the Universe.

So, as in the neutrino's ternary particles and waves, we do find in the parameter of pure space, at the Planck scale, the 3 topologies described by the EFE:

  • A warping space (elliptic solution), a wave space (hyperbolic solution), an expansive space (big-bang solution).

This we know as certain and it is worth exploring. Many other 'fantaphysical' particles and equations are either wrong, or redundant equations which merely obscure the understanding of those known-known, theoretically sound, experimentally proved elements from which to start our construction.

Then, with the homologic, isomorphic method, we can easily PICK among all the work of fantaphysics and sound physics on the unknown ∆-4 Planck scale the theories that make sense, with the particles that make sense, and reject all others in a simple, efficient manner, using our Occam's razor to throw them out… 'to the nearest black hole'.

So what do we have to connect the neutrino BG and the larger ¥-ray scale, and what does it tell us about the neutrino in ∆-metric?

– We actually have the theory of the forgotten genius of quantum physics, Mr. De Broglie, an autist French aristocrat (being a French-Spanish≈Catalan one, known beyond these webs for his autism, I do have a feeling for him; but don't worry, I don't choose people but theories). It is called the neutrino theory of light, and IT is right, and it tells us a few things:

  • As De Broglie shows in his wave-particle duality, expanded by Bohm and today proved the right interpretation thanks to the EM engine (HE, not Bohr, discovered and interpreted it properly; Bohr just stole the fame and misinterpreted it), photons must have a minimal mass and an internal clock. We have seen both things to be equal: a mass is a closed cycle, so ALL particles must have a minimal mass, as a closed, 'vortex-like' motion offers 'inertia' against lineal displacement.

Then it is obvious by 5D metric (Spe x Tiƒ ≈ ∆; Spe ∆-1 > Spe ∆; Tiƒ ∆ > Tiƒ ∆-1) that the lower ∆-1 scale of light, the 'quantum potential' of B2 theory (ab. Broglie-Bohm), MUST be faster than the c-light scale; and that light must have a potential limit of life, as mass always decays (which must be calculated in terms of a quasar big-bang and the loss of light at far-away distances that darken away: we do NOT see light beyond a limit of distance, not because of a cosmic big-bang limit of Universal age, but because light tires into red, expansive motion and dies away into neutrinos).

In the next '∆-3' scale we shall explain all this. De Broglie explained that 2 neutrinos can give birth to a light beam; as usual the guy was panned by the Bohr circle, as that man would do whatever it took to steal the fame of the duality theory.

So with their cut-throat approach they found a small error: for those 2 neutrinos to give birth to light, they must be perpendicular to each other. Alas! THAT IS precisely THE function/form in GST for smaller particles, which ARE used to communicate the UPPER scale and to generate it (so you are generated by cells, and when communicating you are looking at the person you talk to). So indeed, precisely in that way neutrinos connect and entangle, and allow spooky effects between photons and ∑photon=∆+1 electron particles: NEUTRINOS connect them at faster-than-c speeds (non-locality), from A to B; then the B-particle communicates with A, and more complex information at slower speed, called a ¥-ray with more 'form' (transversal as opposed to longitudinal polarisation), happens.

It is NO longer a neutrino; a ¥-ray is born. And vice versa: the ¥-ray dissolves, after going 'redshift' to the limit of its entropy, into neutrinos. This we said for decades, to deaf ears (PHYSICS is a fundamentalist, dogmatic science, closer to religion in its pretension to substitute for philosophy of science as the mother of all truths; so, astounding as it might sound to the naive idealist scientist, as with economics, another religion of mechanical power, it doesn't take lightly attacks on dogma, and dogma is mechanical power).

Yet now as new 'players' (Asians) come in, a bit of reason is entering. So the neutrino theory of light and neutrino physics are all the rage. We shall not enter into hardcore theory, as it would make it all very cumbersome (perhaps in the 3rd line); this is not the purpose of this blog, which just tries to 'prune the tree of science' of its bad fruits (ethically and intellectually). But alas, a good paper on the reverse 'light theory of neutrinos' (decay and creation of them at >>c speed) has recently been blogged. So we cut-paste this text:


“If photons can die, they could give off particles that travel faster than light.

Many particles in nature decay over time. For instance, radioactive atoms are unstable, eventually breaking down into smaller particles and giving off energy as they do so.

Scientists generally assume photons do not break down, since they are thought to lack any mass with which to decay. However, they might instead potentially have masses too small for current instruments to measure.

“How much do we actually know about photons?” asked particle physicist Julian Heeck at the Max Planck Institute for Nuclear Physics at Heidelberg, Germany. “They led to several revolutions in science, but their properties are still a puzzle.”

The current upper limit for the mass of the photon is less than two-billionths of a billionth of a billionth of a billionth of a billionth of a billionth of a kilogram. This would make it about less than a billionth of a billionth of a billionth of the mass of a proton.

The extraordinarily long lifetime Heeck calculated is an average. “There is the possibility that some photons have decayed,” he said.

If photons do break down, the results of such decay must be even lighter particles, ones that would travel even faster than photons. Assuming photons have mass, “there is only one particle we know from the Standard Model of particle physics that might be even lighter — the lightest of the three neutrinos,” Heeck said.

Neutrinos are ghostly particles that only very rarely interact with normal matter. Countless neutrinos rush through everyone on Earth every day with no effect.

“It might well be that the neutrino is lighter than the photon,” Heeck said. In principle, each photon might decay into two of the lightest neutrinos.

“The lightest neutrino, being lighter than light, would then actually travel faster than photons,” Heeck said.

The idea of neutrinos that move faster than photons would seem to violate the notion, based on Einstein’s theory of relativity, that nothing can travel faster than light. However, this assumption is based on the idea of the photon not having any mass. Einstein’s theory of relativity “just states that no particle can travel faster than a massless particle,” Heeck said.

Intriguingly, the speed that photons travel at means their extraordinary life spans will pass by quickly from their perspective. Einstein’s theory of relativity suggests when particles travel extraordinarily quickly, the fabric of space and time warps around them, meaning they experience time as passing more slowly than objects moving relatively slowly.”
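The worded numbers in the quoted article can be sanity-checked: 'two-billionths of a billionth of a billionth of a billionth of a billionth of a billionth of a kilogram' reads as 2×10⁻⁵⁴ kg, and dividing by the CODATA proton mass indeed gives roughly 'a billionth of a billionth of a billionth' of it:

```python
# Sanity check of the quoted article's worded figures.
photon_mass_bound = 2e-54    # kg: 2 x 10^-9 x (10^-9)^5, as worded in the article
proton_mass = 1.67262e-27    # kg (CODATA value)

ratio = photon_mass_bound / proton_mass
# "about less than a billionth of a billionth of a billionth" ~ 10^-27
assert 1e-28 < ratio < 1e-26
print(f"photon/proton mass ratio bound: {ratio:.2e}")  # ~1.2e-27
```

So the article's two worded figures are mutually consistent to within rounding.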

Indeed, the end is what decided me to pay-per-view Physics Letters to read the maths, and it looks good. I liked this fundamental truth of GST: our relative inner subjective time clocks make our lives similar in time. So a cell 'dies' by splitting into 2 new cells every 1 to 3 days. An ant that thinks 10 times faster lives 7 years (queen), equivalent to our 72 years (7 x 10).

And photons, the 'cells' of all our light space-time Universe, live, in internal time-clock, a billionth of a day; since according to my calculations they live in external time for 10 billion years (as stars do, and as the horizon problem seems to indicate), from their perspective they will only live about three years.
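The subjective-clock arithmetic of those paragraphs, written out with the text's own figures (72 human years, an ant clock 10 times faster):

```python
# The text's subjective-clock rule: a species with an inner clock N times
# faster lives an external lifespan N times shorter, so subjective spans match.
human_lifespan_years = 72
ant_clock_factor = 10          # text: ants "think 10 times faster"

# External ant-queen lifespan matching a human's subjective span:
ant_queen_years = human_lifespan_years / ant_clock_factor
assert ant_queen_years == 7.2  # ~ the "7 years (queen)" quoted in the text
```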

Errors and insights on ‘invisible physics’. The scaling and hierarchy problems.

It is worth mentioning again that, as physicists do not study the scalar ∆-Universe, things are a bit messy in the department of a clear streamlining of equations and knowledge. The problem has no solution, due to the inflationary nature of professional computer-aided research, which needs to produce meaningful papers and cannot CLOSE on THE SINGLE TRUTH, given the myriad grants and money for research into all those other idealist 'truths' that pump up budgets for Cray computer power.

Indeed, the biggest HURDLE TO ACCEPTING TRUTH, WHICH IS ALWAYS SINGLE (ALBEIT with multiple causality to create a being), in this case the neutrino truth of the ∆-1 scale, is that THERE ARE TOO MANY PEOPLE inventing particles and feeling like gods of physics to tell them the THEME IS CLOSED: ONLY NEUTRINO theory is needed to explain the lower scale and all its synonyms (dark energy, interstellar space, super gravitation, string Planck scale, etc.). Cut the bullshit, or else it is all messy.

And of course, add GST so things make sense between discontinuous scales. One simple example will suffice:

The most important and obvious of those errors is the so called ‘Catastrophic vacuum’.

In cosmology, the cosmological constant problem is the disagreement between measured values of the vacuum energy density (the small value of the cosmological constant) and the zero-point energy suggested by quantum field theory.

Depending on the assumptions, the discrepancy ranges from 40 to more than 130 orders of magnitude, a state of affairs described by Hobson as “the worst theoretical prediction in the history of physics.”

The magnitude of this discrepancy is such that the statement “the observable universe consists of exactly one elementary particle” is at least ten orders of magnitude more accurate (-;

Several authors have recently identified and pondered the significance of this erroneous ratio of the theoretical and observational estimates of the energy density of the vacuum, but only Nottale (1993), one of the few physicists who have my deep respect for his study of the scalar Universe (albeit only in the context of physics), rightly associated it with a scaling law for the cosmological constant.

In fact, the reader will realise, if he has enough knowledge of GST, that the number is exactly a 10^(10 x 3 x 4) = 10¹²⁰ scaling: the canonical 10¹⁰ difference of Sp-length (a single dimension) between the 'units' of each higher plane, multiplied by the 4 dimensions of space-time through which the force can spread for each plane, and the 3 sub-scales within each 'big plane'.

So what this means is that, as each plane 'filters' the energy and information that can 'emerge upwards', we have a '3-times hierarchy problem' here. Indeed, if the contiguous electromagnetic ∆ and gravitational ∆+1 scaling gives us 10¹⁰ across the 4 dimensions through which the force spreads = a 10⁴⁰ difference between both forces (explained in depth in the analysis of the unification equation of forces and charges), here we must identify 2 'more' jumps of fundamental planes of physical space-time, and postulate that through those 3 planes the hierarchy problem scales from 10⁴⁰ to 10¹²⁰:
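The exponent bookkeeping of that argument can be written out; a minimal sketch using only the figures the text itself gives (10¹⁰ per plane, 4 dimensions, 3 plane jumps):

```python
# Exponent bookkeeping for the text's resolution of the vacuum catastrophe:
# a 10^10 Sp-length scaling per plane, spread over 4 space-time dimensions,
# repeated across 3 jumps of fundamental planes.
per_plane_exponent = 10   # canonical 10^10 length scaling between planes
dimensions = 4            # dimensions through which a force spreads
plane_jumps = 3           # fundamental plane jumps identified in the text

single_jump = per_plane_exponent * dimensions   # exponent 40: Q vs G hierarchy
total = single_jump * plane_jumps               # exponent 120: vacuum discrepancy

assert single_jump == 40
assert total == 120
```

Note the bookkeeping is multiplicative in the exponents, i.e. (10¹⁰)⁴ = 10⁴⁰ per jump and (10⁴⁰)³ = 10¹²⁰ overall, which lands inside the 40-to-130 orders-of-magnitude range quoted above.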


The scales of the Universe, are those of the fifth dimension:


So the one we talk about here is the lowest/highest, ∆±4 (dark energy, dark world, repulsive gravitation: expansive entropy-space, akin to the expansive electromagnetic waves between 'galaxy-atoms').

The nested structure of the fractal Universe as seen from the ∆o thermodynamic scale of the human inner world.

Humans are in that sense the summit of a certain molecular game, mostly belonging to light, gaseous atoms (we are a mere 1-6-7-8 combination, hydrogen, carbon, nitrogen and oxygen, of a ~100-element scale). And as such, only anthropomorphism makes us believe we are the center of the game.

At the bottom we see the dark world beyond the diatomic and simple molecular clouds of hydrogen-galaxies (ab. HG), the commonest observable in this region of the cosmos. In that sense, the scalar Universe in nested format means that, paradoxically, larger structures of wholes reach deeper into ∆-scales, and so have as minimal quanta of space and curvature a smaller part.

In the dark world it seems to be the cosmological constant, which is a measure of the curvature of cosmological space-time (∆-4).

Yet our interest peaks with the scales we observe, the next one, ∆±3, from particles to atoms of the inner world of galaxies. This scale is quite symmetric, and we can treat galaxies as electronic nebulae around protonic top quark dark atoms, bcb black hole frozen stars. This simple scheme of cosmology, which best suits the rules of epistemology and truth, implies that each galaxy-atom, with a positive Kerr black hole of top ++ quarks and a surface of negative strangelet electrons, with the softer mass-density of our stars in the middle, will be at cosmic level a predator-prey, information-entropy situation, where ordinary matter is the entropy of dark matter:


In the graph, we can see the solution of the vacuum catastrophe along with the solution of the hierarchy of forces:

Humans, who are at the first scale where gravitational forces overcome thermodynamic ones (hence gravitational mechanics guides our motions more than temperature changes), are 10 scales up from the atomic, electromagnetic scale; multiplied across the canonical 4 dimensions of relativity's single space-time continuum, this explains the 10⁴⁰ hierarchy problem between Q and G forces, which we easily unify with 5D metric.

Since the scaling is 'lineal' along the Sp-parameter, but a force spreads along the 4 dimensions of a single space-time, we do have to multiply it by four.

On the other hand, the Planck scale is in the 10⁻³⁰ range. So it is 3 times the 10⁴⁰ hierarchy problem: the 10¹²⁰ hierarchy problem, solved easily.
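That 10⁴⁰ figure matches the standard, measurable ratio of electric to gravitational attraction for an electron-proton pair; a quick check with CODATA constants (the choice of pair is the conventional textbook one, not specified in the text; the distance cancels, so the ratio is a pure number):

```python
# Ratio of Coulomb to gravitational attraction for an electron-proton pair.
# Both forces fall off as 1/r^2, so r cancels and the ratio is dimensionless,
# ~2 x 10^39: the "10^40" Q vs G hierarchy discussed in the text.
k_e = 8.98755e9     # N m^2 C^-2, Coulomb constant
G   = 6.67430e-11   # N m^2 kg^-2, gravitational constant
e   = 1.60218e-19   # C, elementary charge
m_e = 9.10938e-31   # kg, electron mass
m_p = 1.67262e-27   # kg, proton mass

ratio = (k_e * e**2) / (G * m_e * m_p)
assert 1e39 < ratio < 1e40
print(f"F_coulomb / F_gravity = {ratio:.2e}")  # ~2.27e39
```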

So what the cosmological constant tells us is that we are in the 'Planck scale', the scale of the cosmological constant, of the cosmos, of dark energy, of neutrinos… and theoretical strings; neutrinos being the known-known stuff, and theoretical closed and open, time-like and space-like strings the theoretical stuff which we know to be true but not real (and certainly in need of a serious conceptual upgrade).

Indeed, the weak-force angle of neutrinos gives us a theoretical value for them of the size of the Planck scale, while particles such as the electron must be point-particles smaller than the probing limit of 10⁻¹⁸…

So all would indicate, really, regardless of intermediate scales, that in the largest 10⁹⁻¹¹ jumps of scales indicated in the above graph, we have the human, lower-gravitational 10⁰ scale, the atomic 10⁻¹⁰ scale, the particle 10⁻²⁰ scale and the neutrino, Planck, 10⁻³⁰ scale.

Thus the scale of the cosmological Sp-constant is the same as the scale of the neutrino Tiƒ particle. And that is really the most essential fact we have discovered. What, though, is the influence of this neutrino field? Is it the graviton field? What is the weak force, which neutrinos carry? Etc., etc. A lot of questions remain unanswered here, and we shall try to fill them in this year 2017, as the first and second lines are starting to resemble 'something readable'. But when, I don't know; it depends on the moods of the days. To state so far that:

  • No, the graviton is NOT the neutrino, but the Planck mass, the micro black hole of a Compton length. No need for any inflationary theory; just accept facts that completely match theory and praxis. These are the micro black holes that do NOT evaporate (sorry, Mr. Hawking, you have 'no idea' of what time is and how information travels to the past, NOT energy, and certainly not black holes) but are distributed all over the galaxy and form the fundamental gravitational field.
  • No, the weak force is NOT a spatial force, but a temporal dimension of 'trans-form'ation of particles.
  • And so the neutrino is a very curious type of 'entity', one that travels back and forth 'across' the 5th, scalar dimension, moving energy and cyclical momentum between scales. And we shall leave it like that.

Now the way to see this is the Russian-doll theory of wholes and parts:


So the dark cosmological energy scale must be served by the neutrino-string scale. And for that reason we find symmetries:

∆±4 dark energy, cosmic neutrino interstellar waves and strings, cosmological strings.

∆±3 atomic-particle symmetry with galaxy, black holes and stars.

∆±1 symmetry between molecular genetic dna and human memetics of human super organisms.

∆±0: human mind, cell

