MATHEMATICAL GATEWAY TO COMPLEMENTARY HIDDEN VARIABLES IN MACROPHYSICS

It is shown that even physically meaningful and experimentally confirmed formulas of physics and mathematics can be extended by allowing some previously unrecognized (or tacitly fixed) parameters either to vary independently, thereby revealing them as formerly hidden variables, or to become fixed exposure functions whose cumulative impact varies along yet another formerly hidden variable. Uncovering hidden variables requires a new, synthetic approach to mathematics. The need to reveal hidden variables is prompted mainly by unanticipated experimental results whose more precise outcomes challenge the paradigms upon which the simpler former formulas were established. Operational rules of calculus can reveal hidden variables that extend those laws of classical physics whose predictions disagree with new experimental evidence. The presence of such variables has already been confirmed in several experiments.


INTRODUCTION
Ongoing progress in technologies and refined experimental techniques has led to advances in measurement, resulting in more accurate experimental data. Some of these data point to inconsistencies that make reconciling the newer data with theoretical predictions derived from previously invented laws of nature virtually impossible, even when the laws are mathematically supported [1]. This is usually an indication that the mathematics behind those former laws was too simplistic [2]. Physical laws evolve as we dig into their mathematical underpinnings on an ever-deeper level of inquiry.
Pure mathematics has not yet developed generic methods for inventing new formulas that could represent the more sophisticated laws of nature suggested by the results of new, more precise experiments. In particular, there is no generic mathematical methodology for enhancing old laws of mathematics and physics through operational or structural synthesis. Courses on differential equations traditionally focus only on solving them (mainly via conceptual shortcuts and often very crude approximations), not on improving them conceptually. Even though many former mathematical rules and formulas developed for the physical sciences exhibit conceptual deficiencies and, in some cases, flagrant operational incompleteness for which no mathematical justification can ever be provided [2], there is no systematic mathematical quest to elevate them onto a conceptually higher level.
Since all laws of nature must be testable in order to be scientific, though not necessarily invariable [3], the old laws should be upgraded when new unbiased experimental results challenge them.
Nevertheless, it is possible to synthesize new, previously unanticipated operational formulas, which have reconciled experiments formerly deemed irreconcilable [4] and/or quite unexplainable (see [1], for example). Hence a new synthetic mathematics, as opposed to the old one often associated with Poncelet, has the potential not only to play the leading role in the operational support of the most exact sciences, but also to rectify its own shortcomings in the process. Pure mathematics has steadily deteriorated because it propagated tacitly veiled slipups disguised as legitimate attainments while ignoring some of its own proven laws/rules [2] and disregarding even its own inconvenient achievements when the latter dared to challenge its simplistic concepts or its overly simplified methods [5], [6].
The most common way of solving difficult equations was to reduce them to simpler ones, often via approximations that diminished the suitability of the so-obtained solutions to the problem at hand, which the equations were supposed to represent. Unlike approximations, which often tend to reduce the number of active predictors (i.e. differentiable variables) by replacing them with some (often easier-to-handle) passive predictors (i.e. fixed parameters), the new synthetic method seeks to enhance the adequacy of the solutions. This can be achieved by actually increasing the number of operationally active variables (and/or the function-variables encapsulating them) that could be influencing the model described by the given equations. In fact, even some former parameters may become variables in the enhanced formulas, if that is what the formerly unanticipated experimental results suggest.
If the above statement sounds like a paradox, consider this: by oversimplifying the definition of potential energy, for example, former mathematics tacitly discarded all contributions to the energy, other than the purely radial ones, whose impact was not quite understood. As a consequence, any serious balancing of such an incompletely defined potential energy against the work done that is compensated by the energy could never actually be performed, because an incomplete energy may not always equal the total energy involved in the given field interactions. It was a farce when physicists pretended to balance what they knew (or should have known) to be an incompletely defined potential energy [2]. This charade has been perpetuated in physics for decades [7] and in some cases even for centuries [8].
By enhancing the former mathematical expression of potential energy with extra independently varying variables and formerly unrecognized parameters [9], however, the balancing act was made operationally legitimate [10] as well as experimentally confirmed (i.e. physically adequate too) [1]. Reduction for the sake of simplicity is not always a reliable guarantor of acceptable solvability of the reduced difficult equations, not to mention the appropriateness of the so-obtained simplified solutions.

MATHEMATICS STAGNATED BY IGNORING FEEDBACK FROM EXPERIMENTS
By solving generic mathematical problems for physics, mathematics supplies both the operational and structural underpinnings for the laws of the physical sciences. Without operationally legitimate and structurally sound mathematics, physics is not always an exact science.
Yet all too often feedback coming from physics and applied mathematics was disregarded, causing pure mathematics to focus inwards on merely providing pedantically correct answers even to not quite correctly formulated questions. The aversion to accepting constructive feedback also hurts pure mathematics itself, which still perpetuates some correctible operational inaccuracies as well as rectifiable conceptual mistakes, despite their devastating impact on research and education [2], [8].
One of the most blatant blunders was caused by the unfortunate traditional definition of the rate dW of the physical work done W that is compensated by the potential energy of the radial/center-bound field of force vector F acting over the distance pointed to by the distance-pointing vector r as

dW = F·dr.

Notice that the so-defined rate dW is operationally wrong and thus conceptually inadmissible, not to mention that it is also logically incorrect and even linguistically problematic. For it has misdefined the work done rate as force acting over (the rate of change of distance) dr rather than as force acting over the distance r = |r|, as it used to be understood in physics, at least verbally [11].

ILCPA Volume 50
According to the proven product differentiation rule, the rate of work done (meaning the vector of force acting over the distance determined by the distance-pointing vector r) should legitimately be defined instead as

dW = d(Fr) = F dr + r dF + 2Fr sin2α dα   (1)

where α is the angle of visibility of the trajectory from its perihelion [2]. The operationally complete definition of the work done rate (as the properly evaluated differential of the function of force acting over distance) proposed in eq. (1) is true to its conceptual understanding in physics and legitimate from the operational standpoint of mathematics. The former, operationally incomplete definition of the work done rate made many inaccurate predictions for other than purely radial phenomena [1], because it simply disregarded the nonradial terms [2]. The former definition is illegitimate, for it violates the absolutely mandatory, proven generic operational rule that governs differentiation of all products. Notice that for an at least partly nonradial (i.e. not always purely radial) nontrivial trajectory (dα ≠ 0), as well as for two or more nonparallel radial force fields that cannot all vanish at the same time (dF ≠ 0), the formerly ignored two terms on the far-right-hand side (FRHS) of the chain of eqs. (1) can have nonzero values. The omitted terms proved significant in experiments [1].
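The product-rule part of this argument can be checked symbolically. The sketch below (using SymPy, with F and r as arbitrary scalar functions of a parameter t) verifies only the scalar claim: differentiating the product Fr necessarily yields the r dF term that the traditional definition drops. The angular term arises from the trajectory geometry and is outside this scalar sketch.

```python
# Symbolic check (SymPy) that the product rule forces the extra r*dF term:
# if W = F(t)*r(t), then dW/dt = F*dr/dt + r*dF/dt, not just F*dr/dt.
import sympy as sp

t = sp.Symbol('t')
F = sp.Function('F')(t)   # magnitude of the force along the trajectory
r = sp.Function('r')(t)   # radial distance |r|

dW = sp.diff(F * r, t)                  # total rate of change of the product F*r
incomplete = F * sp.diff(r, t)          # the traditional F*dr term alone
omitted = sp.simplify(dW - incomplete)  # what the former definition drops

assert sp.simplify(omitted - r * sp.diff(F, t)) == 0   # the ignored r*dF term
```

Whenever dF/dt is nonzero, the omitted term is nonzero as well, so discarding it is an approximation rather than an identity.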
To the detractors who sometimes invoke the principle of definitional freedom I would say that one might define anything, but once a notion is arbitrarily defined, the notion should be treated accordingly and logically. Therefore, if one incompletely defines the rate of work done, and consequently that of potential energy, then one also forfeits one's ability to balance, in general, the incompletely defined work done with the potential energy exchanged during realistic physical interactions.
In a gravitational field whose radial [12] force magnitude was famously defined by Newton as

F = GMm/r²   (2)

the omitted nonradial term must split into at least a linear nonradial subcomponent and a compound (or two-pronged, if partly tangential and partly binormal) angular nonradial subcomponent, as indicated by eq. (1). The angular nonradial part yields an equipotential function w(λ) of the work done, for which the potential energy of the given field is spent only along an equipotential surface. The surface is determined by the trajectory's radius-pointing vector r, which points to the radial distance r = |r|, and is codetermined by the trajectory's perihelion radius r_p = |r_p| and by the field's average (i.e. constant) density of matter Q. The function w(λ), which reflects the formerly ignored angular term 2Fr sin2α dα, is

w(λ) = k[Q]λ   (3)

which depends on the density-of-matter functional [Q](λ) that measures an exposure of the orbiting (or perhaps just passing by at perihelion r_p) mass m, assumed to be insignificantly small in comparison to the (assumed huge) source mass M of the locally dominant force field. The mass m traversing (or orbiting) the local field is exposed to the (assumed constant, just for the sake of simplicity) density-of-matter functional [Q](λ) of the dominant source mass M [10], [9]. The previously unrecognized angular distance λ (effectively an extra hidden variable whose presence went unidentified due to the faulty former definition of the work done rate) is measured along an equipotential surface, and k is a proportionality constant whose value should be estimated from relevant experiments [1], [8]. The coefficient k proved equal to 1 in experiments with rays/waves traversing the gravitational fields of the Sun [13] and the Earth [14], which have been reconciled in [1] and explained in [8]. The coefficient can be used for fine-tuning of data from experiments conducted in the vicinity of large gas giants like Saturn and Jupiter, for instance.
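As a purely illustrative sketch of the exposure idea attached to eq. (3), the snippet below assumes a linear exposure form w(λ) = kQλ with a constant intensive density ratio [Q], as the text assumes for simplicity; the numeric values of k and Q are hypothetical placeholders, not measured data.

```python
# Numeric sketch of an equipotential exposure function of the assumed
# linear form w(lambda) = k*Q*lambda; k and Q below are illustrative values.
k = 1.0          # proportionality constant (found to be ~1 in [1], [8])
Q = 0.25         # constant (intensive) density-of-matter ratio, hypothetical

def w(lam, k=k, Q=Q):
    """Cumulative exposure accrued over angular distance lam (radians)."""
    return k * Q * lam

# The exposure is additive along the equipotential arc...
assert abs(w(0.3) + w(0.5) - w(0.8)) < 1e-12
# ...and vanishes when no angular distance is traversed:
assert w(0.0) == 0.0
```

The point of the sketch is only that such a function stays pinned to a constant source strength while its cumulative value grows with the angular distance traversed.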

Note that unlike most functions, which directly depict the evolution of a trajectory or an orbit, or of energy/work as in eq. (3), the equipotential function w(Q(λ)) directly determines an exposure of the mass m to the density of matter Q (of the dominant body of mass M). The exposure takes place over the varying angular distance λ that lies on the equipotential part of the trajectory of the mass m. The radial distance r is always constant on equipotential parts of the trajectory, whereas all the other parameters and variables present in eq. (3) are assumed to be always constant for the sake of simplicity.
Furthermore, the density-of-matter functional [Q] must be entered into eq. (3) as an intensive variable (like temperature), because it must not be determined by the big mass M per its volume, as it was sometimes defined for most practical, not conceptual, purposes [15]. It should be represented by specific gravity, or a similar intensive variable, rendered by a ratio of densities of matter.
Evidently, Q cannot be an extensive variable in eq. (3), for in such a case the mass M would be either eliminated or duplicated, depending on where it would stand in the function-variable Q. In thermodynamics, certain radiative effects not only depend on the amount of heat Q and temperature T, but also codepend on the exposure time to the primary predictors (namely Q and T). Similarly, in eq. (3) the source mass M affects angular nonradial gravitational effects not only directly/radially through its presence within the local radial/center-bound field, but also indirectly, via exposure of the orbiting mass m to the prolonged impact of the mass M. The exposure takes place along the angular distance λ on the equipotential parts of the trajectory/orbit followed by the mass m. This curious, formerly quite unanticipated effect of such equipotential exposure has already been confirmed in several unbiased experiments and observations [1].
As the dominant source of the local field, the mass M thus affects the orbiting/traversing mass m in two distinct ways: directly by its presence (radial impact) and indirectly by tangent exposure to its presence (hence the nonradial angular impact of M). Nonetheless, there is only one mass M, and thus the density-of-matter functional Q must be of an intensive magnitude in order not to introduce the source mass M twice into the same eq. (3). Thus the single (assumed as practically nonrotating) central source mass M exerts a twofold impact on the mass m orbiting (or traversing) its field. For by bending the trajectory of the orbiting mass m, the source mass M also induces an angular twist on the trajectory of the mass m (from a purely mathematical point of view), or on the mass m that follows its trajectory affected by the dominant mass M (speaking in terms of physics).
This issue could have been rectified a long time ago, for already in 1773 AD Lagrange realized that the scalar potential function V(r) = 1/r (where r = |r| is the magnitude of the distance-pointing vector r) solves the Laplace equation [16]. Since the Laplace equation has second derivatives that can spread in all three dimensions, it could induce, according to the Frenet-Serret (FS) formulas, some other than purely radial effects too [17]. When the FS formulas emerged (with a solid proof) about 75 years later, it was probably too late for the successors of Lagrange to consider them and correct the "great master". The FS formulas are still undervalued in physics even though mathematically they are fairly well understood.
Since any forced motion is screwlike in general [17], the nonradial component of the total rate, omitted in the incomplete former definition of the work done rate, must not be ignored or arbitrarily discounted, except in deliberate approximations. For it makes no sense to try to balance an incompletely defined potential energy. Likewise, making conceptual inferences from the allegedly total former potential energy, when it is obvious that the latter was incompletely defined, is a recipe for misconceptions.
The operationally complete eq. (1) leads to a very different mathematical approach to interactions taking place within radial force fields, especially with multiple field sources [10].
One consequence of eq. (1) is that the exchange of energy happening along equipotential surfaces should also depend on the density of matter of the source mass M of the locally dominant gravitational field (i.e. a star or planet) [9], [8]. This conclusion has already been confirmed in several experiments [1]. It is also consistent with a few other unbiased experiments [7], [18], [19], some of which were commonly misinterpreted in the past, mainly due to the operationally incomplete former definition of the work done rate and consequently also of the potential energy that corresponds to the work being done by the central force field. The unfortunate misinterpretations created confusion lingering for many decades [1] or even centuries [8], [20]. Experimental hints not only allowed the discovery of previously quite unanticipated nonradial effects of purely radial fields [1], [10], but also permitted the formal introduction of corrections to the oversimplified former analyses of interactions taking place in radial/center-bound force fields [2].
During the process of making synthetic extensions to the mathematics pertaining to nonradial gravitational phenomena, the effects of certain previously unrecognized (hence hidden) variables have been uncovered [1]. It is thus the purpose of this paper to outline, in abstract terms, the mathematical (as opposed to experiment-driven) gateway to hidden variables.
The possibility of the presence of certain as-yet-unidentified variables/predictors is very important both for enhancements to those currently challenged laws of physics whose predictions disagree with curious experimental results, and for rectifying faulty (or perhaps unduly oversimplified) mathematical tools whose predictive value is likewise challenged by the formerly unanticipated results of those experiments.
Enhancing formulas to fit curious experimental results is common practice in physics. Yet only operationally legitimate formulas should be enhanced, for enhancing the former, incomplete rate of potential energy could give a false sense of correctness as well as discrepancies far larger than the mere ~4% for linear magnitudes [7] or ~11% for angular ones [1].
A new class of exactly solvable potentials was also tried. The new potentials included the impact of the shapes and charge density of the atomic nucleus. Yet the charge density near the center of the nucleus of ²⁰⁵Pb predicted from the new potential function was about 18% too high compared to its value measured in experiments [21], because it was based on the former, incomplete radial-only potential.
Notice that the previously disregarded angular nonradial potential-energy term given in eq. (1) is a twist that comes with the opposite sign, which thus corresponds to a repulsive force [10]. Since a compound nucleus contains multiple protons, some of these protons must introduce other than radial (i.e. nonradial) components to the combined force field of the compound nucleus, and consequently a nonradial twist, because some of their radii cannot always be aligned along the radius of the central field of the protons in the nucleus. Moreover, rotation of the nucleus adds an extra nonradial twist too, which would further diminish the excessive predicted potential that was calculated with the use of the radial-only term, which alone is present in the former, operationally incomplete potential formula.
Recent measurements of the atomic diameter with the use of laser beams found the atom's diameter about 4% shorter than previously estimated [22]. Previously the proton was the only source of the central force field. Now, however, the laser beams introduced an extra source of the combined force field in addition to that of the proton itself, and consequently an extra nonradial angular twist, which was unaccounted for in the former radial-only formula [7]. The omitted angular nonradial part of potential energy must lead to at least 3.48% higher estimates of linear radial magnitudes [7].

MULTIVARIABLE CALCULUS IS OPEN-ENDED FOR EXPANSION OF FUNCTIONS
A partial derivative of a function f() of several independently varying variables with respect to one of these variables is defined as the regular derivative of the function treated as if it were a function of only that one, single variable, with all the other variables assumed to remain constant during the particular operation of partial differentiation. Hence for a function f(x,y) of any two independently varying variables x and y, its partial derivative with respect to x may be conventionally defined as

∂f_x(x,y)_y/∂x = df_x(x,y)_y/dx   (4)

when the two independent variables x, y belong to the same (and presumed as their native) reference frame (x,y) in which the function f(x,y) is directly representable. Notice that the subscripts have only an informative purpose, not an operational one. They only supply information on how to perform operations.
By a native reference frame in general (and a native vector basis in particular), I mean one in which quantities can be directly measured/expressed in the same units as the independent variables, at least in principle. By the same token, by a foreign reference frame (or a foreign vector basis in particular), I mean one in which the independent variables cannot be directly measured/expressed, so that their introduction would likely require some kind of indirect representation [6].
Derivatives are ratios df_x(x,y)/dx of two differentials, which in turn are just rates of change [23]. Hence df_x(x,y) denotes the rate of change of the function f(x,y) over an interval of variability of the independent variable x, with respect to which the function is being partially differentiated. It is thus the rate in the direction associated with the variable x (which is usually assumed to be rectilinear, just for the sake of simplicity, though the assumption is not necessary in general). Some authors point out that an actual (as opposed to symbolic) derivative is effectively equal to its differential [24]. This is true for simple functions, but only in an abstract sense for compound functions. I prefer to view a derivative as a ratio of differentials in order to avoid explicitly distinguishing between simple and compound functions, which will be done implicitly with the use of the chain rule.
Moreover, I am also assuming henceforth that the function f() is always unique, in order not to unnecessarily complicate this presentation. The treatment of nonunique functions as just compound ones comprising several unique functions is explained in many textbooks; see [25] p. 91, for instance.
Hence, according to the above definition (4), the total directional sum of partial derivatives of the function f(x,y) with respect to each independently varying variable can be written as

df(x,y)/dx = ∂f_x(x,y)_y/∂x + [∂f_y(x,y)_x/∂y](dy/dx)   (5)

df(x,y)/dy = ∂f_y(x,y)_x/∂y + [∂f_x(x,y)_y/∂x](dx/dy)   (6)

each of which contains the sum of the active derivative and the other derivative (temporarily made inactive for the duration of the particular partial differentiation performed with respect to the momentarily active independent variable), rationalized by the ratio of differentials of the inactive to the active independent variable, respectively.
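The directional-sum decomposition described above can be illustrated with a small SymPy check; the sample function f below is arbitrary, and y is given an explicit dependence on x so that the rationalizing ratio dy/dx is meaningful.

```python
# SymPy check of the directional decomposition: with y = y(x), the total
# derivative df/dx splits into the partial in x plus the partial in y
# multiplied ("rationalized") by dy/dx.
import sympy as sp

x = sp.Symbol('x')
y = sp.Function('y')(x)           # let y depend on x so dy/dx is meaningful
f = x**2 * y + sp.sin(y)          # an arbitrary sample f(x, y)

yy = sp.Symbol('yy')              # stand-in for y treated as independent
fxy = x**2 * yy + sp.sin(yy)

total = sp.diff(f, x)                          # df/dx with y(x) active
partial_x = sp.diff(fxy, x).subs(yy, y)        # partial in x, y held fixed
partial_y = sp.diff(fxy, yy).subs(yy, y)       # partial in y, x held fixed

assert sp.simplify(total - (partial_x + partial_y * sp.diff(y, x))) == 0
```

The same check with the roles of x and y swapped illustrates the second directional sum.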
From these two representations one can draw the formal conclusion that the total derivative (or differential) with respect to each independently varying variable is the sum of all partial derivatives (or differentials). Hence the effects of all the partial differentials are essentially added to (or superposed on) each other, separately in the direction of each variable.
If the differentials in eqs. (5) and (6) are equal, then they effectively define an exact differential of the function f(x,y) [26] p. 901ff, [27], [28], [29], [30]. Since this 'exactness' is actually driven by some geometric or quasi-geometric symmetry that is manifested in geometry through abstract orthogonality, an inexact differential could be enhanced by superposing the effects of some extra variables which have not previously been considered in the frame (x,y).
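The 'exactness' invoked here is the standard symmetry-of-mixed-partials criterion for an exact differential; a minimal SymPy sketch (with an arbitrary sample potential) contrasts an exact form with an inexact one.

```python
# SymPy check of exactness: M dx + N dy is an exact differential of some f
# precisely when dM/dy == dN/dx (symmetry of mixed partial derivatives).
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 * y + y**2          # arbitrary sample potential function
M, N = sp.diff(f, x), sp.diff(f, y)
assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0   # exact

# An inexact form fails the same test:
M2, N2 = y, -x
assert sp.simplify(sp.diff(M2, y) - sp.diff(N2, x)) != 0
```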
Nevertheless, when experimental results differ substantially from those predicted by the function that is supposed to govern the phenomena tested in the experiment, applied mathematicians search for any other variables that could be used as predictors able to reconcile the experimental results, even if the differentials of the unsatisfactory original function are (accidentally) exact. In the latter case, the symmetry that causes the exactness may not be the highest possible symmetry governing the whole set of predictors which affected the phenomena tested in the unreconciled experimental results.

Looking at eq. (4), one can notice that all the other variables, which remain constant during the given partial differentiation, were purposely made to behave during the particular differentiation just as fixed parameters do. One may thus ask whether a constant parameter could also be turned into a certain variable when considered on a deeper level of inquiry into the actual process modelled by the function. This question can definitely be answered in the affirmative in some specific cases, because it essentially means superposing the effect of the parameter (assumed henceforth as varying), which assumption, for most intents and purposes, depends on the physical model that the original function is supposed to represent.
In cases when the already present independent variables (x,y) are insufficient as predictors of the process being modelled by the function f(x,y), one might try to find/uncover yet another (previously hidden) independent variable that would increase the predictive power of the so-enhanced function.
Nevertheless, even in those cases when it is impossible to let the fixed parameters vary, they might still be turned into certain constant functions of exposure whose duration perhaps depends on yet another (secondary) complementary hidden variable. The other reason for the introduction of a secondary complementary hidden variable is to prevent an overlap of new variables with the already present ones.
It does not matter whether the exposure is measured in terms of time (as its duration) or distance (as its length). Unlike functions of evolution, which are characterized by their increasing or decreasing value, functions of exposure can remain constant (like fixed sources of heat or some other radiation), but their cumulative impact increases with the time (or length) of exposure to them. This distinction between (active) functions of evolution and (passive) functions of exposure will be clarified later on. Now, if we try to extend (↟) the function f(x) into f(x,↟y) by expanding its frame into (x,y), we get

df(x,↟y)/dy = ∂f_y(x,↟y)_x/∂y   (7)

because neither the function nor the extra variable y (which should be complementary to the variable x) existed in the f_x() part (taken with respect to x) of the new, prospective extended function f(x,↟y). Hence dx/dy = 0. The partial derivative ∂f_x(x,↟y)_y/∂y = 0 too, for the extra variable y is absent (∄y) there by definition; hence there is nothing to differentiate (no y to differentiate with respect to). In the former mathematics, such nullification was often done by decree, as a sort of authoritarian declaration [31]. The sign for the complementary extension ↟ has only an informative role. I shall use it henceforth mainly in substitutions, definitions, and narratives for clarity of reasoning, but will skip it in most operations in order to avoid unnecessary clutter.
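The contrast between functions of evolution and functions of exposure can be sketched numerically; the constant 'source' below is a hypothetical stand-in for a fixed heat source, and the integration grid is arbitrary.

```python
# Minimal numeric contrast between a function of evolution (its value
# changes with the variable) and a function of exposure (constant value,
# but cumulative impact grows with the exposure length).
def evolution(t):
    return t**2                      # the value itself evolves with t

def exposure(t, source=5.0):
    return source                    # fixed source, like a constant heater

def cumulative_impact(g, length, n=100000):
    """Left Riemann-sum integral of g over [0, length]."""
    h = length / n
    return sum(g(i * h) for i in range(n)) * h

# The exposure function is flat...
assert exposure(0.0) == exposure(10.0) == 5.0
# ...yet its cumulative impact still scales with the exposure length:
assert abs(cumulative_impact(exposure, 2.0) - 10.0) < 1e-6
assert cumulative_impact(exposure, 4.0) > cumulative_impact(exposure, 2.0)
```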
While the last term on the FRHS in eq. (5) is both formally valid and logically noncontradictory, and thus the operation is doable in general (at least in principle), the corresponding last term on the FRHS in eq. (6) is logically contradictory even though it may be formally valid. The partial derivative ∂f_y(x,↟y)_x/∂y of the prospective extended function f(x,↟y) in eq. (5) could be envisioned, because the rationalizing dependence dy/dx associated with it is generally quite meaningful (i.e. the prospective extended variable y could be a function of x). The partial derivative ∂f_x(x,↟y)_y/∂x in eq. (7) is a functional (which, like all functionals, cannot really be differentiated), not a function, and so it creates a conceptual conflict. For the original variable x is, by definition, not yet dependent on the prospective variable y (i.e. dx/dy is ruled out due to the absence of y). Since dx/dy is not an option, the alternative partial derivative ∂f_x(x,↟y)_y/∂y of the same term in eq. (6) is tried (with dx/dx = 1, which is admissible as a tautology). However, this alternative is also contradictory, because it demands performing partial differentiation with respect to y while y is also requested to be kept fixed (i.e. as a constant functional/value) for the differentiation. Since it is conceptually inadmissible, and thus practically impossible, to (meaningfully) differentiate any number-value, parameter, or fixed functional, the aforementioned conflict cannot be satisfactorily resolved. Notice that the conflict has emerged because the prospective variable y did not exist. This fact emphasizes the need for the expansion sign '↟' in addition to the sign of absence '∄'.
Recall that a definite integral can be a number while an indefinite integral is a function [32], but the actual distinction between functions and their functionals depends on the situation in which the expression is considered rather than just on its formal representation. A functional is just a number to be calculated, the value of its generating function; this can also include a definite integral. Switching the status of an expression after its integration from that of a function to its functional is meant to prevent duplicate integration of the already integrated function in the very same situation.
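The function/functional distinction drawn here can be illustrated with SymPy: an indefinite integral remains a function of x, while a definite integral collapses to a bare value.

```python
# SymPy illustration: an indefinite integral stays a function of x,
# a definite integral collapses to a plain number (a "functional" value).
import sympy as sp

x = sp.Symbol('x')
indefinite = sp.integrate(2*x, x)          # x**2, still a function of x
definite = sp.integrate(2*x, (x, 0, 3))    # 9, just a value

assert indefinite.has(x)       # depends on x: a function
assert not definite.has(x)     # a bare number: a functional value
assert definite == 9
```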
Operational formulas of mathematics are like a genetic code that lies dormant as a pattern for the operations to be performed according to the prescriptions encoded within its formulas. By analogy to epigenetics, the subscripts play the role of epigenes, which can either suppress or express (allow) the execution of the (operational and structural) patterns encoded in genes. Although the subscripts are not operational, but merely informative, they can tell us whether the formally valid formula should be performed or whether the particular operation shall be inhibited for certain settings of the subscripts. Since no epimathematics has been invented yet, I must use narratives for the present explanations of the mathematical nuances that are ignored in most orthodox approaches to mathematics. Since the new synthetic formula (3) has already been confirmed in several unbiased experiments [1], we should take the new synthetic approach seriously, instead of concocting fake theorems [2] which implicitly tried to dismiss the unimaginable by justifying the inherently unjustifiable. Synthetic mathematics enables us to envision the allegedly unimaginable by the sheer power of mathematical operations. As J.S. Bell remarked, proofs of impossibility can prove only lack of imagination [33]; see also [34]. Hence mathematics should not be restricted by "great" minds' knowledge or imagination (or lack thereof). Nevertheless, the aforesaid contradictions in eq. (6) emerge only when the function f(x) is being extended into the prospective function f(x,↟y) via the complementary variable y. For a function of two variables f(x,y), the equations (5) and (6) are valid and formally noncontradictory. This means that if a logical contradiction would ensue, it would not be due to the formulas but might result from the particular situation, i.e. it would be contingent upon conditions unrelated to these two formulas.
However, when a function f(x), or one with multiple independent variables, predicts some experimental results very well but fails to predict others in the same category, it should be complemented, not dismissed. This behavior means that the given function is somewhat incomplete, and therefore its completion requires superposition with a certain other prospective function, which should be obtained by extending the original function into a previously hidden complementary variable. For the original function may be both true and yet insufficient for predicting all relevant phenomena in its category. Einstein's general theory of relativity (GTR), for example, gives practically perfect predictions for purely radial effects of gravitational fields. The results of the GTR's test with the use of a hydrogen-maser frequency standard in a spacecraft launched nearly vertically upward to 10000 km agreed with the GTR's prediction at the 70·10⁻⁶ level [35]. All purely radial tests of the GTR that I reviewed were always splendid. I have not seen or heard about a single purely radial test of the GTR that challenged the theory. When the experimental setting was not quite purely radial, however, numerous experiments and observations failed to agree with the GTR's predictions. When the impact of the locally dominant gravitational field was not exactly aligned with the radius of the field that points outwards from the local center of gravity, the results of such experiments differed significantly from the predictions of the GTR: see [36], [37], [38], [39], [18], [40], [19], [22], [21], and the experiments by Sadeh [13], [14]. The latter have been reconciled in [1] by a slightly simpler complementary formula than eq. (3).

ILCPA Volume 50
At first, the observed deflection was only slightly larger than predicted and within the error bounds [41], but with the more precise measurements already made in 1919 AD it was too large for the GTR to be correct [37]. Einstein admitted that his GTR was not devised for anything other than purely radial phenomena. He wrote that some tangential deviations [from the radial gravitational attraction] would be too slight to be measured on Earth [42], which was a common assumption back then. Due to his deliberate omission of any nonradial (i.e. tangential and/or binormal) gravitational effects, even his "flagship" prediction of the deflection of light, which is evidently a partly tangential phenomenon, was not quite accurate [1]. The GTR was clearly not designed for nonradial effects of gravity, and thus it must be complemented for phenomena happening along equipotential surfaces, a fact that has been experimentally confirmed [1]. The GTR can serve as a cosmological theory of gravitation where tangential interactions are negligibly small. The eq. (7) equates the derivative of the extended function with its partial derivative with respect to the new complementary variable y, while the original variable x is turned into just a fixed parameter. Yet because the variable y is absent (∄y), the derivative actually determines a certain fixed functional w[]: (8) where the first integration w[]∫dy proceeds between zero and a certain y, whereas the second integral equals the fixed differential turned into a functional, because there was no active differential of y to be integrated. Recall that a functional, being just a value to be calculated from its formula, is not a function and thus cannot be meaningfully differentiated or integrated.
Since the prospective extra complementary variable y is definitely absent (∄y) in the formal differential of the prospective function fy(x,↟y), which is to be differentiated with respect to y while x must remain fixed, the function is actually just a functional that cannot be legitimately integrated. Note that w[] on the LHS is just a name to be substituted with the functional.
The truth is that only legitimately obtained differentials can be integrated. The integral w[]∫dy evaluates either to a function of y (as an indefinite integral) or to a functional (if it is treated as a definite integral over the range from zero to a certain value of the variable y), depending on the situation; when the integral equates to a functional, it must be treated as just a functional. Hence, it can be legitimately substituted by the functional on its RHS in the middle implication of the chain of eqs. (8). The often-ignored fundamental distinction between functions and their functionals prevents illegitimate integration of expressions that are not really valid differentials. The substitution is not only permissible but indeed necessary. Otherwise, we might inadvertently (i.e. unknowingly, due to a breakdown of Leibniz's notation) have integrated a functional, not a differential.
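The distinction between the two readings of the integral can be illustrated with a short SymPy sketch; the numeric constant standing in for the functional w[] is purely an illustrative assumption, not the paper's actual formula:

```python
import sympy as sp

y = sp.symbols('y')
w_value = sp.Integer(3)  # a fixed value standing in for the functional w[]

# Indefinite integral over dy: yields a genuine function of y
as_function = sp.integrate(w_value, y)             # 3*y

# Definite integral from 0 to a particular y0: yields just a value,
# i.e. a functional, not a function
y0 = sp.Integer(2)
as_functional = sp.integrate(w_value, (y, 0, y0))  # 6

assert as_function == 3*y
assert as_functional == 6
```

The same fixed value thus generates a new function of y only when the integration is left open in y, which is the mechanism the extension relies upon.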
Harmonic extensions of analytic functions under certain conditions have been proposed in [43], but some authors considered that impossible in general, at least within the realm of classical potential theory [44]. The extension proposed above relies on the possibility that, by including a formerly hidden variable y, one can generate a new function of y from the functional w[] shown in the eq. (8). Now, if the new function is (allowed to be) superposed with the original function, the resultant extended "superfunction" supersedes the original function without changing the original function itself. That is why I have called the extra variable and the newly synthesized function complementary.
The eq. (8) is thus a generic prototype for prospective complementary extensions. Nonetheless, its conceptual complexity, in conjunction with the novelty of the synthesis, calls for exemplification. Yet before we move on to discuss advanced issues, we need to summarize the former radial-only terms.
Recall that the partial differential W(r), representing the partial (i.e. definitely incomplete) rate of work done taken in the radial direction alone, yields only the radial part U(r) of the field's potential energy that is spent on the work being done by the gravitational force field F. This implies the formula (9) where V(r)=1/r is the generic radial-only potential function of a radial/center-bound force field, in which case the function can be considered a scalar [10].
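A minimal SymPy sketch of the radial-only relationship: assuming the usual Newtonian coupling G·M·m (an assumption made here only to give the generic potential V(r)=1/r concrete physical form, with the sign convention chosen purely for illustration), the radial derivative of U(r) recovers the inverse-square force:

```python
import sympy as sp

G, M, m, r = sp.symbols('G M m r', positive=True)

V = 1 / r            # generic radial-only potential function V(r) = 1/r
U = G * M * m * V    # radial part of the potential energy (illustrative
                     # Newtonian coupling and sign convention)

# The magnitude of the radial force follows as the negative radial derivative
F = -sp.diff(U, r)
assert sp.simplify(F - G*M*m/r**2) == 0  # inverse-square law recovered
```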

SYNTHESIZING EXTENDED FUNCTION WITH A COMPLEMENTARY VARIABLE
The formerly unexpected extra frequency decrease found in the previously unreconciled experiments conducted by Sadeh [13], [14], which have eventually been reconciled in [1] and discussed in [8], suggested that the original function f(x)=W(r) should be extended to a prospective supplementary function f(x,↟y)=W(r,↟Q) by introducing an extra generic predictor variable Q. For the extra variable Q to properly complement the actual radial variable r (i.e. the radial distance), it must be of other than radial (hence nonradial) character. Notice that the conclusion that Q must be nonradial is not an existential postulate; it has been inferred from the fact that any variable complementary to a radial one must either be nonradial itself or, if it is a function-variable, depend on a nonradial variable.
Since x=r=const defines an equipotential sphere surrounding the gravity center of the force field due to the mass M, the variable Q could refer to the matter that is enclosed inside the given sphere. Notice that we are not introducing any new physical entity, but only a new complementary interaction among the already available masses M and m, which have previously been assumed to be constant. However, as the locally dominant source of the field, even a constant mass M can have both the usual radial effects of gravity, represented by the radial work done W(r), and some nonradial effects of exposure to M (denoted by the prospective extra variable Q), which can affect the orbiting mass m too.
By substituting x≔r and y≔Q in the eq. (8) we can turn the functional w[x,y] into a function w(Q) (10) where the derivative (treated as a functional) must be replaced with the functional [U(r)] from the eq. (9), because r, as well as every other parameter present in U, is fixed. The eq. (10) thus defines w(Q) as a generic function of the quantity of matter Q. Yet the function w(Q) with respect to Q is also just a functional with respect to the variables G, M, m, r (assumed fixed), which do not really vary but play the role of parameters with certain fixed values assigned to them for each particular star or planet.
Since the function w(Q) has been derived entirely from a generic mathematical formula, we cannot really say yet what it can actually describe. We cannot definitely say how the quantity of matter Q is supposed to be measured, and thus we do not know what it may represent physically either. All I have shown up to this point is that the eq. (10) implies that a certain complementary variable Q fitting the description of quantity of matter (because it is codetermined by the constants G, M, and m, within a sphere of fixed radius r) can be formally synthesized. The variable Q was hidden before AD 2000, when it was first introduced in [1]. The formal necessity of its presence had been deduced back then from physical considerations [1]. Here it has been derived from purely operational formulas of mathematics, that is, from the rules of differential calculus. The physical explanations served only as a guide, due to the sheer complexity of the phenomena as well as the novelty of the new synthetic methods of mathematics.
From the obvious fact that any equipotential sphere surrounding the source mass M constitutes the nonradial boundary of a spatial enclosure of the mass M, we can infer that Q should refer to an effect of the quantity of matter of the mass M contained inside the enclosure. Nothing else is known about the variable Q at this point. Notice that the function w(Q(λ)) shown in the eq. (3) has been obtained by less abstract physical methods. It can surely serve as a guide, but it must not be used for any direct inferences here, because I intend to show that a function like w(Q(λ)) can be deduced also from a more abstract and purely operational mathematical formula via synthesizing the nature and features of the formerly hidden function-variable Q. I am going to show that calculus permits the unraveling of hidden variables. The fact that operational formulas of mathematics can allow us to make a formal discovery of a previously hidden variable is conceptually very significant indeed. For we have seen that there is much more to potential energy than the former (i.e. pre-2000 AD) physics stipulated. Yet it is up to unbiased experiments to confirm that what synthetic mathematics allows in principle is actually taking place in physical reality too. However, in order to gain plausible experimental confirmation, one has to be more specific as to how the variable Q is measured and what the prospective mathematically synthesized function w(Q(λ)) really is.
Yet one might ask whether what has been derived above in terms of derivatives and differentials, which are rates, could also be applied to the integral formulas that produce amounts or accumulations. The affirmative answer to this question is supplied by Leibniz's rule for differentiating under the integral sign: (11) showing that partial differentiation can pass under the integral sign while being bounded by the integral's limits; compare [45].
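Leibniz's rule can be checked mechanically. The sketch below uses an arbitrary smooth integrand and y-dependent bounds (both chosen purely for illustration) and verifies that differentiating after integrating agrees with differentiating under the integral sign plus the boundary terms:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = sp.exp(-x * y)   # illustrative integrand
a, b = y, 2 * y      # illustrative y-dependent bounds

# LHS: integrate first, then differentiate with respect to y
lhs = sp.diff(sp.integrate(f, (x, a, b)), y)

# RHS: Leibniz's rule - differentiate under the integral, add boundary terms
rhs = (sp.integrate(sp.diff(f, y), (x, a, b))
       + f.subs(x, b) * sp.diff(b, y)
       - f.subs(x, a) * sp.diff(a, y))

assert sp.simplify(lhs - rhs) == 0
```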

ENCAPSULATING EXTENDED FUNCTION WITH COMPLEMENTARY VARIABLE
Notice that by the chain rule for differentiation of functions encapsulating function-variables [46] (12) each subsequent complementary variable actually becomes a new function-variable encapsulating the primary independent variable t of the given representation of the particular differential with respect to the variable chosen as the acting variable in the given differentiation. This conclusion may become quite clear when one rereads the eq. (12) in reverse order, from right to left. Now, applying the chain rule (12) in reverse order, the equations (5) and (6) may be rewritten as: where in each total directional derivative the extra variable becomes a function-variable encapsulating the primary variable of the given representation of the total derivative with respect to the directional variable chosen as the primary variable for each particular partial differentiation. The total directional derivatives on the LHS are just names (or labels) to be substituted with the outcomes of the operations to be performed on the derivatives standing on the RHS of these equations. The derivative dfy(y(x))/dx on the FRHS of the eq. (13) means that we are seeking an intermediate variable y that depends on the present x. One can always introduce an intermediate variable. The encapsulation does not create contradictions.
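A concrete instance of the chain rule (12), with sin(.) encapsulating the function-variable y(x)=x² (both chosen only for illustration), can be verified in SymPy:

```python
import sympy as sp

x = sp.symbols('x')

# Encapsulation: the outer function sin(.) takes the function-variable x**2
inner = x**2
encapsulated = sp.sin(inner)

# Chain rule: d sin(y(x))/dx = cos(y(x)) * y'(x)
assert sp.diff(encapsulated, x) == sp.cos(inner) * sp.diff(inner, x)
```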
Notice that I have dropped the last subscripts on the FRHS of the two equations, because they have lost their informative value for the ensuing encapsulated functions. We have inherited the subscripts from the eq. (4), where they are meaningful, albeit for simple functions. Now, however, their once desirable presence - as indicators of fixing the inner independent variable for the partial differentiation - became confusing, because the inner variable is also marked there as varying due to the chain rule (CR) (12), which overrides all operations (for the CR is proven to be always true). Due to the ensuing encapsulation, the last subscript became misleading. This change has neither operational nor structural consequences.
If superposition of effects is permissible, one can build up the grand total derivative as follows: (15) which represents the sum of all partial derivatives taken in all directions, as well as the derivatives of encapsulated functions, provided the experimental situation allows superposition of the differentials. Contingencies for superposition belong in physics; mathematics enables superposition by default. The substitution f(x,↟y)≔f(x,y) in the eq. (15) means that if the prospective variable ↟y is found and included as a regular variable y, the prospective function turns into a regular function f(x,y) whose expression can be further evaluated just as shown on the RHS of the eq. (15). Notice that without the substitution mark '≔' the chain of equations (15) could lead to misinterpretations and might even produce nonsensical inferences. Some programming languages distinguish substitution/assignment ('≔') from the equality test ('==') and from the regular equality symbol ('=') in order to produce unambiguous computer code. In order to be unmistakably interpreted and then correctly evaluated, our mathematical expressions should be unambiguous too.
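For an ordinary two-variable function in which both variables encapsulate a common parameter t, the grand total derivative of the eq. (15) reduces to the familiar superposition of partial contributions, which SymPy can confirm (the function x²·y is an illustrative choice):

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')(t)
y = sp.Function('y')(t)

f = x**2 * y   # illustrative function of two encapsulated variables

# Grand total derivative vs. the superposed partial contributions
total = sp.diff(f, t)
superposed = sp.diff(f, x) * sp.diff(x, t) + sp.diff(f, y) * sp.diff(y, t)

assert sp.simplify(total - superposed) == 0
```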
The formerly hidden variable y, which extends the frame (x) of the original function f(x) into a frame (x,↟y) in which the extended function f(x,↟y) is cast via a synthetic top-down proceeding, is not just any variable but a very specific complementary variable. Although the prospective complementary variable y varies quite independently of x, the purpose of its unveiling is to complement x in the prospective function f(x,↟y). Most traditional mathematical bottom-up proceedings, whether deductive or inductive, focus on what attributes the function f(x) can project. They thus relied on allegedly "self-evident" axioms and somewhat arbitrary primitive notions. Yet if an extended complementary function exists, it may also have some extra, peculiar attributes that just cannot be deduced from the original function f(x) alone. Therefore, the prospective extended function should be synthesized.
Although at first glance, the notation for the encapsulated derivatives may appear as somewhat counterintuitive, it is formally valid in the sense that it was synthesized upon abstract hints suggested by envisioned irreconcilable experimental results and derived from proven mathematical relationships. This does not mean that mathematics needs external crutches to move forward, but it would certainly welcome experimental confirmation of its syntheses in order to get rid of inadvertent misconceptions.
The synthetic extension formula (15) offers a class of derivatives for complementary variables. It is just a very generic prototype. The as yet unknown attributes of the extended function are not specified. Note that the eq. (15) extends the original function f(x) by revealing just a single extra complementary hidden variable, but adding more hidden variables is also possible, even though it is more complicated.
The eq. (15) implies that derivatives of functions whose scope is expanded with the use of some formerly hidden complementary variables are somehow inherently intertwined or entangled. Possible meanings of this mathematical entanglement shall be discussed elsewhere. Hence, extending realistic (as opposed to just postulated or otherwise arbitrarily declared/defined) functions requires a new synthetic approach that was kept beyond the horizon of orthodox mathematics and eluded physics in the past.
With extensions comprising encapsulated function-variables, however, we may face the issue of expanding dimensionality. Having said that, I shall pinpoint only the attributes of dimensionality that have been misconceived in some traditional approaches utilized in former mathematics.

Traditional pure mathematics defined some concepts with the use of quite arbitrary existential postulates selected primarily for their convenience. Noncontradiction was their main concern, but it was not always controlled. Noncontradiction can ensure the consistency [47] but not the truth of pure mathematics, in which the latter cannot be ascertained. Yet the replacement of truth by noncontradiction is untenable. Thus, we need external confirmation of purely mathematical reasonings by unbiased experimental results [48]. The arbitrary postulative method endorsed in former pure mathematics often created an artificial reality. Synthetic mathematics deals with the actually existing physical reality, whose explanations cannot rely on whimsical existential postulates. Why would orthodox mathematicians - in their right mind - try to postulate the existence of things that already exist in physical reality, unless they do not really want to investigate the actual reality (whether physical or abstract mathematical), but rather some arbitrary creations of their own, so often confused, minds? Proving that an idea can be derived from preconceived primitive notions and built upon allegedly self-evident axioms can establish neither its truth nor even its validity. Only unbiased experimental evidence can help us with that. In the ancient times of Euclid, proof was indispensable, and rightly so, because mathematics was then built from scratch. Once built, however, the operational power of mathematics exceeds our ability to grasp its overarching principles as well as our imagination. We can no longer keep on inventing mathematics without facing the risk of creating inadvertent misrepresentations and/or misconceptions, but we may synthesize, from experimental hints, the advanced mathematics that is unreachable by other means.
Gustave Choquet admitted that the well-intended Euclid-Hilbert axiomatization of geometry (that was based on the notions of length, angle and triangle) so marvelously concealed the underlying vector space that the concept of vector remained unrecognized for ages [49]. I am not crying with Dieudonné "down with Euclid" [50] or "away with triangle" [51], but I agree that the secure ancient mathematical paradise has been lost [52], for of no formal system one can affirm with certainty that all contentual considerations are representable within it [53]. Thus definitions in analysis are not to be considered as arbitrary; they must satisfy the condition of utility as regards the science to which they belong [54].
Hence, to the three usual levels of abstraction (uninterpreted formal calculus, partially interpreted or indeterminate geometrical system, and the actual application of the geometrical system that is fully interpreted and testable [55]) one should add the possibility of existence of an overarching symmetry whose presence could be found by synthesizing unanticipated clues supplied by experimental hints.
Just as we have learned from operational principles of mathematics that some hidden variables can be unveiled, we should reveal also structural attributes of the physical reality -such as handling of abstract dimensions -presumably from geometric and quasi-geometric abstract structures ostensibly present in the actual reality. For only solvable procedures -as certain assemblages of mathematically legitimate operations -should correspond to definitely constructible (geometric or even most abstract quasi-geometric) structures and vice versa [5], [6].
If a procedure - such as an equation - is insolvable even in principle, then the structure corresponding to it, which the procedure is supposed to describe, is perhaps misplaced. We should certainly not abandon insolvable procedures, but rather cast them into another context in which they could become solvable, even if only approximately or perhaps only in principle. Similarly, instead of just throwing out the apparently unconstructible yet evidently existing structures, we should recast them into yet another, realistic conceptual context. For the real physical existence of those apparently unconstructible structures actually means that some of our former concepts are misconceived and the theories built upon them are perhaps misguided and/or (at least conceptually) deficient. Arbitrary concepts whose existence is postulated via statements disguised as definitions can create a nonexistent artificial reality.

OVERCOMING OPERATIONAL LIMITS ON EXTENSIONS OF FUNCTIONS
Since there is an implicit virtual limit (posited by Abel and enhanced by Galois) based on the proven insolvability of polynomial equations (in a single variable) of degree higher than four, the number of dimensions that are directly representable within any single vector basis also seems to be limited to just four [5], [6]. If so, is the extendibility of a given function's scope or frame limited too?
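The Abel-Galois limit is directly observable in a computer-algebra system: a quartic still yields explicit radical roots, while an irreducible quintic such as x⁵−x+1 (a standard insolvable example; the particular polynomials are illustrative choices) can only be represented implicitly:

```python
import sympy as sp

x = sp.symbols('x')

# Degree 4: solvable in radicals (Ferrari's method) - explicit roots
quartic_roots = sp.solve(x**4 + x + 1, x)
assert len(quartic_roots) == 4

# Degree 5: the general quintic is insolvable in radicals (Abel-Ruffini),
# so SymPy falls back to implicit CRootOf placeholders
quintic_roots = sp.solve(x**5 - x + 1, x)
assert all(isinstance(root, sp.CRootOf) for root in quintic_roots)
```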
Although the number of independently varying variables directly representable within the same vector basis is limited (by Galois' theory, as well as by the curious behavior of Lagrange resolvents [5]) to only four, one could always add an extra foreign variable that is not directly representable in the native vector basis of the already present variables. However, the foreign variable must belong to some other, preferably dual, vector basis, in which it could be directly represented [6]. Recall that the dual vector basis is just yet another flavor of the (assumed as primary) vector basis, for it covers the latter without increasing the total number of geometrically distinct dimensions [6].
In fact, Dedekind had already realized that although six equations are needed to represent three orthogonal (hence formally independent) lines, this requirement adds only one extra constraint (to the regular three geometric dimensions), yet in two flavors. This feature turns the abstract generic four-dimensionality into an indistinguishable pair of the (1+3)D and (3+1)D kind [56], [5].
Traditionally, the most important dimension functions for a general (topological) metric space R are the covering (or Lebesgue) dimension dim R and the strong inductive (or Čech) dimension Ind R [57], [58], [59]. One can say that a compact metric space has a certain finite dimension r if there exists a closed ε-covering, where ε>0 [60]; compare also [61], [62], [63]. The covering approach was favored mainly due to the belief that dimensionality is to be determined inductively from local properties of abstract sets. The topological dimensions are inadequate, for inductiveness implies only enumeration, not the build-up of no-nonsense dimensions. Sweeping a 1D line to get a 2D plane and then the plane to get a 3D solid works only so far. Once it gets to 4D=(3+1)D it must turn into (1+3)D, and then dimensions become more complicated, just as Lagrange resolvents do for dimensions higher than four [5]. By tacitly ignoring the achievements of Abel, Galois, Lagrange and Dedekind, pure mathematics lost its integrity [5].
Yet the topological dimension may actually be an instance of a certain yet unknown global principle or an unidentified symmetry. Hausdorff, too, admitted that it is impossible to distinguish between a space and its perfect abstract mirror image [hence also its dual] by some inner (i.e. geometric and/or topological) criteria. Thus the two operationally quite equivalent representations, (1+3)D and (3+1)D, may be tied together in a way that topological methods did not recognize [64]. Topology should thus not impose its nice-and-easy, but not always appropriate, methods on other branches of mathematics.
Hence, the 4D quasi-geometric structure that can possess no more than four distinct and solvable dimensions also has six complex or perhaps abstract algebraic coordinates, or alternatively up to eight overlapping parameters [6]. This conclusion is specifically corroborated also by the (mathematical) catastrophe theory (CT), which fails for more than six external variables [65]. Recall that - according to the CT's Classification Theorem - in any system governed by a scalar potential, and in which the system's behavior is determined by no more than four different factors, only seven distinct and qualitatively different types of discontinuity are possible [66] p. 42. For catastrophes with more than five control factors, an infinite number of singularities without a unique unfolding emerges [66] p. 55.
For elementary catastrophes, as singularities of smooth maps with r≤4, can be finitely classified by unfolding only certain (seven) polynomial germs [67] p. 53. Besides these seven different types of discontinuity, René Thom also envisioned an extra catastrophe of transition through (or near) a singularity [68], which radically differs from the two basic catastrophes of conflict and bifurcation [69], [70] p. 47. Taken together, however, they could match the aforementioned eight abstract parameters. Nevertheless, since CT is a topological theory, i.e. essentially qualitative [71], it is suitable only for qualitative dynamics [72]. Yet all the aforementioned inferences from the definitely qualitative CT agree with the synthetic analyses of dimensionality and spatiality [5], [6]. The so-called 3Xₙ algebras (equipped with both an inner product and a ternary vector product) also exist only in (abstract) dimension n=8 [73], just as 4D metric space is covered by 8 parameters [74], [6]. This means that spatial and quasi-spatial nD structures evidently do exist, but dimensions higher than four should be quantized/bundled in 4-tuples. Higher-dimensional spaces are thinkable [75], but perhaps only upon the basis of infinite-dimensional sets, even though there is no proof of the existence of such sets [76].
Quantization of dimensions seems unavoidable [5], [6]. For, in general, if one defines an nD vector as an ordered tuple of real numbers [77], then this leads to systems of linear equations. It is known that a system of n linear equations with n+1 coefficients defines the set of row vectors in an abstract (n+1)D Euclidean space [78], and that subspaces of vector spaces of dimension n(n+1)/2 constitute diagonalizable matrices, i.e. conjugate vector spaces of the vector space of symmetric matrices [79]. If orthogonality of vectors is required, however, the operational restrictions (on the maximal number of unique dimensions put into a single vector basis) make quantization of abstract dimensions necessary.
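The n(n+1)/2 count for symmetric matrices is easy to check directly; the short sketch below simply enumerates the independent entries on or above the diagonal:

```python
def symmetric_dim(n: int) -> int:
    """Number of independent entries of a symmetric n-by-n matrix."""
    # Entries on or above the main diagonal determine the whole matrix
    return sum(1 for i in range(n) for j in range(i, n))

# The count matches the closed form n(n+1)/2 for every tested size
for n in range(1, 9):
    assert symmetric_dim(n) == n * (n + 1) // 2
```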
The formerly unrecognized importance of works of Abel, Galois and Lagrange for restriction on the number of dimensions within single geometric space was outlined in [5], [6], wherein the ideas of Grassmann, Hestenes, Poincaré, Riemann, Dedekind and several other mathematicians and physicists have also been briefly highlighted. Many other authors also contributed important ideas pertaining to geometric as well as abstract dimensionality, which shall be further discussed elsewhere.
Here I will focus mainly on an operational approach to enhancing the dimensionality of equations within the abstract yet quantitative conceptual realm of differential calculus. For in the present paper I am not looking for a mathematical function that could represent a perfect or perhaps "final" law of nature, but just for a very practical method to enhance any function's predictive power via a more diversified set of predictors. By a 'practical method' I mean one that is operationally valid and conceptually applicable.
By looking at the RHS of the eq. (4) one can easily see that the variable y, made fixed for the partial differentiation of the given function f(x,y) with respect to the variable x, effectively became temporarily indistinguishable from a parameter during that differentiation. Conversely, a fixed parameter p could be turned either into an active variable y or - if the parameter cannot be made active - into a passive function-variable [y](h) of yet another (previously hidden) but now quite independently varying variable h, which can play the role of an actively varying extra complementary variable.
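The promotion of a parameter can be mimicked in SymPy (the quadratic form p·x² and the names used are illustrative assumptions): while p is a plain symbol it merely scales the function, but once replaced by a function-variable y(h) the function acquires a genuine second direction of variation:

```python
import sympy as sp

x, p, h = sp.symbols('x p h')

# Original function with p playing the role of a fixed parameter
f = p * x**2

# Promote the parameter: substitute a function-variable y(h) for p
y = sp.Function('y')
f_extended = f.subs(p, y(h))

# The extended function now varies along the formerly hidden variable h
assert sp.diff(f_extended, h) == x**2 * sp.Derivative(y(h), h)
```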
This kind of enhancement would turn the original single-variable function f(x)[p₀], with some set of fixed parameter(s) p₀, into a two-variable function f(x,[y](h))[p₁], at least in principle. Although the new function-variable [y] (in its new role as a fixed function) cannot vary itself, the fixed function [y](h) - as a passive function of exposure - is indirectly accumulated over h. It has a cumulative effect that can vary with the actively varying hidden variable h, whose variability is quite independent of that of the variable x. The nature of functions of exposure will be explained in more detail later on.

INTRODUCING EXTRA COMPLEMENTARY VARIABLE
If experimental results suggest that even the extended function f(x,y) cannot always predict what happens in the given experiment, then one may try to add yet another complementary hidden variable z, such that the following equation diminishes or perhaps even erases the discrepancy arising from the "unstable" predictions of the aforesaid function. The enhanced derivative can be expressed as follows:

provided that the extra complementary variable z is of the same kind as the variable x (although not necessarily like y) and varies quite independently of them, so that the original reference frame (x,y) can be extended into the extra complementary variable z. The new extended complementary reference frame (x,y,z) should be backwards compatible with the original reference frame (x,y).
However, such an independently varying complementary variable can extend the dimensionality of the original function, provided it fits into the prospective extension of the original vector basis (or the original reference frame, in general). Hence, if the dimensionality of the function is -for whatever reason -limited and thus cannot be exceeded, the previously hidden extra complementary variable must not be introduced. How to handle the latter cases shall be explained elsewhere.
The extra complementary variable is an example of a previously hidden variable. Hidden variables were proposed to remove indeterminism from quantum mechanics, but the hidden variables that are revealed in operational mathematics open the door to enhancements in any exact theory. At a deeper level of inquiry into the given phenomena, a variable that was previously hidden, or which had been "invisible" (or perhaps just unrecognized as relevant to the given phenomena), may emerge, no matter how sophisticated the theory appears to be. Some irreconcilable unbiased experiments could hint at the possible existence of unrecognized (hence hidden) variables, which could make possible the reconciliation of the discrepancies found between curious experimental results and the previously predicted ones, whose prediction was based on the conceptually unsatisfactory original function.
Whether a magnitude is to be treated as fixed parameter or as a yet another independently varying variable depends on the depth of inquiry into the phenomena described by the given equation. Hence, it should depend on physics, not on mathematics.
Nevertheless, sometimes a fixed parameter is unlikely to vary discernibly enough for the present resolution of physical instruments to detect its variation. However, this inconvenience could be further amplified by the fact that -if left invariable -the prospective fixed parameter (whose extra impact could extend the original function, according to some experimental hints) would overlap with some parameter(s) already present in the original function. In such a case, the extra impact of the overlapped original parameter would be either cancelled or duplicated, depending on position of the prospective overlapping parameter within the extended function formula with respect to the overlapped one. Since neither of these options is admissible, this (often very disheartening yet commonly encountered) case requires special attention.

REASONS FOR OVERLAPPING COMPLEMENTARY EXTERNAL VARIABLE
While fixing (or freezing) an active variable during partial differentiation with respect to another variable makes the frozen variable temporarily passive without deactivating its impact on the process, a parameter is already fixed by definition and therefore cannot be frozen again. Neither can it be removed without having its impact on the process deactivated, an option that is inadmissible in general, because fixed parameters codetermine fixed abstract structures, just as independently varying variables codetermine the operational procedure that depends on the underlying fixed/static structure determined by the fixed parameters. The fact that operational procedures make the underlying structures dynamic is the main reason for unveiling hidden variables by turning some parameters (previously thought of as fixed) into newly unraveled variables, or into fixed functions of some other formerly hidden variables (functions exerting cumulative influence through exposure to their accumulated effects). For in order to predict the outcomes of more accurate experiments, one needs models with more independently varying variables - acting as extra predictors of the experimental outcomes - which would cause the given function to more adequately represent the effective operational dynamics of the mathematical structure that underlies the given model. This is presumably because the complexity of the natural world apparently has no upper limit.

ILCPA Volume 50
Even the parameters that were previously disregarded as irrelevant to the modelled process could be considered as candidates for becoming stand-ins for hidden variables, or for becoming fixed functions of exposure that depend on certain other, varying hidden variables. This depends on how deep the level of inquiry into the modelled phenomena is, which should be guided by experimental hints.
Mathematicians are very strict when it comes to handling operations. Yet they tend to investigate them just in terms of mappings of sets, while paying only lip service to the structures operated on by the abstract operational procedures, whose arbitrariness often dissociates them from those structures.
The concepts of curves, surfaces and hypersurfaces give rise to the mathematical idea of a variety, which consists of the common solutions of a finite number of polynomial equations in a finite number of variables [80]. Yet despite the proven fact of the unsolvability of polynomial equations of degree higher than four (in a single variable) [5], the dimensionality of their structures within a single space used to be viewed as entirely unrestricted [6]. Nevertheless, since any linearly independent system of a vector space can be complemented to a basis [81] p.49, and the dimensionality of the basis is actually restricted [5], increasing the dimensionality of the spatial structure that would host the modelled function may require revealing some hidden variables which could expand the original function.
If the needed extra supplementary external variable overlaps - partially or entirely - one of the variables or parameters already present in the function f(x,y), it must not be introduced as a standalone (i.e. independently varying) variable, but only as a function-variable. Moreover, such an overlapping supplementary synthetic function should be rendered as an intensive (i.e. not as an extensive) variable. This is necessary in order not to introduce again the effect of the overlapped variable already present in f(x,y). That could happen if the latter is an extensive variable whose impact should be neither duplicated nor eliminated in the supplemented function along the new supplementary direction - whether an actual geometric one or just an abstract one - associated with the external supplementary variable.
For example: the angular nonradial work done function w(Q(λ)) standing on the LHS of eq. (3) is an indirect function of exposure to the constant density of matter functional [Q](λ) accumulated over the angular distance λ, which is the actual independently varying variable, measured along an equipotential surface. On the RHS of the equation the matter density Q is constant; it does not vary with the angular distance λ, and it cannot vary with the radial distance r, which is fixed on the equipotential parts of the trajectory taken by the mass m.
The density of matter Q (of the locally dominant force field due to the source mass M) might be - in general - a function of both the radial distance r and the perihelion distance r_p (which can also vary in general). But in its particular role of determining the angular nonradial work done function w(Q(λ)), the matter density Q is treated as a constant function of exposure. Hence the function w(Q(λ)) depends on the independently varying angular distance λ, but it is codetermined also by the tentatively fixed function-variable Q, which is only assumed as fixed for the sake of simplicity. The function w(Q(λ)) is codetermined also by the tentatively fixed radial distance r (while on the equipotential surface) and by all the other permanently fixed parameters present in its formula (3).
The density of matter in eq. (3) was defined as Q = Q_MassM / Q_Water, i.e. as the ratio of the density of the source mass M to the density of water, which makes it an intensive variable. This way the source mass M is not reintroduced again, for its influence is rendered indirectly through the average density Q, even though the mass M remains dormant in the constant function-variable Q. Notice that the source mass M of the locally dominant force field has effectively been substituted in eq. (3) by an equivalent mass of water. This feature makes the material contents of stars and planets comparable for most practical purposes.
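The intensive character of this density ratio can be illustrated with a minimal numerical sketch. The figures below (a mean density of about 5514 kg/m^3 for the Earth and 1000 kg/m^3 for water) are standard reference values used purely as an example; they are not data taken from this paper.

```python
# Sketch: turning the extensive source mass M into an intensive variable
# via the density ratio Q = Q_MassM / Q_Water, as defined for eq. (3).

RHO_WATER = 1000.0  # density of water, kg/m^3 (reference value)

def density_ratio(rho_source: float) -> float:
    """Dimensionless density ratio Q of a source body relative to water."""
    return rho_source / RHO_WATER

# Example: the Earth's mean density is roughly 5514 kg/m^3 (reference value).
Q_earth = density_ratio(5514.0)
print(f"Q for Earth ~ {Q_earth:.3f}")  # dimensionless, so M is not reintroduced
```

Because Q is dimensionless, substituting it into a formula that already contains M adds the influence of the source's constitution without counting the extensive mass M a second time.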
In the first definition of his "Principia", Newton made clear that the quantity of matter, which he called mass, arises from both the density and the bulk of the material substance ascribed to every massive body, whereas the quantity of motion arises from the velocity of the body and its mass combined [82] p.1. Newton was correct in his insistence that the density of matter should have an impact on some aspects of gravitation.
In his assertion that the universal power of gravity is proportional to the several quantities of matter (Proposition VII, Theorem VII in [82] p.321), Newton was clearly aware that the observed effect of attractive (radial) gravity affected motion and was itself affected by motion.
For in yet another place (Proposition X, Problem VII in [82] p.197) he speaks about the resistance arising from the density of the medium in which bodies float, and mentions the square of velocity that measures the resistance. Yet in "Axioms, or laws of motion" he does not speculate about the possible causes of phenomena, but restricts himself to statements of facts which could be observed, at least in principle. There he uses kinematic/geometric terms like 'change of motion' as well as physical terms like 'motive force' [82] p.11.
Even if one discounts the relativistic fact that a body's mass changes with motion, mass is not a basic quantity of nature [83]. Even if only a single star is considered, its mass is not an unambiguous measure of its matter content, because mass (in its role as a measure of material contents) depends upon the state of binding of its baryons; therefore the total number of baryons inside a star is rather the measure of the amount of matter the star contains [84].
Mass properly measures only inertia (or resistance to motion), not the constitution of matter. Only if two bodies travel at the same speed along practically the same or closely adjacent trajectories can the ratio of their matter be substituted by the local ratio of their masses, provided they are far away from the influence of all other massive bodies. Inconsiderate use of mass as a substitute for matter/substance, along with other former mathematical blunders, adversely affected physical analyses for centuries [2], [9], [8].
Since mass and density of matter can vary quite independently of each other - even though the two processes are tied together, as observed during explosions of stars - these two measures of material substance certainly overlap, at least in part. Therefore, mass and density of matter should be considered as distinct, though not exactly separate, mathematical functions of attributes of matter.
When the scopes and/or impacts of two conceptually distinct extensive variables overlap, one of them must be turned into an intensive variable, just in order not to introduce the same extensive variable twice in the same formula, even when one extensive variable has a twofold impact on the outcome of the phenomena modelled by the formula. In general, one needs a pair of previously hidden variables to achieve that goal. This is not a matter of philosophical preferences, but of mathematical necessity.
Both λ and Q are examples of hidden variables, for they had not been recognized before. Neither was their impact taken into account before 2000 AD [1], even though the theoretical possibility of their existence dates back to 1773 AD [16], when Lagrange made his ingenious mathematical discovery of the intricate relationship - shown in eq. (9) - between force fields and their potential energy.
The derivative of every function is tangent to the function at each point where the function is uniquely determined. Since the angular distance λ is always tangent to the radial distance r on any equipotential sphere, the original radial-only function in eq. (2) - wherein the extra variable λ is absent - should also determine the derivative of a compound function, namely the prospective overlapping complemented exposure function.
Henceforth I will denote the function of exposure by [Q](λ), because the extra parameter Q remains a constant/average, i.e. unchanging, function of the independently varying angular distance λ. Unlike a regular function of evolution, which always keeps changing/evolving, the constant exposure function [Q](λ) is accumulated over the distance λ, but the fixed function-variable Q itself never changes its preassigned value at the present depth of inquiry.
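The distinction between the fixed function-variable Q and its accumulated exposure [Q](λ) can be sketched numerically: Q never changes, yet its cumulative effect grows linearly with the angular distance λ. The values of Q and λ below are arbitrary illustrative numbers, not quantities from eq. (3).

```python
# Sketch: a fixed function-variable Q, accumulated over the angular distance
# lambda, yields a linearly growing exposure even though Q itself never varies.

def accumulated_exposure(Q: float, lam: float, steps: int = 1000) -> float:
    """Numerically accumulate the constant Q over [0, lam] (Riemann sum)."""
    dlam = lam / steps
    total = 0.0
    for _ in range(steps):
        total += Q * dlam  # Q stays fixed at every step; only lambda advances
    return total

Q = 5.514    # arbitrary fixed value of the function-variable
lam = 2.0    # arbitrary angular distance
exposure = accumulated_exposure(Q, lam)
print(exposure)  # ~ Q * lam, i.e. ~ 11.028
```

The sum reduces to Q*λ exactly because the integrand is constant; the accumulation, not the integrand, is what varies with λ.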

FORMAL INTRODUCTION OF OVERLAPPING COMPLEMENTARY VARIABLES
If we try to extend the function f(x) into f(x,↟y,↟t) by expanding its frame into (x,y,t), we obtain (17) from the pattern showing the derivative for the 3-variable extension given in eq. (16), using the analogy to the 2-variable eq. (7).
Evidently, the second term of the derivative in eq. (17) is problematic, because the prospective variable t is absent therein (∄t). In its standard form the second term must vanish as logically inadmissible, because there is no dependence of the variable x on the prospective variable t, contrary to what the chain rule would suggest.
The alternative representation of the second term makes it vanish too, because the prospective variable t does not exist in the expression f_x(); therefore we must dismiss the second term. Although inadmissible terms actually evaluate to null, they may be viewed as resulting in zero in the generic formulas, for the sake of simplicity of this presentation.
Due to the chain rule (12) and the above conflicts, eq. (17) effectively reduces to (18), with the informative subscripts on the RHS dropped because they lost their original purpose once the chain rule was applied; this feature has already been explained above. Since both variables y and t are absent in the function f_y(x, ∄y(∄t)), I must rewrite eq. (18) as (19), so that the rearranged prospective variables do not create conflicts. While the variable y can stand on either side, the variable t must appear only on the RHS, where it stands as the independently varying one. Notice that only t is an independently varying variable, whereas y is clearly constant (for a zero derivative is assigned to either constant or nonexistent variables), as an unchanging function-variable that depends on t.
After integrating the two differentials in eq. (19) we obtain the generic dependence [y](t) as (20), where the field's function of the independent variable x=r has been taken to be the radial-only function of potential energy U(r)=GMm/r, just as it appears in eq. (9). Note that while the expression on the RHS, f_y(x, ∄y(∄t))dt, is a differential, the expression on the LHS, df_y(x, ∄y, ∄t), is only a name/label of the variable into which the differential on the RHS will be placed. It is analogous to a variable declared in a computer program (i.e. storage that is reserved) for the substitution of the function that will actually result from the operations performed on the RHS of eq. (20). Although mathematical operations are always symbolic, one must not ascribe to the symbols an arbitrary meaning and then substitute mathematics for thought, to the detriment of both [85].
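Since eq. (20) takes the radial-only potential energy U(r) = GMm/r from eq. (9), the standard form of its radial derivative can be spot-checked numerically. This is only a check of that textbook relation, not a reconstruction of eq. (20) itself; the values of G, M, m below are arbitrary illustrative numbers, not physical constants.

```python
# Numerical check: for U(r) = G*M*m/r (eq. (9)), the radial derivative
# should equal -G*M*m/r**2. G, M, m are arbitrary illustrative values.

G, M, m = 1.0, 2.0, 3.0

def U(r: float) -> float:
    """Radial-only potential energy function, as in eq. (9)."""
    return G * M * m / r

def dU_dr(r: float, h: float = 1e-6) -> float:
    """Central-difference estimate of the radial derivative of U."""
    return (U(r + h) - U(r - h)) / (2 * h)

r = 1.5
expected = -G * M * m / r**2
print(dU_dr(r), expected)  # the two values agree to high accuracy
```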
Even though a formal expression may look like a function, this does not make it a function. Not everything that is written in formulas is guaranteed to become what it is supposed to be.

Mathematical expressions are like computer programs. We should always declare labels/functionals for substitutions.
Actually, only dt is actively varying in the expression f_y(x, ∄y(∄t))dt, whereas f_y(x, ∄y(∄t)) can be a functional, because the absent y does not change it. It is similar to the differential (2x)'dx = 2dx, where the number 2 has been obtained from the derivative of the function 2x. There the number 2 is also a functional, or a value of its generating function.
As I mentioned before, whether an expression is treated as a function or as its functional is only a matter of operational choice, which depends on the situation in which it is going to be considered. The operators, however, must be used according to the choice that has been made.
When eq. (20) is applied to a radial/center-bound gravitational field, we obtain (21), where λ=rθ is the angular distance that corresponds to the angle θ on the equipotential surface determined by the fixed radial distance r, i.e. the distance from the gravity center of the given gravitational force field. Note that uniform (i.e. expressed in the same units) spherical coordinates are determined by a radius and two (spherical) angles, each of which corresponds to an equipotential distance on the sphere of radius r [86]. Here I take the dimensionless regular angle θ, assuming that - for a nonrotating body - there is no point in splitting it into its tangential and binormal parts, just for the sake of simplicity. To neutralize angular units (such as degrees or radians), one should assign a coefficient like k in eq. (3).
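The relation λ = rθ used in eq. (21) is ordinary arc length on the equipotential sphere. A minimal sketch follows; taking the unit-neutralizing coefficient k to be the degrees-to-radians conversion factor is my own illustrative assumption, one possible choice of the coefficient k mentioned above for eq. (3).

```python
import math

def angular_distance(r: float, theta_deg: float) -> float:
    """Arc length lambda = r * theta on an equipotential sphere of radius r.

    The coefficient k neutralizes the angular units, as suggested for eq. (3);
    here k is simply the degrees-to-radians factor (an illustrative assumption).
    """
    k = math.pi / 180.0
    return r * (k * theta_deg)

# A quarter turn (90 degrees) at radius r = 2 spans an arc of r*pi/2 = pi.
lam = angular_distance(2.0, 90.0)
print(lam)  # ~ 3.14159...
```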
The function-variable Q must be substituted by the fixed density of matter of the given gravitational field's mass source M, in order not to overlap it with the source mass M that is already present in the original equation. Given all these substitutions, the extended function f_Q() evidently represents the additional nonradial angular potential energy w[Q](λ) of the radial gravitational field.
Obviously, the formulation of every rule and law of mathematics and physics depends on the depth of inquiry into the phenomenon described by the rule/law. Heretofore I have not investigated any angles that could determine the sign and radius of perihelion. Nevertheless, formula (21) could become quite similar to eq. (3) if the perihelion were assumed to lie on the surface of the mass M, i.e. if r_p = r. It is also very similar to the formula that I derived from physical considerations in [1], which has already been confirmed by several unbiased experiments and observations [1].
At the present level of inquiry into the hidden variables that complement and/or extend theories of force fields in macrophysics, I have not asked whether the extended functions exist or, if so, whether they are physically meaningful. Since we are talking about the potential of a vector force field, which can be viewed as a differential operator [87] and as such could not vanish without a valid reason, the first question becomes a nonissue if the second question is answered in the affirmative, which is indeed the case, because the formula w[Q](λ) has already been experimentally confirmed [1].

DISCUSSION OF THE RESULTS
Although in the present paper I have focused mainly on the new, purely nonradial interactions of the radial/central gravitational field alone, I conjecture that gravity should depend also on temperature and on the thermal processes taking place inside stars and in the physically active cores of planets.
However, the present author is not aware of any actual experimental data that could support this conjecture, which is suggested by the obvious analogy between the function of exposure to the source mass and the function of exposure to various types of radiation, including the thermal one.

The purely operational ability to enhance former mathematical laws also hints at the possibility of finding hidden variables suitable for application in other theories of physics. Even very successful theories may need enhancements when some experiments reveal previously unexpected features of the reality those theories tried to describe but could not predict. Also, when a theory predicts the outcome of some experiments but cannot explain their inner workings, unveiling previously hidden variables could provide the mathematical mechanism that drives the formerly unexplained phenomena.
The functions entertained in the present paper are generic in their nature on purpose, for the sake of simplicity. I have shown that more adequate laws of physics and mathematics are not only desirable but often also necessary; a tight collaboration between mathematics and physics is needed to make them more conceptually adequate as well as compliant with experimental results.
Although physics delivers mostly comments, whereas the underlying mathematics is what makes physical laws reliable even if not fully operational, pure mathematics without feedback from physics can inadvertently distort its (often too abstract) reasonings. Therefore, this concise demonstration of the predictive capabilities of the new synthetic mathematics calls for integration of abstract mathematical ideas with physical reasonings, not only in research but in education too. Since the physical world is one, it seems reasonable to investigate it from the mathematical and physical angles simultaneously, rather than to create two discrepant views of it, which are not always quite compatible and may therefore be conceptually confusing for students, and often for researchers too.
While the calculus of vectors is restricted by the operational solvability of equations representing spatial structures, differential calculus in general is open-ended. It can accommodate multifaceted extensions to partial differentials. Even when the differentials are already exact - i.e. governed by a symmetry - extra variables could be found and appended, either inline or on top of the existing ones. Even if the extra appended variable makes the (already exact) original differential inexact, the operational validity of the extension is quite plausible, because yet another hidden variable could make the enhanced differential exact again, at a higher level in the hierarchy of symmetries. The fact that six identical planar 2D squares can be rearranged into a 3D cube can serve as a simple analogy for the operational possibility of such a hierarchically higher level of overarching symmetry.
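The exactness condition invoked above - a differential M dx + N dy is exact precisely when ∂M/∂y = ∂N/∂x - can be checked mechanically. The sketch below does a numerical spot check of that criterion via central differences; the particular differentials used are arbitrary textbook-style examples, not ones taken from this paper.

```python
# Spot-checking exactness of a differential M dx + N dy via mixed partials:
# it is exact iff dM/dy == dN/dx. Partials are estimated numerically.

def partial(f, wrt, x, y, h=1e-6):
    """Central-difference partial derivative of f(x, y) w.r.t. 'x' or 'y'."""
    if wrt == 'x':
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

def is_exact(M, N, x=0.7, y=1.3, tol=1e-6):
    """Numerical spot check of dM/dy == dN/dx at a sample point."""
    return abs(partial(M, 'y', x, y) - partial(N, 'x', x, y)) < tol

# 2*x*y dx + x**2 dy is exact (it is d(x**2 * y)) ...
assert is_exact(lambda x, y: 2 * x * y, lambda x, y: x * x)
# ... while appending a y-dependence to N breaks exactness:
assert not is_exact(lambda x, y: 2 * x * y, lambda x, y: x * x + x * y)
print("exactness spot checks passed")
```

A single-point check cannot prove exactness, of course; it merely illustrates how appending a term can destroy a symmetry that an enhanced differential would then have to recover at a higher level.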
The unveiling of previously hidden variables can introduce a new varying magnitude that had not been considered relevant to the process modelled by the law of nature to be enhanced, or it may turn a parameter already present in the law of nature, previously assumed to remain constant, into an independently varying variable. Yet another option is to turn a parameter, which must remain constant of necessity, into a fixed function-variable playing the role of a certain function of exposure. All these scenarios are mathematically possible and physically plausible.
Operational restrictions placed on abstract quasi-geometric structures are evidently not obstacles whose presence is to be concealed, but indicators of more complexity involved in their build-up and handling than orthodox mathematics ascribed to those structures. Therefore, instead of relying on pure thought alone, which led to many failures in the past, the synthetic development of new mathematical tools should be guided also by experimental hints. For the validity of mathematical reasonings should be controlled from outside the realm of mathematics too.
The mathematical possibility of existence of hidden variables does not mean that all previously developed theories are inherently deficient, but rather that the so-called "final" theories are close to finality only relative to the paradigms from which they originated. On a deeper level of inquiry into given phenomena, however, one could establish a new or an enhanced set of paradigms, presumably through application of principles of the new synthetic mathematics to apparently deprecated former paradigms.
This method of discovering hidden variables does not really suggest that there may be a certain hidden subquantum theory, even if such a theory might eventually be developed. The mathematics suggests that conceptual holes exist even in the - allegedly already completed - classical mechanics, as well as in the calculus itself. The holes are perhaps previously unrecognized opportunities.

Even formally perfect theories can be complemented by expanding their scope. Einstein's GTR, which is perfect for radial-only gravitational interactions, has been complemented by the theory of nonradial effects of radial gravity [1], because Einstein left them outside the scope of his GTR [42].
The present author is fully aware of the possibility (or fact, if you will) that several other classes of hidden variables are still tacitly concealed in the abstract axiomatic foundations of former mathematics. However, severe scientific misconceptions resulted in apprehension that caused - presumably quite inadvertently - the propagation of deficient concepts and even fake theorems [2]. Those misconceptions hamper inquiry into other variables hiding behind the traditional axiomatics of mathematics.
Applying mathematics requires not only comprehension, which is based on (relatively low-level) analytical skills that could be delegated to computers, but also understanding of "how things really work", which is usually placed on a higher, essentially synthetic level in the new Bloom's taxonomy pyramid [88]. While understanding is certainly desirable as a goal, comprehension is a must. Making mistakes is not abnormal; tacitly concealing them is. Since mistakes are unavoidable, let us make the process of their creation transparent, so that others could start thinking about ways to rectify them.
Newton's theory of gravitation was critiqued by some of the "giants" upon whose shoulders he managed to stand, because without a counteracting repulsion, which neither his nor Einstein's theory supplied, the universe should have collapsed. By counteracting repulsion I mean one that must not be radial, because the presence of radial repulsion could not be recognized, for it would just diminish the attraction without ever resolving the impending collapse. Radial-only repulsion could only slow down the collapse.
One can see that the angular nonradial term in eq. (1) corresponds to truly nonradial repulsion, because its sign is opposite to the signs of the attractive terms [10]. Although the critique of Newton's theory was linguistically correct, one must not use logical inferences, based upon one's comprehension of the linguistic meanings of words, to judge relations between mathematical terms.
By the same token, one must not conclude that radial fields cannot produce any nonradial effects or other than radial interactions, for this dismissal would not only contradict the proven Frenet-Serret formulas of differential geometry [17], but would also defy experimental evidence to the contrary [1].
Similarly, the argument that extending abstract dimensionality could destroy the original metric is not quite correct if the extended scope of the original function adds a variable that does not change the original vector bases. For in such a case a pseudospace (or configuration space) is effectively created with an abstract pseudodistance, which is never higher than the sum of subsequent "subdistances" determined by the other variables in the metric [89].
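The metric claim above - that the extended pseudodistance never exceeds the sum of the subdistances contributed by the individual variables - is an instance of the elementary inequality sqrt(Σ dᵢ²) ≤ Σ |dᵢ|, which a short sketch can verify for displacements extended by an extra abstract coordinate. The sampled values are arbitrary.

```python
import math
import random

def pseudodistance(deltas):
    """Euclidean pseudodistance built from per-variable subdistances."""
    return math.sqrt(sum(d * d for d in deltas))

random.seed(0)
for _ in range(1000):
    # Extend a 2D displacement by one extra abstract coordinate (3 deltas).
    deltas = [random.uniform(-10, 10) for _ in range(3)]
    # The pseudodistance never exceeds the sum of the subdistances.
    assert pseudodistance(deltas) <= sum(abs(d) for d in deltas) + 1e-12
print("inequality holds for all sampled extensions")
```

This supports the point that appending a variable which leaves the original vector bases unchanged does not destroy the original metric; it merely embeds it in a configuration space obeying the same inequality.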

CONCLUSIONS
Eq. (21), which represents the angular nonradial part of the potential energy of a radial/center-bound gravitational force field, has been derived in the present paper upon purely operational principles of mathematics. Physical considerations served only as an explanatory guide here. The main reason for this approach was to show that the new synthetic mathematics, which I proposed, advocated and applied in the present paper, has truly enormous predictive power, enabling us to uncover some previously hidden complementary variables.
The extra complementary variables can thus enhance some formerly established laws of physics, as well as rectify those incompletely defined former concepts that were entertained in mathematics and then applied in physics too.
The existence - proven above - of a formerly hidden passive nonoverlapping function-variable, which in this paper emerged from operational mathematics, shows that multipronged superposition of effects is supported in calculus by default. Since the function-variable can be identified with density of matter, the impact of the constitution of matter (or of the contents of fields in general) calls for