**5. Results and discussion**

#### **5.1 Complex adaptive evolutionary system: thermodynamic landscape**

For the purpose of evaluating algorithmic fitness on the given landscape the following ansatz can be utilized for Particle Swarm Optimization [PSO] for a decision tree:

$$\text{For } f: \mathbf{M}^n \to \mathbf{M}, \text{ and where } \mathbf{M}' = \mathbf{KH}_{(t,h)} \times \mathbf{iE} \tag{7}$$

Where

M = Manifold

M′ = Algorithmic Landscape/Manifold

KH = potentiated Knowledge History of the Algorithm

t = tradition or procedural process structure [word, string, grammar, memory mapping, rulebase, database] of the algorithm

h = computational multiplicatives and inequalities

iE = Adaptive Landscape History of the [Numerical] Object(s) or Particle(s) on the Network.

Lovbjerg and Krink demonstrate, for the thermodynamic variables of Particle Swarm Optimization [PSO]:

$$\vec{v}_i = \chi\left(w\,\vec{v}_i + \varphi_{1i}\left(\vec{p}_i - \vec{x}_i\right) + \varphi_{2i}\left(\vec{p}_g - \vec{x}_i\right)\right) \tag{8}$$

where χ is the constriction coefficient.
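The constriction coefficient χ is conventionally computed from the Clerc–Kennedy formula χ = 2/|2 − φ − √(φ² − 4φ)| with φ = φ₁ + φ₂ > 4; a minimal sketch (the standard parameter values φ₁ = φ₂ = 2.05 are an illustrative assumption, not taken from the text):

```python
import math

def constriction_coefficient(phi1: float = 2.05, phi2: float = 2.05) -> float:
    """Clerc-Kennedy constriction coefficient; requires phi1 + phi2 > 4."""
    phi = phi1 + phi2
    if phi <= 4:
        raise ValueError("phi1 + phi2 must exceed 4")
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

chi = constriction_coefficient()  # ~0.7298 for phi = 4.1
```

With the default φ = 4.1 this yields the familiar χ ≈ 0.7298 used in constricted PSO.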

*Atomistic Mathematical Theory for Metaheuristic Structures of Global Optimization… DOI: http://dx.doi.org/10.5772/intechopen.96516*

Here the Three General Rules of Macrodynamics [3GRM]/Macrodynamic Automata Rules [MAR] are applied to Information Natural Dynamics [IND] criteria: Natural Sets, Natural Kinds, Natural Procedures, Natural Strings, Natural Radicals, Natural Binaries, Natural Radices, in the Complex Adaptive Evolutionary System [CAES]-Multiagent System [MAS] for 4D model variables.

These variables should each contain criteria:

Time-Complexity, Particle-Value, Particle-Weighting in Fuzzy set theory, Gravity of System, and Nanodynamics of System variables [TC-PV-PW-GS-NDS], set approximate to the following.

Clerc has demonstrated a general Metaheuristic algorithm where, for *f*: ℝⁿ → ℝ, essentially *f*(a) ≤ *f*(b). S is the number of particles in the swarm, each having a specific position and velocity in the search space:

```
for each particle i = 1, …, S do
    Initialize the particle's position with a uniformly distributed random vector: xi ~ U(blo, bup)
    Initialize the particle's best known position to its initial position: pi ← xi
    if f(pi) < f(g) then
        update the swarm's best known position: g ← pi
    Initialize the particle's velocity: vi ~ U(−|bup − blo|, |bup − blo|)
while a termination criterion is not met do:
    for each particle i = 1, …, S do
        for each dimension d = 1, …, n do
            Pick random numbers: rp, rg ~ U(0, 1)
            Update the particle's velocity: vi,d ← ω vi,d + φp rp (pi,d − xi,d) + φg rg (gd − xi,d)
        Update the particle's position: xi ← xi + vi
        if f(xi) < f(pi) then
            Update the particle's best known position: pi ← xi
            if f(pi) < f(g) then
                Update the swarm's best known position: g ← pi
```

*Computational Optimization Techniques and Applications*
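The canonical PSO loop above can be sketched directly in Python; the sphere objective, bounds, and parameter values (ω = 0.729, φp = φg = 1.49) are illustrative assumptions, not values prescribed by the text:

```python
import random

def pso(f, blo, bup, n_dim=2, swarm_size=30, omega=0.729,
        phi_p=1.49, phi_g=1.49, max_iter=200, seed=0):
    """Minimize f over [blo, bup]^n_dim with the canonical PSO loop."""
    rng = random.Random(seed)
    span = bup - blo
    # Initialize positions, velocities, personal bests, and the swarm best
    x = [[rng.uniform(blo, bup) for _ in range(n_dim)] for _ in range(swarm_size)]
    v = [[rng.uniform(-span, span) for _ in range(n_dim)] for _ in range(swarm_size)]
    p = [xi[:] for xi in x]
    g = min(p, key=f)[:]
    for _ in range(max_iter):
        for i in range(swarm_size):
            for d in range(n_dim):
                rp, rg = rng.random(), rng.random()
                v[i][d] = (omega * v[i][d]
                           + phi_p * rp * (p[i][d] - x[i][d])
                           + phi_g * rg * (g[d] - x[i][d]))
                x[i][d] += v[i][d]
            if f(x[i]) < f(p[i]):
                p[i] = x[i][:]          # update personal best
                if f(p[i]) < f(g):
                    g = p[i][:]         # update swarm best
    return g, f(g)

best, value = pso(lambda z: sum(t * t for t in z), blo=-5.0, bup=5.0)
```

On the 2-D sphere function this converges to a near-zero objective value within the default budget.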


For the search domain Ω this reads, in general form:

$$f: \Omega \subseteq \mathbb{R}^n \to \mathbb{R} \text{ with the global minima } f^* \text{ and the set of all global minimizers } X^* \text{ in } \Omega, \text{ to find the minimum best set in the function series of } x \tag{9}$$

for system conditions, system boundaries, and the number and density of particles in the total Information Natural Dynamics [IND] of the Global Optimization Algorithm [GOA]. These are applied to the algorithmic manifold for the candidate solution on the given search spaces. It can be argued that, given the extremes of information disequilibrium applied to macrodynamic disequilibrium models, extremals of various degrees of power are inevitably generated in the incremental Information Dynamics.

#### **5.2 Complex adaptive evolutionary system: weighting**

These differentiable functions can be further defined, cf. dense heterarchy in Complex Systems Algorithms of coupled oscillators, in the general formula:

$$\frac{d\mathbf{x}}{dt} = (P(t) + \mu \, Q(t, \mu))\mathbf{x} + f(t),\tag{10}$$

This can be demonstrated for a general nth-order differential equation:

$$u^{(n)} = f\left(t, u, u', \dots, u^{(n-1)}\right), n \ge 2,\tag{11}$$
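Equation (11) is solved in practice by reduction to a first-order system y = (u, u′, …, u⁽ⁿ⁻¹⁾). A minimal sketch for n = 2, taking the harmonic oscillator f(t, u, u′) = −u as an illustrative choice (not an equation from the text), integrated with a hand-rolled classical Runge–Kutta step:

```python
import math

def rk4_step(deriv, t, y, h):
    """One classical Runge-Kutta (RK4) step for y' = deriv(t, y)."""
    def add(a, b, s):
        return [ai + s * bi for ai, bi in zip(a, b)]
    k1 = deriv(t, y)
    k2 = deriv(t + h / 2, add(y, k1, h / 2))
    k3 = deriv(t + h / 2, add(y, k2, h / 2))
    k4 = deriv(t + h, add(y, k3, h))
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# u'' = f(t, u, u') with f = -u: state y = [u, u'], so y' = [u', -u].
def deriv(t, y):
    u, du = y
    return [du, -u]

t, y, h = 0.0, [1.0, 0.0], 0.01   # u(0) = 1, u'(0) = 0
while t < math.pi:                # integrate to t ~ pi, where u ~ cos(pi) = -1
    y = rk4_step(deriv, t, y, h)
    t += h
```

The same reduction handles any n ≥ 2 by stacking the higher derivatives into the state vector.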

These can be demonstrated in Particle Swarm Optimization [PSO], and Macrodynamic models of Meta-optimization of Particle Swarm Optimization [PSO] [7], c.f.

$$\mathbf{v}_i(t+1) = w \cdot \mathbf{v}_i(t) + \eta_1 r_1 \left(\mathbf{p}_i - \mathbf{x}_i(t)\right) + \eta_2 r_2 \left(\mathbf{p}_{best} - \mathbf{x}_i(t)\right) \tag{12}$$

**for each set of given epochs or evolutionary landscape scenarios**, for prediction in the analytical and expectation-weighting parameter formula of algorithm optimization [Meissner et al., *ibid.*].

#### **5.3 Complex adaptive evolutionary system: thermodynamics**

Regarding bounding definitions, Chaitin demonstrated in Algorithmic Information Theory [AIT] algorithmic decomposition given Boltzmann–Shannon entropy, where in general formula:

$$\mathbf{H}(\mathbf{X}) = \sum\_{i} \mathbf{P}(\mathbf{x}\_{i}) \mathbf{I}(\mathbf{x}\_{i}) = -\sum\_{i} \mathbf{P}(\mathbf{x}\_{i}) \log\_{b} \mathbf{P}(\mathbf{x}\_{i}),$$

and

$$\lim\_{p \to 0+} p \log (p) = 0.$$
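The entropy sum and the p → 0⁺ convention above translate directly into code; a minimal sketch (the example distribution is illustrative):

```python
import math

def shannon_entropy(probs, b=2):
    """H(X) = -sum_i P(x_i) log_b P(x_i), with the convention p log p -> 0 as p -> 0+."""
    return -sum(p * math.log(p, b) for p in probs if p > 0)

h = shannon_entropy([0.5, 0.5, 0.0])  # zero-probability term contributes nothing
```

Skipping the zero-probability terms implements exactly the stated limit lim p log p = 0.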

For Information Natural Dynamics [IND] pairwise comparison, the genesis and stigmergetic evolutionary dynamics of the A-group agent **H_A**, cf.

$$f: \mathbb{R}^n \to \mathbb{R} \text{ for } \mathbf{x}_i \in \mathbb{R}^n \text{ and } \mathbf{v}_i \in \mathbb{R}^n \tag{13}$$

in Particle Swarm Optimization [PSO].




The variables include function space, gradient, vector and weighting and additionally the stigmergy of the given Macrosystem and subsystem autonomics such as in a Pairwise Breakout Model [PBM].

In associated Fuzzy set logic to determine power externals and singleton mechanics in atomistic Natural Dynamic System [NDS] operations, border-pairs or extremals are often utilized for multi-pair Multilayer Perceptron-Learning Classification Algorithms [MLP-LCM] [7].

Examples of particle Monadicity in Algorithmic Information Theory [AIT] whence the formula

$$\mathbf{F}: \mathbf{L(A)}^N \to \mathbf{L(A)}, \text{ where the } n\text{-tuple } (R_1, \dots, R_N) \in \mathbf{L(A)}^N \tag{14}$$

indicate the possibilities and types of information physical mechanics for possible variables of the landscape extremals as particles within the min-max parameters [8]. A particle-discrete control function of the node degrees on the evolutionary landscape can therefore be defined where essentially

$$f(t_i): P(k)^n \to K_j \int_0^t k(t')\, K_p\, \Sigma\, \frac{dj(t)}{dt} \tag{15}$$
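One concrete reading of Eq. (14): if L(A) is taken as the set of words over an alphabet A, a map F: L(A)^N → L(A) can be realized as a monoidal fold of the n-tuple under concatenation. This reading is an illustrative assumption, not the author's construction:

```python
from functools import reduce

def F(words):
    """Fold an N-tuple (R_1, ..., R_N) in L(A)^N into a single word in L(A),
    using concatenation as the monoid operation on words over A."""
    return reduce(lambda a, b: a + b, words, "")

w = F(("ab", "ba", "a"))  # -> "abbaa"
```

Any associative operation with an identity on L(A) would serve equally well as the fold.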

To quantify, *Primary extrema* of n-arity or *n-Adicity* [Jonah Lissner] are therefore defined by the Author [Jonah Lissner] for alternatives of pair choices [as monoidal algorithmic circuits in the Complex Adaptive Evolutionary System [CAES]] which can be fractional off the prime polynomial root modulos, from the initial power conditions, and therefore generate the discrete information inequalities. These can be demonstrated in Henselian numbers, and secondly, derivable fractional functions, inherent in any given complex *topos* of a complex adaptive evolutionary system [9].

Nagata defined thusly: A local ring *R* with maximal ideal *m* is called **Henselian** if Hensel's lemma holds. This means that if *P* is a monic polynomial in *R*[*x*], then any factorization of its image *P* in (*R*/*m*)[*x*] into a product of coprime monic polynomials can be lifted to a factorization in *R*[*x*] [10].
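In the simple-root case, Hensel's lemma admits a direct computational sketch: a root of f mod p lifts to a root mod p^k by Newton iteration modulo increasing prime powers. The example polynomial x² − 2 mod 7 and the lift target 7⁴ are illustrative choices:

```python
def hensel_lift_root(f, df, r, p, k):
    """Lift a simple root r of f mod p to a root mod p**k.
    Requires f(r) ≡ 0 (mod p) and f'(r) a unit mod p (simple-root case)."""
    modulus = p
    for _ in range(k - 1):
        modulus *= p
        # Newton step mod the larger modulus; f'(r) is invertible there
        inv = pow(df(r) % modulus, -1, modulus)
        r = (r - f(r) * inv) % modulus
    return r

# 3 is a root of x^2 - 2 mod 7 (since 9 ≡ 2); lift it to mod 7**4 = 2401.
r = hensel_lift_root(lambda x: x * x - 2, lambda x: 2 * x, 3, 7, 4)
```

This is the polynomial-root special case of the factorization-lifting statement; `pow(x, -1, m)` (Python ≥ 3.8) supplies the modular inverse.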

#### **5.4 Complex adaptive evolutionary system: networks**

The network of circuits then forms the basis for Complex Network Systems [CNS], from simple networks *L* ∝ log *N* to adaptive complex or dynamic systems and increasingly complex or quantum probability mechanics.

Scale-free or Barabási–Albert models are utilized to advance the mechanics and hypotheses for Complex Network Systems [CNS], e.g. for

$$P(k) \sim k^{-3}$$

and

$$p\_i = \frac{k\_i}{\sum\_j k\_j},$$


These exist for given node-degree correlations and links at network computations, for a generic clustering law of

$$\mathbf{C}(k) = k^{-1}.\tag{16}$$

It can be determined for scale-free network nodes that

$$L \propto \log \log N \text{ for } P(k) \sim k^{-\gamma}.$$
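The Barabási–Albert mechanism above, attaching new nodes with probability p_i = k_i/Σ_j k_j, can be sketched with a degree-weighted target list (choosing a node uniformly from a list in which each node appears once per incident edge is exactly degree-proportional sampling); the graph size and m = 2 are illustrative assumptions:

```python
import random

def barabasi_albert(n, m=2, seed=0):
    """Grow a scale-free graph: each new node attaches to m existing nodes
    chosen with probability p_i = k_i / sum_j k_j (preferential attachment)."""
    rng = random.Random(seed)
    edges = [(0, 1)]          # seed graph: a single edge
    targets = [0, 1]          # node list weighted by degree
    for new in range(2, n):
        chosen = set()
        while len(chosen) < min(m, new):
            chosen.add(rng.choice(targets))   # degree-proportional draw
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])          # update degree weights
    return edges

edges = barabasi_albert(1000)
```

The resulting degree distribution is heavy-tailed, approaching the P(k) ~ k⁻³ law stated above as n grows.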


$$s(G) = \sum_{(u,v)\in E} \deg(u) \cdot \deg(v).$$

and dynamically

$$P(k) \sim k^{-\gamma} \text{ with } \gamma = 1 + \frac{\mu}{a_\infty}.$$

where

$$p\left(\mathbf{x}\_i, \mathbf{x}\_j\right) = \frac{\delta \mathbf{x}\_i \mathbf{x}\_j}{1 + \delta \mathbf{x}\_i \mathbf{x}\_j}.$$
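The degree-correlation sum s(G) above has a direct computation from an edge list; a minimal sketch (the tiny example graphs are illustrative):

```python
from collections import Counter

def s_metric(edges):
    """s(G) = sum over edges (u, v) of deg(u) * deg(v)."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(deg[u] * deg[v] for u, v in edges)

star = s_metric([(0, 1), (0, 2), (0, 3)])  # star K_{1,3}: each edge gives 3 * 1
path = s_metric([(0, 1), (1, 2)])          # path: 1*2 + 2*1
```

Among graphs with the same degree sequence, higher s(G) indicates hub-to-hub wiring, which is why it serves as a scale-free/assortativity diagnostic.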

## **6. Conclusion**

These Metaheuristics for Global Optimization Algorithms [GOA] serve the purpose of achieving theoretical completion between two or more nodes on the network landscape, and ultimately the given requirements for the applied electrical grid. This theory can be utilized to derive, add, multiply, subtract, or divide units designated as necessary to accurately define the parameters for control of the electrical grid, and for control of network extremals.

Some theoretical requirements for Power System applications and machine learning algorithm libraries for solving heuristic challenges for power requirements and control on manifolds have been demonstrated:


1. In the definition for innovation in Global Optimization Algorithms [GOA] for Machine Learning in Power Systems, the Path-decision or Algorithm is the activated object [*ontesis*], and the Algorithmic network is the kind or type of systemic algorithmic operation of object-getting and technology-building [*telesis*], due to computational physical plasticity conditions and relevant criteria.

2. Furthermore, the network-theory meaning of Path-decision or Algorithm, and the computational landscape itself as a network, can be defined discretely in terms of multiple avenues and nodes for algorithms of Boolean systems [e.g. *st-connectivity*], whence it has progressed in weight, mass and velocity of the defined Ontology.

3. Path-decision or *Algorithm programmes* in Computational Sciences, by modules of Alphanumeric Symbols/Characters as *Power Systems of Algorithms* [PSA]-Information Natural Dynamics [IND] or, in a macrodynamic method, *Systems of Utilization* may develop, for the purpose of heuristic advancements based on computational physical references, functions and operations on specific topologies, treated as given *Computational sequences*.

It is proposed from the 3 Rules of Information Physics [3IP] and The 3 General Rules of Macrodynamics [3GRM] for *Unprovable Ideals, Cardinals or Delimitations of Optimization*, from origins in *The Rule of Perpetuation of Information Inequalities of Primaries*. These criteria are the basis to utilize previous methodologies of reasoning for contemporary and future new evolutionary algorithmic landscapes in the accretive methods.

This Metatheory develops theoretical agreement for the computational physical basis for a General Global Optimization Field Theory [GGOFT], given the algorithmic requirements of minima and maxima of a set of functions for a given computational surface, to determine roots, stationary and turning points, points of inflection, convexity, and concavity for atomistic qualities of evolutionary landscape extremals and their subsequent geometric values and derivations [11].

Therefore in this dialectic, the *Onts* or Particles in Complex Adaptive Evolutionary System [CAES] and Dynamic Global Workspace Theory-Intelligent Computational System Organization [DGWT-ICSO] can be understood as network gateways in conjunction with nonlinear surfaces, described by *Epistemes*, or Semantical value for given Formulae, Algorithms, and Landscape. Their purpose is to build and attempt to game-solve more complex and efficient, workable algorithmic structures for the machine learning algorithm challenges to incremental Global Optimization Algorithm [GOA] regimes [12].

**References**

[1] "Fermat's Last Theorem", Wikipedia, Website, extracted 2021. https://en.wikipedia.org/wiki/Fermat's_Last_Theorem

[2] Wigner, E. P., 1960. "The unreasonable effectiveness of mathematics in the natural sciences". Richard Courant lecture in mathematical sciences delivered at New York University, May 11, 1959. *Communications on Pure and Applied Mathematics*, Volume 13, Issue 1, February 1960, Pages 1–14.

[3] Meissner, M.; Schmuker, M.; Schneider, G., 2006. "Optimized Particle Swarm Optimization (OPSO) and its application to artificial neural network training". *BMC Bioinformatics* **7** (1): 125. doi:10.1186/1471-2105-7-125.

[4] "Arrow's Impossibility Theorem", Wikipedia, Website, extracted 2021. https://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem

[5] Lovbjerg, M.; Krink, T., 2002. "Extending Particle Swarm Optimisers with Self-Organized Criticality". *Proceedings of the Fourth Congress on Evolutionary Computation (CEC)*. **2**, pp. 1588–1593.

[6] "Theory of Oscillations", Encyclopedia of Mathematics, Website, extracted 2021. https://encyclopediaofmath.org/wiki/Oscillations,_theory_of

[7] Meissner, M.; Schmuker, M.; Schneider, G., 2006. "Optimized Particle Swarm Optimization (OPSO) and its application to artificial neural network training". *BMC Bioinformatics* **7**: 125. https://doi.org/10.1186/1471-2105-7-125

[8] Czirok, A.; Vicsek, T., 2000. "Collective behavior of interacting self-propelled particles". *Physica A: Statistical Mechanics and its Applications*, Volume 281, Issues 1–4, Pages 17–29. https://doi.org/10.1016/S0378-4371(00)00013-3

[9] Ardema, Rajan, Lang, 1989. "Three-dimensional energy-state extremals in feedback form". *Journal of Guidance, Control, and Dynamics*, Volume 12, Number 4.

[10] Nagata, Masayoshi, 1953. "On the theory of Henselian rings". *Nagoya Mathematical Journal*, **5**: 45–57. doi:10.1017/s0027763000015439. ISSN 0027-7630. MR 0051821.

[11] Strongin, R. G.; Sergeyev, Ya. D., 2014. *Global optimization with non-convex constraints: Sequential and parallel algorithms*. Kluwer Academic Publishers, Dordrecht. http://www.gbv.de/dms/ilmenau/toc/315801492.PDF

[12] Sergeyev, Ya. D.; Kvasov, D. E.; Mukhametzhanov, M. S., 2018. "On the efficiency of nature-inspired metaheuristics in expensive global optimization with limited budget". *Scientific Reports* **8** (1): 453. doi:10.1038/s41598-017-18940-4. ISSN 2045-2322. PMC 5765181. PMID 29323223.
