Spatial-Temporal Analysis

#### **Chapter 5**

## Chaos Analysis Framework: How to Safely Identify and Quantify Time-Series Dynamics

*Markus Vogl*

#### **Abstract**

Within this chapter, a practical introduction to a nonlinear analysis framework tailored to time-series data is provided, enabling the safe quantification of the underlying evolutionary dynamics that describe the respective empirical data generating process. Furthermore, its application makes it possible to distinguish between underlying chaotic and stochastic dynamics. In addition, an optional combination with (strange) attractor reconstruction algorithms to visualize the denoted system's dynamics is possible. Since the framework builds upon a large variety of algorithms and methods, its application is far from trivial, especially with regard to reconstruction algorithms for (strange) attractors. Therefore, a general implementation and application guideline for correct algorithm specifications and the avoidance of pitfalls or other unfavorable settings is proposed, and respective (graphical) empirical examples are shown. The intention is to enable readers to incorporate the proposed analysis framework themselves, to conduct the analyses and reconstructions properly with correct specifications, and to be knowledgeable about misleading propositions or parameter choices. Finally, concluding remarks, future avenues of research and future refinements of the framework are proposed.

**Keywords:** nonlinear dynamics, attractor reconstruction, time-series quantification, chaos analysis framework, financial markets

#### **1. Introduction**

Following current estimates, the predictive analytics market is expected to grow from around 10.5 Bn. USD at the end of 2021 to around 28–30 Bn. USD by 2030, underscoring the immense relevance of successful forecasting capabilities for technological advancement in our digitalized, fully connected and global economy [1–3]. Respective fields of application for predictive analytics (or related methodologies) can be represented by any real-world system interacting with practitioners or researchers alike [4]. For example, the climate, hydrological cycles, ecosystems, the human brain, neuroscientific observations, the universe, engineering applications, economic systems or financial markets can be seen as such real-world systems [5]. Notably, all of the previously denoted examples are classified as complex systems [5].

The relevance of complex systems becomes obvious once one contemplates real-life situations, in which similar scenarios tend to evolve similarly and occur repeatedly [6]. This similar repetition of scenarios leads to the association of a predefined level of determinism in said real-life systems, owing to the timely evolution of memory and experience effects [6, 7]. Hence, quantitative modeling via deterministic differential equations proposes itself as a suitable methodology to cope with these kinds of systems, since the respective entireties can be characterized by equivalent mathematical differential equations [6, 8]. Presupposing that the initial conditions of the respective systems are known exactly, the differential equations enable predictions of the systems' final states to an indefinite level of precision and time span, owing to the deterministic characteristics of the systems [6]. In terms of predictive analytics and forecasting, such a deterministic scenario would imply that the future evolution of the systems is completely explicable from their current states, principally indicating a 'plainness' in terms of the predictability of such systems [9, 10].

However, scrutinizing one of the previously denoted examples as a representation of such systems with deterministic real-world characteristics leads to the emergence of unexpectedly drastic insights with vast practical implications [11]. The aforementioned real-world systems, such as financial markets, viewed as complex systems and driven by inherent or underlying empirical properties (i.e. stylized facts1 ), prove contrastingly challenging in terms of predictability and mathematical modeling in comparison to the previously assumed 'plainness' of deterministic forecasts [6, 11]. Consequently, the determination of the true data generating process (DGP) of time-series, which are empirical observations of the underlying complex systems, such as a stock price series for financial markets, with respect to stylized facts and other innovations, is advantageous for the systems' observers, researchers or other involved entities [20, 21]. Under the presupposition of complex dynamical systems, seemingly conceptual differences are the basis for the discussion on the underlying nature and essential functioning of the emerging dynamics of, for example, financial markets or other defined real-world systems [6]. A deeper understanding of such assumed underlying laws of dynamical motion would facilitate the thorough application of chaos analysis in such real-world systems [22].

Substantial literature about testing underlying systems' dynamics and chaos in such real-world systems (e.g. financial markets) provides strong evidence of nonlinearity and as a consequence, a special class of models, namely chaos models, arose [11, 23]. Chaos institutes a deeper rationale for the above-mentioned essential characteristics and the underlying nature of the evolutionary processes driving a complex (real-world) system, which is affected by nonlinearities [10]. The first property, or distinctive feature of chaotic dynamical systems, is that even though deterministic, these systems characterize themselves via sensitivity to initial conditions2 , implying

<sup>1</sup> Stylized facts, in particular, on financial markets can be volatility dynamics (e.g. [12]), nonlinearity (e.g. [13]), asymmetry (e.g. [3]), long memory (e.g. [14]), multifractality (e.g. [15]) and momentum driven trend characteristics, which clearly contradict the efficient market hypothesis [16]. Furthermore, studying stylized facts requires considering the heterogeneity of actors (e.g. [17]), resulting in multifractal timescales and behavioral patterns (e.g. [18]). All these properties occur at different timescales simultaneously, indicating the existence of stated nonlinearities (e.g. [19]) within the complex system of financial markets. Note that other complex systems may yield a similar variety of empirical characteristics to be regarded in respective predictive endeavors.

<sup>2</sup> Deviations from a trajectory of the system's phase (or state) space.

#### *Chaos Analysis Framework: How to Safely Identify and Quantify Time-Series Dynamics DOI: http://dx.doi.org/10.5772/intechopen.106213*

slight fluctuations or even marginal perturbations of the initial conditions are capable of rendering precise long-term predictions meaningless and futile in their totality, owing to exponentially increasing error terms [9, 22]. In addition, data measurement limitations3 with regard to the current initial conditions impose an upper bound on predictability, even if the model is completely disclosed [22]. The second property to elaborate on is recurrence, reflecting the dynamical behavior of such systems, which can potentially be exploited for the characterization of underlying dynamical evolutionary rules (or empirical DGPs), as presented later [6, 8].

Recent trends within chaotic dynamical analysis have led to a proliferation of publications, stating structural nonlinear models to be capable of displaying instabilities and chaos and of mimicking empirical time-series properties4 [22]. Hence, a crucial pillar of nonlinear forecasting for over 40 years has been the revelation of whether the considered time-series data sets are generated via deterministic or stochastic5 dynamical systems, since their respective mathematical operations differ noticeably (see the bibliometric review of Vogl [25]) [23, 24, 26]. Mathematically speaking, a chaotic dynamical system has a dense collection of points with periodic orbits, sensitivity to initial conditions and topological transitivity, as discussed in Eckmann and Ruelle [27], Devaney [28] and BenSaïda and Litimi [29]. Chaos further refers to bounded steady-state behavior, which represents neither an equilibrium point, nor a quasi-periodic nor a periodic orbit, and in which nearby trajectories separate exponentially in finite time, resulting in chaotic systems revealing very complex and seemingly random evolutions from the viewpoint of standard statistical tests [22].

Hence, chaos reveals the apparent randomness of (chaotic) complex system realizations, yielding underlying patterns, interconnectedness, feedback loops, recurrence, self-similarity (fractality) and self-organization capabilities [30–32]. For example, in financial systems, hyperchaotic6 phenomena potentially evolve into crises, denying any form of system control [37]. Referring to the scientific literature, the first tests of chaotic behavior for complex time-series systems were executed using the Brock-Dechert-Scheinkman (BDS) test of Brock et al. [38]; yet this revealed the test's omnibus character, since it is unable to differentiate whether the revealed nonlinearities originate from stochastic or chaotic dynamics [39]. Unfortunately, even comparisons between the most powerful tests (e.g. close-return test, BDS test and Lyapunov exponent7 ) do not result in conclusive findings [39]. In point of fact, several propositions toward a more conclusive solution were brought to light in the scientific literature, with no further positive indications [39].

The former statement is an allegory of the vast dilemma concerning the determination of the true, mostly unknown nature of complex dynamical (real-world) systems – whether it be stochasticity or chaoticity [40]. These systems are often graphically near-indistinguishable and cannot be differentiated by the respective standard statistical tests [29, 41]. Aguirre and Billings [41] verify the strong influence of noise on the identifiability of chaotic dynamics, leading to misspecifications

<sup>3</sup> In terms of measurement errors, sampling frequency and data accuracy, among others.

<sup>4</sup> Thus, vast disseminations of literature about deterministic chaotic behavior and the design of (economic) models in the regime of chaotic behavior from a theoretical view arose [24].

<sup>5</sup> Originating from pure randomness.

<sup>6</sup> Hyperchaos is considered if at least two positive Lyapunov exponents exist (e.g. [33–35]). If a discrete nonlinear system is dissipative (spontaneously symmetry breaking), a positive maximum Lyapunov exponent is an indication of chaotic dynamics within the system under regard [36].

<sup>7</sup> A positive maximum Lyapunov exponent can occur even in non-chaotic series, due to inadequate application on noisy data sets [39].

of chaotic dynamics as stochastic dynamics due to noise disturbance, rendering the discovery of evolutionary chaotic processes very difficult [42]. The majority of empirical time-series datasets are small and noisy in comparison to their laboratory-based physics counterparts, suggesting a preclusion of dynamical identification if the noise levels exceed a predefined critical threshold value [10, 41, 43]. Therefore, the great controversy of the nonlinear empirical literature, as stated above, is whether a complex system is characterizable via (low-dimensional) deterministic chaos or generated via stochastic dynamics, and whether those chaotic complex systems are controllable [10, 43].

To solve this ongoing debate, Vogl and Rötzel [40] and Vogl [44] successfully proposed distinct analysis frameworks, enabling the clear and safe quantification and determination of the underlying (empirical) DGPs of time-series data sets. Nonetheless, due to publication-technical reasons, Vogl and Rötzel [40] present a framework specification tailored solely toward stationary time-series, while Vogl [44] supposes additional customization for non-stationary data. However, these specifications originate from one original, singular and holistic chaos analysis framework for nonlinear time-series, which will be presented in its totality hereinafter. Furthermore, since the determination of the (empirical) DGP by the proposed framework is far from trivial in its application, owing to the large variety of advanced algorithms to be implemented, this chapter aims to provide a clear and distinct practical guideline on how to implement the denoted analysis framework successfully. In particular, the determination of 'scaling regions' via correlation sum and correlation dimension schemes (refer to [45–47]) and the reconstruction algorithms for (strange, fractal) attractors of complex dynamical systems with chaotic traits (see [10, 48]) represent one of the main emphases, among others, since many erroneous applications are possible and are widely dispersed throughout the scientific literature [25, 40, 45]. The distinct goal and aim of this chapter is to provide researchers and practitioners with an empirical-practical guide on how to implement the presented chaos analysis framework successfully, thus determining the (empirical) DGP and reconstructing potentially existing system attractors from a given scalar time-series [40, 44]. Moreover, the insights provided are mostly independent of the framework, generalizable and abstractable to any other kind of subsequent or related empirical analyses.

Therefore, the framework will be introduced completely in Section 2 and its inherent parts briefly reconciled, while the example data and the subsequent correct specifications and selections for conducting a correct analysis are proposed in Section 3. Furthermore, Section 4 centers on the avoidance of pitfalls and the empirical results of misspecifications via practical examples, before concluding remarks and future avenues are discussed in Section 5. Please note that mathematical definitions and formulas are omitted, and the reader is referred to the stated literature instead. If no further explication is granted, the literature is seen as a prerequisite for arguments and propositions, since the focus is purely on practical applicability in a theoretical-scientific context.

#### **2. Framework overview**

Before elaborating on the analysis framework itself, it is relevant to reconcile the contribution and relevance of the propagated inherent paradigm shift in quantitative modeling, namely, the previous determination of the empirical DGP and its


characteristics before selecting the respective mathematical procedures and models [40, 44]. Referring back to the expected market increase of around 20 Bn. USD in predictive analytics mentioned in the introduction, it is crucial for successful forecasting to be informed about the underlying evolutionary building mechanics of the time-series data to be analyzed before deploying cost-intensive predictive applications. One may imagine deploying planned-out long-term predictive solutions on time-series systems, which, however, are chaotic and thus only predictable over a short time scale. This would either lead to disastrous outcomes and very poor predictive accuracy or render model performance and the totality of forecasts futile, owing to exponentially increasing error terms caused by said chaotic mechanics [49, 50]. Instead, the proposed paradigm shift underlying the framework states the initial determination of the empirical DGP with its inherent empirical characteristics, leading to exploitable knowledge about the predictive time horizons, hidden system properties and, therefore, the minimum model capability requirements, before practical implementations and roll-outs are conducted [40]. Model selection thus follows the insights of the determined empirical DGP [44]. Regarding the scientific side of the framework, existing literature and research do not exercise sufficient theoretical precaution within respective applications and interpretations, leading to fragmentation and dispersion of methodology and modeling, thereby representing the rationale for the framework's creation [40, 45].

Hereinafter, the chaos analysis framework is presented in detail. In Section 2.1, the framework in general and its components are elucidated, while Sections 2.2 through 2.7 cover the contents of each analysis step, introducing the inherently applied algorithms and methods in a sparse, nonmathematical and practically error-avoidant manner.

#### **2.1 Chaos analysis framework**

The holistic chaos analysis framework presented in **Figure 1** consists of six steps and will be elucidated hereinafter. Before elaborating on the steps in detail, the brief course of analysis is outlined. First, it is mandatory to analyze the given noise contamination, its respective levels and the nature of the potential noise [43]. Noise is capable of disturbing the identification of the underlying dynamics and is thus regarded as destructive to the analysis [45]. Furthermore, it is deemed favorable to gather basic statistical insights from the (denoised) datasets under analysis via standard statistical tests, which incorporate tests for stationarity, nonlinearity and correlations, among others [40, 45]. It is possible to determine the applicability of reconstructions solely based on these insights. Second, several chaos measures and nonlinear metrics are calculated, such as the sample entropy (see [51]), Lyapunov exponents (refer to [52, 53]) or the Hurst exponent (see [54]). These insights are relevant for determining the nature of the underlying dynamical system based upon mathematical procedures. Third, if applicable, (strange) attractor reconstruction algorithms can be implemented to reconstruct the system's attractor visually. Fourth, an independent recurrence quantification analysis (RQA) paired with discrete wavelet transformations (DWT, refer to [55]) can be applied to (a) determine the existence of various sub-dynamics and (b) exploit the denoted recurrence properties mathematically as well as visually [6, 8, 56]. This reveals hidden characteristics of the analyzed datasets. Fifth, spectral characteristics, especially exploitable in forecasting by applying fractional calculus, are analyzed via wavelet-based multiresolution analysis (MRA) [57–59].

#### **Figure 1.**

*Generalized chaos analysis framework for the determination of the empirical DGP and underlying empirical system characteristics based on scalar time-series data, taken with permission of [40, 44]. Step I (prerequisites and standard tests) consists of testing prerequisites, which are required to conduct a nonlinear dynamical analysis. Therefore, noise reduction is mandatory, followed by tests for stationarity, Gaussianity (distribution in general), nonlinearity, and space-time separations, which can prevent an analysis. Step II (chaos measures and tests) encompasses a collection of effective nonlinear dynamical or chaotic measures. First, a correlation sum scheme is applied to determine and test significant 'scaling regions'. Moreover, the dimensionalities and properties of the system are tested (e.g. correlation dimension, Lyapunov exponents). Furthermore, information content via sample entropy is analyzed, among others. Step III (phase space reconstruction) involves the proper reconstruction of the system and a graphical representation using embedding theorems such as the traditional Takens' embedding. Step IV (recurrence quantification analysis) is an independent confirmation of the previous steps, namely, the ability to describe and quantitatively measure the characteristics of the underlying dynamics, optionally with the application of rolling window scale averages and a subsequent discrete wavelet transformation (DWT) application to determine the potential existence of sub-dynamics within the data. Furthermore, the quantification is not dependent on the prerequisites of steps I–III. Step V (multiresolution analysis) elaborates on the spectral properties of the data, elucidated via continuous wavelet power spectra (CWT). Step VI (distributions and power-laws) determines power-law characteristics via multifractal detrended fluctuation analysis (MFDFA) and distributional coherence tests.*

Finally, (multi)scaling and (multi)fractal characteristics are elaborated on via the conduction of a multifractal analysis. The multifractal analysis includes a multifractal detrended fluctuation analysis (MFDFA, see [60]) as its basis, which renders (locally) minimum and maximum Hurst exponents graphically visible, while subsequently providing inherent scaling coefficients. In particular, generalized Hurst exponents, multifractal scaling exponents and the multifractal scaling spectrum can be derived thereafter. Moreover, distributional coherence tests can be applied to validate the 'least bad' distributional fit and to determine whether a power law is present in the data (refer to [61]).
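To make the MFDFA step more tangible, the following minimal numpy sketch computes generalized Hurst exponents h(q); it is purely illustrative and not the chapter's reference implementation, and all parameter choices (scales, q values, first-order detrending) are hypothetical defaults chosen for demonstration:

```python
import numpy as np

def mfdfa(x, scales, q_values, order=1):
    """Minimal MFDFA sketch: returns generalized Hurst exponents h(q)."""
    profile = np.cumsum(x - np.mean(x))          # integrated profile of the series
    hq = []
    for q in q_values:
        log_F = []
        for s in scales:
            n_seg = len(profile) // s
            rms = []
            for i in range(n_seg):
                seg = profile[i * s:(i + 1) * s]
                t = np.arange(s)
                coeffs = np.polyfit(t, seg, order)   # local polynomial trend
                rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
            rms = np.asarray(rms)
            if q == 0:
                F = np.exp(0.5 * np.mean(np.log(rms ** 2)))   # q -> 0 limit
            else:
                F = (np.mean(rms ** q)) ** (1.0 / q)
            log_F.append(np.log(F))
        # slope of log F(s) versus log s estimates the generalized Hurst exponent h(q)
        hq.append(np.polyfit(np.log(scales), log_F, 1)[0])
    return np.array(hq)

rng = np.random.default_rng(42)
noise = rng.standard_normal(4096)
scales = np.array([16, 32, 64, 128, 256])
h = mfdfa(noise, scales, q_values=[-2, 0, 2])
```

For uncorrelated Gaussian noise, all h(q) should cluster near 0.5, while a pronounced spread of h(q) across q would indicate multifractality in the sense discussed above.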

In total, holistic insight into the underlying empirical DGP and its inherent characteristics is obtained, based on which appropriate models may be selected thereafter.

#### **2.2 Prerequisites and standard tests**

Initially, the prerequisites and standard tests are presented briefly. The time-series must be denoised properly, best via nonlinear filter schemes, to push the noise below a predefined threshold level [43, 45, 58]. Most time-series are contaminated by noise due to measurement errors and microstructure noise [39, 62]. Following Aguirre and Billings [41], noise exerts a considerable (negative) influence on the identifiability of processes inherited via chaotic dynamics. Hence, if a certain level of noise is exceeded, accurate estimations of dynamic models and subsequent analyses are voided in their entirety8 [41]. The only feasible approach, therefore, is the drastic reduction of noise levels to 'workable levels', since contaminating noise in evolutionary dynamics may be dynamical noise in either additive or multiplicative specification, thus disrupting the dynamical identification on several, even small, scales [26, 63]. Regarding the nonlinear filter structures, two criteria have to be met, namely, (1) the applied filters are required to be unbiased and (2) the residual variance of the filters matches the noise variance [41]. Please note that some nonlinear filters, such as the median filter, introduce (artificial) autocorrelations in the data, which should be avoided; thus, wavelet filters are deemed favorable for nonlinear denoising (refer to [55, 65, 66]).
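As a purely illustrative sketch of wavelet-based denoising (not the specific filters cited above), the following numpy code applies a Haar wavelet transform with soft thresholding; the universal threshold and the MAD-based noise estimate are common textbook choices, assumed here for demonstration:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform."""
    s2 = np.sqrt(2.0)
    return (x[0::2] + x[1::2]) / s2, (x[0::2] - x[1::2]) / s2

def haar_idwt(approx, detail):
    """Inverse of one Haar DWT level."""
    s2 = np.sqrt(2.0)
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / s2
    out[1::2] = (approx - detail) / s2
    return out

def wavelet_denoise(x, levels=3):
    """Soft-threshold the detail coefficients (universal threshold)."""
    approx, details = np.asarray(x, dtype=float), []
    for _ in range(levels):
        approx, d = haar_dwt(approx)
        details.append(d)
    # noise scale estimated from the finest detail level via the MAD
    sigma = np.median(np.abs(details[0])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))
    details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]
    for d in reversed(details):
        approx = haar_idwt(approx, d)
    return approx

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 1024)   # length must be divisible by 2**levels
clean = np.sin(t)
noisy = clean + 0.4 * rng.standard_normal(t.size)
denoised = wavelet_denoise(noisy, levels=4)
```

The denoised series should lie measurably closer to the clean signal than the noisy input, illustrating the 'workable noise level' requirement before any dynamical identification is attempted.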

Once the time-series is successfully denoised, standard statistics can be applied to elaborate on primal insights into the underlying mechanics [40]. Within the framework, the first property, which is destructive to reconstruction algorithms if missing, is stationarity<sup>9</sup> [40]. One has to exert special strictness in terms of stationarity; thus, proposing a 1% significance level for two successive tests is deemed favorable [44, 45]. For the framework, the augmented Dickey-Fuller (ADF) test and the more powerful Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test are executed (refer to [67, 68]), which both have to concur for the results to be regarded as valid in terms of stationarity. To elaborate on the initial test of distributional characteristics, a Kolmogorov-Smirnov (KS) test for a Gaussian specification (refer to [69]) is conducted, yet other specifications are possible. Nonetheless, a 1% level of significance is recommended to adhere to the strictness of the presuppositions of the analysis. Moreover, to test for the existence of nonlinearity, which is a prerequisite for the existence of chaotic dynamics, the BDS test (refer to [38]) is executed. Please note that, due to its stated omnibus character, it is only applied to identify nonlinearity in general and specifically not to distinguish stochasticity from chaoticity [39]. Further note that sufficiently large embedding dimensions have to be selected for the BDS test, as well as for subsequent methodologies, to obtain practical insights.
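ADF and KPSS implementations are available in standard statistical libraries (for instance, `adfuller` and `kpss` in statsmodels). As a self-contained illustration of the distributional check at the strict 1% level, the following scipy-based sketch runs a KS test against a fitted normal; the heavy-tailed comparison sample is a hypothetical stand-in for fat-tailed return data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
gaussian = rng.standard_normal(2000)
heavy_tailed = rng.standard_t(df=2, size=2000)   # fat tails, as common in returns

def is_gaussian(x, alpha=0.01):
    """KS test against a fitted normal; strict 1% level as proposed in the framework."""
    z = (x - x.mean()) / x.std(ddof=1)           # standardize before comparing to N(0,1)
    return stats.kstest(z, 'norm').pvalue > alpha
```

Under this check, the Gaussian sample passes while the heavy-tailed one is rejected, mirroring the strictness recommended above. Note that standardizing with estimated parameters makes the plain KS test conservative, which is acceptable here since the framework only demands a strict screening.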

Lastly, correlation structures have to be elucidated, beginning with the calculation of autocorrelation functions (ACFs, see [70]) with sufficiently large lags (e.g. 100–300). The ACFs serve as the basis for the validation of potential reconstructions (see Section 2.4) and indicate whether analysis disturbance is given. Moreover, following Kantz

<sup>8</sup> For example, too much noise leads to test rejections, disrupts the Grassberger-Procaccia algorithm (see [46, 47]) and thus the correlation dimension estimates, and alters the Lyapunov exponent calculations [63, 64].

<sup>9</sup> Even if scientifically debatable, 'brute-force' methods such as logarithmic distances will provide sufficient results, since the sole purpose is analysis, not forecasting; thus, no drawbacks are to be expected.

and Schreiber [45], nonzero autocorrelations are deemed problematic, owing to trajectory vectors being closely located in phase space due to continuously evolving time, which is also known as temporal correlation. To determine a relevant 'scaling region' via the application of correlation sum schemes (see [46, 47]), the absence of temporal correlations is mandatory, owing to fitting issues in regional curve shapes and the lack of invariance of said correlation sums, as depicted in [45]. Hence, dynamically correlated time-series violate the estimation requirements, and if the correlations are sufficiently large or, worse, oscillating, the analysis is futile [45].
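The contrast between a temporally uncorrelated series and a strongly correlated one can be demonstrated with a short numpy ACF sketch; the AR(1) coefficient 0.95 below is a hypothetical choice used only to produce visible temporal correlation:

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function up to max_lag (lag 0 included)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x) / len(x)
    return np.array([1.0] + [np.dot(x[:-k], x[k:]) / (len(x) * var)
                             for k in range(1, max_lag + 1)])

rng = np.random.default_rng(1)
white = rng.standard_normal(5000)

ar1 = np.empty(5000)            # strongly correlated AR(1) process
ar1[0] = 0.0
for i in range(1, 5000):
    ar1[i] = 0.95 * ar1[i - 1] + rng.standard_normal()

acf_white = acf(white, 100)     # decays immediately: no temporal correlation
acf_ar1 = acf(ar1, 100)         # decays slowly: problematic temporal correlation
```

In the framework's terms, a slowly decaying ACF such as `acf_ar1` flags exactly the temporal correlation that disturbs correlation sum estimation, whereas the white-noise ACF drops to near zero after lag 0.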

To analyze temporal correlations, Provenzale et al. [71] propose estimates of the correlation time by applying space-time separation plots, presupposing pairs of points in phase space to be dependent on a threshold distance and, additionally, on the time elapsed between the respective measurements. Hence, the contour curves of said plots have to saturate and remain at an acceptable, non-oscillating boundary level [71]. The existence of a sufficient 'scaling region' is the premise for successful reconstructions [45]. Building upon the correlation sum scheme, determining the slopes of each correlation sum curve per selected embedding dimension results in an estimate of the (fractal) correlation dimension10, which is plotted by itself and has to saturate as well (refer to [45]). A novelty within the framework for determining the validity of the underlying 'scaling region' is the step difference test proposed by Vogl and Rötzel [40], which tests the step differences of the correlation sum curves against zero in a Student's t-test and graphically examines the resulting p-value heatmap. One can select the minimum embedding dimension as the one with no p-value above the 1% significance level. Note that the existence of an ongoing 'scaling region' is also directly visible in the heatmap.
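To make the correlation sum concrete, the following minimal numpy sketch computes a Grassberger-Procaccia-style correlation sum with a Theiler window to exclude temporally close pairs; the Chebyshev norm, the Theiler window of 10 and the radii are illustrative assumptions, not prescriptions of the framework:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Delay-coordinate embedding of a scalar series."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def correlation_sum(x, dim, tau, radii, theiler=10):
    """Fraction of point pairs closer than r, excluding temporal neighbours."""
    emb = delay_embed(np.asarray(x, dtype=float), dim, tau)
    n = len(emb)
    dists = []
    for i in range(n):
        j0 = i + theiler + 1                  # Theiler window: skip close-in-time pairs
        if j0 < n:
            dists.append(np.max(np.abs(emb[j0:] - emb[i]), axis=1))  # Chebyshev norm
    dists = np.concatenate(dists)
    return np.array([np.mean(dists < r) for r in radii])

rng = np.random.default_rng(3)
x = rng.standard_normal(2000)
radii = np.array([0.5, 1.0, 2.0])
c2 = correlation_sum(x, dim=3, tau=1, radii=radii)
```

C(r) grows monotonically with r, and the slope of log C(r) versus log r inside a valid 'scaling region' estimates the correlation dimension; the step difference test described above operates on exactly such correlation sum curves across embedding dimensions.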

#### **2.3 Chaos measures and tests**

In addition to the prerequisites, several singular chaos metrics are worth determining in order to gather further initial insights into the potential underlying nature of the time-series dynamics under analysis [40]. First, the sample entropy, as proposed in Richman and Moorman [51], is calculated, reflecting information content and self-similarity characteristics, thus delivering insights into the presence of fractality within the data.
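A compact numpy sketch of the sample entropy follows; the template length m = 2 and tolerance r = 0.2 standard deviations are conventional textbook defaults assumed here for illustration, not parameters mandated by the framework:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy sketch (in the spirit of Richman & Moorman): -ln(A/B)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)                 # tolerance radius

    def count_matches(m):
        # count template pairs whose Chebyshev distance is below r
        templates = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
        count = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d < r)
        return count

    b = count_matches(m)                     # matches of length m
    a = count_matches(m + 1)                 # matches of length m + 1
    return -np.log(a / b)

rng = np.random.default_rng(5)
regular = np.sin(np.linspace(0, 40 * np.pi, 2000))     # highly predictable
irregular = rng.standard_normal(2000)                  # unpredictable
```

A regular, self-similar signal yields low sample entropy, whereas a random series yields a high value, which is exactly the information-content contrast exploited in this analysis step.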

Furthermore, various Lyapunov exponents are determined, namely, (1) the maximum Lyapunov exponent, (2) the Lyapunov spectrum and (3) the Lyapunov time. Lyapunov exponents measure the chaotic strength of a dynamical system via the exponential convergence or divergence of nearby trajectories in phase space [45, 73]. It is possible to calculate as many Lyapunov exponents as there are phase space dimensions, i.e. the number of the estimated embedding dimension, leading to the Lyapunov spectrum, which indicates whether the underlying dynamical system is conservative or dissipative [74]. The largest exponent is labeled the maximum Lyapunov exponent, depicting the exponential divergence or convergence of close trajectories, and can be determined via the algorithm of Rosenstein et al. [75]. Note that a positive maximum Lyapunov exponent in combination with a negative Lyapunov spectrum sum is mostly seen as a sign of chaos, yet is critiqued for the lack of distributional tests [11]. Therefore, a distributional rationale in the form of the Bask-Gençay bootstrapping test is favorable, since it provides

<sup>10</sup> In finite scalars like time-series, according to Ramsey et al. [72], correlation dimensional estimates tend to return artificially smaller values than the theoretically assumable fractal dimension.


a significance indication, particularly in cases of small, positive, beyond-zero maximum Lyapunov exponents at a sufficient level of significance [76]. Please note that several tens of thousands up to a hundred thousand bootstrapping steps are advisable to obtain reliable results. Moreover, the Lyapunov time represents the inverse of the maximum Lyapunov exponent and thus implies the time span the system requires to render itself chaotic and non-predictable, i.e. the time in which the exponentially growing errors remain in a 'forecastable' range before diverging too far [53]. The Lyapunov time is interpretable in either time-series units or in SI units [seconds] for real-world applications [53].
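A heavily simplified sketch in the spirit of the Rosenstein et al. estimator is given below, tested on the chaotic logistic map whose maximum Lyapunov exponent is known to be ln 2 ≈ 0.693; the embedding parameters, temporal separation and fit range are illustrative assumptions, and the published algorithm contains refinements omitted here:

```python
import numpy as np

def rosenstein_mle(x, dim=2, tau=1, min_tsep=10, k_max=5):
    """Sketch of a Rosenstein-style maximum Lyapunov exponent estimate."""
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
    usable = n - k_max                      # leave room to follow pairs k steps ahead
    div = np.zeros(k_max)
    counts = np.zeros(k_max)
    for i in range(usable):
        d = np.linalg.norm(emb[:usable] - emb[i], axis=1)
        d[max(0, i - min_tsep):i + min_tsep + 1] = np.inf  # exclude temporal neighbours
        j = int(np.argmin(d))               # nearest neighbour in phase space
        for k in range(k_max):
            sep = np.linalg.norm(emb[i + k] - emb[j + k])
            if sep > 0.0:
                div[k] += np.log(sep)
                counts[k] += 1
    curve = div / counts                    # mean log-divergence per step
    # slope of the mean log-divergence curve estimates the maximum Lyapunov exponent
    return np.polyfit(np.arange(k_max), curve, 1)[0]

# chaotic logistic map x_{t+1} = 4 x_t (1 - x_t)
x = np.empty(3000)
x[0] = 0.3
for i in range(1, 3000):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

lam = rosenstein_mle(x)
lyapunov_time = 1.0 / lam   # forecastability horizon in time-series units
```

The reciprocal of the estimated exponent directly yields the Lyapunov time discussed above, i.e. the horizon beyond which forecasts degrade exponentially.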

Finally, the Hurst exponent or in the case of non-stationary data, the detrended fluctuation analysis (DFA) alpha value is calculated to obtain in-depth information about the evolutionary nature of the dynamical system [54, 77, 78]. In an ongoing debate, the interpretation of the Hurst exponent and its initial interpretation by Benoit Mandelbrot (see [79, 80]) is challenged [16, 25, 44].

The Hurst exponent is interpreted as follows: (1) the system represents a Wiener process11 should the Hurst exponent equal exactly 0.5, (2) the system reveals long memory effects if it exceeds 0.5 and (3) it is a mean-reverting system should the exponent value lie below 0.5 [16]. Nonetheless, recent empirical studies (refer to [16, 44]) state that the exceedance of 0.5 by the Hurst exponent reveals measurable fractal trends (or trending characteristics), which are an explicative rationale for momentum effects on financial markets. Within the setting of this analysis, the latter, novel indication is more suitable. The exceedance of 0.5 indicates persistency and the existence of a power law, resulting in the denoted fractal characteristics [16, 44, 81]. Additionally, the Hurst exponent can be applied to estimate the fractal dimensionality (2-H) [82]. A final, additional novelty is the adaptation of the Bask-Gençay test to the Hurst exponent, as depicted in Vogl [44], ensuring that said exponent is tested for significance [83]. In total, the second step enables the direct elucidation of the dynamical system's properties, thus providing a solid indication of its underlying evolutionary nature.
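The DFA alpha value mentioned above can be sketched in a few lines of numpy; the scales chosen and the first-order detrending are illustrative defaults, and the two synthetic series merely demonstrate the memoryless versus persistent regimes:

```python
import numpy as np

def dfa_alpha(x, scales):
    """DFA scaling exponent alpha (matches the Hurst exponent for stationary series)."""
    profile = np.cumsum(x - np.mean(x))      # integrated profile
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        rms = []
        for i in range(n_seg):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
            rms.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(rms)))
    # slope of log F(s) versus log s is the scaling exponent alpha
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(11)
white = rng.standard_normal(8192)                   # memoryless: alpha near 0.5
persistent = np.cumsum(rng.standard_normal(8192))   # random walk: alpha near 1.5
scales = np.array([16, 32, 64, 128, 256])
alpha_white = dfa_alpha(white, scales)
alpha_walk = dfa_alpha(persistent, scales)
```

An alpha near 0.5 corresponds to the Wiener-process case in the interpretation above, while values above 0.5 indicate the persistency and fractal trending characteristics discussed in the text; the fractal dimension estimate then follows as 2 minus the (stationary-case) exponent.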

#### **2.4 Phase space reconstruction**

An important step toward conducting successful predictions of nonlinear time-series systems is the method of attractor reconstruction, tracing back to the 1920s (refer to [84]) and the ideas of Packard et al. [85], Ruelle [86] and Takens [48], which represents the calculation of various invariant quantities required to characterize the underlying system [87]. This is mostly the presupposition for the nonlinear dynamical analysis of a time-series and for state space model implementations [88]. The main contribution of reconstruction is the recovery of a phase space that preserves the geometrical invariants (e.g. eigenvalues, fixed points or the fractal dimension) of the referring system attractors, including the Lyapunov exponents of the according trajectories [88]. Phrased differently, attractor reconstruction can be seen as a method to recreate the full deterministic state space based upon a lower-dimensional time-series (i.e. a scalar) [87]. Hence, state space reconstruction is the generation of a multidimensional, deterministic state space out of the underlying, sampled time-series data [88]. Furthermore, embedding is, thus, the mathematical

<sup>11</sup> Only in this scenario the efficient market hypothesis taken out of quantitative finance holds and is violated else.

process by which an attractor is reconstructable presupposing a given set of scalar measurements, i.e. time-series datasets owing to dimensional preservation characteristics [43].

The accuracy of the attractor reconstruction depends directly on the methodology applied during the reconstruction process and also influences the Lyapunov spectrum [87]. Several problems may occur, since the Lyapunov exponent cannot be labeled as invariant toward initial conditions, thus stating a dependence on sample size within the reconstruction of time-series trajectories in phase space [11]. Following Nichols and Nichols [87], several methods for delay-time and embedding dimension selection exist for the standardized delay coordinate reconstruction, namely, the comparison between ACFs and the probabilistic concept of mutual information, while false nearest neighbor approaches are feasible to minimize said delay vectors. Nonetheless, the most common procedure is the delay-time reconstruction in combination with various embedding dimensions [89]. A non-exhaustive overview is proposed in **Table 1**. In more detail, the delay-time is defined as the time-span between two neighboring points applied to reconstruct the attractor, while the embedding dimension represents an estimate of the true dimension of the assumed phase space, which is intended to be reconstructed [97]. To refer back to the denoising presupposition given in Section 2.2, the underlying theory requires noise-free data, on which natural processes evolve in time; otherwise, difficulties in state variable estimation arise [87].
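The false nearest neighbor idea mentioned above can be illustrated with a short sketch. The tolerance `rtol`, the sample length and the Hénon toy system below are illustrative assumptions, not the author's specification; the method flags a neighbor as "false" when the distance to it grows disproportionately once the embedding dimension is increased by one:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Delay-coordinate vectors [x_t, x_{t+tau}, ..., x_{t+(dim-1)tau}]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def fnn_fraction(x, dim, tau, rtol=15.0):
    """Fraction of false nearest neighbors when going from dim to dim + 1."""
    emb = delay_embed(x, dim, tau)
    emb_next = delay_embed(x, dim + 1, tau)
    n = len(emb_next)                      # points usable in both embeddings
    false_count = 0
    for i in range(n):
        d = np.linalg.norm(emb[:n] - emb[i], axis=1)
        d[i] = np.inf                      # exclude the self-match
        j = np.argmin(d)
        extra = abs(emb_next[i, -1] - emb_next[j, -1])
        if d[j] > 0 and extra / d[j] > rtol:
            false_count += 1
    return false_count / n

# Hénon map: a 2-D chaotic system observed through its x-coordinate only
n_pts = 2000
xs = np.empty(n_pts)
x, y = 0.1, 0.1
for t in range(n_pts):
    x, y = 1 - 1.4 * x * x + y, 0.3 * x
    xs[t] = x

fractions = [fnn_fraction(xs, m, tau=1) for m in (1, 2, 3)]
print(fractions)
```

For this 2-D system the fraction of false neighbors should collapse once the embedding dimension reaches two, mirroring how the minimum embedding dimension is selected in practice.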

According to Takens [48], in the absence of noise contaminants it is always feasible to embed a scalar time-series into a state space. Assuming the existence of noise, two trajectories with the same initial condition potentially evolve differently and converge to different asymptotic behavior; thus, even exact knowledge of said initial conditions does not guarantee the predictability of the system's final state [88, 98]. Therefore, noise has to be treated as an influential source of unpredictability, which cannot be fully disclosed via the deployment of conventional methodologies of nonlinear dynamical analysis such as exit basins or uncertainty exponents [98].


#### **Table 1.**

*Overview of existing reconstruction algorithms within the scientific literature.*

#### *Chaos Analysis Framework: How to Safely Identify and Quantify Time-Series Dynamics DOI: http://dx.doi.org/10.5772/intechopen.106213*

Returning to practical implementations, the choice of the delay-time (τ) is relevant, as it directly influences the success and accuracy of reconstruction algorithms [87]. A too-small selection results in vectors that are very near and almost identical, thus carrying redundant information and leading the attractor to collapse onto the 45° line in state space [87]. In contrast, a too-large selection will produce uncorrelated (unrelated) coordinates owing to exponentially growing errors in chaotic regimes, resulting in vectors decorrelated from the underlying time-series [87]. Henceforth, the two boundary scenarios have to be well-balanced to receive a proper reconstruction, yielding maximal independence while preserving dynamically related coordinate properties [87]. The most commonly applied variant is the ACF delay, with several possibilities, namely, (1) the first zero crossing, (2) crossing of 0.1 or 0.5 and (3) not exceeding 1/e [87]. Please note that ACFs propose linear time evolutionary calculations and may, thus, be misleading [87]. In the author's experiments, the most accurate representations were achieved by selecting variant (1), i.e. the first zero crossing, or, in modification, the first zero crossing after which subsequent coefficients additionally stay insignificant. Moreover, as shown by Sauer et al. [99], the embedding is topologically equivalent to the true attractor if the embedding dimension is chosen larger than two times the fractal dimension of said attractor. Note that once the embedding dimension is selected sufficiently high, a reconstruction almost always resembles an embedding, independent of parameter selections [88]. Mostly, delay-coordinates are selected, yet there also exist the families of derivative and principal component reconstructions, combined in the spectral embedding (see [10]) [88].
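Variant (1), the first zero crossing of the ACF, combined with a plain delay-coordinate embedding, can be sketched as follows; the noisy sine used as input is a placeholder signal, not one of the chapter's datasets:

```python
import numpy as np

def acf_first_zero(x, max_lag=200):
    """Delay-time choice (1): the first zero crossing of the ACF."""
    x = np.asarray(x, float) - np.mean(x)
    var = np.dot(x, x)
    for lag in range(1, max_lag):
        acf = np.dot(x[:-lag], x[lag:]) / var
        if acf <= 0:
            return lag
    return max_lag

def takens_embed(x, dim, tau):
    """Delay-coordinate vectors [x_t, x_{t+tau}, ..., x_{t+(dim-1)tau}]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

# illustrative toy signal: a lightly noised sine wave
t = np.linspace(0, 20 * np.pi, 3000)
signal = np.sin(t) + 0.05 * np.random.default_rng(0).standard_normal(len(t))
tau = acf_first_zero(signal)          # roughly a quarter period for a sine
emb = takens_embed(signal, dim=3, tau=tau)
print(tau, emb.shape)
```

The embedding dimension (`dim=3` here) would, in a real application, come from step two of the analysis, e.g. via the p-value heatmap or the two-times-fractal-dimension rule discussed above.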
Within practical applications, the author deems a combination of (1) the Takens delay-time embedding, which, unfortunately, resembles a 'spaghetti monster' in most cases, and (2) the more sophisticated variant by Song et al. [10], applying a spectral embedding in combination with a k-nearest neighbors algorithm (k-NN), principal component analysis (PCA) and Laplacian Eigenmaps, as very suitable. In the author's empirical experiments, the number of PCA components is best selected to equal the embedding dimension, while the number of neighbors for the k-NN can best be determined by the heuristic 0.01 · len(data) · 1.5τ [40]. Moreover, to receive a correct reconstruction, the properties stated in **Table 2** must be strictly adhered to.
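A minimal sketch of the spectral-embedding variant, assuming scikit-learn is available: delay vectors are reduced by PCA to the embedding dimension and then passed to a Laplacian-Eigenmaps-style spectral embedding over a k-NN graph. The toy signal, the delay parameters and the neighbor count `k` are placeholder choices; in practice the text's heuristic 0.01 · len(data) · 1.5τ should be substituted:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import SpectralEmbedding

def delay_embed(x, dim, tau):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

# illustrative toy signal (placeholder for a properly pre-analyzed series)
t = np.linspace(0, 16 * np.pi, 1500)
x = np.sin(t) + 0.01 * np.random.default_rng(0).standard_normal(len(t))
vectors = delay_embed(x, dim=5, tau=20)

dim_target = 3                       # embedding dimension from step two
reduced = PCA(n_components=dim_target).fit_transform(vectors)
k = 30                               # illustrative neighbor count only
emb = SpectralEmbedding(n_components=dim_target,
                        affinity="nearest_neighbors",
                        n_neighbors=k).fit_transform(reduced)
print(emb.shape)
```

The resulting `emb` array can then be plotted in 3D as the reconstructed attractor, provided the prerequisites of **Table 2** are fulfilled.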


#### **Table 2.**

*Overview of parameter selections for attractor reconstruction specifications.*

#### **2.5 Recurrence quantification analysis**

Independently of previous attractor reconstruction and other prerequisites, the RQA builds upon the introductorily denoted exploitation of the recurrence property12 of dynamical systems and is thus applicable to any time-series data (e.g. [6]) [100]. The RQA is conducted by quantifying the recurrence plot (RP) as introduced in Eckmann et al. [101]. The RP and RQA benefit from the preservation13 of the time-ordering information content of the analyzed data as well as the contained spatial structure [22]. With the RQA one may detect fundamental characteristics underlying a dynamical system, namely, the recurrence states, resulting in a 'robust to noise and data limitations' method of quantifying and identifying (chaotic) dynamics [22]. Thus, respective trajectories and transitions are rendered visible, in combination with the degree of complexity, i.e. the fractal structures, which may be inherent in the analyzed data [8]. To conduct an RQA, a threshold level has to be decided on, which determines whether nearby points are counted as recurrent or not [7]. Van den Hoorn et al. [102] propose several threshold determination methods, yet, traditionally, according to Koebbe and Mayer-Kress [103] as well as Zbilut and Webber Jr. [104], the threshold value should not exceed 10%. Additionally, the threshold value should not be lower than five times the sample noise [6]. Furthermore, it is common to exclude the identity line (proportional to the maximum Lyapunov exponent) of the RP from the analysis [105]. First, the RPs can be interpreted visually, which is presented in Marwan et al. [6], p. 251. Second, there exist two different types of measures taken out of an RQA, namely, minima-dependent versus single-value measures [6]. One may plot the minima dependency for several selections and choose an appropriate value to quantify the RPs.
The length of diagonal lines represents the duration of local trajectory evolutions, while vertical (horizontal) lines mark time durations in which the underlying dynamics are trapped (labeled as intermittency or laminar states) [7]. The commonly applied measures are depicted in Vogl and Rötzel [40], **Table 3**. Finally, to distinguish the results from stochastic, chaotic or other systems, one may either apply a Wiener process realization, a mathematical chaotic system realization or respective surrogate14 datasets (see [45]). Paired with the conceptions taken out of Section 2.6, signal theoretical decompositions can be applied to identify potential hidden sub-dynamics [56]. To obtain the most information out of the analyzed data signal, wavelets with better localization properties are commonly proposed in form of a DWT filter bank [44]. The resulting low-pass and high-pass decompositions can then be applied as novel datasets to the RQA, and sub-RPs can be created and quantified to demonstrate potential sub-dynamics [56]. Note that the process can be repeated as often as required, should more than residual noise remain after a respective decomposition or iteration.
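The construction of an RP and two standard RQA measures (recurrence rate and determinism from diagonal line lengths) can be sketched as follows; the 10%-of-maximum-distance threshold follows the rule of thumb above, while the periodic toy orbit and the minimum line length are illustrative choices:

```python
import numpy as np

def recurrence_plot(emb, ratio=0.1):
    """Binary recurrence matrix; threshold = 10% of the maximum distance."""
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d <= ratio * d.max()).astype(int)

def recurrence_rate(rp):
    """Share of recurrent points, excluding the line of identity."""
    n = len(rp)
    mask = ~np.eye(n, dtype=bool)
    return rp[mask].mean()

def determinism(rp, lmin=2):
    """Share of recurrent points lying on diagonal lines of length >= lmin."""
    n = len(rp)
    diag_hist = {}
    for k in range(1, n):                  # diagonals above the identity line
        run = 0
        for v in np.diagonal(rp, offset=k):
            if v:
                run += 1
            elif run:
                diag_hist[run] = diag_hist.get(run, 0) + 1
                run = 0
        if run:
            diag_hist[run] = diag_hist.get(run, 0) + 1
    total = sum(l * c for l, c in diag_hist.items())
    det = sum(l * c for l, c in diag_hist.items() if l >= lmin)
    return det / total if total else 0.0

# perfectly periodic toy orbit: determinism should be close to 1
t = np.linspace(0, 8 * np.pi, 400)
emb = np.column_stack([np.sin(t), np.cos(t)])
rp = recurrence_plot(emb)
print(round(recurrence_rate(rp), 3), round(determinism(rp), 3))
```

A Wiener process realization run through the same functions would show a markedly lower determinism value, which is exactly the kind of percentage comparison the text describes.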

<sup>12</sup> The recurrence property originates from a topological approach and is given by the Poincaré recurrence theorem.

<sup>13</sup> Presupposing the existence of a low-dimensional attractor, presence of dependence on initial conditions and the manifestation of said recurrence property.

<sup>14</sup> I.e. destroying given determinism by shuffling via FTs. Then, comparison with original data.

#### **Table 3.**

*Overview of steps one and two for all datasets with implication.*

#### **2.6 Multi-resolution analysis**

To be very brief, time-series data are localized in the time domain, yet may also yield exploitable frequency components, which in case of non-stationarity, non-periodicity or other unfavorable characteristics will not be extractable via classical Fourier transformations (FT) [13, 106, 107]. Therefore, wavelets (i.e. tailored or bi-orthogonal) applied in a multi-resolution analysis (MRA) are well suited to extract underlying frequency information while retaining as much time localization information as possible [13, 16]. Thus, for filtering or denoising activities on time-series, discrete cascade filter banks with wavelet shrinkage (see [55, 108]) are applicable at length for various scales [40]. Moreover, to obtain insights into time-frequency localizations of to-be-analyzed datasets, one may apply a continuous wavelet transformation (CWT), resulting in a spectrum [57].
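The cascade filter bank idea can be demonstrated without any wavelet library by writing a one-level Haar analysis step (the simplest orthogonal DWT) directly in NumPy; the Haar wavelet and the three-level cascade are illustrative simplifications, not the tailored or bi-orthogonal wavelets discussed above:

```python
import numpy as np

def haar_step(x):
    """One level of a Haar analysis filter bank: low-pass (approximation)
    and high-pass (detail) coefficients."""
    x = np.asarray(x, float)
    if len(x) % 2:
        x = x[:-1]                          # truncate to even length
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def dwt_cascade(x, levels=3):
    """Repeatedly split the approximation branch, as in a cascade filter bank."""
    details = []
    approx = np.asarray(x, float)
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return approx, details

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 6 * np.pi, 512)) + 0.1 * rng.standard_normal(512)
approx, details = dwt_cascade(signal, levels=3)
print(len(approx), [len(d) for d in details])  # 64 [256, 128, 64]
```

The `approx` and `details` arrays correspond to the low-pass and high-pass decompositions that, as noted in Section 2.5, can be fed back into the RQA as novel datasets to expose sub-dynamics.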

#### **2.7 Distributions and power-laws**

To elaborate on power-law characteristics, it is important to denote the interconnections between chaotic dynamics, strange attractors, fractals and power-laws. In short, the phase space of a dissipative (chaotic) dynamical system will, over its evolution in time, deflate onto its own strange attractor, which is characterized via a fractal set [109, 110]. Generally, a fractal set yields a non-integer (non-Euclidean, thus generalized) dimension, namely, the Hausdorff-Besicovitch dimension, and is further characterized via self-similarity, i.e. multi-scaling, in addition to irregularity, non-differentiability and recursiveness [111]. Henceforth, a (multi)fractal system requires a local power-law contributing to the mentioned scaling properties [111]. A power-law is defined as a scalar relationship between two quantities and, thus, is characterized via scale invariance [112]. A fractal system with one scaling exponent is labeled monofractal, while multifractal systems require a singularity spectrum of exponents [111]. Referring back to the dissipative dynamical system, which deflates onto its strange attractor and thus is represented by a fractal set: the fractal set of a strange attractor is rendered visible via its Poincaré sections, which show intersections of said strange attractor [110, 111]. In more detail, the intersections of strange attractors are fractal sets, which are described via multi-scaling and, thus, via power-laws [111].

Analyzing time-series enables not only the reconstruction of potential (strange) attractors, but also opens the way to mathematically determine given power-laws (i.e. the multi-scaling characteristics) of the underlying (multi)fractal properties [113]. Yuan et al. [114] state two rationales for time-series multifractality, namely, (1) the existence of fat-tailed probability distributions and (2) nonlinear temporal correlations. To draw out the multifractal spectrum, one may apply a multifractal analysis built upon the MFDFA. The MFDFA visually depicts the scaling properties, as well as the (local) maximum and minimum Hurst exponents, also supporting the fractal trending interpretation discussed earlier [60]. Moreover, the generalized Hurst exponents, multifractal scaling exponents and the previously denoted multifractal scaling spectrum can be derived from the MFDFA output quantities [60]. In addition, calculating complementary cumulative distribution functions (CCDFs) and comparing them with power-law or other potential theoretical distribution types enables the more or less safe determination of power-law or other distribution fits [61]. However, as a word of absolute caution, the coherence tests for various distributions have to be interpreted very carefully. The coherence tests are calculated via paired distributional fitting comparisons based upon log-likelihood measures, alongside other parameters [61]. These serve the purpose of achieving insights into suitable distributions, which may describe the datasets best, or, to phrase it realistically, which at least represent the 'least worst' fit [61]. The coherence tests, thus, represent a comparison and no goodness of fit, which, as indicated, requires the reader to exert special care with the interpretation. It is advisable to fall back on graphical displays on log-log plots, which proved a useful guide in the author's practical implementations. Concluding the power-laws, the analysis is complete and the interpretation can carefully be exerted.
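The log-log inspection of a CCDF can be sketched as follows; note that the least-squares slope below is exactly the kind of rough graphical guide recommended above, not a rigorous tail estimator, and the Pareto sample with tail exponent 2 is a synthetic illustration:

```python
import numpy as np

def ccdf(data):
    """Empirical complementary cumulative distribution function P(X > x)."""
    x = np.sort(np.asarray(data, float))
    p = 1.0 - np.arange(1, len(x) + 1) / len(x)
    return x[:-1], p[:-1]                  # drop the last point (P = 0)

def loglog_slope(x, p, xmin):
    """Least-squares slope of the CCDF tail on log-log axes (rough guide)."""
    mask = x >= xmin
    return np.polyfit(np.log(x[mask]), np.log(p[mask]), 1)[0]

# Pareto sample with tail exponent alpha = 2 -> CCDF slope close to -2
rng = np.random.default_rng(7)
sample = (1.0 - rng.random(20000)) ** (-1.0 / 2.0)
x, p = ccdf(sample)
slope = loglog_slope(x, p, xmin=1.5)
print(round(slope, 2))
```

A visibly straight CCDF on log-log axes supports a power-law candidate; curvature suggests that the paired coherence tests are merely ranking 'least worst' alternatives.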

#### **3. Correct empirical specifications**

For each step of the analysis several algorithms are to be determined, and a large variety of them yield graphical insights, which can be either quantified or applied as a visual aid to deduce further insights and implications. Since a complete analysis as shown in Vogl and Rötzel [40] or Vogl [44] would vastly exceed the page limitations of this guide, the didactics of the practical display are as follows. First, this section will provide the idealistic outcome of a generic, mathematically tailored time-series based upon the Lorenz system (refer to [115]) to demonstrate the 'best-case' scenario as the generalized point of reference, while two additional real-world datasets are presented as a comparison, namely, (1) the change rate of the wind speed of Mars and (2) S&P500 1-minute tick return data. Second, the real-world datasets will be elaborated on in Section 4, since some hindrances are given and, thus, require analysis of potential misspecifications. The preliminary elaborations via steps one and two are depicted in **Table 3**, while **Figure 2** presents the correlation sum, the correlation dimensional scheme as well as the correlation structures for the generic dataset. Note that for attractor reconstructions, **Table 2** already proposes the favorable characteristics to enable the correct implementation of reconstruction algorithms. Steps three to six are only stated in Section 4 for the real-world examples. In general, the stricter the interpretation and analysis, the better the results of the reconstruction, correct specification of underlying empirical DGPs and subsequent modeling.

#### **Figure 2.**

*Generalized point of orientation via the generic dataset for determination of reconstruction possibility. (a) Shows the correlation sum plots, which visually depict 'scaling regions', (b) shows the correlation dimensional plot, which ideally saturates as shown, (c) states the heatmap of p-values for the step-test of the lines of (a), (d) states the ACFs for 300 lags, which are insignificant and (ideally) stay that way and (e) shows the space-time separation plot for 100 steps, which is very low and non-oscillating. Note that for step two of the analysis, the minimum embedding dimension can be taken out of the heatmap, namely, by picking the first row with only 1% significance or lower p-values. In the author's experiments, the heuristic of two times the fractal dimension as stated in the main text also agrees with this selection method.*

Moreover, while steps one and two as presented above (in addition to the potential reconstruction of step three) suffice to quantify the nature of the underlying system (i.e. whether it is dissipative, reconstructable or potentially chaotic), the analyses of steps four through six provide the exact quantified details of the system's characteristics and serve as a double confirmation procedure. Mainly, the RQA measure quantification provides exact details about the underlying empirical DGP, namely, a percentage comparison with surrogate data or pure stochastic (e.g. a Wiener process) or pure deterministic chaotic systems (e.g. a Lorenz system) (refer to [6, 40]). Thence, one is capable of pinpointing whether the underlying system is purely chaotic, purely stochastic or a mixture of both, and within which margins [40]. Subsequently, deriving frequency information, information about sub-dynamics, the existence of power-laws and multifractal spectra enables the correct model selection as the final outcome of the quantification. Nonetheless, a proper differentiation can be achieved after step four already. Since the full RQA output is too extensive for this display, it is omitted here; the reader is referred to [6, 8, 40, 56].
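The surrogate comparison mentioned above relies on Fourier-transform surrogates, i.e. shuffling phases while preserving the amplitude spectrum (see footnote 14); a minimal sketch, with an illustrative noisy sine standing in for the analyzed series:

```python
import numpy as np

def ft_surrogate(x, rng):
    """Fourier-transform surrogate: keep the amplitude spectrum, randomize
    the phases, thereby destroying determinism while preserving the linear
    correlation structure."""
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(spec))
    phases[0] = phases[-1] = 0.0            # keep DC and Nyquist bins real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=n)

rng = np.random.default_rng(3)
x = np.sin(np.linspace(0, 12 * np.pi, 1024)) + 0.1 * rng.standard_normal(1024)
s = ft_surrogate(x, rng)
# the power spectra of original and surrogate match almost exactly
print(np.allclose(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(s))))
```

Running the RQA on both `x` and `s` and comparing the measures in percentage terms then indicates how much of the observed structure survives the destruction of determinism.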

#### **4. Empirical negative examples**

Continuing the previous section, hereinafter, the results for the two real-world datasets are presented in **Figures 3** and **4**, while the remaining indications are provided in **Table 4**. The invalid reconstruction algorithm results via Takens delay-time embedding (refer to [48]) and via spectral embedding (refer to [10]) are proposed in **Figure 5** to clarify the relevance of a proper prerequisites analysis. Regarding the Mars wind speed change rates, clear deterministic traits and sub-dynamics are observable via the RQA, yet a clear identification as a chaotic system as well as a distinct reconstruction is not possible. This is illustrated by the lack of a clear 'scaling region', dropping correlation dimensions, high ACFs and temporal correlations, which render this analysis step invalid. Furthermore, no scale-independent multifractal scaling spectrum is visible and a nested power-law-exponential distribution is proposed as the 'least worst' distribution via the coherence tests (see [61]). In addition, no frequency information is determinable via CWT. Thus, the only insight generated is that it is a potentially chaotic, deterministic and dissipative system, while the exact modeling metrics are extractable out of the RQA quantification tables (refer to [6]). The (invalid) Takens reconstruction may suggest a non-chaotic attractor, since the results resemble a valid one in parts, yet this is an invalid approach nonetheless. Spurious chaotic measure results are obtained for the S&P500 1-minute return series, since according to the BDS test nonlinearity is excluded, while the Hurst exponent indicates clear mean-reversion. Furthermore, the absence of a 'scaling region' in the correlation sums and a non-saturating correlation dimension, in combination with oscillating temporal correlations, void any other step of the analysis or reconstruction. Regarding the reconstruction by Takens, the linear nature is determinable.
Moreover, the system has frequency information, yet no power-law nor multifractal scaling characteristics (in agreement with the Hurst exponent indication of mean-reversion). Following the RQA, sub-dynamics and low levels of determinism are given, while vast stochastic characteristics are dominant. A final concluding remark at this point considers the frequency of the data samples, namely, Vogl and Rötzel [40] observed chaos in daily S&P500 returns, while in S&P500 1-minute tick return data, mean-reversion is present, leading to the insight proposed by BenSaïda [39], namely, that the same system at different frequency levels may propose different dynamics, revealing a scale dependence of the underlying empirical DGP.

#### **Figure 3.**

*Results for the dataset wind speed change rate Mars (left) and S&P500 1-minute ticks (right). (a) States the correlation sum scheme, (b) the p-value heatmap, (c) the correlation dimensions, (d) the ACFs for 300 lags and (e) the space-time separation plots with 100 steps. Comparing with the generic dataset visually already reveals the conceptual differences and problems inherent in the analyzed data.*

Referring back to step three, namely, the attractor reconstruction, one may see various outcomes based upon false pretenses in the reconstruction results. In terms of extremely high ACFs or temporal correlations, the attractor is dispersed and flattened, while a lack of scaling characteristics results in singular 'spaghetti-like' lines. Furthermore, as stated in Nichols and Nichols [87], a deflation or stretching onto the 45° line of the 3D space is also possible. Note that in the case of a linear system such as the S&P500 1-minute tick returns, the Takens embedding only states straight lines, which clearly indicates the absence of nonlinearity. A proper reconstruction shows a closed and dense area and visible attractor-like structures. For reference, as stated in Vogl and Rötzel [40], a pure stochastic system such as a Wiener process will end up resembling a 'ball' with no trajectory structure. In regards to time-series data with higher dimensional estimates exceeding 3D spaces, the reconstructed graphical display may appear 'deformed', owing to the lack of degrees of freedom in the visual display. On a final note, the exertion of particular care regarding the prerequisites of the reconstruction is highly advised, since violations result in poor representations and false characteristics, which would build the groundwork for subsequent quantitative modeling attempts. Furthermore, it is advisable to alter the delay-times and dimension estimates in several iterations to be sure to hit the most 'representable' form of the time-series system under analysis, especially in more complex applications such as spectral embedding. Finally, the framework only provides the most basic intuitions or the minimum set of knowledge for the analysis to be possible at all; refinements are always encouraged. Taken together, the stated insights can be abstracted into a minimum set of requirements, which have to be fulfilled by potential model selections. Furthermore, one may reapply the whole analysis on the DWT sub-dynamics series to elaborate on potential hidden (strange) attractors.

#### **Figure 4.**

*Overview of RQA-DWT results for the display of RPs and sub-dynamics. (a) Is the DWT for the wind speed change rates of Mars and (c) for the S&P500 1-minute return ticks. In addition, (b) represents the approximation (left) and detail (right) coefficients for (a), while (d) represents the same for (c). It is denotable that both series consist of sub-dynamics. Note that the Mars detail coefficients may resemble a hidden chaotic subsystem, which can be separately analyzed.*

#### **Table 4.**

*Overview of steps four to six for all datasets with implications.*

#### **Figure 5.**

*Display of attractor reconstructions for the generic dataset based upon a Lorenz set (a, b), the wind speed change rates of Mars (c, d) and the S&P500 1-minute return ticks (e, f). (a), (c) and (e) represent the Takens delay-time embedding, yet (c) and (e) are proven to be not reconstructable. (b), (d) and (f) display the spectral embedding in combination with a k-NN algorithm and a PCA with Laplacian Eigenmaps. As with (c) and (e), note that the analysis shows (d) and (f) not to be reconstructable. It is visible that a violation of reconstruction prerequisites results in very poor reconstructions, since those are not to be conducted in the first place.*

#### **5. Concluding remarks**

Within this chapter, a practical guideline for the complete implementation of a combinatory chaos analysis framework, separately proposed in Vogl and Rötzel [40] for stationary and in Vogl [44] for non-stationary data, is presented in its entirety. The framework is proposed as an integrated, holistic approach to analyzing the empirical DGP of nonlinear time-series data and provides the possibility to distinguish chaoticity from stochasticity while referring to the underlying evolutionary dynamics. The analysis steps are elucidated, potential pitfalls and theoretical rationales stated, and prerequisites discussed in detail. Moreover, an 'idealistic' versus a 'negative' case is empirically and graphically introduced and debated based upon real-world time-series sets15. With this guide, the reader should be able to conduct the analysis themselves, without being prone to misspecifications and common errors present in the scientific literature.

Lastly, concluding remarks and current frontiers in the elaborated context are briefly stated. Current gaps and frontiers in the reconstruction of attractors are largely seen in the application of neural networks, evolutionary algorithms and other reconstruction methodologies to obtain sufficient and high-quality reconstructions and analysis insights (see [116, 117]). Nonetheless, the research field of time-series reconstruction and quantification of empirical DGPs is scarce and defined as a current gap in research, particularly in hindsight of novel technological advancements such as artificial intelligence solutions. To conclude, Nieto et al. [98] state unpredictability to be a 'fundamental topic' in the nonlinear scientific domain, owing to its consequences being rooted in the existence of sensitivity to initial conditions as the main trait of chaotic dynamics. Furthermore, no common understanding of unpredictability exists, since differing definitions may be applicable; e.g. problems in predicting trajectory evolutions may not be seen as a problem in hindsight of scattering problems, which only revolve around asymptotic behavior and thus define problems only in the prediction of final system states [98]. Furthermore, predictability in subsequently implemented models is a vast topic, which is neglected in the discussion of this chapter, yet deemed of utmost relevance to it.

<sup>15</sup> Even if not displayed in this chapter, during the preparation period, several different time-series have been analyzed, e.g. flood and river discharge series, wind power, energy prices, tweet-frequencies, nonlinear fluids and fundamental economic indicators, among others.

The stated framework can be enhanced further and shows several limitations, namely, it is computationally expensive and consists of many time-intensive algorithms and methods. Moreover, the selected methods are chosen due to their vast application in the scientific literature and not based on performance. Hence, no optimization has been conducted yet, owing to the goal dependence of the analysis framework, even if applicability to various time-series is given. Furthermore, there exists no way to resolve attractor reconstructions given the existence of high ACFs and high or oscillating temporal correlations. Moreover, the framework is graphically reliant, which is seen as a potential hindrance in terms of future automatization and application on larger data pools and automated decision rule generation. Nonetheless, to conclude, the presented framework is seen as the fundamental basis or minimal building block for future research, i.e. as the provision of a stepping stone toward more advanced, transparent and reliable insights originating from the scientific nonlinear dynamics community. The enablement to safely distinguish chaoticity from stochasticity, paired with the detailed characterization of the empirical time-series DGP, is expected to have a positive influence on the quantification, modeling and future prospects of the field, addressing a 40-year-old debate. Resolving the stated debate hopefully opens the way to more coherent insights and persistent knowledge about time-series systems and the quantification of the real world in various disciplines such as medicine, hydrology, economics and physics. The inherent paradigm shift is also expected to make model selection easier and more self-explanatory in the future of time-series predictions.

#### **Author details**

Markus Vogl
Executive Management, Markus Vogl {Business & Data Science}, Wiesbaden, Germany

\*Address all correspondence to: markus.vogl@vogl-datascience.de

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Gaul V, Kumar V. Allied Market Research. 2020. [Online]. Available: https://www.alliedmarketresearch.com/predictive-analytics-market. [Accessed May 27, 2022].

[2] Markets and Markets. Markets and Markets. 2021. [Online]. Available: https://www.marketsandmarkets.com/Market-Reports/predictive-analytics-market-1181.html. [Accessed May 27, 2022].

[3] Aguilar-Rivera R, Valenzuela-Rendón M, Rodríguez-Ortiz J. Genetic algorithms and Darwinian approaches in financial applications: A survey. Expert Systems with Applications. 2015;**42**:7684-7697

[4] Poornima S, Pushpalatha M. A survey on various applications of prescriptive analytics. International Journal of Intelligent Networks. 2020;**1**:76-84

[5] Dudkowski D, Jafari S, Kapitaniak T, Kuznetsov N, Lenonov G, Prasad A. Hidden attractors in dynamical systems. Physics Reports. 2016;**637**:1-50

[6] Marwan N, Romano MC, Thiel M, Kurths J. Recurrence plots for the analysis of complex systems. Physics Reports. 2007;**438**:237-329

[7] Marwan N, Wessel N, Meyerfeldt A, Schirdewan A, Kurths J. Recurrence plot based measures of complexity and its application to heart rate variability data. Physical Review E. 2002;**66**(2):026702

[8] Marwan N, Kurths J. Line structures in recurrence plots. Physics Letters A. 2005;**336**(4-5):349-357

[9] Guégan D, Leroux J. Forecasting chaotic systems: The role of local Lyapunov exponents. Chaos, Solitons and Fractals. 2009;**41**:2401-2404

[10] Song X, Niu D, Zhang Y. The chaotic attractor analysis of DJIA based on manifold embedding and Laplacian Eigenmaps. Mathematical Problems in Engineering. 2016;**4**:1-10

[11] Fernández-Rodríguez F, Sosvilla-Rivero S, Andrada-Félix J. Testing chaotic dynamics via Lyapunov exponents. Journal of Applied Econometrics. 2005;**20**:911-930

[12] Adams Z, Füss R, Glück T. Are correlations constant? Empirical and theoretical results on popular correlation models in finance. Journal of Banking & Finance. 2017;**84**:9-24

[13] Alexandridis AK, Kampouridis M, Cramer S. A comparison of wavelet networks and genetic programming in the context of temperature derivatives. International Journal of Forecasting. 2017;**33**:21-47

[14] Shi Y, Ho K-Y. Long memory and regime switching: A simulation study on the Markov regime-switching ARFIMA model. Journal of Banking & Finance. 2015;**61**:189-204

[15] Kristoufek L. Multifractal height cross-correlation analysis: A new method for analyzing long-range cross correlation. Europhysics Letters. 2011;**95**(6):68001

[16] Berghorn W. Trend Momentum. Quantitative Finance. 2015;**15**:261-284

[17] Ramiah V, Xu X, Moosa IA. Neoclassical finance, behavioural finance and noise traders: A review and assessment of the literature. International Review of Financial Analysis. 2015;**41**:89-100

[18] Celeste V, Corbet S, Gurdgiev C. Fractal dynamics and wavelet analysis: Deep volatility and return properties of bitcoin, Ethereum and ripple. The Quarterly Review of Economics and Finance. 2019;**76**:310-324

[19] De Luca G, Dominique G, Giorgia R. Assessing tail risk for nonlinear dependence of MSCI sector indices: A copula three-stage approach. Finance Research Letters. 2019;**30**:327-333

[20] Beltratti A, Stulz RM. Why is contagion asymmetric during the European sovereign crisis? Journal of International Money and Finance. 2019;(99-C):102081

[21] Charfeddine L. True or spurious long memory in volatility: Further evidence on the energy futures markets. Energy Policy. 2014;(71-C):76-93

[22] Barkoulas JT, Chakraborty A, Ouandlous A. A metric and topological analysis of determinism in the crude oil spot market. Energy Economics. 2012;**34**:584-591

[23] Matilla-García M, Queralt R, Sanz P, Vázquez F. A generalized BDS statistic. Computational Economics. 2004;**24**:277-300

[24] Sandubete JE, Escot L. Chaotic signals inside some tick-by-tick financial time series. Chaos, Solitons and Fractals. 2020;**137**:109852

[25] Vogl M. Controversy in financial chaos research and nonlinear dynamics: A short literature review. Chaos, Solitons and Fractals. 2022;**162**:112444

[26] Çoban G, Büyüklü AH. Deterministic flow in phase space of exchange rates: Evidence of chaos in filtered series of Turkish lira-Dollar daily growth rates. Chaos, Solitons and Fractals. 2009;**42**(2):1062-1067

[27] Eckmann J, Ruelle D. Ergodic theory of chaos and strange attractors. Reviews of Modern Physics. 1985;**57**(3):617-656

[28] Devaney R. An Introduction to Chaotic Dynamical Systems. Cambridge: Addison Wesley; 1989

[29] BenSaïda A, Litimi H. High level chaos in the exchange and index markets. Chaos, Solitons and Fractals. 2013;**54**:90-95

[30] Abarbanel H, Brown R, Sidorowich J, Tsimring L. The analysis of observed chaotic data in physical systems. Reviews of Modern Physics. 1993;**65**:1331

[31] Fuh C-C, Tsai H-H, Yao W-H. Combining a feedback linearization controller with a disturbance observer to control a chaotic system under external excitation. Communications in Nonlinear Science and Numerical Simulation. 2012;**17**:1423-1429

[32] Sornette D. Critical Phenomena in Natural Sciences: Chaos, Fractals, Selforganization and Disorder: Concepts and Tools. Heidelberg: Springer Verlag; 2004

[33] Rössler O. An equation for hyperchaos. Physics Letters A. 1979;**71**:155-157

[34] Ma C, Wang X. Hopf bifurcation and topological horseshoe of a novel finance chaotic system. Communications in Nonlinear Science and Numerical Simulation. 2012;**17**:721-730

[35] Gao Q, Ma J. Chaos and Hopf bifurcation of a finance system. Nonlinear Dynamics. 2009;**58**:209

[36] Dechert WD, Gençay R. The topological invariance of Lyapunov exponents in embedded dynamics. Physica D. 1996;**90**:40-55

[37] Jahanshahi H, Yousefpour A, Wei Z, Alcaraz R, Bekiros S. A financial hyperchaotic system with coexisting attractors: Dynamic investigation, entropy analysis, control and synchronization. Chaos, Solitons and Fractals. 2019a;**126**:66-77

[38] Brock W, Dechert W, Scheinkman J, LeBaron B. A test for independence based on the correlation dimension. Econometric Reviews. 1996;**15**:197-235

[39] BenSaïda A. Noisy chaos in intraday financial data: Evidence from the American index. Applied Mathematics and Computation. 2014;**226**:258-265

[40] Vogl M, Rötzel PG. Chaoticity versus stochasticity in financial markets: Are Daily S&P 500 return dynamics chaotic? Communications in Nonlinear Science and Numerical Simulation. 2022;**108**:106218

[41] Aguirre LA, Billings S. Identification of models for chaotic systems from noisy data: Implications for performance and nonlinear filtering. Physica D. 1995;**85**:239-258

[42] Kyrtsou C, Labys WC, Terraza M. Noisy chaotic dynamics in commodity markets. Empirical Economics. 2004;**29**:489-502

[43] Kostelich EJ. The analysis of chaotic time-series data. Systems & Control Letters. 1997;**31**:313-319

[44] Vogl M. Hurst Exponent Dynamics of S&P 500 Returns: Implications for Market Efficiency, Long Memory, Multifractality and Financial Crises Predictability by Application of a Nonlinear Dynamics Analysis Framework. Working Paper SSRN, Under Review. 2022. Available from: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3838850

[45] Kantz H, Schreiber T. Nonlinear Time Series Analysis. Cambridge: Cambridge University Press; 2003

[46] Grassberger P, Procaccia I. Characterization of strange attractors. Physical Review Letters. 1983a;**50**:346-349

[47] Grassberger P, Procaccia I. Measuring the strangeness of strange attractors. Physica D. 1983;**9**(1-2):189-208

[48] Takens F. Detecting strange attractors in fluid turbulence. In: Rand D, Young L-S, editors. Dynamical Systems and Turbulence. Berlin: Springer; 1981. pp. 366-381

[49] Cencini M, Cecconi F, Vulpiani A. Chaos: From Simple Models to Complex Systems. Singapore: World Scientific; 2010

[50] Tirandaz H, Aminabadi S, Tavakoli H. Chaos synchronization and parameter identification of a finance chaotic system with unknown parameters, a linear feedback controller. Alexandria Engineering Journal. 2018;**57**:1519-1524

[51] Richman J, Moorman J. Physiological time-series analysis using approximate entropy and sample entropy. American Journal of Physiology - Heart and Circulatory Physiology. 2000;**278**(6):H2039-H2049

[52] Park JY, Whang Y-J. Random walk or chaos: A formal test on the Lyapunov exponent. Journal of Econometrics. 2012;**169**:61-74

[53] Shevchenko II. Lyapunov and diffusion timescales in the solar neighborhood. Working Paper arXiv:1012.3606v2. 2018:1-22

[54] Hurst H. Long-term storage capacity of reservoirs. Transactions of the American Society of Civil Engineers. 1951;**116**:770

[55] Sundararajan D. Discrete Wavelet Transform: A Signal Processing Approach. Singapore: John Wiley & Sons; 2015


[56] Chen Y, Yang H. Multiscale recurrence analysis of long-term nonlinear and nonstationary time series. Chaos, Solitons and Fractals. 2012;**45**(7):978-987

[57] Mallat S. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1989;**11**(7):674-693

[58] Mitra S. A wavelet filtering based analysis of macroeconomic indicators: The Indian evidence. Applied Mathematics and Computation. 2006;**175**:1055-1079

[59] López de Prado M. Advances in Financial Machine Learning. Hoboken: John Wiley & Sons Inc.; 2018

[60] Fan Q, Liu S, Wang K. Multiscale multifractal detrended fluctuation analysis of multivariate time series. Physica A. 2019;**532**:121864

[61] Alstott J, Bullmore E, Plenz D. Powerlaw: A python package for analysis of heavy-tailed distributions. PLoS ONE. 2014;**9**(4):e95816

[62] Bao D, Yang Z. Intelligent stock trading system by turning point confirming and probabilistic reasoning. Expert Systems with Applications. 2008;**34**:620-627

[63] Altan A, Karasu S, Bekiros S. Digital currency forecasting with chaotic metaheuristic bio-inspired signal processing techniques. Chaos, Solitons and Fractals. 2019;**126**:325-336

[64] Argyris J, Andreadis I. The influence of noise on the correlation dimension of chaotic attractors. Chaos, Solitons and Fractals. 1998;**9**(3):343-361

[65] Donoho D. De-noising by soft-thresholding. IEEE Transactions on Information Theory. 1995;**41**(3):613-627

[66] Donoho D, Johnstone I. Ideal spatial adaptation by wavelet shrinkage. Biometrika. 1994;**81**(3):425-455

[67] MacKinnon J. Approximate asymptotic distribution functions for unit-root and cointegration tests. Journal of Business and Economic Statistics. 1994;**12**:167-176

[68] Kwiatkowski D, Phillips P, Schmidt P, Shin Y. Testing the null hypothesis of stationarity against the alternative of a unit root. Journal of Econometrics. 1992;**54**:159-178

[69] Massey FJ Jr. The Kolmogorov-Smirnov test for goodness of fit. Journal of the American Statistical Association. 1951;**46**(253):68-78

[70] Andrews D. Heteroskedasticity and autocorrelation consistent covariance matrix estimation. Econometrica. 1991;**59**:817-858

[71] Provenzale A, Smith L, Vio R, Murante G. Distinguishing between low-dimensional dynamics and randomness in measured time series. Physica D. 1992;**58**:31-49

[72] Ramsey J, Sayers C, Rothman P. The statistical properties of dimension calculations using small data sets: Some economic applications. International Economic Review. 1990;**4**:991-1020

[73] Bajo-Rubio O, Fernandez-Rodriguez F, Sosvilla-Rivero S. Chaotic behaviour in exchange-rate series: First results for the peseta-U.S. Dollar case. Economics Letters. 1992;**39**:207-211

[74] Darbyshire A, Broomhead D. Robust estimation of tangent maps and Lyapunov spectra. Physica D. 1996;**89**:287

[75] Rosenstein M, Collins J, De Luca C. A practical method for calculating largest Lyapunov exponents from small data sets. Physica D. 1993;**65**:117-134

[76] Bask M, Gençay R. Testing chaotic dynamics via Lyapunov exponents. Physica D. 1998;**114**:1-2

[77] Peng C-K, Buldyrev S, Havlin S, Simons M, Stanley H, Goldberger L. Mosaic organization of DNA nucleotides. Physical Review E. 1994;**49**(2):1685

[78] Hardstone R, Poil S-S, Schiavone G, Jansen R, Nikulin V, Mansvelder H, et al. Detrended fluctuation analysis: A scale-free view on neuronal oscillations. Frontiers in Physiology. 2012;**3**:450

[79] Mandelbrot BB, van Ness J. Fractional Brownian motions, fractional noises and applications. SIAM Review. 1968;**10**(4):422-437

[80] Mandelbrot BB, Wallis JR. Some long-run properties of geophysical records. Water Resources Research. 1969;**5**(2):321-340

[81] Opong K, Mulholland G, Fox A, Farahmand K. The behaviour of some UK equity indices: An application of Hurst and BDS tests. Journal of Empirical Finance. 1999;**6**(3):267-282

[82] Mandelbrot BB. Fractals and Chaos. New York: Springer; 2004

[83] Norouzzadeh P, Jafari G. Application of multifractal measures to Tehran price index. Physica A. 2005;**356**:609-627

[84] Yule G. On a method of investigating periodicities in disturbed series with special reference to Wolfer's sunspot numbers. Philosophical Transactions of the Royal Society of London Series A. 1927;**226**:267-298

[85] Packard N, Crutchfield J, Farmer J, Shaw R. Geometry from a time series. Physical Review Letters. 1980;**45**:712-716

[86] Ruelle D. Chaotic Evolution and Strange Attractors. Cambridge: Cambridge University Press; 1989

[87] Nichols J, Nichols J. Attractor reconstruction for non-linear systems: A methodological note. Mathematical Biosciences. 2001;**171**:21-32

[88] Gibson J, Farmer J, Casdagli M, Eubank S. An analytic approach to practical state space reconstruction. Physica D. 1992;**57**:1-30

[89] Cao L. Practical method for determining the minimum embedding dimension of a scalar time series. Physica D. 1997;**110**(1-2):43-50

[90] Cannas B, Cincotti S. Neural reconstruction of Lorenz attractors by an observable. Chaos, Solitons and Fractals. 2002;**14**:81-86

[91] Toledo-Suárez C. Meta-chaos: Reconstructing chaotic attractors from the separation of nearby initial conditions on hyperhelices. Communications in Nonlinear Science and Numerical Simulation. 2010;**15**:2249-2253

[92] Yeo K. Data-driven reconstruction of nonlinear dynamics from sparse observation. Journal of Computational Physics. 2019;**395**:671-689

[93] Asefa T, Kemblowski M, Lall U, Urroz G. Support vector machines for nonlinear state space reconstruction: Application to the Great Salt Lake time series. Water Resources Research. 2005;**41**:W12422

[94] Ma H-G, Zhang C-L, Li F. State space reconstruction of nonstationary time-series. Journal of Computational and Nonlinear Dynamics. 2017;**12**:031009

[95] Fraser A, Swinney H. Independent coordinates for strange attractors from mutual information. Physical Review A. 1986;**33**(2):1134-1140

[96] Broomhead D, King G. Extracting qualitative dynamics from experimental data. Physica D. 1986;**20**:217-236

[97] Rüdisüli M, Schildhauer T, Biollaz S, Van Ommen J. Measurement, monitoring and control of fluidized bed combustion and gasification. In: Fluidized Bed Technologies for Near-Zero Emission Combustion and Gasification. UK: Woodhead Publishing; 2013. pp. 813-864

[98] Nieto A, Seoane J, Sanjuán M. Final state sensitivity in noisy chaotic scattering. Chaos, Solitons and Fractals. 2021;**150**:111181

[99] Sauer T, Yorke J, Casdagli M. Embedology. Journal of Statistical Physics. 1991;**65**:579

[100] Pentari A, Tzagkarakis G, Tsakalides P, Simos P, Bertsias G, Kavroulakis E, et al. Changes in resting-state functional connectivity in neuropsychiatric lupus: A dynamic approach based on recurrence quantification analysis. Biomedical Signal Processing and Control. 2022;**72**:103285

[101] Eckmann J-P, Kamphorst S, Ruelle D. Recurrence plots of dynamical systems. Europhysics Letters. 1987;**4**(9):973-977

[102] Van den Hoorn W, Hodges P, van Dieen J, Kerr G. Reliability of recurrence quantification analysis of postural sway data. A comparison of two methods to determine recurrence threshold. Journal of Biomechanics. 2020;**107**:109793

[103] Koebbe M, Mayer-Kress G. Use of recurrence plots in the analysis of timeseries data. In: Casdagli M, Eubank S, editors. Proceedings of SFI Studies in the Science of Complexity, Vol. XXI, Redwood City, 1992. Reading, MA: Addison-Wesley; 1992. pp. 361-378

[104] Zbilut J, Webber C Jr. Embeddings and delays as derived from quantification of recurrence plots. Physics Letters A. 1992;**171**(3-4):199-203

[105] Theiler J. Spurious dimension from correlation algorithms applied to limited time-series data. Physical Review A. 1986;**34**(3):2427-2432

[106] Cariolaro G. Unified Signal Theory. London: Springer; 2011

[107] Wojtaszczyk P. A Mathematical Introduction to Wavelets. Cambridge: Cambridge University Press; 1997

[108] Chang S, Grace S, Yu B, Vetterli M. Adaptive wavelet thresholding for image denoising and compression. IEEE Transactions on Image Processing. 2000;**9**(9):1532-1546

[109] Strogatz S. Nonlinear Dynamics and Chaos. Colorado: Westview Press; 2014

[110] Katok A, Hasselblatt B. Introduction to the Modern Theory of Dynamical Systems. Cambridge: Cambridge University Press; 1995

[111] Mandelbrot BB. The Fractal Geometry of Nature. USA: Freeman; 1977

[112] Cao G, He L-Y, Cao J. Multifractal Detrended Analysis Method and its Application in Financial Markets. Singapore: Springer; 2018

[113] Barunik J, Aste T, Di Matteo T, Liu R. Understanding the source of multifractality in financial markets. Physica A. 2012;**391**:4234-4251

[114] Yuan Y, Zhuang X-T, Jin X. Measuring multifractality of stock price fluctuation using multifractal detrended fluctuation analysis. Physica A. 2009;**388**:2189-2197

[115] Lorenz E. Deterministic nonperiodic flow. Journal of the Atmospheric Sciences. 1963;**20**:130-141

[116] Zelinka I, Chadli M, Davendra D, Senkerik R, Jasek R. An investigation on evolutionary reconstruction of continuous chaotic systems. Mathematical and Computer Modelling. 2013;**57**:2-15

[117] Jokar M, Salarieh H, Alasty A. On the existence of proper stochastic Markov models for statistical reconstruction and prediction of chaotic time series. Chaos, Solitons and Fractals. 2019;**123**:373-382

#### **Chapter 6**

## Spatial-Temporal Data Analysis in Nonlinear System

*Xing He and Minyu Chen*

#### **Abstract**

Spatial-temporal analysis is at the heart of data mining in the big data era. Most classical mathematical tools are ill-suited to spatial-temporal data, a shortcoming that has greatly spurred the development of data science, especially the field of big data analytics (BDA). This chapter proposes random matrix theory (RMT) to handle the problem: spatial-temporal datasets are modeled as sequences whose terms are each a random matrix. Some fundamental RMT principles, such as asymptotic spectrum laws, transforms, convergence rates, and free probability, are then briefly discussed in order to extract high-dimensional statistics from the random matrices as indicators. The statistical properties of these indicators are discussed for a better understanding of the system. Finally, some potential application fields are given.

**Keywords:** spatial-temporal data, electric power system, data-driven, random matrix theory, situation awareness, big data analytics

#### **1. Introduction**

Electric power system reliability and intelligent management are critical to our daily living. Engineers and academics have recently focused on the large-scale use of phasor measurement units (PMUs) to improve wide-area monitoring, protection, and control [1–5].

Most existing algorithms in the power grid are model-based: they are built upon mechanism assumptions or simplifications and linear system control theory, with a determined and typically analytic outcome. These models, however, are ineffective for today's power system, which is of ever-increasing complexity and uncertainty [6–10]: 1) interconnection of nearby utilities may frequently improve overall safety and efficiency, resulting in a huge interconnected system, such as the North American Power Grid, which serves almost 400 million customers throughout the continent [11]; 2) cell units (e.g., distributed generations) that are small in size, large in number, distributed in deployment, diverse in behavior, smart in response, and uncertain in control continue to penetrate the grid [12]; 3) the physical disciplines of a system (mechanics, magnetism, electricity, and electronics) are closely intertwined, especially in a combined heat and power (CHP) system or even an integrated energy system (IES) [13]; and 4) the energy foundation for smart cities is under construction. These characteristics point toward an open, flat, nonlinear, high-uncertainty, and distributed energy Internet of Things (EIoT), as shown in **Figure 1** [14].

**Figure 1.**

*Diagram of the future energy Internet of Things: its resource flow, data flow, and participants [14].*

For such an EIoT, a precise mechanistic model, or even a proper descriptive representation, can hardly be formulated, let alone a model-based linear one.

Furthermore, engineering data, such as sampling data in a power system, is not like image data. Various sensors, such as phasor measurement units (PMUs), sample data from the grid. The resulting huge dataset lives in a high-dimensional vector space and evolves as a time series: the temporal variations (*T* sampling instants) and spatial fluctuations (*N* grid nodes) are recorded concurrently, hence the name spatial-temporal data.
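As a purely hypothetical illustration (all names and sizes are assumptions, not from the chapter), such an *N* × *T* spatial-temporal matrix can be cut from a simulated measurement stream with a sliding window:

```python
import numpy as np

# Illustrative sketch: N nodes are the rows (space), T sampling instants
# the columns (time); a sliding window over a PMU-like stream yields one
# N x T spatial-temporal data matrix per analysis step.
N, T = 20, 60                          # sizes of the typical order quoted later
rng = np.random.default_rng(0)
stream = rng.normal(size=(N, 1000))    # stand-in for a real measurement stream

def window(stream, t0, T):
    """Cut out the N x T spatial-temporal data matrix starting at instant t0."""
    return stream[:, t0:t0 + T]

X = window(stream, t0=100, T=T)
print(X.shape)   # (20, 60): N spatial rows, T temporal columns
```

Each successive window produces the next term of the random-matrix sequence that RMT operates on.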

Most mathematical tools are ill-suited to this task [15]. Facing such spatial-temporal data, we can hardly extract statistical information, particularly spatial-temporal correlations; the high-dimensional structure does not match the requirements of most traditional mathematical methods. The task is also incompatible with supervised training algorithms such as neural networks, due to the lack, or asymmetry, of massive labeled data [16].

Fortunately, random matrix theory (RMT), by unifying time and space through their ratio *c* = *N*/*T*, can deal with such data in a strict, mathematical way. Moreover, linear eigenvalue statistics (LESs) built from data matrices follow Gaussian distributions under very general conditions, and further statistical variables can be studied thanks to the latest breakthroughs in probability on the central limit theorems for those LESs.

#### **2. Spatial-temporal data analysis mode, tools, theory, and applications in electric power system**

#### **2.1 Big data era, fourth paradigm, and data-driven model**

The paradigm of science has shifted, as seen in **Figure 2** [17]. Initially, there was only experimental science, followed by theoretical science, which included Newton's laws, Maxwell's equations, and so on. For many issues the theoretical models became too hard to solve analytically, and people had to start simulating. These simulations have carried us through much of the last millennium. Nowadays, people collect data through intensive sensors or simulations. The data flood has an impact on experimental, theoretical, and computational science, and several state-of-the-art data technologies and data sciences have converged to provide tremendous promise for data-intensive scientific discovery, the so-called Fourth Paradigm.

**Figure 2.** *Science paradigms and fourth paradigm [17].*

Data-driven analysis has become a natural and pressing topic in energy systems, as evidenced by the *IEEE Transactions on Smart Grid* special issue "Big Data Analytics for Grid Modernization," published in 2016 [18]. Data-driven approaches are also characterized as model-free: we no longer rely heavily on physical models, and hence can manage instances where physical parameters are incorrect or even totally unavailable. The data-driven mode enables a quick start on our task, especially for a modern energy system in which the behaviors and disciplines of system cell units are strongly coupled.

#### **2.2 Basics of spatial-temporal data and high-dimensional information**

Spatial-temporal data analysis means that we simultaneously deal with a large number of variables (in *N*-dimensional spatial space), and each variable (*i* = 1, *…* , *N*) samples a *time series* over some duration (in *T*-dimensional temporal space). Classical statistical theory treats fixed *N* only (typically *N* < 6 [3]); e.g., for the ABC-dq0 transformation, *N* = 3. This fixed small *N* is called the low-dimensional regime. In practice, we are interested in the case where *N* can vary arbitrarily in size compared with *T* (typically *T* > 60, *N* > 20, and *c* = *N*/*T* > 0 [15]). This fundamental difference is the *primary motivation* for studying BDA.

Spatial-temporal data mining is expected to contribute some (high-dimensional) information, with domain-specific meaning attached, as a supplement to DT-based situation awareness (SA). High-dimensional indicators (outputs of high-dimensional statistics) and deep features (outputs of deep learning) are the two main representations of high-dimensional information.

#### **2.3 Spatial-temporal data utilization architecture and tools**

Most mathematical methods struggle to extract information from spatial-temporal data. This has accelerated the development of data science, particularly in the fields of AI and BDA. We describe one high-focus technique for each field: 1) deep learning (DL), which is good at massive data modeling in AI [19], and 2) high-dimensional statistics, or more precisely RMT, which does well in data analytics in BDA. Both tools use a set of (high-dimensional) methodologies for integrated spatial-temporal modeling and analysis, and they have already made profound impacts on many domains. **Figure 3** depicts the architecture of big data mining based on DL and RMT.

#### *2.3.1 Deep learning and its advantages*

DL is a cutting-edge data mining algorithm. As demonstrated in **Figure 4**, deep features are learned level by level from comparatively hidden features in the hierarchy [20]. DL uses the enormous data in a non-handcrafted way to create a deep (nonlinear) network model.

#### **Figure 3.**

*Architecture of spatial-temporal data utilization.*

**Figure 4.** *A typical ANN structure.*

A typical ANN (artificial neural network) is modeled as

$$y = f(\mathbf{x}) \triangleq f^L\left(\mathcal{W}^L\dots f^2\left(\mathcal{W}^2 f^1\left(\mathcal{W}^1 \mathbf{x} + b^1\right) + b^2\right) \dots + b^L\right) \tag{1}$$

The above DL network model can be built with little prior knowledge of the physical mechanism or causal relationship. As a result, DL may be used in a variety of situations, or even systems, without major changes. For example, we use CNNs (convolutional neural networks) for computer vision system modeling [21], LSTM (long short-term memory) networks for prediction [22], and deep reinforcement learning for strategy optimization [23].
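A minimal sketch of the layered map in Eq. (1); the layer sizes, random weights, and choice of tanh hidden activations with an identity output are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

# Eq. (1): y = f^L(W^L ... f^2(W^2 f^1(W^1 x + b^1) + b^2) ... + b^L),
# realized here as a plain NumPy forward pass.
rng = np.random.default_rng(1)
sizes = [4, 8, 8, 1]    # input dim 4, two hidden layers, scalar output (assumed)
Ws = [rng.normal(scale=0.5, size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(m) for m in sizes[1:]]

def forward(x):
    a = x
    for l, (W, b) in enumerate(zip(Ws, bs)):
        z = W @ a + b
        a = z if l == len(Ws) - 1 else np.tanh(z)  # identity f^L, tanh elsewhere
    return a

y = forward(rng.normal(size=4))
print(y.shape)   # (1,): a scalar prediction y = f(x)
```

Training (fitting the *W* and *b* by gradient descent) is what turns this generic map into a model of the data, which is why little mechanistic prior knowledge is needed.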

In a complex system, DL holds a competitive advantage for feasible data utilization. In addition, the performance of the DL model on the generalization task can be quantitatively evaluated by the test error, ensuring its usefulness in real-world situations.

#### *2.3.2 Big data analytics, RMT, and their advantages*

BDA uses spatial-temporal joint analysis to acquire high-dimensional statistics. Matrix-based variables, such as the eigenvalues or the matrix variate itself, are likely to provide some insight for BDA [24]. These matrix-based variables are derived from the *N* × *T* (large-dimensional) spatial-temporal data matrix and have an intrinsic statistical link, whether causal or not. They are analytically intractable due to their high dimensionality rather than their big size. RMT is inextricably linked to this issue.

RMT characterizes the joint eigenvalue distribution in the asymptotic regime. In particular, by unifying time and space through their ratio *c* = *N*/*T*, BDA indicators are acquired as functionals of the eigenvalue distributions. For example, the matrix's LES indicators [25] follow Gaussian distributions under very general conditions. Furthermore, many LES-derived variables, whose statistical properties are mostly derivable and provable, can be studied thanks to the latest breakthroughs in high-dimensional probability [15]. In this sense, RMT is rigorous and fundamental in nature. Besides, RMT performs well with only moderate-size (unlabeled) data, which is often the case for a domain-specific problem in the EIoT.

#### **2.4 Random matrix theory in a nutshell**

#### *2.4.1 RMT and its universality principle*

Two ensembles, the Gaussian unitary ensemble (GUE) and the Laguerre unitary ensemble (LUE), are studied first in RMT [10]:

$$\Gamma = \begin{cases} \frac{1}{2}(R + R^H), R \in \mathbb{R}^{N \times N}, \text{GUE}; \\ \frac{1}{T}RR^H, R \in \mathbb{R}^{N \times T}, \text{LUE}. \end{cases} \tag{2}$$

where *R* is a matrix with i.i.d. standard Gaussian entries.

We investigate the rate of convergence of the expected empirical spectral distribution (ESD) of Γ. Let *h*<sub>Γ</sub>(*x*) denote the limiting eigenvalue density. Wigner's semicircle law and the Marchenko-Pastur (M-P) law, for the GUE and LUE respectively, state that

$$h_{\Gamma}(x) = \begin{cases} \dfrac{1}{2\pi} \sqrt{4 - x^2}, & x \in [-2, 2], \ \text{GUE}; \\[2mm] \dfrac{1}{2\pi c x} \sqrt{(x - a_1)(a_2 - x)}, & x \in [a_1, a_2], \ \text{LUE}, \end{cases} \tag{3}$$

where $a_1 = \left(1 - \sqrt{c}\right)^2$ and $a_2 = \left(1 + \sqrt{c}\right)^2$.

The universality principle enables us to perform hypothesis tests under the assumption that the matrix entries are not Gaussian distributed, while using the same test statistics as in the Gaussian case. Numerous studies using field data [25, 26] demonstrate that the M-P law is universally valid at moderate matrix sizes, such as tens. This is the very reason why RMT is widely used in engineering.
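The universality claim can be checked numerically. The following sketch (sizes and the Bernoulli entry distribution are illustrative choices, not from the text) builds the LUE-type sample covariance matrix from non-Gaussian entries and verifies that its spectrum falls within the M-P support of Eq. (3) with *c* = *N*/*T*:

```python
import numpy as np

# Universality check: standardized Bernoulli (+/-1) entries instead of
# Gaussian ones, yet the eigenvalues of (1/T) R R^H still fill the
# Marchenko-Pastur support [(1-sqrt(c))^2, (1+sqrt(c))^2].
rng = np.random.default_rng(2)
N, T = 200, 800
c = N / T
R = rng.choice([-1.0, 1.0], size=(N, T))   # zero mean, unit variance, NOT Gaussian
Gamma = (R @ R.T) / T
eigs = np.linalg.eigvalsh(Gamma)

a1, a2 = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
print(eigs.min(), eigs.max())   # close to the theoretical edges a1, a2
print(a1, a2)                   # 0.25, 2.25 for c = 0.25
```

Finite-size fluctuations push the extreme eigenvalues only slightly beyond the asymptotic edges, consistent with the observation that moderate matrix sizes already suffice in practice.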

#### *2.4.2 Linear eigenvalue statistics and their properties*

Consider a random matrix **Γ** ∈ ℝ<sup>*N*×*T*</sup> and its covariance matrix **M** = (1/*T*)**ΓΓ**<sup>H</sup>. The LES *τ<sub>φ</sub>* of **Γ** is defined as [27]:

$$\tau_{\varphi} = \sum_{i=1}^{N} \varphi(\lambda_i) = \operatorname{Tr} \varphi(\mathbf{M}), \tag{4}$$

The law of large numbers tells us that *N*<sup>−1</sup>*τ<sub>φ</sub>* converges in probability to the limit

$$\lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} \varphi(\lambda_i) = \int \varphi(\lambda) \rho(\lambda) \, d\lambda \tag{5}$$

where *ρ*(*λ*) is the limiting eigenvalue probability density function given in Eq. (3). Therefore, we deduce that

$$\tau_{\varphi} = \sum_{i=1}^{N} \varphi(\lambda_i) = \operatorname{Tr} \varphi(\mathbf{M}) = N \int \varphi(\lambda) \rho(\lambda) \, d\lambda \tag{6}$$

The Central Limit Theorem (CLT) for LES is studied as the natural second step:

$$\begin{split} \sigma^{2}\left(\tau_{\varphi}\right) &= \frac{2}{c\pi^{2}} \iint_{-\frac{\pi}{2} < \theta_{1}, \theta_{2} < \frac{\pi}{2}} \varphi^{2}(\theta_{1}, \theta_{2}) \left(1 - \sin \theta_{1} \sin \theta_{2}\right) d\theta_{1} \, d\theta_{2} \\ &\quad + \frac{\kappa_{4}}{\pi^{2}} \left( \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \varphi(\zeta(\theta)) \sin \theta \, d\theta \right)^{2} \end{split} \tag{7}$$

See [25] for details.
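A minimal Monte Carlo sketch of the LES machinery in Eqs. (4)-(6). The test function φ(*x*) = *x*², Gaussian data, and matrix sizes are illustrative choices: for this φ, the normalized LES concentrates near the M-P moment ∫φ(λ)ρ(λ)dλ = 1 + *c*, while its fluctuations stay bounded as *N* grows, in line with the CLT:

```python
import numpy as np

# LES tau_phi = sum phi(lambda_i) = Tr phi(M) with phi(x) = x^2, i.e.
# tau = Tr(M^2). By Eqs. (5)-(6), tau / N should be close to the second
# Marchenko-Pastur moment 1 + c.
rng = np.random.default_rng(3)
N, T = 100, 400
c = N / T

def les():
    R = rng.normal(size=(N, T))
    M = (R @ R.T) / T
    return np.trace(M @ M)          # tau_phi for phi(x) = x^2

taus = np.array([les() for _ in range(200)])
print(taus.mean() / N)              # close to 1 + c = 1.25
print(taus.std())                   # O(1): LES fluctuations do not grow with N
```

Note that the standard deviation of *τ* stays of order one even though *τ* itself is of order *N*; this strong concentration is what makes the LES a usable indicator.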

#### *2.4.3 LES-based hypothesis testing for random matrix*

The LES *τ*, a scalar random variable defined in Eq. (4), is studied instead of the eigenvalue distribution in Eq. (3). It can be viewed as a mathematically rigorous dimensionality reduction: the *N* × *T* random matrix is reduced to a scalar random variable.

As *N* → ∞, the asymptotic limits of the LES expectation and variance, i.e., E(*τ*) and *σ*<sup>2</sup>(*τ*), are given in Eqs. (6) and (7), respectively. These two equations are sufficient to study the scalar random variable *τ*. The universality principle, as well as engineering experience, demonstrates that moderate values of *N* and *T* are accurate enough for practical purposes. The LES *τ* is robust against data flaws and insusceptible to noise [10]. All of these statistical properties make the LES a good SA indicator.

#### **2.5 High-dimensional situation awareness indicator system and its properties**

According to Eq. (4), numerous LESs can be designed from a given spatial-temporal dataset Γ by varying the test function. Similarly, other high-dimensional indicators, for instance, statistical indicators, deep features, and electrical features, are calculable as the outputs of the data tools in **Figure 3**. They are tied together to provide insight into domain-specific SA criteria for detection, prediction, etc. Details about the high-dimensional SA indicator system and its successful application cases can be found in ref. [15].

With these indicators, the high-dimensional indicator system is built; it supplies multiple viewing angles to gain insight into the system. Aiming at a domain-specific SA task, the test function ϕ plays the role of a flexible filter depending on the task. **Table 1** lists the properties of LES indicators and compares them with classical ones.

**Table 1** shows that LES provides a better indicator system for the Fourth Paradigm. The relation of the LES indicators to the classical ones is, in some sense, like that of quantum physics to classical physics. By comparing experimental values with ideal theoretical values, LES conducts SA in a complex system statistically.


**Table 1.**

*High-dimensional indicator system for EIoT.*

In short, RMT supplies a data-driven approach to indicator extraction for the informatization of a real system via sampled spatial-temporal data. A cluster of statistical indicators, formed via a mathematical procedure, serves as a new epistemology for the system. Several advantages, such as the data-driven and model-free mode, theoretical guidance, speed, reasonableness, sensitivity, flexibility, and robustness against bad data, have already been shown in our previous work [10, 17].

#### **2.6 LES-based hypothesis testing for random matrix**

To study the convergence as a function of *N*, we study the LES instead of the probability distribution of eigenvalues in Eq. (4). For an arbitrary test function with enough smoothness, the LES τ (seen as a random variable *Y*) is a positive scalar random variable defined in Eq. (5). As *N* → ∞, the asymptotic limit of its expectation, E(*Y*), is given in Eq. (6), and the asymptotic limit of its variance, *σ*<sup>2</sup>(*Y*), is given in Eq. (7). These two equations are enough to study the scalar random variable *Y*. This approach can be viewed as a dimensionality reduction—the random data matrix of size *N* × *T* is reduced to a positive scalar random variable *Y*! This dimension reduction is mathematically rigorous only when *N*, *T* → ∞ with *N*/*T* → *c*. Experience demonstrates, however, that moderate values of *N* and *T* are accurate enough for practical purposes. Moreover, our previous work shows that the LES is robust against data errors (e.g., data loss, data out-of-synchronization) and insusceptible to (independent) random noises (not limited to white noise), which is not true of low-dimensional statistics such as the mean and variance of any single variable. All these statistical properties make the LES a good matrix-based variable for designing a hypothesis test aimed at anomaly detection.

We formulate the hypothesis test in terms of the statistical properties of the LES. Referring to the Gaussian property and standard scores, detection is modeled as a binary hypothesis test: the normal hypothesis *H*<sub>0</sub> (no anomaly present) and the abnormal one *H*<sub>1</sub>, denoted by:

$$\mathcal{H}_0 : \left| \frac{\tau_{\varphi} - \mathbb{E}\left(\tau_{\varphi}\right)}{\sigma\left(\tau_{\varphi}\right)} \right| < \varepsilon, \qquad \mathcal{H}_1 : \left| \frac{\tau_{\varphi} - \mathbb{E}\left(\tau_{\varphi}\right)}{\sigma\left(\tau_{\varphi}\right)} \right| \ge \varepsilon, \tag{8}$$

where *ε* is a threshold value that needs to be preset—e.g., at a significance level of 0.05, *ε* should be set to 1.96.
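The detection rule in Eq. (8) can be sketched numerically. The following is a minimal illustration, not the chapter's implementation: the test function φ, the matrix sizes, and the Monte Carlo estimation of E(τ) and σ(τ) under *H*<sub>0</sub> are our own assumptions (the asymptotic formulas of Eqs. (6)–(7) would be used instead in practice), and the injected anomaly is synthetic.

```python
# Sketch of LES-based anomaly detection for an N x T data matrix.
# Illustrative assumptions: Gaussian noise under H0, test function
# phi(lam) = lam^2, and Monte Carlo moments instead of the asymptotic ones.
import numpy as np

rng = np.random.default_rng(0)
N, T = 60, 240          # moderate sizes are usually accurate enough

def phi(lam):
    """A smooth test function applied to the eigenvalues."""
    return lam**2

def les(X):
    """Linear eigenvalue statistic tau = sum_i phi(lambda_i) of the
    sample covariance matrix S = X X^T / T."""
    lam = np.linalg.eigvalsh(X @ X.T / T)
    return phi(lam).sum()

# Estimate E(tau) and sigma(tau) under H0 (pure noise) by Monte Carlo.
taus = np.array([les(rng.standard_normal((N, T))) for _ in range(300)])
mu, sd = taus.mean(), taus.std(ddof=1)

def detect(X, eps=1.96):
    """Binary hypothesis test of Eq. (8): flag an anomaly when the
    standardized LES exceeds eps (1.96 ~ 5% significance level)."""
    return abs((les(X) - mu) / sd) >= eps

normal = rng.standard_normal((N, T))
anomalous = rng.standard_normal((N, T))
anomalous[:10] += 2.0 * rng.standard_normal((10, T))   # injected signal

print(detect(normal), detect(anomalous))
```

Because the anomaly inflates the variance of ten rows, its LES deviates from E(τ) by far more than 1.96 standard deviations, so the standardized score of Eq. (8) flags it.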

#### **3. Conclusion**

This chapter, motivated by the future electrical grid, studies nonlinear analysis based on RMT. Three ingredients are discussed in detail: 1) data modeling—modeling the spatial-temporal data as a sequence of random matrices, which are naturally connected to RMT; 2) data analytics—conducting high-dimensional analysis to obtain the statistical indicators; 3) interpretation—interpreting the indicators by studying their properties for a better understanding of the system.

The experimental indicators, which are fully derived from the sampling data, are applicable to various engineering functions. For example, by comparing the LESs with their theoretical prediction, anomaly detection can be implemented.

Future research directions include: (1) model validation with different implementations of the grid, ranging from static and dynamic models to real-world systems; (2) data fusion with a number of random data matrices, using mathematical tools such as free probability; and (3) the use of Gaussian random matrices as a replacement for general data matrices obtained from the electrical grid—the universality principle of RMT says that this replacement causes negligible errors.

#### **Acknowledgements**

This work is supported by the National Natural Science Foundation of China (Grant No. 51907121).

### **Author details**

Xing He\* and Minyu Chen

Shanghai Jiao Tong University, Shanghai, China

\*Address all correspondence to: hexing\_hx@126.com

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Chakrabarti S, Kyriakides E, Bi T, Cai D, Terzija V. Measurements get together. IEEE Power and Energy Magazine. 2009;**7**(1):41-49

[2] Luo L, Bei H, Chen J, Sheng G, Jiang X. Partial discharge detection and recognition in random matrix theory paradigm. IEEE Access. 2016;**PP** (99):1-1

[3] Lei C, Qiu RC, Xing H, Ling Z, Liu Y. Massive streaming pmu data modeling and analytics in smart grid state evaluation based on multiple high-dimensional covariance tests. IEEE Transactions on Big Data. 2018;**4**(1):2332-7790

[4] Hou W, Ning Z, Lei G, Xu Z. Temporal, functional and spatial big data computing framework for large-scale smart grid. IEEE Transactions on Emerging Topics in Computing. 2017;**PP** (99):1-1

[5] Tu C, Xi H, Shuai Z, Fei J. Big data issues in smart grid – A review. Renewable and Sustainable Energy Reviews. 2017;**79**:1099-1107

[6] Shaker H, Zareipour H, Wood D. A data-driven approach for estimating the power generation of invisible solar sites. IEEE Transactions on Smart Grid. 2016; **7**(5):2466-2476

[7] Motter AE, Myers SA, Anghel M, Nishikawa T. Spontaneous synchrony in power-grid networks. Nature Physics. 2013;**9**(3):191-197

[8] Wang L, Li HW, Wu CT. Stability analysis of an integrated offshore wind and seashore wave farm fed to a power grid using a unified power flow controller. IEEE Transactions on Power Systems. 2013;**28**(3):2211-2221

[9] Xu X, He X, Ai Q, Qiu RC. A correlation analysis method for power systems based on random matrix theory. IEEE Transactions on Smart Grid. 2017;**8**(4):1811-1820

[10] He X, Ai Q, Qiu C, Huang W, Piao L, Liu H. A big data architecture design for smart grids based on random matrix theory. IEEE Transactions on Smart Grid. 2017;**8**(2):674-686

[11] Office of Electric Transmission & Distribution. Grid 2030: A National Vision for Electricity's Second 100 Years. Washington, DC; 2003

[12] Yang B, Yu T, Shu H, Dong J, Jiang L. Robust sliding-mode control of wind energy conversion systems for optimal power extraction via nonlinear perturbation observers. Applied Energy. 2018;**210**:711-723

[13] Fu X, Sun H, Guo Q, Pan Z, Xiong W, Wang L. Uncertainty analysis of an integrated energy system based on information theory. Energy. 2017;**122**: 649-662

[14] Guo J. The evolution of power system characteristics and related thinking. In: 2nd "Clean Energy Development and Consumption Symposium". Xi'an, China: Chinese Society for Electrical Engineering; 2019

[15] Qiu R, Antonik P. Smart Grid and Big Data. New York: John Wiley and Sons; 2015

[16] Cheng L, Yu T. A new generation of AI: A review and perspective on machine learning technologies applied to smart energy and electric power systems. International Journal of Energy Research. 2019;**43**(6):1928-1973

[17] Hey AJ, Tansley S, Tolle KM, et al. The Fourth Paradigm: Data-Intensive Scientific Discovery. Vol. 1. WA: Microsoft Research Redmond; 2009

[18] Hong T, Chen C, Huang J, Lu N, Xie L, Zareipour H. Guest editorial: Big data analytics for grid modernization. IEEE Transactions on Smart Grid. 2016;**7**(5):2395-2396

[19] Najafabadi MM, Villanustre F, Khoshgoftaar TM, Seliya N, Wald R, Muharemagic E. Deep learning applications and challenges in big data analytics. Journal of Big Data. 2015;**2**(1):1

[20] Ren Y, Zhang L, Suganthan PN. Ensemble classification and regression—recent developments, applications and future directions. IEEE Computational Intelligence Magazine. 2016;**11**(1):41-53

[21] Ling Z, Zhang D, Qiu RC, Jin Z, Zhang Y, He X, et al. An accurate and real-time method of self-blast glass insulator location based on faster r-cnn and u-net with aerial images. CSEE Journal of Power and Energy Systems. 2019;**5**(4):474-482

[22] Kong W, Dong ZY, Jia Y, Hill DJ, Xu Y, Zhang Y. Short-term residential load forecasting based on lstm recurrent neural network. IEEE Transactions on Smart Grid. 2017;**10**(1):841-851

[23] Zhang Z, Zhang D, Qiu RC. Deep reinforcement learning for power system applications: An overview. CSEE Journal of Power and Energy Systems. 2019; **6**(1):213-225

[24] Adhikari S. Matrix variate distributions for probabilistic structural dynamics. AIAA Journal. 2007;**45**(7): 1748-1762

[25] Shcherbina M. Central limit theorem for linear eigenvalue statistics of the wigner and sample covariance random matrices. Journal of Mathematical Physics, Analysis, Geometry. 2011;**7**(2): 176-192.

[26] He X, Qiu RC, Ai Q, Chu L, Xu X, Ling Z. Designing for situation awareness of future power grids: An indicator system based on linear eigenvalue statistics of large random matrices. IEEE Access. 2016;**4**:3557-3568

[27] Lytova A, Pastur L, et al. Central limit theorem for linear eigenvalue statistics of random matrices with independent entries. The Annals of Probability. 2009;**37**(5):1778-1840

Section 3

## Analysis of Nonlinear Systems and Nonlinear Control

#### **Chapter 7**

## Structural Properties and Convergence Approach for Chance-Constrained Optimization of Boundary-Value Elliptic Partial Differential Equation Systems

*Kibru Teka, Abebe Geletu and Pu Li*

#### **Abstract**

This work studies the structural properties and a convergence approach for chance-constrained optimization of boundary-value elliptic partial differential equation systems (CCPDEs). The boundary conditions are random input functions defined on the boundary of the partial differential equation (PDE) system and lie in an infinite-dimensional reflexive and separable Banach space. The structural properties of the chance constraints studied in this paper are continuity, closedness, compactness, convexity, and smoothness of probabilistic uniform or pointwise state-constrained functions and their parametric approximations. These are open issues even in finite-dimensional Banach spaces. Thus, finite-dimensional and smooth parametric approximation representations are needed. We propose a convex approximation approach to nonconvex CCPDE problems. When the approximation parameter goes to zero from the right, the solutions of the relaxation and compression approximations converge asymptotically to the optimal solution of the original CCPDE. Due to the convexity of the problem, a global solution exists for the proposed approximations. Numerical results are provided to demonstrate the plausibility and applicability of the proposed approach.

**Keywords:** chance-constrained optimization, structural properties, state-constrained boundary-value PDE, probabilistic state constraints

#### **1. Introduction**

Partial differential equations (PDEs) are widely used to describe the spatial variations of physical, biological, and social systems as well as processes in mechanical engineering, thermodynamics, chemical engineering, medicine, industrial manufacturing, etc. [1, 2]. Moreover, practical PDE models involve uncertainties arising from imprecise model parameters and the system's operational environment. In real-life applications, external influences have a non-negligible impact and seriously affect system behaviors [2–6]. For example, ambient temperature, wind, and pressure are uncertain external influences that seriously impact system performance.

External input uncertainties will cause output uncertainties in system state variables [4, 7–11]. Such random inputs usually affect the boundary of the system and thus should be compensated by distributed boundary control. Hence, we consider in this study the randomness of the boundary condition of elliptic PDE systems and solve the chance-constrained optimization problems of such systems [12].

This study is an extension of the previous works in [13, 14] in which the randomness from the model parameters of a PDE system was considered but without considering boundary-valued control. In the present study, we consider the randomness from a nonhomogeneous boundary condition of a PDE system, which implies that the required state solution of the chance-constrained optimization of boundary-value elliptic partial differential equation (CCPDE) is a random field [15]. The control input is applied deterministically at the boundary function to compensate for the random disturbances. As a result, the study addresses the issue of chance-constrained optimization of a randomly boundary-valued PDE system.

Mathematically, in this work, we consider a random parameter *ξ* ∈ Ω coupled with a spatial variable *x* ∈ *D* at the boundary condition of the PDE system. We assume that the uncertainty is governed by a given probability measure *Pr* of the complete probability space (Ω, Σ, *Pr*), where Σ is a *σ*-algebra on the Borel set Ω. This study analyzes the properties of infinite-dimensional optimization problems in a reflexive and separable Bochner space with the elliptic PDE system as an equality constraint and its probabilistic state constraints as inequality constraints. In general, for CCPDE problems, significant difficulties arise from the chance constraints. Specifically, the main structural properties such as continuity, compactness, convexity, and differentiability of the probabilistic state-constraint functions are difficult to analyze. In addition, solving chance-constrained problems is generally not a trivial task.

Therefore, our investigation first focuses on the theoretical analysis of the main structural properties of the probabilistic pointwise state-constrained functions in the CCPDE. The presence of uncertainties in the nonhomogeneous and nonlinear Dirichlet boundary conditions impacts the required state solutions. It is necessary to investigate the optimality conditions for the existence and uniqueness of the solution to the CCPDE problem. Subsequently, since such CCPDE problems are generally difficult to solve directly and also potentially nonsmooth [6, 16], this work proposes smoothing approximation methods to address this difficulty [2, 13].

The numerical computation for solving the CCPDE problem needs a finite-dimensional representation of the infinite-dimensional space through a discretization coupled with an appropriate sampling of the random variables by the multilevel Monte Carlo method (ML-MCM). Since the resulting finite-dimensional chance-constrained optimization problem is generally nonsmooth, nonconvex, and difficult to solve directly, we use the recently proposed inner-outer approximation approach [6] for the solution of the CCPDE problem. Several structural properties of the inner-outer approximation-based CCPDE are also analyzed in this study. In the previous work [13], the convexity of the outer approximation was investigated. In this study, we address the convexity issue of the inner approximations to guarantee a globally optimal solution of the CCPDE.

*Structural Properties and Convergence Approach for Chance-Constrained Optimization… DOI: http://dx.doi.org/10.5772/intechopen.104620*

#### **2. Problem definition**

We consider chance-constrained optimization of a boundary-value elliptic PDE system (CCPDE),

$$\text{CCPDE}: \quad \min_{u} E[\mathcal{J}(y,u,\xi)] := E\left[ \left\| y - y_d \right\|^2_{H^1(D)} \right] + \frac{\rho}{2} \left\| u \right\|^2_{L^2(D)} \tag{1}$$

subject to :

$$-\nabla \cdot (\kappa(x)\nabla y) = f(x) \quad \text{in } D \times \Omega, \tag{2}$$

$$y\big|_{\partial D} = g(x,u,\xi), \quad \xi \in \Omega,\ x \in \partial D,\ u \in U \tag{3}$$

$$\Pr\left\{ y_{\min}(x) \le y(x,u,\xi) \le y_{\max}(x) \right\} \ge \alpha, \quad x \in D, \tag{4}$$

$$u_{\min} \le u(x) \le u_{\max}, \quad u \in U \tag{5}$$

where $D \subset \mathbb{R}^n$ is a given bounded convex open spatial domain with Lipschitz boundary $\partial D$ and $n \ge 2$, $\rho$ is a given regularization parameter, and $D_c$ is a given compact subset of the closure $\overline{D}$ of $D$. $\nabla\cdot$ and $\nabla$ represent the divergence and gradient operators w.r.t. $x$ in the weak sense of Sobolev spaces, respectively. The state function $y(x,u,\xi): D \times U \times \Omega \mapsto \mathbb{R}$ is a random continuous function in $H^1_g(D) = \{\Gamma \in W^{1,2}(D) \mid \Gamma(x) = g(x),\ x \in \partial D\}$, with $H^1_g(D)$ being a closed subspace of the Sobolev space $H^1(D) = W^{1,2}(D)$ for any $\xi \in \Omega$. $y_d \in H^1(D)$ is a given function describing the desired profile of the state and is assumed twice differentiable w.r.t. $x \in D$.<sup>1</sup> With $\langle h,g\rangle_{H^1_g(D)}$ and $\langle h,g\rangle_{H^1(D)}$, we denote the related standard scalar products (see [1–5, 7, 8, 17, 18] for more details on Sobolev spaces).

The triple $(\Omega, \Sigma, Pr)$ represents a complete probability space, with the set of all possible outcomes $\Omega \subset \mathbb{R}^p$, $\sigma$-algebra $\Sigma \subset 2^{\Omega}$, and probability measure $Pr(\cdot): \Sigma \mapsto [0,1]$ with $Pr(\Omega) = 1$. The parameter $\xi$ represents uncorrelated input random vector variables distributed homogeneously and acting on the system through the boundary $\partial D$. Such disturbances are position-dependent random parameters distributed inside or outside of the boundary of the spatial domain $\partial D$. In general, such infinite-dimensional random parameters can be treated by a dimension-reduction method using the Karhunen-Loeve (KL) expansion (see [19]) or a finite-dimensional representation using a discretization method [3, 4].

The input data $g(x,u,\xi)$ might vary randomly from one point of the boundary domain $\partial D$ to another, and thus their uncertainty should be described in terms of random fields, which can be dealt with via a sampled covariance from the multilevel Monte Carlo method (MLMCM) [2, 9, 20]. The expected value $E[\cdot]$ is taken with respect to the probability space, and the probability measure possesses the Radon-Nikodym derivative $\phi$ w.r.t. the Lebesgue measure $\mu$, i.e., $dPr(\xi) = \phi(\xi)\,d\mu(\xi)$. Moreover, we suppress the measure $\mu$ and write simply $dPr(\xi) = \phi(\xi)\,d\xi$. The random variable $\xi$ is assumed to have a continuous probability density function $\phi(\xi)$ with $\Omega$ being the sample space of its support set. For each $u \in U$, due to the random variable $\xi$, the solution $y$ of the boundary-value problem (2)–(3) is a stochastic linear boundary-value state function indicated by $y(u,\xi;x)$.
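A truncated KL expansion of such a random boundary field can be sketched numerically. The following is illustrative only: the 1-D parameterization of the boundary, the exponential covariance kernel, its correlation length, and the truncation order are all our own assumptions, not specifications from this chapter.

```python
# Sketch of a truncated Karhunen-Loeve (KL) expansion for a random
# boundary field, discretized on a 1-D arclength grid. Kernel choice,
# correlation length, and truncation order p are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
m = 200                                   # boundary grid points
s = np.linspace(0.0, 1.0, m)              # arclength parameter along dD
h = s[1] - s[0]

# Exponential covariance C(s1, s2) = exp(-|s1 - s2| / ell)
ell = 0.2
C = np.exp(-np.abs(s[:, None] - s[None, :]) / ell)

# Discrete KL: eigendecomposition of the (quadrature-weighted) covariance
lam, V = np.linalg.eigh(C * h)
order = np.argsort(lam)[::-1]             # sort modes by decreasing energy
lam, V = lam[order], V[:, order]

p = 10                                    # finite-dimensional truncation
xi = rng.standard_normal(p)               # i.i.d. standard normal xi_i
g = V[:, :p] @ (np.sqrt(lam[:p]) * xi)    # one realization of the field

# Fraction of total variance captured by the first p modes
captured = lam[:p].sum() / lam.sum()
print(f"variance captured by {p} modes: {captured:.3f}")
```

The rapid eigenvalue decay is what makes the finite-dimensional representation by a few $\xi_i$ accurate; the captured-variance ratio quantifies the truncation error.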

<sup>1</sup> The space $H^1(D)$ is a Hilbert space with norm $\|\cdot\|_{H^1(D)}$. The Sobolev space $H^1(D)$ is the completion of $C^1(\overline{D})$ w.r.t. $\|\cdot\|_{H^1(D)}$.

After the solution of the PDE system, Eq. (4), expressed by $P(\cdot,x) = Pr\left\{ y(x,u,\xi) \le y_{\max}(x) \right\} \ge \alpha$, $\forall x \in D$, defines a single pointwise probability state constraint, for each $x \in D$, to be satisfied with a given reliability level $\alpha$, where $\alpha \in (0,1]$. It should be noted that chance constraints for the PDE system considered in this study can be expressed in the following two forms:

1.Single chance constraints

$$\Pr\left\{ y_{\min}(x) \le y(x,u,\xi) \le y_{\max}(x) \right\} \ge \alpha, \quad \forall x \in D, \tag{6}$$

2. Joint chance constraint

$$\Pr\left\{ y_{\min} \le y(x_i,u,\xi) \le y_{\max},\ \forall x_i \in D \right\} \ge \alpha, \tag{7}$$

The first form describes chance constraints imposed on individual points in *D* (i.e., pointwise chance constraints), while the second requires the satisfaction of the constraints at all points jointly with one probability level. In this study, only the form of single pointwise constraints is considered. A joint CCPDE is mathematically complex and needs further study.
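The gap between the pointwise form (6) and the joint form (7) can be seen in a small Monte Carlo experiment. The toy random state $y(x,\xi)$ and the bounds below are illustrative assumptions, not a PDE solution; only the logic of the two probability estimates reflects the definitions above.

```python
# Sketch contrasting pointwise (Eq. 6) and joint (Eq. 7) chance
# constraints by Monte Carlo sampling of a toy random state y(x, xi).
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)             # spatial grid over D
M = 5000                                  # Monte Carlo samples of xi
xi = rng.standard_normal(M)

# Toy state: a deterministic profile plus a random boundary-driven term
y = np.sin(np.pi * x)[None, :] + 0.15 * xi[:, None] * x[None, :]

y_min, y_max = -0.2, 1.25
inside = (y >= y_min) & (y <= y_max)      # M x len(x) indicator array

pointwise = inside.mean(axis=0)           # Pr{y_min <= y(x) <= y_max} per x
joint = inside.all(axis=1).mean()         # Pr{bounds hold at all x jointly}

alpha = 0.95
print("pointwise satisfied everywhere:", bool((pointwise >= alpha).all()))
print("joint probability:", joint)
# The joint event is contained in every pointwise event, so the joint
# probability can never exceed the smallest pointwise probability.
assert joint <= pointwise.min()
```

This containment is exactly why a joint CCPDE is more demanding (and harder) than the pointwise formulation adopted in this study.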

The right-hand side of Eq. (2) is assumed to be a function in $L^2(D)$. Due to (3), $y$ depends on $u$ and $\xi$, and therefore, we need to analyze the existence and uniqueness of a weak solution. In Section 3, we will prove these by verifying that the bilinear form is continuous and coercive in the Bochner space with the associated expectation of the norm $E\left(\|\cdot\|_{H^2(D)}\right)$, based on the Lax-Milgram theorem.

Specifically in this study, (2) and (3) lead to a boundary-value elliptic PDE system with the nonhomogeneous Dirichlet boundary condition $y|_{\partial D} = g(x,u,\xi)$, $\xi \in \Omega$ and $u \in U$. $u$ is a control variable bounded by $u_{\min}$ and $u_{\max}$ by (5). Since the output $y$ is constrained, we have to find an optimal control profile w.r.t. $x \in D$ in the admissible set $U_{adm}$ by variational analysis.

Now, we define a separable and reflexive Bochner space $\mathcal{W} := L^2(\Omega; W(D))$, mapping from the Borel space to the Sobolev space $W(D)$:

$$\mathcal{W} = \left\{ v : \Omega \to W(D) : v \text{ is measurable},\ \|v\|_{\mathcal{W}}^2 = \int_{\Omega} \|v(\cdot,\xi)\|^2_{W(D)}\,\phi(\xi)\,d\xi < +\infty \right\} \tag{8}$$

From the PDE system defined in (2)–(3), the related function spaces of individual inputs are defined as follows:

$$\mathcal{H} := L^2\left(\Omega; H^1_g(D)\right), \quad \mathcal{L} := L^2(D), \quad \mathcal{G} := L^2\left(\Omega, H^2(D)\right), \quad \mathcal{B} := L^2\left(\Omega, H^{1/2}(\partial D)\right), \quad \mathcal{K} := L^\infty(D). \tag{9}$$

Since the spaces defined above are separable, the weak measurability for the random PDE system is equivalent to the strong measurability (see [[21], Section 3.5 Cor. 2]).

In addition, we define the scalar products in the spaces $\mathcal{L}$ and $\mathcal{H}$, respectively,

$$\langle a,b\rangle_{\mathcal{L}} = \int_{\Omega}\int_{D} a(x,\xi)\,b(x,\xi)\,dx\,\phi(\xi)\,d\xi, \quad \|a\|^2_{\mathcal{L}} = \langle a,a\rangle_{\mathcal{L}}, \tag{10}$$

*Structural Properties and Convergence Approach for Chance-Constrained Optimization… DOI: http://dx.doi.org/10.5772/intechopen.104620*

$$\langle a,b\rangle_{\mathcal{H}} = \int_{\Omega}\int_{D} a(x,\xi)\,b(x,\xi)\,dx\,\phi(\xi)\,d\xi + \int_{\Omega}\int_{D} \left(\nabla a(x,\xi)\right)^{\mathsf{T}} \nabla b(x,\xi)\,dx\,\phi(\xi)\,d\xi, \tag{11}$$

$$\|a\|^2_{\mathcal{H}} = \langle a,a\rangle_{\mathcal{H}} = \langle a,a\rangle_{\mathcal{L}} + \langle \nabla a, \nabla a\rangle_{\mathcal{L}}, \tag{12}$$

for $a,b \in \mathcal{L}$ and $a,b \in \mathcal{H}$, respectively. In addition, in (9),

$$\begin{aligned} \mathcal{B} &= \left\{ v : \Omega \to H^{1/2}(\partial D) : v \text{ is measurable},\ \|v\|_{\mathcal{B}} < \infty \right\},\\ \|v\|_{\mathcal{B}} &= \left( \int_{\Omega} \|v\|^2_{H^{1/2}(\partial D)}\,\phi(\xi)\,d\xi \right)^{1/2},\\ \|v\|^2_{\mathcal{B}} &= \|v\|^2_{L_2(\Omega\times\partial D)} + \int_{\Omega}\int_{\partial D}\int_{\partial D} \frac{|v(x_1) - v(x_2)|^2}{|x_1 - x_2|^{(n-1)+2s}}\,d\sigma(x_1)\,d\sigma(x_2)\,\phi(\xi)\,d\xi < \infty, \quad s = \tfrac{1}{2}, \end{aligned} \tag{13}$$

is the norm of the trace function in the boundary space $\mathcal{H}$ of the boundary value, with a compact embedding from $\mathcal{B}$, where $d\sigma$ is the surface measure on the boundary of $D$ [22]. Finally, the space of the model parametric coefficient is $\mathcal{K} = \left\{ v : D \to \mathbb{R} : v \text{ is measurable},\ \|v\|_{L^\infty(D)} = \operatorname{ess\,sup}_{x\in D} |v(x)| < \infty \right\}$. Thus, $v \in \mathcal{H}$ implies that $v(\cdot,\xi) \in H^1_g(D)$ and $E\left[\|v(\cdot,\xi)\|^2_{H^1_g(D)}\right] < +\infty$.

The probability density $\phi$ is assumed to be Lebesgue measurable and almost everywhere positive on $\Omega$. Hence, the spaces $\mathcal{L}$, $\mathcal{G}$, and $\mathcal{H}$ are reflexive Bochner spaces, e.g., Hilbert spaces using the standard equivalence classes. Note also that $\mathcal{G}$, $\mathcal{H}$, $\mathcal{K}$, and $\mathcal{B}$ are dense subspaces of $\mathcal{L}$ in the topology of $\mathcal{L}$.

The variable $u \in L^2(D)$ is a decision variable that belongs to the set of admissible decisions

$$U_{adm} := \left\{ u(x) \in L^2(D) \mid u_a \le u(x) \le u_b,\ \forall x \in D \right\}, \quad \text{for } u_a \ge u_{\min} \text{ and } u_b \le u_{\max}, \tag{14}$$

where $u_a, u_b \in L^2(D)$ are given functions with $u_a \le u_b$. Observe that equalities and inequalities of functions in the Lebesgue space $L^2(D)$ and corresponding Sobolev spaces are valid only almost everywhere on $D$. The term almost everywhere (a.e.) will be suppressed in this study, assuming almost surely (a.s.) without any confusion arising. Note that $U_{adm}$ is a nonempty, convex, closed, and bounded subset of $L^2(D)$.
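Because $U_{adm}$ is a closed convex box, the $L^2$ projection of any candidate control onto it reduces to pointwise clipping between the bound functions. A minimal sketch, with constant bounds and a hypothetical candidate control of our own choosing:

```python
# Pointwise projection of a discretized candidate control onto the box
# set U_adm of Eq. (14). Constant bounds and the candidate are assumptions.
import numpy as np

x = np.linspace(0.0, 1.0, 101)            # grid over D
u_a, u_b = -1.0, 1.0                      # assumed bounds u_a <= u <= u_b
u = 2.0 * np.sin(3.0 * x)                 # candidate control, partly infeasible

u_adm = np.clip(u, u_a, u_b)              # pointwise L^2 projection onto U_adm

print("feasible:", bool(((u_adm >= u_a) & (u_adm <= u_b)).all()))
```

For non-constant bound functions $u_a(x), u_b(x)$, the same `np.clip` call works with arrays in place of the scalars.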

In the elliptic PDE system (2)–(3), the random parameters in the boundary condition in $\mathcal{B}$<sup>2</sup> represent the effect of external and internal disturbances such as ambient temperature, pressure, and wind, as well as imprecise model parameters, while those in the *forcing term f* are nonrandom input functions. For simplicity of presentation, the coefficient and forcing term ($\kappa$, $f$), respectively, are

<sup>2</sup> The boundary of the elliptic operator needs to be $C^1$ smooth to ensure $x \mapsto y(u,x,\xi) \in H^{1/2}(\partial D)$ (see [7]).

considered as nonrandom input functions in this study. Moreover, the forcing term is continuous w.r.t. $x \in D$ a.s. and $f(x) \in \mathcal{L}$.<sup>3</sup>

As a result, the pointwise probabilistic state constraint $P(\cdot,x) = Pr\left\{ y_{\min} \le y \le y_{\max} \right\} \ge \alpha$, over the whole spatial region $x \in D$ a.s., is conservative (worst-case) if its reliability level is $\alpha = 1$, with no chance of constraint violation. The internal inequality $y_{\min} \le y \le y_{\max}$, $\forall x \in D$, cannot be computed deterministically. Hence, the expression in (4) defines a chance (probabilistic) constraint by stipulating the satisfaction of the inequality constraint on $y \in \mathcal{H}$ with a given probability value of a reliability level $\alpha \in [0.95, 1)$. Moreover, (4) represents a pointwise chance constraint, i.e., the constraint on the state variable is required to hold with the same reliability level $\alpha$ at each individual point $x \in D$.

The required random state solution $y(x,u,\xi)$ is a function in the infinite-dimensional space $\mathcal{H}$, so that the infinite number of probabilistic constraints makes sense whenever $y$ is, in its equivalence class, a continuous element w.r.t. $x$; this is ensured by Sobolev embedding theorems in $L^2\left(\Omega, H^2(D) \cap H^1_g(D)\right)$, good properties of the inhomogeneity term $f$, and the convexity of $D$. From the embedding theory of Sobolev spaces, one can use a more general setting in $\mathcal{H}_p = W^{1,p}$ with $p > d$ and sufficiently regular $D$; thus, the convexity of $D$ can be relaxed. We give here only one setting in which it works. It is essential for our approximation approach (see, e.g., Theorem 3.5 on the continuity of $y$) that the space for $y$ can be continuously embedded in the space of continuous functions.

For instance, at some critical spatial locations $x \in D_c \subset D$, the reliability level $\alpha \in [0.95, 1)$ can be chosen. As a result, this study focuses on the solution of the CCPDE with pointwise constraints but considers a reliability level independent of $x$ for simplicity of representation; it is not trivial to directly extend our inner-outer approximation concept in [6] to joint and uniform chance constraints over the infinite number of $x \in D$. Therefore, solving the CCPDE problem is not a trivial task, since there is no simple equivalent deterministic representation. Also, there is no closed-form analytic representation for the probabilistic state constraint expressed in (4). The structural properties of (4) have not yet been analyzed properly; they are generally unknown, nondifferentiable, and nonconvex.

#### **3. Existence of the solution of the PDE system**

In this paper, we need to solve the weak variational form of the random elliptic PDE system with the nonhomogeneous Dirichlet boundary values defined in Eqs. (2) and (3). The control is applied on the boundary of the polygonal spatial domain $D$, with $u \in L^2(D)$,

$$-\nabla \cdot \left(\kappa(x)\nabla y(x,\xi)\right) = f(x) \quad \text{on } D \times \Omega \text{ a.s.}, \tag{15}$$

$$y(x,u,\xi)\big|_{x\in\partial D} = g(x,u,\xi), \quad \xi \in \Omega \text{ a.s.} \tag{16}$$

For every test function $v \in \mathcal{H}$, we can apply integration by parts,

<sup>3</sup> The density function $\phi(\xi) = \prod_{i=1}^{\infty}\phi_i(\xi_i)$ is an infinite-dimensional probability density function, where $\xi_i \in \Omega$ is distributed homogeneously along the boundary of the spatial domain $\partial D$. It has a finite-dimensional representation from the KL expansion [19].

*Structural Properties and Convergence Approach for Chance-Constrained Optimization… DOI: http://dx.doi.org/10.5772/intechopen.104620*

$$E\left[\int_{D} \kappa(x,\cdot)\,\nabla y(x,\cdot)\cdot\nabla v(x,\cdot)\,dx\right] = E\left[\int_{D} f(x)\,v(x,\cdot)\,dx\right] + E\left[\int_{\partial D} g(u,x,\cdot)\,v(x,\cdot)\,ds\right], \quad \forall v \in \mathcal{H}, \tag{17}$$

which is the weak form of the PDE system (15)–(16) with $f(x)$ in the Hilbert space $\mathcal{L}$ and solution $y(\cdot) \in \mathcal{H}$. The relevant functions of the inputs are in separable and reflexive Bochner spaces. Since $f(x) \in \mathcal{L}$, $\kappa \in \mathcal{K}$, $g \in \mathcal{B}$, and the spatial domain $D$ is convex, the well-known *shift statements* (see, for instance, [[23], Th. 3.30]; i.e., higher regularity of $f$ is shifted to higher regularity of $y$) imply that $x \mapsto y(u,x,\xi) \in C\left(\overline{D}\right)$. Since the continuity of $x \mapsto y(u,x,\xi)$ is required only on the subset $D_c$, the convexity of $D$ is not necessary whenever the critical spatial domain $D_c \subset \operatorname{int} D$, where $\operatorname{int} D$ is the interior of $D$. However, to guarantee the well-posedness of the weak form, our investigation is based on the following standard assumptions.
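A drastically simplified 1-D analogue of the random boundary-value problem (15)–(16) can be sampled numerically. Everything below is our own illustrative assumption: the finite-difference discretization (the chapter works with FEM-type discretizations and ML-MCM in higher dimensions), the constant coefficient $\kappa = 1$, the forcing $f \equiv 1$, and small Gaussian perturbations of a deterministic boundary control.

```python
# Sketch: Monte Carlo sampling of a 1-D random Dirichlet problem
# -(kappa y')' = f on (0,1), y(0) = g0(xi), y(1) = g1(xi),
# discretized by second-order central differences. Illustrative only.
import numpy as np

rng = np.random.default_rng(3)
n = 101                                   # grid points including boundary
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
kappa = 1.0
f = np.ones(n)                            # forcing term f(x) = 1

def solve(g_left, g_right):
    """Solve the two-point boundary-value problem for one realization
    of the random boundary data."""
    A = np.zeros((n - 2, n - 2))
    np.fill_diagonal(A, 2.0 * kappa / h**2)
    i = np.arange(n - 3)
    A[i, i + 1] = A[i + 1, i] = -kappa / h**2
    b = f[1:-1].copy()
    b[0] += kappa * g_left / h**2         # fold boundary values into RHS
    b[-1] += kappa * g_right / h**2
    y = np.empty(n)
    y[0], y[-1] = g_left, g_right
    y[1:-1] = np.linalg.solve(A, b)
    return y

# Random boundary data: deterministic control u plus zero-mean noise
u_left, u_right = 0.0, 0.0
M = 200
ys = np.array([solve(u_left + 0.1 * rng.standard_normal(),
                     u_right + 0.1 * rng.standard_normal())
               for _ in range(M)])
y_mean = ys.mean(axis=0)

# With kappa = 1, f = 1, and zero-mean boundary noise, E[y](x) = x(1-x)/2
print("max deviation from x(1-x)/2:", np.abs(y_mean - x * (1 - x) / 2).max())
```

The sample mean approaches the deterministic solution because the boundary noise enters linearly with zero mean; the residual deviation is pure Monte Carlo error, shrinking like $M^{-1/2}$.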

*Assumption* 3.1. **(A1.1)** The domain $D$ is convex, the set $D_c \subset D$ is compact, and $y_c \in C(D) \cap H^1_g(D)$, $y_d \in H^1_g(D) \cap H^2(D)$.

**(A1.2)** The coefficient *κ*(·) ∈ K is positive and bounded such that

$$0 < \kappa\_{\min} \le \kappa(\mathbf{x}) \le \kappa\_{\max}, \mathbf{x} \in D \quad \text{a.s.}, \tag{18}$$

where *κ*<sub>min</sub>, *κ*<sub>max</sub> are finite constants.

**(A2.1)** For each *u* ∈ *L*<sup>2</sup>(*D*), the random forcing term *u* ↦ *f*(*u*, ·) : *L*<sup>2</sup>(*D*) → L is continuous.

**(A2.2)** For each *u* ∈ *L*<sup>2</sup>(*D*), the random forcing term *u* ↦ *f*(*u*, ·) : *L*<sup>2</sup>(*D*) → L is continuously Fréchet differentiable.

**(A3)** The forcing term has a Taylor expansion form *f*(*x*, *u*) = *u*(*x*) + ∑<sup>∞</sup><sub>*n*=0</sub> *f*<sup>(*n*)</sup>(*x*<sub>0</sub>)/(*n*!) · (*x* − *x*<sub>0</sub>)<sup>*n*</sup>, where *u* ∈ *L*<sup>2</sup>(*D*) and *f*<sup>(0)</sup> ∈ L.

**(A4)** For each *u* ∈ *L*<sup>2</sup>(*∂D*), the random boundary term *u* ↦ *g*(*u*, ·, ·) : *L*<sup>2</sup>(*∂D*) → L is continuous and continuously Fréchet differentiable, and *g* is linear w.r.t. *u*.

**(A5)** The random variables *ξ*<sup>Τ</sup> = (*ξ*<sub>1</sub>, …, *ξ<sub>p</sub>*) are independently, identically distributed with a continuous joint multivariate probability density function *ϕ*(*ξ*) = ∏<sup>*p*</sup><sub>*i*=1</sub> *ϕ<sub>i</sub>*(*ξ<sub>i</sub>*) and the set Ω = ∏<sup>*p*</sup><sub>*i*=1</sub> Ω<sub>*i*</sub>, where Ω<sub>*i*</sub> ⊂ ℝ, *i* = 1, …, *p*, such that

$$f(\mathbf{x}, u) = u(\mathbf{x}) + \sum\_{n=0}^{\infty} f^{(n)}(\mathbf{x}\_0)/(n!)\, (\mathbf{x} - \mathbf{x}\_0)^n \tag{19}$$

$$\mathbf{g} = u(\mathbf{x}) + \mathbf{g}\_0(\xi) \frac{\partial y\_0(\mathbf{x})}{\partial x\_0} + \sum\_{k=1}^{n} \mathbf{g}\_k(\xi) \frac{\partial y(\mathbf{x})}{\partial x\_k} = u(\mathbf{x}) + \sum\_{k=0}^{n} \mathbf{g}\_k(\xi) \frac{\partial y\_k}{\partial x\_k} \tag{20}$$

with *u*, *g<sub>k</sub>* ∈ *L*<sup>2</sup>(*D*), *k* = 0, 1, 2, …, *n*.

In Assumption A5, *f* and *g* are given as series; representation (20) is called the *finite-dimensional noise* representation (see [4, 5, 20]) for the boundary condition *g*. In fact, for numerical computations, it is essential to reduce the dimension of the uncertainties in *g* via the KL dimension-reduction method.
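To make the finite-dimensional noise idea concrete, the following sketch (all parameters hypothetical, not taken from this chapter) builds a truncated Karhunen–Loève expansion of a one-dimensional random field with exponential covariance, using the eigendecomposition of the discretized covariance operator:

```python
import numpy as np

def kl_expansion(n_points=200, corr_len=0.2, n_terms=10, n_samples=1000, seed=0):
    """Truncated Karhunen-Loeve expansion of a zero-mean random field on [0, 1]
    with exponential covariance C(s, t) = exp(-|s - t| / corr_len)."""
    x = np.linspace(0.0, 1.0, n_points)
    h = x[1] - x[0]
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    # Nystrom discretization: eigenpairs of the covariance operator.
    evals, evecs = np.linalg.eigh(C * h)
    idx = np.argsort(evals)[::-1][:n_terms]          # keep the n_terms largest modes
    evals, evecs = evals[idx], evecs[:, idx] / np.sqrt(h)
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal((n_samples, n_terms))   # i.i.d. xi_i, as in (A5)
    # field(x, xi) = sum_k sqrt(lambda_k) * phi_k(x) * xi_k  (finite-dimensional noise)
    fields = xi @ (np.sqrt(evals)[:, None] * evecs.T)
    return x, evals, fields

x, evals, fields = kl_expansion()
print("leading eigenvalues:", np.round(evals[:4], 3))
print("empirical variance at x = 0.5:", round(float(fields[:, 100].var()), 3))
```

The eigenvalues decay quickly for this correlation length, which is why a short truncation suffices; in Eq. (20) the coefficients *g<sub>k</sub>*(*ξ*) play the role of the weighted modes here.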

#### **3.1 Solution of the random PDE with nonhomogeneous boundary control**

From the equations expressed in (15) and (16), we have to verify the hypotheses of the Lax–Milgram theorem, namely continuity (bilinearity) and coercivity, for every test function *v* ∈ H,

$$- \left( \nabla \cdot (\kappa\_0 \nabla y) + y\_{dD} \right) v(\mathbf{x}, \xi) = f\, v(\mathbf{x}, \xi) + g\, v(\mathbf{x}, \xi).\tag{21}$$

This implies for *v*∈ H,

$$\begin{split} & -\int\_{\Omega} \int\_{D} \left( \nabla \cdot \left( \kappa\_{0} \nabla y + y\_{dD} \right) \right) v(\mathbf{x}, \xi)\, d\mathbf{x}\, \phi(\xi) d\xi \\ &= \int\_{\Omega} \int\_{D} f v(\mathbf{x}, \xi)\, d\mathbf{x}\, \phi(\xi) d\xi + \int\_{\Omega} \int\_{\partial D} g v(\mathbf{x}, \xi)\, d\sigma\, \phi(\xi) d\xi. \end{split} \tag{22}$$

Equivalently, in terms of expectations,

$$-E\left[\int\_{D} \nabla \cdot (\kappa\_0 \nabla y + y\_{dD})\, v(\mathbf{x}, \xi)\, d\mathbf{x}\right] = E\left[\int\_{D} f v(\mathbf{x}, \xi)\, d\mathbf{x}\right] + E\left[\int\_{\partial D} g v(\mathbf{x}, \xi)\, d\sigma\right], \forall v \in \mathcal{H}.\tag{23}$$

Sobolev spaces play several roles in the study of stochastic PDE systems [7]. The space *L*<sup>2</sup>(*D*) = *H*<sup>0</sup>(*D*) consists of equivalence classes of real-valued, Lebesgue-measurable, square-integrable functions defined on the spatial domain *D*. Let *H*<sup>1</sup> = *H*<sup>1</sup>(*D*) denote the vector subspace of *H* defined by *H*<sup>1</sup> = {*v* ∈ *L*<sup>2</sup>(*D*) : ∇<sub>*x<sub>i</sub>*</sub>*v* ∈ *L*<sup>2</sup>(*D*), *i* = 1, 2, 3, …, *n*}, equipped with the norm

$$\left\|v\right\|\_{H^{1}}^{2} = \left\|v\right\|\_{L^{2}(D)}^{2} + \left\|\nabla v\right\|\_{L^{2}(D)}^{2}.\tag{24}$$

The space *H*<sup>1</sup>(*D*) is a Hilbert space, known as the Sobolev space of order 1. Let 𝒟(*D*) denote the space of *C*<sup>∞</sup>(*D*) functions with compact support; its closure is taken in the norm topology of *H*<sup>1</sup>(*D*). The required random solution lies in *H*<sup>1</sup><sub>*g*</sub>(*D*), a subspace of *H*<sup>1</sup>(*D*). The dual of *H*<sup>1</sup><sub>*g*</sub> is *H*<sup>−1</sup>, the space of continuous linear functionals on *H*<sup>1</sup><sub>*g*</sub>; both *H*<sup>1</sup><sub>*g*</sub> and *H*<sup>−1</sup> are spaces of distributions in the sense of Schwartz and have nonunique representations. The functional at the boundary has a polynomial approximation in Eq. (20), with *g<sub>k</sub>*(*ξ*) and ∂*y*(*x*)/∂*x<sub>k</sub>* for *k* = 1, 2, 3, …, *n*, where the derivative of *y<sub>k</sub>*(*x*) is understood in the sense of distributions. Generally, the input random boundary conditions satisfy

$$\mathbf{g} \in L^2\left(\Omega; H^{1/2}(\partial D)\right) \equiv \mathcal{B},\tag{25}$$

defined by the above expansion of orthonormal functions; the adjoint lies in *L*<sup>2</sup>(Ω; *H*<sup>−1/2</sup>(*∂D*)) for the defined *ξ* ∈ Ω, since there is a nonzero and nonlinear random function *g* ≠ 0 at the Dirichlet boundary condition. The boundary term *g* is approximated by Fourier transform, and one can define the Sobolev space *H<sup>s</sup>* for all real numbers *s*: for *s* < 0 these are genuine distributions, such as characteristic functions; for *s* = 0 we have *H*<sup>0</sup> = *L*<sup>2</sup>(*D*); for *s* > 0 these are the regular function spaces

contained in *H*<sup>1</sup>; for example, *y* belongs to *H*<sup>1</sup><sub>*g*</sub>(*D*), see [21, 24]. There is a trace operator Γ : *L*<sup>2</sup>(Ω; *H*<sup>1</sup><sub>*g*</sub>(*D*)) ↦ *L*<sup>2</sup>(Ω; *H*<sup>1/2</sup>(*∂D*)), Γ(*y*) = *y* = *g* for *x* ∈ *∂D*. The space *H<sup>m</sup>* ⊂ *H*<sup>*m*−*k*</sup> for *k* > 0 and the injection is compact. The trace function loses its interior smoothness and may be a distribution on the boundary: ∂*y*/∂*n* belongs to *H*<sup>−1/2</sup>(*∂D*), where *n* is the normal to the test function *v*, i.e., ∂*y*/∂*v* ∈ *H*<sup>−1/2</sup>(*∂D*). We define the inner product (·, ·), or ≺·, ·≻, by the double integral over the sets *D* and Ω; there is a duality pairing between *H*<sup>1</sup> and *H*<sup>1∗</sup> at the trace operator. In each case of a boundary-value condition with continuous-bilinear form *a*(*y*, *v*) and coercive (elliptic) form, we can formulate the following equation:

$$a(y, v) - (f, v) - (g, v) = \int\_{\Omega} \int\_{D} v(Ly - f)\, d\mathbf{x}\, \phi(\xi) d\xi + \int\_{\Omega}\int\_{\partial D} \sum\_{i,k} \left[a\_{ik}\partial\_{k} y - g\right] \eta\, ds\, \phi(\xi) d\xi \tag{26}$$

which makes the second integral on the right-hand side vanish; *L* and *a<sub>ik</sub>* denote the linear differential operator and its coefficients, respectively [7]. The weak PDE system gives

$$-E\left[\int\_{D} \nabla \cdot (\kappa\_0 \nabla y)\, v(\mathbf{x}, \xi)\, d\mathbf{x}\right] = E\left[\int\_{D} f v(\mathbf{x}, \xi)\, d\mathbf{x}\right] + E\left[\int\_{\partial D} (y - g)\, v(\mathbf{x}, \xi)\, d\sigma\right], \forall v \in \mathcal{H}.\tag{27}$$

**Definition 3.2.** The system of elliptic PDEs in Eqs. (2) and (3) has a weak solution *y* ∈ *L*<sup>2</sup>(Ω; *H*<sup>1</sup><sub>*g*</sub>(*D*)) if there exists a random variable *y*, measurable w.r.t. *ξ* defined on Ω, such that *E*[*a*(*y*, *ϑ*)] = *E*[*l*(*y*, *ϑ*)] = *E*[(*f*, *ϑ*)] + *E*[(*g*, *ϑ*)] for all *ϑ* ∈ H = *L*<sup>2</sup>(Ω; *H*<sup>1</sup><sub>*g*</sub>(*D*)). The operator *a* satisfies the continuous-bilinear and coercive form.

**Definition 3.3.** The system of elliptic PDEs is said to be stable in *L*<sup>2</sup>(Ω; *H*<sup>1</sup><sub>*g*</sub>(*D*)) if it has a weak solution *y* ∈ *L*<sup>2</sup>(Ω; *H*<sup>1</sup><sub>*g*</sub>(*D*)), the forcing term *f* ∈ L is as expressed in Eq. (9), and the boundary input *g* ∈ *L*<sup>2</sup>(Ω, *H*<sup>1/2</sup>(*∂D*)). The solution *y* depends continuously on the random parameters, i.e., *E*[*y*] = *E*[*y*(*f*, *g*)]. The subspace of all functions from *L*<sup>2</sup>(Ω, *H<sup>l</sup>*(*D*)) whose generalized derivatives up to order *l* exist and belong to *L*<sup>2</sup>(Ω, *H<sup>l</sup>*(*D*)), the space *H<sup>l</sup>*(*D*) = *W<sup>l</sup>*<sub>2</sub>(*D*), is called the Sobolev space of order *l*.

**Theorem 3.4.** *Lax–Milgram Theorem: Let κ*(·) ∈ *L*<sup>∞</sup>(*D*) *be a functional, and suppose there exists a constant κ*<sub>min</sub> > 0 *with κ*<sub>max</sub> > *κ*(*x*) > *κ*<sub>min</sub> *almost surely; let the test function v* ∈ H = *L*<sup>2</sup>(Ω; *H*<sup>1</sup><sub>*g*</sub>(*D*)), *g* ∈ *L*<sup>2</sup>(Ω; *H*<sup>1/2</sup>(*∂D*)), *and f*(*x*) ∈ (*H*<sup>1</sup>(*D*))<sup>∗</sup>*. The operators a and l defined in Eq. (25) are continuous-bilinear and coercive. Thus, the variational problem defined in Eqs. (1) and (2) has a unique solution y* ∈ *L*<sup>2</sup>(Ω, *H*<sup>1</sup><sub>*g*</sub>(*D*)) *for all ξ* ∈ Ω*.*

*Proof.* The elliptic PDEs in Eqs. (15) and (16) have a weak solution if there exists a test function *v* ∈ H; the boundary condition *g* moves to the left-hand side:

$$-\nabla \cdot (\kappa(\cdot)\nabla y)\, v = f(\mathbf{x})\, v \ \mathrm{in}\ D;$$

$$(\boldsymbol{y} - \boldsymbol{g}(.,\boldsymbol{\xi},\boldsymbol{u}))v = \mathbf{0} \,\mathrm{on}\,\boldsymbol{U} \times \partial\boldsymbol{D} \times \boldsymbol{\Omega};$$

$$\Rightarrow \int\_{\Omega} \int\_{D} -\nabla \cdot (\kappa(\cdot)\nabla y)\, v\, d\mathbf{x}\, \Phi(\xi) d\xi \tag{28}$$

$$= \int\_{\Omega} \int\_{D} f v\, d\mathbf{x}\, \Phi(\xi) d\xi + \int\_{\Omega} \int\_{\partial D} (\boldsymbol{g} - \boldsymbol{y})(\cdot, \xi, u)\, v\, d\sigma\, \Phi(\xi) d\xi.$$

where *dσ* = *n* · *ds* is the surface measure at the boundary of *D*; from integration by parts,

$$\begin{split} & \Rightarrow \int\_{\Omega} \int\_{D} \kappa(\cdot) \nabla y \cdot \nabla v\, d\mathbf{x}\, \Phi(\xi) d\xi - \int\_{\Omega} \int\_{\partial D} \frac{\partial y}{\partial n}(\xi, \mathbf{x})\, v(\xi, \mathbf{x})\, d\sigma\, \Phi(\xi) d\xi \\ & \qquad = \int\_{\Omega} \int\_{D} f v\, d\mathbf{x}\, \Phi(\xi) d\xi + \int\_{\Omega} \int\_{\partial D} (g - y)\, v\, d\sigma\, \Phi(\xi) d\xi \end{split} \tag{29}$$

Since (∂*y*/∂*n* · *v*)(*ξ*, *x*) = 0 for *x* ∈ *∂D*, because the unit normal vector is perpendicular to the boundary *∂D*, let

$$a(y, v) = \int\_{\Omega} \int\_{D} \kappa(\cdot)\, \nabla y \cdot \nabla v\, d\mathbf{x}\, \Phi(\xi)\, d\xi;$$

$$l(y, v) = \int\_{\Omega} \int\_{D} f\, v\, d\mathbf{x}\, \Phi(\xi)\, d\xi + \int\_{\Omega} \int\_{\partial D} g(\cdot, \xi, u)\, v\, d\sigma\, \Phi(\xi)\, d\xi;\tag{30}$$

$$\Rightarrow a(\boldsymbol{y}, \boldsymbol{v}) = l(\boldsymbol{y}, \boldsymbol{v}).$$

We need to show the continuous-bilinear and coercivity properties. These two properties are sufficient for the existence and uniqueness of the weak solution [7, 8]. The input random functions lie in a reflexive and separable Bochner space; thus the weak solution coincides with the classical solution, and weak measurability is equivalent to strong measurability, see [24].

$$\begin{split} \Rightarrow |a(y, v)| = \left|\int\_{\Omega} \int\_{D} \kappa(\mathbf{x}) \nabla y \cdot \nabla v\, d\mathbf{x}\, \Phi(\xi) d\xi\right| &\leq \|\kappa(\cdot)\|\_{L^{\infty}(D)} \int\_{\Omega} \int\_{D} |\nabla y \cdot \nabla v|\, d\mathbf{x}\, \Phi(\xi) d\xi \\ &\leq \|\kappa(\cdot)\|\_{L^{\infty}(D)} \|\nabla y\|\_{L^{2}(\Omega \times D)} \|\nabla v\|\_{L^{2}(\Omega \times D)} \\ &\leq \|\kappa(\cdot)\|\_{L^{\infty}(D)} \|y\|\_{L^{2}(\Omega; H^{1}\_{g}(D))} \|v\|\_{L^{2}(\Omega; H^{1}\_{g}(D))}. \end{split} \tag{31}$$

Hence, the operator *a* is continuous bilinear. The same holds for *l*(*y*, *v*); this is the integral of a duality product, bounded via the Hölder inequality:

$$\begin{split} |l(y, v)| &= \left| \int\_{\Omega} \int\_{D} f v\, d\mathbf{x}\, \Phi(\xi) d\xi + \int\_{\Omega} \int\_{\partial D} g \cdot v\, d\sigma\, \Phi(\xi) d\xi \right| \\ &\leq \left| \int\_{\Omega} \int\_{D} f v\, d\mathbf{x}\, \Phi(\xi) d\xi \right| + \left| \int\_{\Omega} \int\_{\partial D} g \cdot v\, d\sigma\, \Phi(\xi) d\xi \right| \\ &\leq \|f\|\_{L^{2}(D)} \|v\|\_{L^{2}(\Omega \times D)} + \|g\|\_{L^{2}(\Omega \times \partial D)} \|v\|\_{L^{2}(\Omega \times \partial D)} \\ &\leq \|f\|\_{H^{1}(D)} \|v\|\_{L^{2}(\Omega \times D)} + \|g\|\_{L^{2}(\Omega, H^{1/2}(\partial D))} \|v\|\_{L^{2}(\Omega, \partial D)}. \end{split} \tag{32}$$

Therefore, both *a* and *l* are continuous bilinear forms. To see coercivity, for all *v* ∈ *V* the Fubini theorem implies that |*a*(*v*, *v*)| = |∫<sub>Ω</sub>∫<sub>*D*</sub> *κ*(*x*) ∇*v* · ∇*v* *dx* Φ(*ξ*)*dξ*| ≥ *κ*<sub>min</sub> |∫<sub>Ω</sub>∫<sub>*D*</sub> ∇*v* · ∇*v* *dx* Φ(*ξ*)*dξ*|, because *κ*<sub>min</sub> is bounded from below a.s. and independent of the random *ξ*. Thus |*a*(*v*, *v*)| ≥ *κ*<sub>min</sub> ∫<sub>Ω</sub>∫<sub>*D*</sub> ‖∇*v*‖<sup>2</sup><sub>*V*</sub> *dx* Φ(*ξ*)*dξ* ≥ (*κ*<sub>min</sub>/*C*) ‖*v*‖<sup>2</sup><sub>*V*</sub>, where *C* = *c*<sup>2</sup> by the Poincaré–Friedrichs inequality: |*a*(*v*, *v*)| ≥ (*κ*<sub>min</sub>/*c*<sup>2</sup>)‖*v*‖<sup>2</sup><sub>H</sub> ≥ (*κ*<sub>min</sub>/*c*<sup>2</sup>)‖*v*‖<sup>2</sup><sub>*V*</sub>; the same holds for *l*. The required numerical solution is obtained from the stochastic finite difference method (SFDM) or the stochastic finite element method (SFEM); for indexed *x* ∈ *D* the numerical solution reads


$$y(\mathbf{x}, u, \xi) = \mathbf{A}^{-1}\left\{ f\_{\vec{\eta}}(u)/\kappa + \mathbf{g}\_{\vec{\eta}}(u, \xi)\right\}, \forall \mathbf{x} \in D \text{ a.s.} \tag{33}$$

The solution *y*(*x*, *u*, *ξ*) is linear w.r.t. (*u*, *ξ*) jointly. In this case the matrix *A* obtained from discretizing the operator *a* is positive definite [22]. Therefore, the weak solution *y* ∈ *L*<sup>2</sup>(Ω; *H*<sup>1</sup><sub>*g*</sub>(*D*)) exists and is unique. We will analyze the continuous dependence of the solution *y* ∈ *L*<sup>2</sup>(Ω; *H*<sup>1</sup><sub>*g*</sub>(*D*)) on *f*, *g*, *u* and *κ*, ∀*ξ* ∈ Ω. □
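As a minimal numerical sketch of Eq. (33) under hypothetical assumptions (1D domain *D* = (0, 1), constant *κ* and *f*, homogeneous Dirichlet data at *x* = 0, and the random datum *g*(*ξ*) entering through a flux condition at *x* = 1; none of these specifics are from the chapter), finite differences yield a positive-definite matrix **A**, and every realization of *ξ* gives *y* = **A**<sup>−1</sup>(*f* + *g*(*ξ*)):

```python
import numpy as np

def solve_sample(kappa, f_val, g_val, n=100):
    """Finite-difference solve of -(kappa * y')' = f on (0, 1),
    y(0) = 0, kappa * y'(1) = g_val (random flux boundary datum)."""
    h = 1.0 / n
    # Tridiagonal stiffness matrix A (positive definite for kappa > 0).
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2 * kappa / h
        if i > 0:
            A[i, i - 1] = -kappa / h
        if i < n - 1:
            A[i, i + 1] = -kappa / h
    A[n - 1, n - 1] = kappa / h          # flux condition at x = 1
    rhs = np.full(n, f_val * h)
    rhs[n - 1] = f_val * h / 2 + g_val   # boundary term enters the load vector
    return np.linalg.solve(A, rhs)       # y = A^{-1}(f + g), cf. Eq. (33)

rng = np.random.default_rng(1)
# Monte Carlo over the random boundary datum g(xi): since y is linear in g,
# the sample mean of y matches the solution computed with E[g].
samples = [solve_sample(kappa=1.0, f_val=1.0, g_val=rng.normal(0.5, 0.1))
           for _ in range(500)]
mean_y = np.mean(samples, axis=0)
print("E[y] at the right boundary:", round(float(mean_y[-1]), 3))
```

For this configuration the exact value at *x* = 1 is (*g* + *f*/2)/*κ*, so the Monte Carlo mean lands near 1.0, illustrating the linearity claim above.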

**Theorem 3.5.** *Suppose the coefficient operator κ* = ∑<sub>*ij*</sub> *κ<sub>ij</sub>* *x<sub>i</sub>*(*ω*) *x<sub>j</sub>*(*ω*), *where κ<sub>ij</sub>*(*x*) ∈ *L*<sup>∞</sup>(*D*) *is nonrandom, independent of ξ (see [7]), a deterministic coefficient function κ*(·) *indexed by x* ∈ *D. There exists a lower bound κ*<sub>min</sub> > 0 *such that κ*<sub>min</sub> ≤ *κ*(*x*) ≤ *κ*<sub>max</sub> *a.s. and* ∑<sub>*ij*</sub> *κ<sub>ij</sub>* *x<sub>i</sub>* *x<sub>j</sub>* ≥ *κ*<sub>min</sub> |*x*|<sup>2</sup> *for x* ∈ *D, a subset of* ℝ<sup>*n*</sup>*. Then the elliptic PDE system is L*<sup>2</sup>*-stable in the sense of distributions of Definition (3.2) and Definition (3.3). There exists c* > 0*, with c independent of f and g for all fixed x* ∈ *D and dependent only on κ, such that*

$$E\left(\left\|y(f, g)\right\|^2\_{H^1(D)}\right) \le c \left\{ E\left(\left\|f\right\|^2\_{H^1(D)}\right) + E\left(\left\|g\right\|^2\_{H^{-1/2}(\partial D)}\right) \right\} \tag{34}$$

∀*f* ∈ *H*<sup>1</sup>(*D*) *and g* ∈ *L*<sup>2</sup>(Ω, *H*<sup>−1/2</sup>(*∂D*)), *where E*(‖*f*‖<sup>2</sup>) = ∫<sub>Ω</sub>‖*f*‖<sup>2</sup> Φ(*ξ*)*dξ* = ‖*f*‖<sup>2</sup> ∫<sub>Ω</sub>Φ(*ξ*)*dξ* = ‖*f*‖<sup>2</sup>*. This theorem is proved and extended in [25]*.

**Remark**: For dimension *n* ≤ 3, the solution map is a continuous embedding from *L*<sup>2</sup>(Ω; *H*<sup>1</sup>(*D*)) → *L*<sup>2</sup>(Ω; (*H*<sup>1</sup>(*D*))<sup>∗</sup>) ∩ *L*<sup>2</sup>(Ω; *H*<sup>−1/2</sup>(*∂D*)), and the mapping is continuous and linear: *E*[‖*y*‖<sub>*H*<sup>1</sup><sub>*g*</sub>(*D*)</sub>] ≤ *c* {‖*f*‖<sup>∗</sup><sub>*H*<sup>1</sup>(*D*)</sub> + *E*[‖*g*‖<sub>*H*<sup>1/2</sup>(*∂D*)</sub>]} for each *ξ* a.s.; see the proof in [7].

**Theorem 3.6.** *Let U and V be Hilbert spaces. Then the linear mapping L* : *U* → *V is an isomorphism if and only if the associated bilinear form a* : *U* × *V* ↦ ℝ *satisfies*


*Proof.* We need to show that *L* : *U* → *V* is injective and surjective. By the equivalence of continuity of *L* : *U* → *V*, if *L*(*u*<sub>1</sub>) = *Lu*<sub>1</sub> = *L*(*u*<sub>2</sub>) = *Lu*<sub>2</sub> for *u*<sub>1</sub>, *u*<sub>2</sub> ∈ *U*, then

$$
\Rightarrow a(u\_1, v) = a(u\_2, v), \forall v \in V \tag{35}
$$

⇒ *a*(*u*<sub>1</sub> − *u*<sub>2</sub>, *v*) = 0 implies *u*<sub>1</sub> − *u*<sub>2</sub> = 0. For surjectivity, ∀*f* ∈ *L*(*U*), *L*(*u*) is the image of the preimage *u*, and there exists a unique *u* = *L*<sup>−1</sup>(*f*). Thus, *c*‖*u*‖<sub>*U*</sub> ≤ sup<sub>*v*</sub> (*a*(*u*, *v*)/‖*v*‖<sub>*V*</sub>) = sup<sub>*v*</sub> (*a*(*f*, *v*)/‖*v*‖<sub>*V*</sub>) = ‖*f*‖<sup>∗</sup>. □

#### **3.2 Reduced optimization of PDE with random data**

The problems of CCPDEs have not been properly studied. Moreover, the pointwise or uniform probabilistic state constraint is a nonsmooth, nonconvex, intractable, and infinite-dimensional state constraint. The following assumptions are needed for reducing dimension and variability when analyzing the structural properties of the CCPDE.

*Assumption* 3.7. Assume that the functionals


The optimization problem reads

$$\min\_{u(\cdot) \in \mathcal{U}} E[J(y(\mathbf{x}, \xi), u(\cdot))] \tag{36}$$

subject to:

$$P(u; \mathbf{x}) = p\_{\mathbf{x}}\{u\} = \Pr\{y(\mathbf{x}, u, \xi) \le y\_{\max}\} \ge \alpha, \quad \mathbf{x} \in D \tag{37}$$

where the random variable *ξ* ∈ Ω has a log-concave density function *ϕ*(*ξ*). If the objective function in Eq. (36) is convex, then the chance-constrained program is a convex optimization problem and has a globally optimal solution.

*Proof.* The functional *J* is a convex function, by the convexity property of the norm. The expectation *E*, being the integral of a convex function, is convex. Moreover, the solution *y* of the PDE system is linear w.r.t. (*x*, *ξ*). The internal functional *γ* = *y* − *y*<sub>max</sub> is quasiconcave from Proposition (3.4), so *P<sub>x</sub>*(*u*) is convex w.r.t. *u*. Suppose the constraint set *P* = {*u* ∈ *U* : *P*(*u*; *x*) > *α*} is a convex set [26]. Hence the composite functional *E*[*J*(*y*(*u*))] is the composition of convex and lower-semicontinuous functions. Therefore *E*[*J*(*y*(*u*))] is a convex function, and the set B = *L*<sup>2</sup>(Ω, *H*<sup>1/2</sup>(*∂D*)) ⊂ *P* ∩ *U*<sub>adm</sub>; the intersection of convex sets is convex. □

The optimization problem of the CCPDE reduces to the following programming form, where the solution *y* depends continuously on the parameters *f* and *g* by Theorem (3.5):

$$\min\_{u(\cdot) \in P \cap U\_{adm}} q(u(\cdot)) = E[J(y(f(\mathbf{x}), g(\mathbf{x}, \xi)), u(\cdot))],\tag{38}$$


where the probability function in Eq. (37) is expressed as *P*(·; *x*) = Pr{*γ* = *y*(*f*(*x*), *g*(*ξ*), *u*(·)) − *y*<sub>max</sub> ≤ 0, ∀*x* ∈ *D*} and the set *P* = {*u* ∈ *U* : *P*(*x*; *u*) > *α*}; this optimization problem admits a unique optimal solution. The random variable *ξ* ∈ Ω has a log-concave density function *ϕ*(*ξ*). If the objective function in Eq. (38) is convex, then the chance-constrained program is a convex optimization problem and has a globally optimal solution.
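The reduced problem (38) with its probabilistic constraint can be illustrated by a sample-average (scenario) approximation; the scalar linear map `state(u, xi)` and the tracking cost below are hypothetical stand-ins for *y*(*f*, *g*(*ξ*), *u*) and *J*, not the chapter's actual operators:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
xi = rng.normal(0.0, 0.3, size=2000)     # scenarios of the random input
y_max, alpha = 2.0, 0.9                  # state bound and probability level
target = 1.5

def state(u, xi):
    # Hypothetical linear state map y(u, xi), standing in for A^{-1}(f + g(u, xi)).
    return u + xi

def objective(u):
    # Convex tracking cost E[J] approximated by a sample average.
    return float(np.mean((state(u[0], xi) - target) ** 2))

def chance_margin(u):
    # Scenario approximation of the constraint: Pr{y <= y_max} - alpha >= 0.
    return float(np.mean(state(u[0], xi) <= y_max) - alpha)

res = minimize(objective, x0=[0.0],
               constraints=[{"type": "ineq", "fun": chance_margin}])
print("optimal u:", round(float(res.x[0]), 3),
      "empirical feasibility:", round(float(np.mean(state(res.x[0], xi) <= y_max)), 3))
```

In this smooth surrogate the chance constraint is inactive at the optimum; when it is active, the empirical probability is a step function of *u*, which is why the log-concavity and convexity arguments of this section matter in practice.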

#### **3.3 The structural property of probabilistic constrained function**

Most recent research on chance-constrained optimization problems does not include probabilistic state constraints. In this study, we consider pointwise probabilistic state constraints, as expressed in Eq. (39). The structural properties of the probabilistic state-constrained function have not been properly analyzed. The continuity, differentiability, compactness, and convexity properties of the state constraints are important for guaranteeing the optimality criteria. The state-constraint function reads

$$P(u; \mathbf{x}) := p\_{\mathbf{x}}\{u\} := \Pr\{y((\mathbf{x}, \xi), u) \le y\_{\max}, \forall \mathbf{x} \in D\} \ge \alpha. \tag{39}$$

The internal part of the probability function is

$$\gamma(u, \mathbf{x}, \xi) = y((\mathbf{x}, \xi), u) - y\_{\max} \le 0, \forall \mathbf{x} \in D,\tag{40}$$

which is a continuously differentiable function by the expression in Eq. (33); this, however, does not imply the continuity and differentiability of the probability function in Eq. (39). The following propositions guarantee the structural properties of the probabilistic uniform constraint functions, which are a strong form of the pointwise state constraints:

$$P(u; \mathbf{x}) := p\_{\mathbf{x}}\{u\} := \Pr\{y((\mathbf{x}, \xi), u) \le y\_{\max}, \forall \mathbf{x} \in D\} \ge \alpha;\tag{41}$$

it holds the measure-zero property

$$\Pr\{y((\mathbf{x}, \xi), u) = y\_{\max}, \forall \mathbf{x} \in D\} = 0.\tag{42}$$

**Proposition 3.8.** *Assume that γ*(*x*, ·, *ξ*) *is Borel measurable w.r.t. ξ* ∈ Ω *for all u* ∈ *U and all x* ∈ *D, and that γ*(*x*, ·, *ξ*) *is weakly sequentially lower semicontinuous (wsls) or weakly sequentially upper semicontinuous (wsus), respectively, for x* ∈ *D and ξ* ∈ Ω*. Then P*(*u*; *x*) = Pr(*γ*(*x*, *u*, *ξ*) ≤ 0) *is wsls or wsus, respectively, where γ*(*x*, *u*, *ξ*) = *y* − *y*<sub>max</sub> ≤ 0, ∀*x* ∈ *D. Likewise, for fixed x* ∈ *D, the function* P(*u*) = {*u* ∈ *U*<sub>adm</sub> : *P*(·; *x*) ≥ *α*} *is wsls or wsus, respectively.*

*Proof.* From the given assumption, P is well defined by the Borel measurability of *γ*(*x*, *u*, ·) w.r.t. *ξ* ∈ Ω in the second argument. Fix an arbitrary *û* and let *u<sub>n</sub>* ∈ *U*<sub>adm</sub>, *u<sub>n</sub>* ⇀ *û*, be an arbitrary weakly convergent sequence in *U*<sub>adm</sub>; fix any arbitrary *x* ∈ *D* and let *x<sub>n</sub>* ∈ *D*, *x<sub>n</sub>* → *x̂*. Denote by *u<sub>n<sub>l</sub></sub>* a subsequence such that

liminf<sub>*n*→∞</sub> P(*u<sub>n</sub>*) = lim<sub>*l*→∞</sub> P(*u<sub>n<sub>l</sub></sub>*). The decision variable *u* = *u*(*x*) depends linearly and continuously on *x* ∈ *D*.

Define the sets *P* = {*ξ* ∈ Ω : *γ*(*x*, *u*, *ξ*) ≤ 0} and *P<sub>n</sub>* = {*ξ* ∈ Ω : *γ*(*x*, *u<sub>n</sub>*, *ξ*) ≤ 0}, *n* ≥ *n*<sub>0</sub> ∈ ℕ. Since *γ* is wsls in the first argument, we have

$$\liminf\_{n \to \infty} \gamma(\mathbf{x}, u\_n, \xi) \ge \lim\_{l \to \infty} \gamma(\mathbf{x}, u\_{n\_l}, \xi) \ge \gamma(\mathbf{x}, \hat{u}, \xi), \forall \xi \in \Omega \backslash P. \tag{43}$$

Consequently, *γ*(*x*, *u<sub>n</sub>*, *ξ*) < 0, ∀*ξ* ∈ Ω∖*P* and all *n* ≥ *n*<sub>0</sub>. Denoting by

$$h(\xi) = \begin{cases} 1 & \text{if } \xi \in P \\ 0 & \text{if } \xi \notin P \end{cases} \tag{44}$$

the characteristic function of the set *P*, this entails that *h*(*P<sub>n</sub>*)(*ξ*) → 0 as *n* → ∞, ∀*ξ* ∈ Ω∖*P*. By the Lebesgue dominated convergence theorem, ∫<sub>Ω</sub> *h*[*γ*(*x*, *u<sub>n</sub>*, *ξ*)] *ϕ*(*ξ*)*dξ* → 0 for all *ξ* ∈ Ω∖*P*.

On the other hand, *E*[*h*[*γ*(*x*, *u<sub>n</sub>*, *ξ*)]] ≤ *E*[*h*[*γ*(*x*, *u*, *ξ*)]] = 1, ∀*ξ* ∈ *P*. Therefore,

$$\lim\_{l\to\infty} \mathcal{P}(u\_{n\_l}) = \lim\_{l\to\infty} \Pr\left[\gamma(\mathbf{x}, u\_{n\_l}, \xi) \le 0\right] = \lim\_{l\to\infty} \int\_{\Omega} h(\gamma(\mathbf{x}, u\_{n\_l}, \xi))\, \phi(\xi) d\xi$$

$$= \lim\_{l\to\infty} \int\_{\Omega \backslash P} h\left[\gamma(\mathbf{x}, u\_{n\_l}, \xi)\right] \phi(\xi) d\xi + \lim\_{l\to\infty} \int\_{P} h\left[\gamma(\mathbf{x}, u\_{n\_l}, \xi)\right] \phi(\xi) d\xi$$

$$\ge \liminf\_{l\to\infty} \int\_{\Omega \backslash P} h\left[\gamma(\mathbf{x}, u\_{n\_l}, \xi)\right] \phi(\xi) d\xi + \liminf\_{l\to\infty} \int\_{P} h\left[\gamma(\mathbf{x}, u\_{n\_l}, \xi)\right] \phi(\xi) d\xi$$

$$\ge \liminf\_{l\to\infty} \int\_{P} h\left[\gamma(\mathbf{x}, u\_{n\_l}, \xi)\right] \phi(\xi) d\xi \tag{45}$$

≥ liminf<sub>*l*→∞</sub> ∫<sub>*P*</sub> *ϕ*(*ξ*)*dξ* = Pr[*ξ* ∈ *P*] = Pr[*ξ* ∈ *P* : *γ*(*x*, ·, *ξ*) ≤ 0] = P(*û*). Thus, the function P is wsls, from the relation above: liminf<sub>*n*→∞</sub> P(*u<sub>n</sub>*) = lim<sub>*l*→∞</sub> P(*u<sub>n<sub>l</sub></sub>*) ≥ P(*û*). For the wsus property, related propositions are proved in [11, 16]. □

**Proposition 3.9.** *Assume that D is a compact subset of* ℝ<sup>*n*</sup>*. If γ is wsus, then P*(*u*, *x*) *is wsus at every u* ∈ *U satisfying* Pr(*γ*<sup>∗</sup>(*u*, *ξ*) = 0) = 0*; this is said to be the measure-zero property, where γ*<sup>∗</sup> = inf *γ*(*x*, *u*, *ξ*) *as defined in Proposition (3.8).*

The proof is similar to the proof of (3.8) and has been given in [16]. Let *x* ∈ *D* be fixed. We need to show the convexity property of the probability function: *u* → *p*(*u*, *ξ*) must be convex with respect to (*u*, *ξ*) jointly along the continuous probability density function *ϕ*(*ξ*). From the solution of the PDE system, we have a continuous *y* in Bochner space and *y*(*x*, *ξ*, *u*) = (*A*<sup>−1</sup>/*k*)(*j<sub>H</sub>*(*f*(*x*)) + *k* *g*(*u*, ·, ·)(*x*, *ξ*)), ∀*x* ∈ *D*, ∀*ξ* ∈ Ω. Thus, *γ*(*x*, ·, ·) = (*A*<sup>−1</sup>/*k*)(*j<sub>H</sub>*(*f*(*x*)) + *k* *g*(*u*, ·, ·))(*x*, *u*, *ξ*) − *y*<sub>max</sub> ≤ 0, and Pr{(*A*<sup>−1</sup>/*k*)(*j<sub>H</sub>*(*f*(*x*)) + *k* *g*(*u*, ·, ·)(*x*, *ξ*)) − *y*<sub>max</sub> ≤ 0} ≥ *α* is convex w.r.t. (*u*, *ξ*) jointly, from the linearity of the internal function for any fixed *x* ∈ *D*. Therefore, as is sufficient in finite-dimensional optimization, the continuity and linearity of *y* guarantee the continuity and convexity of *P*(*u*; *x*) [26, 27]. Our aim is to extend this to the infinite-dimensional case.

**Remark:** Assume that *γ*(*u*, *x*, *ξ*) = (*A*<sup>−1</sup>/*k*)(*j<sub>H</sub>* *f*(*x*) + *g*(*x*, ·, ·)) − *y*<sub>max</sub> is concave w.r.t. *ξ* ∈ Ω, ∀*u* ∈ *U* and ∀*x* ∈ *D*; if for each *u* there exists a random vector *ξ* ∈ Ω such that *γ* ≤ 0, ∀*x* ∈ *D*, then *ξ* has a density *ϕ*(*ξ*) distributed continuously and

$$\Pr\{\gamma^*(u,\xi) = 0\} = \Pr\left\{\big(A^{-1}/k\big)\big(j_H(f(x) + k\,g(u,\cdot,\cdot))\big)(x,\xi) = y_{\max}\right\} = 0, \quad \forall x \in D,\tag{46}$$

*Structural Properties and Convergence Approach for Chance-Constrained Optimization… DOI: http://dx.doi.org/10.5772/intechopen.104620*

where $\gamma^* = \inf_u \gamma(x,u,\xi) = \inf_u \big[(A^{-1}/k)\big(j_H(f(x) + k\,g(u,\cdot,\cdot))\big)(x,\xi) - y_{\max}\big]$, $\forall x \in D$.

**Proposition 3.10.** *Let U be a Banach space and D an arbitrary index set. Let the n-dimensional random vector $\xi \in \Omega$ have a log-concave density, i.e., the logarithm of its density is a possibly extended-valued concave function. Assume that the function $\gamma(x,u,\xi) = (A^{-1}/k)\big(j_H(f(x) + k\,g(u,\cdot,\cdot))\big)(x,\xi) - y_{\max}$ is quasi-concave for all x in D. Then the set $M = \{u \in U \mid P(u;x) > \alpha\}$ is a convex set.*

*Proof.* From the structural wslsc property of Proposition 3.8, pick $\gamma^* = \inf_u \gamma(x,u,\xi)$ and define $P(u;x) := \Pr\{\gamma^*(x,u,\xi) \le 0\}$ for all $u$ in the Banach space $U_{adm} = M \subset U$. Fix arbitrary elements $u_1, u_2 \in U$. By the quasi-concavity of $\gamma^*$, for $\xi_1, \xi_2 \in \Omega$ with $(u_1,\xi_1), (u_2,\xi_2) \in U \times \Omega$ and $\lambda \in [0,1]$, there exists an indexed $x \in D$ such that $\gamma^*(\lambda(u_1,\xi_1) + (1-\lambda)(u_2,\xi_2)) \ge \min\{\gamma^*(u_1,\xi_1), \gamma^*(u_2,\xi_2)\}$, and the density of $\xi$ is log-concave as shown by Prékopa ([11, 28], Theorem 4.2.1). For the given log-concave distribution, $\Pr\{\xi \in \lambda p + (1-\lambda)q\} \ge \Pr\{\xi \in p\}^{\lambda}\,\Pr\{\xi \in q\}^{1-\lambda}$ for convex subsets $p$ and $q$ of $\Omega$. To show the convexity of the set $M$, take $u_1, u_2 \in M$ and $\lambda \in (0,1)$ and define the map $H(u) := \{\xi \in \Omega : \gamma^*(u,\xi) \le 0\}$; observe that $H(u_1)$ and $H(u_2)$ are convex sets, an immediate consequence of the quasi-concavity of $\gamma^*$. The sets satisfy $\lambda H(u_1) + (1-\lambda)H(u_2) \subset H(\lambda u_1 + (1-\lambda)u_2)$. Thus, $H$ is concave ($\gamma$ is quasi-concave).
In other words, let $\xi_1 \in H(u_1)$ and $\xi_2 \in H(u_2)$, and $\xi \in H(\lambda u_1 + (1-\lambda)u_2)$. Then

$P(\lambda u_1 + (1-\lambda)u_2) = \Pr[\xi \in H(\lambda u_1 + (1-\lambda)u_2)] \ge \Pr[\xi \in H(u_1)]^{\lambda}\,\Pr[\xi \in H(u_2)]^{1-\lambda} = \alpha^{\lambda}\alpha^{1-\lambda} = \alpha$, for the reliability level $\alpha$. For $\xi = \lambda\xi_1 + (1-\lambda)\xi_2 \in \Omega$ with $\gamma^*(u_1,\xi_1), \gamma^*(u_2,\xi_2) \le 0$, the quasi-concavity of $\gamma^*$ gives $\gamma^*(\lambda u_1 + (1-\lambda)u_2, \xi) = \gamma^*(\lambda(u_1,\xi_1) + (1-\lambda)(u_2,\xi_2)) \ge \min\{\gamma^*(u_1,\xi_1), \gamma^*(u_2,\xi_2)\}$, so that the convex combination remains feasible. Finally, this proves the convexity of the chance constraint: $\lambda u_1 + (1-\lambda)u_2 \in M$. Hence, M is a convex set. □

**Proposition 3.11.** *Let $M = U$ be the separable Banach space defined in the convexity Proposition 3.10, and let $M^*$ be the dual space of M. Assume that $\gamma(x,\cdot,\cdot) \le 0$ is weakly sequentially upper semicontinuous (wsus) for each $u \in M$ and $x \in D$. Then the following three statements are equivalent [11, 16].*

i. *$\gamma(x,u,\cdot)$ is a Borel measurable function w.r.t. $\xi \in \Omega$, $\forall u \in M$ and $x \in D$.*

ii. *The set $M = \{u \in M : P(u;x) \ge \alpha\}$ is weakly closed.*

iii. *$P(u;x)$ is weakly sequentially upper semicontinuous (wsus).*

*Proof.* By assumption, the function $\gamma(x,u,\cdot): \mathbb{R}^n \to \mathbb{R}$ is upper semicontinuous for each $u \in U$ and each $x \in D$. Consequently, the sets $\{\xi \in \Omega : \gamma(x,u,\xi) \le 0\}$, $\forall x \in D$, are closed, which implies Borel measurability w.r.t. $\xi$, i.e., (i). Proposition 3.8 establishes the continuity property together with the measure-zero property in (42), which justifies speaking about probabilities of events as in (i) and (iii); (ii) is an immediate consequence. Hence, in order for (i) to prove (ii), let $P(u)$ be arbitrary and consider a weakly convergent sequence $u_n \to u$ with $u_n \in M$ for all $n$; it has a convergent subsequence $u_{n_l} \in M$ by Arzelà-Ascoli: every sequence of a given family of real-valued continuous functions defined on a closed and bounded interval has a uniformly convergent subsequence. To show $u \in M$, define $H(u) = \{\xi \in \Omega : \gamma(x,u,\xi) \le 0\}$; from $u_n \in M$ we have $\Pr\{\xi \in H(u_n)\} \ge \alpha$. Boundedness of $u_n$ by weak convergence implies that there is some closed ball B with sufficiently large radius such that $u_n \in B$ for all $n$. From separability of $U^*$, the weak topology on B is metrizable w.r.t. $\xi$ for fixed $x$. For the finite-dimensional case, this has been proved in [16].

**Lemma 3.12.** *Let all the assumptions of Proposition 3.11 hold. Then there are constants $\varepsilon > 0$, $\sigma > 0$ such that, with d referring to the Hausdorff distance from a point of $U_{adm}$ to the probabilistic set $P_x(u;\xi)$,* $d(u \in U_{adm}, \{u \in \mathcal{P} : P(u,x) \ge \wp\}) \le \sigma \max\{\log(\wp) - \log(P_x(u,x)),\, 0\}$, $\forall \wp \in [\alpha - \varepsilon, \alpha + \varepsilon]$.

For the infinite-dimensional CCPDE problem, we have proved this in previous work [14, 25]. The lemma has been verified for the finite-dimensional problem, without the PDE system, in [16]. The proof needs the Arzelà-Ascoli theorem; the CCPDE constraint function is also bounded and continuous [16, 21]. The Lax-Milgram theorem is sufficient for boundedness and continuity, given the continuous bilinear and coercive form; see Theorem 3.4.

#### **4. Approximation for CCPDE by inner and outer functions**

The approximation of the probability constraint function has been analyzed in previous work [6, 14]. The proposed approach is a smooth parametric approximation of the nonsmooth and intractable probability function. This study analyzes some open issues related to the topological and structural properties of the CCPDE, among them the continuity, differentiability, closedness, compactness, and convexity of the inner and outer approximation functions in the infinite-dimensional Bochner space. These are significant for assuring the optimality criteria for the existence and uniqueness of the optimal control. Furthermore, the convergence approach and the numerical results are studied properly.

The smooth parametric inner and outer approximations are analyzed in [6]. This section briefly reviews the parametric functions that define smooth approximations to the optimal control of the boundary-value CCPDE. The optimal control of the CCPDE is approximated by the family of solutions of the inner-outer approximation problems as the approximation parameter $\tau_k$ tends to zero for $k \to \infty$. In the convex case, norm convergence of the approximate solutions to a solution of CCPDE can be proved, and to some extent the structural analysis of the proposed approach guarantees the optimality criterion of CCPDE. For this purpose, we employ and extend our recent work [6, 13, 14, 25], where inner-outer approximation methods, for finite-dimensional and smooth CCPDE, were proposed to solve the reduced CCPDE problem.

We consider the parametric Geletu-Hofmann function

$$\theta(\tau, s) = \frac{1 + m\_1 \tau}{1 + m\_2 \tau \exp\left(-\frac{s}{\tau}\right)}, \quad for \quad \tau \in (0, 1), \ s \in \mathbb{R} \tag{47}$$

to approximate the nonsmooth CCPDE problem of [13]. If we fix the parameters $m_1 = 0$, $m_2 = 1/\tau$, the parametric function $\theta(\tau,s)$ coincides with the sigmoid function; thus, the probabilistic constraint can be approximated by a sigmoid in terms of $\theta(\tau,s)$. Unfortunately, the sigmoid function does not bound the probabilistic constraint function from below, and its computation is too slow in comparison with $\theta(\tau,s)$. The advantage of $\theta$ is that it bounds the probabilistic function from above and from below. Thus, the probabilistic constraint of CCPDE is bounded by the family of smooth functions $1 - E[\theta(\tau,\gamma(u,\xi;t,x))]$ of the inner and $E[\theta(\tau,-\gamma(u,\xi;t,x))]$ of the outer approximation, giving lower and upper bounds, respectively. We have the piecewise continuous function,


$$h(s) = \begin{cases} 1 & \text{if } s \ge 0 \\ 0 & \text{if } s < 0. \end{cases} \tag{48}$$
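Before composing $\theta$ with $\gamma$, the bounding behavior can be checked numerically. The following Python sketch is purely illustrative: the constants $m_1 = 1.0$ and $m_2 = 0.5$ are assumptions chosen here so that $m_1 \ge m_2 > 0$, as in the referenced construction [6, 13], not values prescribed by the chapter. It evaluates $\theta(\tau,s)$ from Eq. (47) and the jump function $h(s)$ from Eq. (48), and verifies the two-sided bound $1 - \theta(\tau,s) \le h(-s) \le \theta(\tau,-s)$ used below:

```python
import math

M1, M2 = 1.0, 0.5  # assumed constants with m1 >= m2 > 0

def theta(tau, s):
    """Parametric Geletu-Hofmann function of Eq. (47)."""
    return (1.0 + M1 * tau) / (1.0 + M2 * tau * math.exp(-s / tau))

def h(s):
    """Unit jump function of Eq. (48)."""
    return 1.0 if s >= 0 else 0.0

def sigmoid_case(tau, s):
    """Special case m1 = 0, m2 = 1/tau: theta reduces to 1/(1 + exp(-s/tau))."""
    return 1.0 / (1.0 + math.exp(-s / tau))

# check the sandwich bound 1 - theta(tau, s) <= h(-s) <= theta(tau, -s)
for tau in (0.5, 0.1, 0.01):
    for s in (-2.0, -0.5, -0.1, 0.1, 0.5, 2.0):
        lower, upper = 1.0 - theta(tau, s), theta(tau, -s)
        assert lower <= h(-s) <= upper, (tau, s)
print("bound 1 - theta(tau,s) <= h(-s) <= theta(tau,-s) holds on the grid")
```

The sigmoid special case bounds $h$ from one side only, which is precisely the disadvantage noted above.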

The jump function $h$ is composed with the internal function $s = \gamma(x,u,\xi)$ and averaged against the continuous density, giving the infinitely differentiable expectation $E[h(s)]$, with $h$ as defined in Eq. (48). The approximation then differs from the parametric functions only on the region $0 < s < \varepsilon$, which is tiny for a small positive $\varepsilon$; moreover, the probabilistic constraint has the measure-zero property defined in Eq. (42). This gives the approximation functions their nice properties. The $m_1$ and $m_2$ are positive constants; each property of the parametric family $\theta$ is stated and proved in [6, 13], together with the continuity, differentiability, and convexity of its approximation functions. The following well-known identities are obtained

$$p(u;x) = E[h(-\gamma(u,\xi;x))] = 1 - E[h(\gamma(u,\xi;x))],\tag{49}$$

Despite the appealing expected-value representation $E[h(\gamma(u,\xi;x))] = \int_\Omega h(\gamma(u,\xi;x))\,\phi(\xi)\,d\xi$ of the probability functions in (49), the missing smoothness of the unit jump function provides no computational advantage. Nevertheless, the function $h$ provides an insight for the construction of a smoothing approximation of the probability function $p$ through the internal functions $\{\theta(\tau,\gamma)\}$, $\tau \in (0,1)$, where $\theta: (0,1) \times \mathbb{R} \to \mathbb{R}_+$ possesses the following *strict monotonicity* and *uniform limit* properties.

*Assumption* 4.1. Suppose there is a parametric family of functions $\theta(\tau,s)$ possessing the following monotonicity (strictly increasing) and uniform limit properties.


Thus, from the boundedness property of the jump function in the above assumption, we have

$$\mathbf{1} - \theta(\tau, s) \le h(-s) \le \theta(\tau, -s),\tag{50}$$

It follows that $E(1 - \theta(\tau,s)) \le E(h(-s)) \le E(\theta(\tau,-s))$, by the monotonicity of the expectation. Hence,

$p(u;x) := \Pr\{y(u,\xi,x) \le y_{\max}\} \ge \alpha$, $\forall x \in D \;\Rightarrow\; \Pr\{\xi \in \Omega : \gamma(x,u,\xi) \le 0\} = E[h(-s)]\big|_{s=\gamma(x,u,\xi)} \ge \alpha$, $\forall x \in D$. Now, based on the parametric function $\theta$, the following functions are defined

$$\rho(\tau, u; t, x) := E[\theta(\tau, -\gamma(u,\xi;t,x))],\tag{51}$$

$$\psi(\tau, u; t, x) := E[\theta(\tau, \gamma(u,\xi;t,x))], \quad \tau \in (0,1).\tag{52}$$

Under the measure-zero and smoothness properties of $\gamma(u,\xi;t,x)$, the functions $\psi(\tau,u;t,x)$ and $\rho(\tau,u;t,x)$ can be shown to be smoothing approximations of $1 - p(u;t,x)$ and $p(u;t,x)$, respectively (see Geletu et al. [6]). Moreover, the following convergence properties hold:

$$\inf_{\tau \in (0,1)} \rho(\tau, u; t, x) = p(u; t, x);\tag{53}$$

$$\sup_{\tau \in (0,1)} \big(1 - \psi(\tau, u; t, x)\big) = p(u; t, x);\tag{54}$$

and the Lebesgue dominated convergence theorem yields the almost sure convergence of the inner and outer function sequences. The detailed convergence analysis of the outer approach to CCPDE has been studied in [6, 13]. However, the convergence analysis of the inner approximation $IA_\tau$ to the optimal solution of CCPDE was not properly analyzed in the previous work [13], because it requires convex approximations and subdifferentiability of the probabilistic constraints; the convergence of stationary points of $IA_\tau$ is very relevant for the existence of the optimal solution.
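These sandwich and convergence relations can be observed directly on a sample. The Python sketch below is a toy illustration under stated assumptions: the scalar internal function $\gamma(u,\xi) = u + \xi - y_{\max}$ with $\xi \sim N(0,1)$, $y_{\max} = 1$, and the constants $m_1 = 1.0$, $m_2 = 0.5$ are all choices made here for demonstration, not the chapter's PDE-constrained setting. On one fixed sample, the smooth estimates $E[\theta(\tau,-\gamma)]$ and $1 - E[\theta(\tau,\gamma)]$ bracket the empirical probability, and the bracket tightens as $\tau \to 0^+$:

```python
import math, random

M1, M2, Y_MAX = 1.0, 0.5, 1.0            # assumed constants, m1 >= m2 > 0

def theta(tau, s):
    # exponent clamped to avoid float overflow for very negative s
    return (1.0 + M1 * tau) / (1.0 + M2 * tau * math.exp(min(-s / tau, 700.0)))

def gamma(u, xi):
    """Toy internal function: gamma(u, xi) = u + xi - y_max."""
    return u + xi - Y_MAX

def bounds(u, tau, xis):
    """Empirical probability with its smooth lower bound 1 - E[theta(tau, gamma)]
    and upper bound E[theta(tau, -gamma)], on one fixed sample."""
    n = len(xis)
    p_hat = sum(1.0 for xi in xis if gamma(u, xi) <= 0) / n
    upper = sum(theta(tau, -gamma(u, xi)) for xi in xis) / n
    lower = 1.0 - sum(theta(tau, gamma(u, xi)) for xi in xis) / n
    return lower, p_hat, upper

rng = random.Random(42)
xis = [rng.gauss(0.0, 1.0) for _ in range(4000)]
for tau in (0.3, 0.1, 0.01):
    lower, p_hat, upper = bounds(0.5, tau, xis)
    assert lower <= p_hat <= upper       # sandwich holds term by term
    print(f"tau={tau:<5} lower={lower:.3f}  p_hat={p_hat:.3f}  upper={upper:.3f}")
```

Because the pointwise bound $1 - \theta(\tau,s) \le h(-s) \le \theta(\tau,-s)$ holds for every sample point, the sample averages inherit the sandwich exactly, not just in expectation.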

For several chance constraints $p_i(u) \ge \alpha$, $i = 1, 2, \dots, m$, the regularity condition reads $\{u \in U \mid p_i(u) \ge \alpha,\ i = 1, \dots, m\} = \mathrm{cl}\{u \in U \mid p_i(u) > \alpha,\ i = 1, \dots, m\}$ [29]. For continuously differentiable probability functions $p_i$, a sufficient condition for the validity of the regularity assumption is the satisfaction of the Mangasarian-Fromovitz constraint qualification (MFCQ) at the active points ([16, 29], Proposition 3.7).

The respective feasible sets of inner and outer approximations are defined as follows:

$$\mathcal{M}(\tau) := \{u \in U \mid \psi(\tau, u; x) \le 1 - \alpha,\ x \in D\};\tag{55}$$

$$\mathcal{S}(\tau) := \{u \in U \mid \rho(\tau, u; x) \ge \alpha,\ x \in D\}.\tag{56}$$

As a consequence of the properties of the functions $\psi(\tau,u;x)$ and $\rho(\tau,u;x)$, we have the following relations among the feasible sets of CCPDE, $IA_\tau$, and $OA_\tau$. That is,

$$\mathcal{M}(\tau) \subset \mathcal{P} \subset \mathcal{S}(\tau), \quad \text{for } \tau \in (0,1).\tag{57}$$

The tightness of the relaxation of IA and the compression of OA were analyzed sequentially in the previous work [29]: if $0 < \tau_2 \le \tau_1 < 1$, then

$$\mathcal{M}(\tau_2) \subset \mathcal{M}(\tau_1) \subset \mathcal{P} \subset \mathcal{S}(\tau_1) \subset \mathcal{S}(\tau_2).\tag{58}$$

This leads to $\lim_{\tau \to 0^+} \mathcal{S}(\tau) = \bigcap_{\tau \in (0,1)} \mathcal{S}(\tau) = \mathcal{P}$ and $\lim_{\tau \to 0^+} \mathcal{M}(\tau) = \mathrm{cl}\big(\bigcup_{\tau \in (0,1)} \mathcal{M}(\tau)\big) = \mathcal{P}$. Both $\mathcal{M}(\tau)$ and $\mathcal{S}(\tau)$ are closed sets as $\tau \to 0^+$; see Propositions 5.1 and 5.2 in the next section.
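For a scalar control, the inclusion $\mathcal{M}(\tau) \subset \mathcal{P} \subset \mathcal{S}(\tau)$ of Eq. (57) can be made tangible by locating the boundary of each feasible set. The hedged sketch below again uses the illustrative model $\gamma(u,\xi) = u + \xi - y_{\max}$ with assumed constants $m_1 = 1.0$, $m_2 = 0.5$, $\alpha = 0.9$; since $\gamma$ is increasing in $u$, each feasible set is a half-line $(-\infty, u^*]$, and bisection finds the three boundary controls, whose ordering mirrors the inclusions:

```python
import math, random

M1, M2, Y_MAX, ALPHA = 1.0, 0.5, 1.0, 0.9   # assumed constants

def theta(tau, s):
    return (1.0 + M1 * tau) / (1.0 + M2 * tau * math.exp(min(-s / tau, 700.0)))

def gamma(u, xi):
    return u + xi - Y_MAX                    # toy internal function

rng = random.Random(7)
XIS = [rng.gauss(0.0, 1.0) for _ in range(4000)]

def p_hat(u):                                # empirical chance constraint
    return sum(1.0 for xi in XIS if gamma(u, xi) <= 0) / len(XIS)

def inner_hat(u, tau):                       # 1 - E[theta(tau, gamma)], lower bound
    return 1.0 - sum(theta(tau, gamma(u, xi)) for xi in XIS) / len(XIS)

def outer_hat(u, tau):                       # E[theta(tau, -gamma)], upper bound
    return sum(theta(tau, -gamma(u, xi)) for xi in XIS) / len(XIS)

def boundary(fun):
    """Largest u with fun(u) >= ALPHA (fun is nonincreasing in u), by bisection."""
    lo, hi = -5.0, 5.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if fun(mid) >= ALPHA else (lo, mid)
    return lo

tau = 0.1
u_inner = boundary(lambda u: inner_hat(u, tau))
u_true = boundary(p_hat)
u_outer = boundary(lambda u: outer_hat(u, tau))
assert u_inner <= u_true <= u_outer          # M(tau) inside P inside S(tau)
print(f"u_inner={u_inner:.3f} <= u_true={u_true:.3f} <= u_outer={u_outer:.3f}")
```

The inner boundary is conservative (any control accepted by $IA_\tau$ is feasible for the chance constraint), while the outer boundary over-accepts, exactly as the chain (57) predicts.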


The monotonicity of the objective function $E(f(T[u]))$, which is continuous on $u \in U_{adm}$, with respect to $\tau$ is as follows. If $0 < \tau_2 \le \tau_1 < 1$, with $\inf[\varnothing] = \infty$,

$$\begin{split} \inf_{u \in \mathcal{M}(\tau_2)} E(f(T[u])) &\ge \inf_{u \in \mathcal{M}(\tau_1)} E(f(T[u])) \ge \inf_{u \in \mathcal{P}} E(f(T[u])) \\ &\ge \inf_{u \in \mathcal{S}(\tau_1)} E(f(T[u])) \ge \inf_{u \in \mathcal{S}(\tau_2)} E(f(T[u])). \end{split}\tag{59}$$

This implies the following compact form of the nondegenerate optimality conditions,

$$\begin{split} \inf_{u \in \mathcal{M}(\tau_2)} \big[E(f(T[u])) + \mu_i(\alpha - (1 - \psi_i(u_i)))\big] &\ge \inf_{u \in \mathcal{M}(\tau_1)} \big[E(f(T[u])) + \mu_i(\alpha - (1 - \psi_i(u_i)))\big] \\ \ge \inf_{u \in \mathcal{P}} \big[E(f(T[u])) + \mu_i(\alpha - p_i(u_i))\big] &\ge \inf_{u \in \mathcal{S}(\tau_1)} \big[E(f(T[u])) + \mu_i(\alpha - \rho_i(u_i))\big] \\ &\ge \inf_{u \in \mathcal{S}(\tau_2)} \big[E(f(T[u])) + \mu_i(\alpha - \rho_i(u_i))\big]. \end{split}\tag{60}$$

The complementary properties of the probability function, $1 - p(u;x) \le \psi(\tau,u;x) \le 1 - \alpha$ and $\rho(\tau,u;x) \ge p(u;x) \ge \alpha$ for $x \in D$, hold true. Now, using the parametric functions $\psi(\tau,\cdot)$ and $\rho(\tau,\cdot)$, we define the following problems with the same objective function $q$ as in Eq. (38) of CCPDE.

$$\begin{aligned} &\min_{u} q(u) \quad (IA_\tau) & &\min_{u} q(u) \quad (OA_\tau) \\ &\text{s.t.}\;\; \psi(\tau, u; x) \le 1 - \alpha,\ x \in D, & &\text{s.t.}\;\; \rho(\tau, u; x) \ge \alpha,\ x \in D, \\ &\phantom{\text{s.t.}\;\;} u \in U, & &\phantom{\text{s.t.}\;\;} u \in U. \end{aligned}\tag{61}$$

The feasible set $\mathcal{P}$ of CCPDE defined in Eq. (41) satisfies the Mangasarian-Fromovitz constraint qualification (MFCQ), an important prerequisite for applying the necessary optimality criteria in nonlinear optimization. The MFCQ is a regularity condition at a feasible point $x$: if MFCQ holds at this point and the point is a local minimum, then the Karush-Kuhn-Tucker conditions are met there. If the MFCQ is valid, it is easy to check whether a given point is optimal or not; see [13]. From Proposition 3.10 it follows that the inner approximation is convex and conservative, $\psi$ being a concave function for all fixed $x$ in $D$ and $\xi \in \Omega$. The convexity of $\rho$ is analyzed in [13]. The Slater condition, named after Slater, is a sufficient condition for strong duality in a convex optimization problem; for further analysis, see [13].

The necessary optimality condition is not valid for expressing a local optimal solution in this technique, but can be shown to hold for generalized stationary points of Fritz John type [27, 29]. This essentially requires the uniform convergence of the partial gradients of the inner and outer functions $1 - \psi_i(\tau,u,\xi)$ and $\rho_i(\tau,u,\xi)$, i.e., $\nabla(1 - \psi_i(\tau,u,\xi)) = 0$ and $\nabla\rho_i(\tau,u,\xi) = 0$, for $\tau_k \to 0^+$ as $k \to \infty$ on bounded subsets W of $U_{adm}$ [29]. In addition, each strict local minimum of CCPDE is a cluster point of local minima of the inner approximation problems $IA_\tau$, under the satisfaction of tightness, or of the outer approximation problems $OA_\tau$ ([29], Propositions 3.3-3.7). The following sections address structural properties of the approximation functions, such as closedness, convergence, and differentiability of $IA_\tau$ and $OA_\tau$, which have not been properly analyzed before.

#### **5. Closedness, convergence, and differentiability of** *IA<sup>τ</sup>* **and** *OA<sup>τ</sup>*

#### **5.1 Closedness property of M set of** *IA<sup>τ</sup>* **and S set of** *OA<sup>τ</sup>*

The nice properties of the parametric function are given in Assumption 4.1. The closedness property is defined via the distance from a particular point of $\mathcal{P}$ to the set $\mathcal{M}$ or $\mathcal{S}$ (this distance being close to zero), the Hausdorff (point-to-set) distance. The specific point of $\mathcal{P}$ is closed under arbitrary sequences in the sets $\mathcal{M}_n$ and $\mathcal{S}_n$. This property is essential for the compactness of $\mathcal{M}$ and $\mathcal{S}$: in this case, compactness means boundedness, closedness, and that every convergent sequence has a convergent subsequence with the same limit point. The boundedness and subsequential convergence properties are clear from Assumption 4.1 and the tightness property.

Let $U_{adm}$ be a separable Banach space with $u \in U_{adm}$, and let $U_{adm}^*$ be the dual space of $U_{adm}$. Assume that $\gamma(x,\cdot,\cdot)$ are continuously differentiable functions, as shown in Propositions 3.8 and 3.9 for the continuity of $P_x(u)$ for any fixed $x \in D$. Then the feasible set $\mathcal{P}$ of CCPDE is compact under the conditions of Proposition 3.7 in [11]. Since $\gamma(x,u,\cdot)$ is Borel measurable w.r.t. $\xi \in \Omega$, the set $\mathcal{P} = \{u \in U_{adm} : P(u) \ge \alpha\}$ is weakly closed by Proposition 3.11; this follows from the wsus property of $P$. The closedness of $\mathcal{M}$ and $\mathcal{S}$ is proved in the following propositions.

**Proposition 5.1.** *Assume that $\gamma(u,\cdot,x)$ are Borel measurable for all $u \in U$ and $x \in D$, and that $\gamma(\cdot,x,\xi)$ are weakly sequentially semicontinuous (wssc) for $x \in D$ and $\xi \in \Omega$. Then $\rho(\gamma(x,u,\xi),\tau)$ and $\psi(\gamma(x,u,\xi),\tau)$ are wssc, with $\gamma(x,u,\xi) = y - y_{\max} \le 0$, $\forall x \in D$.*

*Proof.* This is an extension of Propositions 3.1 and 3.2, and the proof is directly related to theirs. Observe first that $\rho$ is well defined by the Borel measurability of $\gamma$ in its second argument. Since $\gamma$ is wssc, the parametric function $\theta(\gamma,\tau)$ is wssc. It satisfies the six properties of monotonicity, smoothness, boundedness, and uniform convergence expressed in Assumption 4.1. Hence the inner and outer functions are continuous. □

**Proposition 5.2.** *The functions $1 - \psi(u,x)$ of the set $\mathcal{M}$ of $IA_\tau$ and $\rho(u,x)$ of the set $\mathcal{S}$ of $OA_\tau$ are continuous by the continuity of $\theta(\tau,\gamma)$. The set $\mathcal{M}$ of $IA_\tau$ and the set $\mathcal{S}$ of $OA_\tau$ are closed if the following Hausdorff-distance estimates hold: there are constants $\varepsilon > 0$, $\sigma > 0$ such that, with d referring to the Hausdorff distance,*

$d(u, \{u \in U_{adm} \mid \rho(u,x) \ge \alpha\}) \le \sigma \max\{\log(\alpha) - \log(\rho(u,x)),\, 0\}$, $\forall \alpha \in [1-\varepsilon, 1]$. *The same holds for IA: there are constants $\varepsilon > 0$, $\sigma > 0$ such that* $d(u, \{u \in U_{adm} \mid 1 - \psi(u,x) \ge \alpha\}) \le \sigma \max\{\log(\alpha) - \log(1 - \psi(u,x)),\, 0\}$, $\forall \alpha \in [1-\varepsilon, 1]$.

*Proof.* The proof follows directly from the properties of the expectation in Assumption 4.1 (i-vi). Moreover, by Proposition 3.11, the inequality $E[h(\gamma)] \ge \alpha$ is bounded by the two parametric functions $\rho(u,x)$ and $1 - \psi(u,x)$. As a direct extension of Lemma 3.12, we have shown that, for arbitrary $u_1, u_2 \in U_{adm}$ and $\lambda \in [0,1]$, the inequalities $\rho(\lambda u_1 + (1-\lambda)u_2) \le \lambda\rho(u_1) + (1-\lambda)\rho(u_2)$ and $(1-\psi)(\lambda u_1 + (1-\lambda)u_2) \ge \lambda(1-\psi)(u_1) + (1-\lambda)(1-\psi)(u_2)$ hold by the convexity property in Proposition 3.10, so $1 - \psi(u)$ is a concave function. This means that $\log\rho$ and $\log(1-\psi)$ are log-concave, and hence the inequalities $\rho(u) \ge \alpha$ and $1 - \psi(u) \ge \alpha$ are each equivalent to $\hat P(u) \le -\log(\alpha)$, where $\hat P(u) := -\log P$ is a convex function. By Proposition 3.11, $P(u)$ is wsus, and its expectation $\hat P$ is wsls by the continuity properties in Propositions 3.8 and 3.9. The function given in Proposition 3.10 is a convex function. From the Robinson-Ursescu theorem ([16], Lemma 4), the continuous


convex function is closed. Therefore, the probabilistic constraint function $P(u)$ is closed.

We have proved that the sets $\mathcal{M}$ and $\mathcal{S}$, defined above, are convex feasible sets with closed and convex graphs. Consider an arbitrary sequence $(u_n, t_n) \to (\hat u, \hat t)$ with $u_n \in \mathcal{M}$ for IA, and $(u_n, t_n) \to (\hat u, \hat t)$ with $u_n \in \mathcal{S}$ for OA. Then $u_n \in U_{adm}$ and, hence, $\hat u \in U_{adm}$ by the closedness of $U_{adm}$; see Proposition 3.11.

Moreover, $\hat\rho(u_n) \le t_n$ and $\hat\psi(u_n) \le t_n$. Since $\hat\rho$ and $\hat\psi$ are wsls, we derive $\hat\rho(\hat u) \le \liminf_{n\to\infty} \hat\rho(u_n) \le \liminf_{n\to\infty} t_n = \hat t$ and $\hat\psi(\hat u) \le \liminf_{n\to\infty} \hat\psi(u_n) \le \liminf_{n\to\infty} t_n = \hat t$. So $\hat t \in \mathcal{M}(\hat u)$ and $\hat t \in \mathcal{S}(\hat u)$, implying that the graph of $\mathcal{M}$ is closed.

The same holds for OA: we have $\hat\psi(u_1) \le t_1$ and $\hat\psi(u_2) \le t_2$. Then convexity of $\hat\psi$ yields $\hat\psi(\lambda u_1 + (1-\lambda)u_2) \le \lambda t_1 + (1-\lambda)t_2$. In other words, $\lambda t_1 + (1-\lambda)t_2 \in \mathcal{S}(\lambda u_1 + (1-\lambda)u_2)$, proving that the graph of $\mathcal{S}$ is also convex. Further analysis of the convexity of IA follows in the next section.

For the closedness of IA and OA in the infinite-dimensional space, further analysis via the Robinson-Ursescu theorem is needed; see [16]. It depends on the Hausdorff distance between a point and the set of the probabilistic constraint function expressed in Eq. (41). In the finite-dimensional case, closedness and boundedness are sufficient for compactness.

**Lemma 5.3.** *Under the assumptions of Proposition 3.7, there are constants $\varepsilon > 0$, $\sigma > 0$ such that, with d referring to the Hausdorff distance,* $d(u \in \mathcal{M}, \{u \in U_{adm} \mid \psi(u,x) \le 1-\tau\}) \le \sigma \max\{\log(\tau) - \log(\psi(u,x)),\, 0\}$, $\forall \tau \in [\alpha - \varepsilon, \alpha + \varepsilon]$ *for IA, and* $d(u \in \mathcal{S}, \{u \in U_{adm} \mid \rho(u,x) \ge \tau\}) \le \sigma \max\{\log(\tau) - \log(\rho(u,x)),\, 0\}$, $\forall \tau \in [\alpha - \varepsilon, \alpha + \varepsilon]$.

The proof of this lemma is given in [16].

#### **5.2 Convergence of the stationary points and differentiability of** *IA<sup>τ</sup>* **and** *OA<sup>τ</sup>*

The continuity and continuous differentiability of the functions $\psi(s,\tau)$ and $\rho(s,\tau)$ follow directly from the continuity and continuous differentiability of the parametric function $\theta(\tau,s)$: the expectation of a continuous function is continuous, and the expectation of a continuously differentiable function is continuously differentiable.
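Differentiation under the expectation therefore gives the gradient of the smoothed constraint as a sample average of $\theta'$ terms. The sketch below is again the illustrative scalar model $\gamma(u,\xi) = u + \xi - y_{\max}$ (with $\partial\gamma/\partial u = 1$) and assumed constants $m_1 = 1.0$, $m_2 = 0.5$; the closed-form $s$-derivative of $\theta$ follows from Eq. (47). It checks the analytic sample gradient of $\rho(\tau,u)$ against a central finite difference on the same sample:

```python
import math, random

M1, M2, Y_MAX = 1.0, 0.5, 1.0            # assumed constants

def theta(tau, s):
    return (1.0 + M1 * tau) / (1.0 + M2 * tau * math.exp(min(-s / tau, 700.0)))

def dtheta_ds(tau, s):
    """Closed-form derivative of theta(tau, s) w.r.t. s, from Eq. (47)."""
    e = math.exp(min(-s / tau, 700.0))
    return (1.0 + M1 * tau) * M2 * e / (1.0 + M2 * tau * e) ** 2

def gamma(u, xi):                        # d(gamma)/du = 1 for this toy model
    return u + xi - Y_MAX

rng = random.Random(3)
XIS = [rng.gauss(0.0, 1.0) for _ in range(2000)]

def rho(u, tau):                         # smoothed outer approximation
    return sum(theta(tau, -gamma(u, xi)) for xi in XIS) / len(XIS)

def grad_rho(u, tau):
    """Gradient via differentiation under the expectation (chain rule):
    d/du theta(tau, -gamma) = -dtheta_ds(tau, -gamma) * d(gamma)/du."""
    return sum(-dtheta_ds(tau, -gamma(u, xi)) for xi in XIS) / len(XIS)

u, tau, eps = 0.5, 0.2, 1e-5
fd = (rho(u + eps, tau) - rho(u - eps, tau)) / (2.0 * eps)
assert abs(fd - grad_rho(u, tau)) < 1e-6
print(f"analytic gradient {grad_rho(u, tau):.6f} matches central difference {fd:.6f}")
```

On a fixed sample, the interchange of integration and differentiation is exact, which is what the dominated convergence argument below extends to the true expectation.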

The associated probability functions $P_x(u) \ge \alpha$, $\forall x \in D$, in the chance constraints are allowed to be merely lower semicontinuous or continuous (Proposition 3.8); they need not be differentiable. Hence, the characterization of the optimality properties of CCPDE calls for generalized subdifferentials such as the Fréchet, Clarke, and Mordukhovich subderivatives, together with implicit gradient formulas on the reflexive and separable Bochner space, as analyzed in the previously submitted work [25]. This concept applies to lower semicontinuous (lsc) functions, whose epigraphs are closed. These subdifferentials assure the optimality criteria for the CCPDE. Moreover, the $P_x(u) \ge \alpha$, $x \in D$, are Lipschitzian functions, assuring an explicit formula for the Clarke subgradient under special conditions.

For each $u \in U_{adm}$, there is some neighborhood $W \subset U_{adm}$ and some measurable function $q: \mathbb{R}^n \to \mathbb{R}_+$ such that $\gamma$ is continuously partially differentiable up to order $b$ at $u_0 \in U_{adm}$ for $\xi \in \Omega$ almost surely (a.s.),

$$\sup_{u \in W} \max_{\sum_j m_j \le b} \left| \frac{\partial^{\sum_j m_j} \gamma}{\partial u_1^{m_1} \partial u_2^{m_2} \partial u_3^{m_3} \cdots \partial u_j^{m_j}} \right| \le q(\xi);\tag{62}$$

$$E(q) = \int\_{\Omega} q(\xi)\phi(\xi)d\xi < \infty,\tag{63}$$

for $b = \sum_{j=1}^{n} l_j \ge \sum_{i=1}^{j} m_i$. Suppose each property above holds true. Then, for the parametric function $\theta = \mathcal{H}$, the functions $\rho(\gamma,\tau)$ and $\psi(\gamma,\tau)$ are continuously partially differentiable w.r.t. $u$ up to order $b$ on $U_{adm}$, for all $\tau \in (0,1)$. The higher derivatives satisfy

$$\frac{\partial^{\sum_j l_j}\, \psi(\gamma,\tau)}{\partial u_1^{m_1} \partial u_2^{m_2} \partial u_3^{m_3} \cdots \partial u_j^{m_j}} = \int_{\Omega} \frac{\partial^{\sum_j l_j}\, \theta(\tau,\gamma)}{\partial u_1^{m_1} \partial u_2^{m_2} \partial u_3^{m_3} \cdots \partial u_j^{m_j}}\, \phi(\xi)\, d\xi;\tag{64}$$

and, for the case of the outer approximation,

$$\frac{\partial^{\sum_j l_j}\, \rho(\gamma,\tau)}{\partial u_1^{m_1} \partial u_2^{m_2} \partial u_3^{m_3} \cdots \partial u_j^{m_j}} = \int_{\Omega} \frac{\partial^{\sum_j l_j}\, \theta(\tau,-\gamma)}{\partial u_1^{m_1} \partial u_2^{m_2} \partial u_3^{m_3} \cdots \partial u_j^{m_j}}\, \phi(\xi)\, d\xi;\tag{65}$$

for $\sum_{j=1}^{n} l_j \ge \sum_{i=1}^{j} m_i$. Further analysis of the convergence approach and the uniform convergence of the gradients of the approximation functions can be found in ([29, 30], Proposition 4.2). All the above properties ensure the applicability of Lebesgue's dominated convergence theorem [29] for interchanging the integration and differentiation operations. Observe that the chain rule yields powers of $E(J(T(u)))$ up to order $l = k$ in the upper estimate of the integrand. The derivatives, up to order $k$, of the lower and upper approximating functions $E(\mathcal{H}(\tau,\gamma))$ and $E(\mathcal{H}(\tau,-\gamma))$ converge to the corresponding derivatives of $p(x,u)$, $\forall t \in I$ a.s. This depends mainly on the existence of the corresponding derivatives of the internal function $\gamma$ w.r.t. $u$ and on the finite expectation of these derivatives, uniformly on some neighborhood $W = B(u,r)$, whenever the family $\mathcal{H}$ is suitably chosen. Thus, it needs the newly generalized derivatives on the neighborhood $W$, such as the Fréchet and the weak derivatives (Clarke and Mordukhovich derivatives), for computing gradients, stationary points, and their convergence; see the previous work [25].

Under the assumptions on the properties $P_1$ to $P_5$, together with the additional properties above, $\gamma$ is a smooth function w.r.t. $u$, because the random state solution $T$ is smooth w.r.t. $u$. The differential operator $A$, computed from the coercivity properties of $a$ and $l$, is invertible, and $\frac{\partial T}{\partial u} = A$.

*Assumption* 5.4. Assume that $u \in \mathcal{P}$, a neighborhood of the set $M$ and the super-set $S$ in Eq. (55). This set $\mathcal{P}$ is a connected neighborhood of the super-set $S$ of the $OA\_{\tau}$ and lies inside the interior set $M$ of the $IA\_{\tau}$; the following conditions hold true for the smooth approximation functions.


*Structural Properties and Convergence Approach for Chance-Constrained Optimization… DOI: http://dx.doi.org/10.5772/intechopen.104620*

**Theorem 5.5.** *If Assumption (5.4) holds true for any family of parametric functions $\{\theta(s\_i(u), \tau)\}$ with $s\_i = \gamma\_i$ for any $i \in I$, then the probabilistic function $p(u)$ has a continuous gradient in some open ball $W$ around the point $u \in U\_{adm}$, and the gradients of the inner and outer approximation functions, respectively, converge uniformly to the gradient of $p(u)$ on $W$ for any decreasing sequence $\{\tau\_k\}\_{k \in \mathbb{N}} \subset (0, 1)$, for any $i \in I$, such that*

$$\lim\_{k \to \infty} \inf\_{u \in \mathcal{W}} \nabla\_u (\mathbf{1} - \boldsymbol{\mu}\_i(\tau\_k, u)) = \nabla\_u p\_i(u);\tag{66}$$

$$\limsup\_{k \to \infty} \sup\_{u \in W} \nabla\_u \rho\_i(\tau\_k, u) = \nabla\_u p\_i(u);\tag{67}$$

For a particular $\tau \to 0+$, all of the gradients of the functions are zero. Thus, as $k \to \infty$,

$$\lim\_{\tau\_k \to 0+} \inf\_{u \in W} \nabla\_u (\mathbf{1} - \boldsymbol{\mu}\_i(\tau\_k, u)) = \nabla\_u p\_i(u) = \mathbf{0};\tag{68}$$

$$\lim\_{\tau\_k \to 0+} \sup\_{u \in W} \nabla\_u \rho\_i(\tau\_k, u) = \nabla\_u p\_i(u) = 0. \tag{69}$$

This theorem was proved in the previously submitted work [25]. Suppose each property of Assumption (5.4) holds true. Then, for the appropriate parametric function $\theta \in \mathcal{H}$, the functions $\varphi(\gamma, \tau)$ and $\psi(\gamma, \tau)$ are continuously partially differentiable w.r.t. $u$ up to order $b$ on $U\_{adm}$, for all $\tau \in (0, 1)$. The higher derivatives are,

$$\frac{\partial^{\sum\_{j=1}^{n} l\_j} E(\mathcal{H}(\tau,\gamma))}{\partial u\_1\, \partial u\_2\, \partial u\_3 \dots \partial u\_j} = \int\_{\Omega} \frac{\partial^{\sum\_{j=1}^{n} l\_j} \varphi(\gamma,\tau)}{\partial u\_1\, \partial u\_2\, \partial u\_3 \dots \partial u\_j}\, \phi(\xi)\, d\xi \tag{70}$$

and for the case of the outer approximation,

$$\frac{\partial^{\sum\_{j=1}^{n} l\_j} E(\mathcal{H}(\tau,-\gamma))}{\partial u\_1\, \partial u\_2\, \partial u\_3 \dots \partial u\_j} = \int\_{\Omega} \frac{\partial^{\sum\_{j=1}^{n} l\_j} \rho(\gamma,\tau,u)}{\partial u\_1\, \partial u\_2\, \partial u\_3 \dots \partial u\_j}\, \phi(\xi)\, d\xi. \tag{71}$$

Further analysis of the convergence approach and the uniform convergence of the gradients of the approximation functions is given in Proposition 4.2 of [29]; thus, the equations given in (70) and (71) are equivalent.

#### **6. Convexity property of inner and outer approximations**

We note that the coefficient parameter $\kappa$ is either constant or depends on $x \in D$; the convexity issue depends on the following three cases.

Case 1: $\kappa$ is constant, independent of $x$.

Then $(u, \xi) \mapsto y(u, x, \xi) : U \times \Omega \to \mathbb{R}$ is linear by Eq. (16); hence, it is jointly convex w.r.t. $(u, \xi)$ for each $x \in D$ (see Theorem 5.4 on the continuous dependence of the solution on the random parameter).

Case 2: $\kappa$ is positive, independent of $x$ but dependent on the random variable, and a concave function.

Case 3: $\kappa$ depends on $x$, is positive and nonlinear; we have to use a linear approximation of order one, such as a first-order Taylor approximation, for $\kappa$.

For the first two cases, $\kappa \in \mathcal{K}$, the elliptic PDE system defined in Eqs. (3)–(4) becomes

$$-\Delta y(x,\xi) = \frac{1}{\kappa} f(u,x), \quad \text{for } x \in D,\tag{72}$$

$$y(x, u, \xi) = g(x, u, \xi), \quad \text{for } x \in \partial D,\ \xi \in \Omega \ \text{a.s.}\tag{73}$$

For all three cases, whether $\kappa$ is independent of or depends on $x$, we can solve the PDE system by the stochastic finite difference method (SFDM). It follows that $(\mathcal{A}(\kappa))^{-1} = \frac{1}{\kappa}\mathcal{A}^{-1}$, and thus

$$y(u, x, \xi) = \frac{1}{\kappa} \mathcal{A}^{-1}\left[j\_{\mathcal{H}}(f(\cdot)) + k\,\mathbf{g}(\cdot,\xi)\right](x) \le y\_{\max}.\tag{74}$$

It is equivalent to

$$\mathcal{A}^{-1}\left[j\_{\mathcal{H}}(f(\cdot)) + k\,\mathbf{g}(\cdot,\xi)\right](x) - \kappa\, y\_{\max}(x) \le 0.\tag{75}$$

The expression on the left-hand side of the last inequality is jointly convex w.r.t. $(u, \xi)$ because of the linearity of $f$ w.r.t. $(u, \xi)$, the concavity of $\kappa$, and the nonnegativity of $y\_{max}$. Since

$$\Pr\left\{y(u,x,\xi) \le y\_{\max},\ \forall x \in D\right\} = \Pr\left\{\mathcal{A}^{-1}\left[j\_{\mathcal{H}}(f(\cdot)) + k\,\mathbf{g}(\cdot,\xi)\right](x) - \kappa\, y\_{\max}(x) \le 0,\ \forall x \in D\right\}.\tag{76}$$

is valid, Proposition 4.2 of [13] yields the following result with the measure-zero property. Hence, for any arbitrary random function $g \in \mathcal{B}$, the internal part of the probabilistic function is linear, which implies the quasi-concavity/quasi-convexity of $\gamma$. Hence, the probabilistic uniform-constrained function is convex; see Proposition (3.10). For the general case, the convexity of the proposed approach is proved in the following proposition.

**Proposition 6.1.** *Let $\gamma(x, u, \xi) = y - y\_{max} \le 0$, $\forall \xi \in \Omega$, $u \in U\_{adm}$, with the measure-zero property of Proposition 4.2, and $x \in D$. If $\gamma(x, u, \xi)$ is quasi-convex or quasi-concave, then the approximation functions $\psi\_i(\tau, u, \xi) = 1 - E(\theta(\tau, \gamma(x, u, \xi))) \le 1 - \alpha$ and $\varphi\_i(\tau, u, \xi) = E(\theta(\tau, -\gamma(x, u, \xi))) \ge \alpha$ are convex.*

*Proof.* The convexity of the outer approximation is proved in [13, 29]. Let $m(s) = e^{-s/\tau}$, a quasi-convex function for all $s$ and $\tau \in [0, 1]$. The sum and constant multiples of quasi-convex functions are quasi-convex. Based on this statement, $l(s) = 1 + m\_2 \tau e^{-s/\tau}$ is a quasi-convex function. The reciprocal of $l(s)$, which is

$$\left(l(s)\right)^{-1} = \left(1 + m\_2 \tau e^{-s/\tau}\right)^{-1},\tag{77}$$

is said to be reciprocally quasi-concave [31]. This follows from

$$\frac{\partial \left(l(s)^{-1}\right)}{\partial s} = m\_2 \tau e^{-s/\tau} \left[1 + m\_2 \tau e^{-s/\tau}\right]^{-2},\tag{78}$$

so that $\frac{\partial \left(l(s)^{-1}\right)}{\partial s} \to 0$ as $\tau \to 0+$. The second derivative with respect to $s$ is negative at the stationary point as $\tau \to 0+$. To check the convexity property through the second-order derivative test,

$$\frac{\partial^2 \left(l(s)^{-1}\right)}{\partial s^2} = -\left(m\_2 \tau e^{-s/\tau}\right) \left[1 + m\_2 \tau e^{-s/\tau}\right]^{-2} \left(1 - 2 m\_2 \tau e^{-s/\tau}\right) < 0,\tag{79}$$


not valid for $0 < m\_2 \le m\_1/(1 + m\_1)$ and $2 m\_2 \tau e^{-s/\tau} > 1$. The function $l$ is a strictly concave function for every $\tau \in (0, 1]$. Every concave function is quasi-concave. Therefore, $(l(s))^{-1} = \left(1 + m\_2 \tau e^{-s/\tau}\right)^{-1}$ is a concave function, and it also holds the quasi-concavity property. It follows that $\theta(\tau, s) = (1 + m\_1 \tau)\,(l(s))^{-1}$ is a constant multiple of a concave function and hence concave. The negative of a concave function is convex, which implies that $-\theta(\tau, s) = -(1 + m\_1 \tau)\,(l(s))^{-1}$ is a convex function. Since the integral of a convex function is convex, $-E(\theta(\tau, s)) = -\int\_{\Omega} (1 + m\_1 \tau)\,(l(s))^{-1} \phi(\xi)\, d\xi$ is a convex function. Furthermore, $\psi(\tau, s) = 1 - E(\theta(\tau, s)) \ge \alpha$ is a convex function, based on the property that the sum of convex functions is convex. Therefore, the inner approximation is generally a convex function. □

**Theorem 6.2.** *The functions $E(\theta(\tau, s)) = 1 - \psi(\tau, s) \ge \alpha$ and $\varphi(\tau, u, \xi) = E(\theta(\tau, -s)) \ge \alpha$ are convex by Proposition (6.1). With the regularization parameter $\rho \ge 0$, the objective function $E(J(y(\cdot)))$ is convex in Eq. (38). If $P(u, x)$ is a continuous and convex function w.r.t. $u$, then the $IA\_{\tau}$ and the $OA\_{\tau}$ have unique optimal solutions as $\tau \to 0+$. The optimal solutions of the approximations converge to the optimal solution of the CCPDE.*

This theorem has been proved in the work [13].
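The limiting behavior of the smoothing function can also be illustrated numerically. The following Python sketch is a hypothetical stand-alone implementation of $\theta(\tau, s) = (1 + m\_1\tau)(l(s))^{-1}$ with $l(s) = 1 + m\_2 \tau e^{-s/\tau}$ as in Eq. (77); the choice $m\_1 = m\_2 = 1$ is for illustration only. It shows that $\theta(\tau, s)$ approaches the indicator of $\{s > 0\}$ as $\tau \to 0+$, which is the mechanism by which the inner and outer approximations converge to the chance constraint.

```python
import math

def theta(tau, s, m1=1.0, m2=1.0):
    """Smoothing function theta(tau, s) = (1 + m1*tau) / (1 + m2*tau*exp(-s/tau)).

    As tau -> 0+, theta tends to 1 for s > 0 and to 0 for s < 0,
    i.e., it approximates the indicator function of {s > 0}.
    """
    z = -s / tau
    if z > 700.0:  # exp(z) would overflow; theta is numerically zero there
        return 0.0
    return (1.0 + m1 * tau) / (1.0 + m2 * tau * math.exp(z))

for tau in (0.1, 0.01, 0.001):
    print(tau, theta(tau, 0.5), theta(tau, -0.5))
```

For decreasing τ, the printed pair approaches (1, 0), mirroring the convergence of $IA\_{\tau}$ and $OA\_{\tau}$ to the probabilistic constraint.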

#### **7. Numerical implementation and case study**

This work considers stationary boundary value elliptic PDEs, where the randomness comes from the boundary conditions of the PDE system. It analyzes the observed temperature variation and distribution processes widely used in biological applications, such as hyperthermia treatment of cancer tissue in human body organs. This work is an extension of the previous works of Kibru et al. [13, 14]. The optimal heating of an enclosed, thermally well-insulated, spatial domain $D\_c \subset D$ to the desired temperature $y\_d$ is considered. The heat injection is effected through a distributed stationary heat source ($y(u^\*) = T(u^\*)$) [20, 32].

The heat source is assumed to be highly affected by uncertainties, e.g., due to inaccuracies arising from the heating devices and inlet heating processes. The boundary condition is supposed to be nonhomogeneous and nonlinear, depending on random parameters, so that the thermal conductivity is either spatially constant but not precisely known, or position-dependent, $x \in D$. Furthermore, despite the uncertainties, the specified overall desired temperature $y\_d$ is deterministic. The required state solution $y(x, u^\*, \xi)$ is random, and it should be kept below a maximum allowed value $y\_{max}$ with a high reliability level $\alpha \le 0.95$ in a given subset $D\_c$ of $D$. More practical applications are studied in [14, 33].

The numerical random state solution $y$ is obtained from the stochastic finite difference method (SFDM [34]). We discretized 10,000 points from the $x\_1$ and $x\_2$ axes with the mesh generation. After the solution $y(x) = y(x\_1, x\_2, u, \xi)$ is obtained on the infinite-dimensional space $D$ of the PDEs, the optimization problem is reduced to the finite-dimensional $CCPDE\_{red}$. Nonsmooth analysis is relevant for solving this nondifferentiable CCPDE, where the random fluctuation comes from the system's boundary. We have developed smooth parametric approximations, called IA and OA, in the equations above.

The variables $\xi \in \Omega$ are independent and identically distributed (iid) standard normal random variables. The samples for the random variable are generated using the multilevel Monte Carlo (ML-MCM) sampling approach. Subsequently, the PDE system is solved through the SFDM using a MATLAB implementation at each step of the optimization algorithm. After discretization, the inner and outer approximation problems are solved using the MATLAB optimization function **fmincon.m**, for decreasing values of $\tau = 10^{-k}$, $k = 1, \dots, 4$.
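The warm-started τ-continuation just described can be sketched in Python, with scipy's SLSQP solver standing in for fmincon. The model below is purely illustrative and is not the PDE problem of this chapter: a scalar control $u$, a toy "state" $y(u, \xi) = u + \xi$, and a sample-average inner-approximation constraint $E[\theta(\tau, y\_{max} - y)] \ge \alpha$ with $m\_1 = m\_2 = 1$; only the loop over $\tau = 10^{-k}$ with warm starts mirrors the actual implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
xi = rng.standard_normal(20_000)          # iid standard normal samples (ML-MCM stand-in)
y_max, alpha, m1, m2 = 2.0, 0.95, 1.0, 1.0

def theta(tau, s):
    """Smoothing of the indicator 1_{s>0}, cf. the function l(s) in Eq. (77)."""
    z = np.minimum(-s / tau, 700.0)       # clip to avoid overflow in exp
    return (1.0 + m1 * tau) / (1.0 + m2 * tau * np.exp(z))

def objective(u):
    return (u[0] - 1.0) ** 2              # track an (infeasible) target u = 1

u = np.array([0.0])                       # initial guess
for k in range(1, 5):                     # tau = 10^-1, ..., 10^-4, warm-started
    tau = 10.0 ** (-k)
    cons = {"type": "ineq",
            "fun": lambda u, tau=tau: theta(tau, y_max - (u[0] + xi)).mean() - alpha}
    res = minimize(objective, u, constraints=[cons], method="SLSQP")
    u = res.x

print("optimal control:", u[0])
print("empirical reliability:", np.mean(u[0] + xi <= y_max))
```

For this toy model, the continuation drives $u$ toward the 95% quantile bound $y\_{max} - q\_{0.95} \approx 0.35$, with the empirical reliability close to α.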

In our previous work, we have considered practical applications on hyperthermia treatment (HT) and planning as a case study Kibru et al. [14]. The HT and planning are used as an accompanying strategy in modern clinical cancer therapy [20]. Hyperthermia treatment consists of the heating of tumor tissue in order to subdue or eradicate the growth of tumor cells from a given organ. The hyperthermia treatment procedure aims to heat the tumor tissue in the human body to a given temperature without causing damage to healthy surrounding sensitive tissue due to overheating.

The heating is usually done through multiple electromagnetic (EM) sources, where each EM source generates an electric field $G(x)$, with heat capacity $c$, density $\rho$, and phases and amplitudes $p$. As a result, the electric fields facilitate a net power deposition on the tumor region given by [20], e.g., $Q(x) = \frac{\sigma(x)}{2}\,|G(x)|^2$, where $G(x) = \sum\_{j=1}^{N} p\_j G\_j(x)$ is a linear superposition of the individual fields and $\sigma(x)$ is the electric conductivity. In general, the phases and the power $Q$, corresponding to each individual antenna, are not known in advance.
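The superposition formula can be evaluated directly; the following numpy sketch computes $Q(x) = \frac{\sigma(x)}{2}|\sum\_j p\_j G\_j(x)|^2$ on a grid. The one-dimensional field data, phases $p\_j$, and conductivity $\sigma$ here are illustrative assumptions, not the clinical quantities of [20].

```python
import numpy as np

def power_deposition(p, G, sigma):
    """Net power Q(x) = sigma(x)/2 * |sum_j p_j G_j(x)|^2 on a grid.

    p     : complex amplitudes/phases of the N antennas, shape (N,)
    G     : complex individual fields on the grid, shape (N, nx)
    sigma : electric conductivity on the grid, shape (nx,)
    """
    total_field = np.tensordot(p, G, axes=1)   # linear superposition sum_j p_j G_j
    return 0.5 * sigma * np.abs(total_field) ** 2

# toy example: two counter-rotating antenna fields on a 1D grid
x = np.linspace(0.0, 1.0, 5)
G = np.stack([np.exp(1j * 2 * np.pi * x), np.exp(-1j * 2 * np.pi * x)])
p = np.array([1.0 + 0j, 1.0 + 0j])
sigma = np.ones_like(x)
Q = power_deposition(p, G, sigma)
```

Here the two fields interfere to give $Q(x) = 2\cos^2(2\pi x)$, illustrating how the unknown phases $p\_j$ shape the deposition pattern.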

**Figure 1** displays the surface of the state solution $y(x, u^\*, \xi)$ and the adjoint operator $P$ for the optimality criteria of the infinite-dimensional CCPDE problem, where

$$\left(\mathbb{S}^\* \left( y - y\_d \right) + \rho(u - u^\*)\right) \ge 0, \quad \forall u \in U\_{adm},\tag{80}$$

where $\mathbb{S} : U\_{adm} \mapsto \mathcal{H}$ is the control-to-state map,

$$y^\* = \mathbb{S}(u^\*),\tag{81}$$

is the optimal state of the CCPDE, since the CCPDE is a convex optimization problem w.r.t. $u$ by Proposition (3.10), and

$$P = \mathbb{S}^\* \left( y - y\_d \right) = \mathbb{S}^\* \left( \mathbb{S}(u) - y\_d \right) \tag{82}$$

**Figure 1.** *The solution of PDEs and adjoint P. (a) State y. (b) Adjoint P.*


**Figure 2.**

*The control IA<sup>τ</sup> . (a) IA, tau=0.1 (b) IA, tau=0.01 (c) IA, tau=0.001 (d) IA, tau=0.0001.*

**Figure 3.**

*The control OA<sup>τ</sup> . (a) OA, tau=0.1 (b) OA, tau=0.01 (c) OA, tau=0.001 (d) OA, tau=0.0001.*

are displayed in **Figure 1**(a) and (b), respectively. Hence, the state and the adjoint depend on the random variable, so they are random state and adjoint functions. The solution of the boundary value PDEs obtained from the SFDM is displayed in (a) and (b) of **Figure 1**.

The controls obtained from the IA and OA are displayed in **Figures 2** and **3** for different values of $\tau = 10^{-k}$, $k = 1, \dots, 4$; the optimal controls from the optimization approach of the IA and OA are displayed in (a) to (d), respectively.

The error between the IA and OA is displayed in **Figure 4**(a–d); when $\tau$ is reduced, that is, $\tau \to 0+$, the error approaches zero. This shows that the controls of the inner and outer approximations are equal at $\tau = 0.001$.

**Figure 4.**

*The errors of IA<sup>τ</sup>* , *OAτ, Obj. (a) Error of IA and OA, tau=0.1; (b) Error of IA and OA, tau=0.01; (c) Error of IA and OA, tau=0.001; (d) Error of IA and OA, tau=0.0001; (e) Error of IA and OA, objective of OA and IA.*

Finally, **Figure 4e** shows the objective functions $J\_{IA}(\tau)$ and $J\_{OA}(\tau)$ as $\tau \to 0+$, with the difference $IA - OA$ shown at $\tau = 10^{-4}$. The error is zero when $\tau$ is less than 0.001 in **Figure 4a**.

**Example:** Solve the following CCPDE, where the $\xi$ are iid with $\mu = 0$ and $\sigma = 1$:

$$-\nabla \cdot (\kappa(x)\nabla y) = f(x), \quad \kappa(x) = 1, \quad f(x) = 0, \quad g(u, \xi, x) = (u(x) - y)\,\xi, \quad y\_{\max} = 2.999;\tag{83}$$

$$y\_d = 2 - 2\left(x\_1(x\_1 - 1) + x\_2(x\_2 - 1)\right), \quad u\_{\min} = 0.5, \quad u\_{\max} = 4.5;\tag{84}$$

$$\rho = 10^{-3}, \quad \alpha = 0.95; \tag{85}$$

$$D = [\mathbf{0}, \mathbf{1}] \times [\mathbf{0}, \mathbf{1}].\tag{86}$$
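For a single realization of $\xi$, the deterministic core of the SFDM for this example ($\kappa = 1$, $f = 0$) is an ordinary 5-point finite-difference solve on $D = [0, 1] \times [0, 1]$. The sketch below is a minimal illustration only; the grid size, the Jacobi iteration count, and the harmonic boundary data (standing in for one realization of $g(u, \xi, x)$) are assumptions chosen so that the exact solution is known.

```python
import numpy as np

def solve_laplace(n, boundary):
    """Solve Laplace's equation (kappa = 1, f = 0) on the unit square with
    Dirichlet boundary data, using the 5-point stencil and Jacobi iteration."""
    xs = np.linspace(0.0, 1.0, n)
    y = np.zeros((n, n))
    # impose the boundary data on the four edges
    y[0, :] = boundary(xs[0], xs)
    y[-1, :] = boundary(xs[-1], xs)
    y[:, 0] = boundary(xs, xs[0])
    y[:, -1] = boundary(xs, xs[-1])
    for _ in range(3000):  # Jacobi sweeps; enough to converge on this small grid
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1]
                                + y[1:-1, 2:] + y[1:-1, :-2])
    return xs, y

# one realization: harmonic boundary data, for which the exact solution is x1 + x2
xs, y = solve_laplace(17, lambda x1, x2: x1 + x2)
```

Because the boundary data $x\_1 + x\_2$ is harmonic, the discrete solution reproduces it exactly, which makes the sketch easy to verify before replacing the boundary function with sampled data.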

#### **Acknowledgements**

This work is financially supported by KAAD and DFG.

#### **Author details**

Kibru Teka\*, Abebe Geletu and Pu Li Process Optimization Group, Technical University of Ilmenau, Ilmenau, Germany

\*Address all correspondence to: kibru-teka.nida@tu-ilmenau.de

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


#### **References**

[1] Adams RA, Fournier JJF. Sobolev Spaces. In: Pure and Applied Mathematics. 2nd ed. Vol. 140. Amsterdam: Academic Press (Elsevier); 2003

[2] Babuška I, Nobile F, Tempone R. A stochastic collocation method for elliptic partial differential equations with random input data. SIAM Review. 2010;**52**(2):317-355

[3] Babuška I, Chleboun J. Effects of uncertainties in the domain on the solution of Neumann boundary value problems in two spatial dimensions. Mathematics of Computation. 2001;**71**:1339-1370

[4] Babuška I, Tempone R, Zouraris GE. Galerkin finite element approximations of stochastic elliptic partial differential equations. SIAM Journal on Numerical Analysis. 2004;**24**:800-825

[5] Babuška I, Tempone R, Zouraris GE. Solving elliptic boundary value problems with uncertain coefficients by the finite element method: The stochastic formulation. Computer Methods in Applied Mechanics and Engineering. 2005;**194**:1251-1294

[6] Geletu A, Hoffmann A, Klöppel M, Li P. An inner-outer approximation approach to chance constrained optimization. SIAM Journal on Optimization. 2017;**27**(3):1834-1857

[7] Ahmed NU. Stability of a class of stochastic distributed parameter systems with random boundary conditions. Journal of Mathematical Analysis and Applications. 1983;**92**:274-298

[8] Alla A, Volkwein S. Asymptotic stability of POD based model predictive control for a semilinear parabolic PDE. Advances in Computational Mathematics. 2015;**41**:1073-1102

[9] Ghanem RG, Kruger RM. Numerical solution of spectral stochastic finite element systems. Computer Methods in Applied Mechanics and Engineering. 1996;**168**:289-303

[10] Ghanem R. Ingredients for a general purpose stochastic finite element formulation. Computer Methods in Applied Mechanics and Engineering. 1999;**168**:19-34

[11] Prékopa A. Stochastic Programming. In: Mathematics and Its Applications (MAIA, Vol. 324). Dordrecht: Kluwer; 1995. ISBN 978-90-481-4552-2, ISBN 978-94-017-3087-7

[12] Casas E, Tröltzsch F. First- and second-order optimality conditions for a class of optimal control problems with quasilinear elliptic equations. SIAM Journal on Control and Optimization. 2009;**49**(2):688-718

[13] Geletu A, Hoffmann A, Schmidt P, Li P. Chance constrained optimization of elliptic PDE systems with smoothing convex approximations. International Journal of Systems Science. 2018;**26**:70

[14] Teka K, Geletu A, Li P. A computation approach to chance constrained optimization of boundary-value parabolic partial differential equation systems. In: IFAC World Congress 2020. Berlin, Germany; 2020

[15] Kibru TN, Geletu A, Li P. A computation approach to chance constrained optimization of boundary-value parabolic partial differential equation systems. In: IFAC World Congress 2020; 2020

[16] Farshbaf-Shaker MH, Henrion R, Hömberg D. Properties of chance constraints in infinite dimensions with an application to PDE constrained optimization. Set-Valued and Variational Analysis. 2018;**26**:821-841

[17] Altmüller N, Grüne L. Distributed and boundary model predictive control for the heat equation. GAMM-Mitteilungen. 2012;**35**(2):131-145

[18] Brezis H. Functional Analysis, Sobolev Spaces and Partial Differential Equations. USA: Springer; 2011

[19] Karhunen K. Über lineare Methoden in der Wahrscheinlichkeitsrechnung. Annales Academiae Scientiarum Fennicae, Series A. I. 1947;**37**:3-79

[20] Deuflhard P, Schiela A, Weiser M. Mathematical Cancer Therapy Planning in Deep Regional Hyperthermia. Berlin: Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB-Report); 2011. pp. 11-39

[21] Hille E, Phillips RS. Functional Analysis and Semi-Groups. Vol. 31. Providence, Rhode Island: American Mathematical Society Colloquium Publications; 2018

[22] Tröltzsch F. Optimal Control of Partial Differential Equations: Theory, Methods and Applications. Providence, Rhode Island: American Mathematical Society; 2005

[23] Grossmann C, Roos HG, Stynes M. Numerical Treatment of Partial Differential Equations. Berlin, Heidelberg, New York: Springer; 2007

[24] Kouri DP, Heinkenschloss M, Ridzal D, van Bloemen Waanders BG. A trust-region algorithm with adaptive stochastic collocation for PDE optimization under uncertainty. SIAM Journal on Scientific Computing. 2013;**35**(4):A1847-A1879

[25] Kibru TN, Geletu A, Li P. New development on the properties of generalized derivatives and smoothing approximation approach for the chance constrained optimization of boundary value elliptic PDE systems. Submitted for publication in AMS; 2022

[26] Nemirovski A, Shapiro A. Convex approximations of chance constrained programs. SIAM Journal on Optimization. 2006;**17**:969-996

[27] Zowe J, Kurcyusz S. Regularity and stability for the mathematical programming problem in Banach spaces. Applied Mathematics and Optimization. 1979;**5**:49-62

[28] van Ackooij W, Aleksovska I, Munoz-Zuniga M. (Sub-)differentiability of probability functions with elliptical distributions. Springer Science; 2010

[29] Geletu A, Hoffmann A, Li P. Analytic approximation and differentiability of joint chance constraints. Taylor and Francis; 2019

[30] Hantoute A, Henrion R, Pérez-Aros P. Subdifferential characterization of


probability functions under Gaussian distribution. Mathematical Programming. Springer-Verlag GmbH; 2018

[31] Merkle M. Reciprocally convex functions. Journal of Mathematical Analysis and Applications. 2004;**293**(1):210-218

[32] Dubljevic S, Christofides PD. Predictive output feedback control of parabolic partial differential equations (PDEs). Industrial and Engineering Chemistry Research. 2006;**45**:8421-8429

[33] Dubljevic S, El-Fara NH, Mhaskar P, Christofides PD. Predictive control of parabolic PDEs with state and control constraints. International Journal of Robust and Nonlinear Control. 2006;**16**: 749-772

[34] Eiermann M, Ernst OG, Ullmann E. Computational aspects of the stochastic finite element method. Computing and Visualization in Science. 2007;**10**(1):3-15

#### **Chapter 8**

## Isochronous Oscillations of Nonlinear Systems

*Jean Akande, Kolawolé Kêgnidé Damien Adjaï, Marcellin Nonti and Marc Delphin Monsia*

#### **Abstract**

Real-world systems, such as physical and living systems, are generally subject to vibrations that can affect their long-term integrity and safety. Thus, the determination of the law that governs the evolution of the oscillatory quantity has become a major topic in modern engineering design. The process often leads to solving nonlinear differential equations. However, one must admit that the main objective of the theory of differential equations, obtaining explicit solutions, is far from being achieved. While we know how to solve linear systems, systems of nonlinear differential equations are in general not solved. Isochronous nonlinear systems have therefore received particular attention. This chapter is devoted to presenting some recent developments and advances in the theory of isochronous oscillations of nonlinear systems. The harmonic oscillator, as a prototype of isochronous systems, is investigated to state some useful definitions (section 2), and the existence of second-order isochronous nonlinear systems having explicit elementary first integrals with an exact sinusoidal solution, and of higher-order autonomous nonlinear systems that reproduce the dynamics of the harmonic oscillator, is proven (section 3). Finally, higher-order nonautonomous nonlinear systems that can exhibit isochronous oscillations are shown (section 4), and a conclusion for the chapter is presented.

**Keywords:** nonlinear dynamic systems, Hamiltonian systems, higher-order autonomous and nonautonomous equations, isochronous oscillations

#### **1. Introduction**

The enormous literature generated by the qualitative theory of dynamic systems suggests that all questions about nonlinear systems are well studied and answered. Far from it: there are many interesting questions that are not fully resolved. We must note that a dynamic system is a time-dependent system that can undergo regular or chaotic processes. The dynamics of such systems are often described by finite-difference equations (discrete dynamic systems) or differential equations (continuous dynamic systems). Since many problems in engineering, physics, biology, and applied mathematics are formulated in terms of differential equations, continuous dynamic systems have been the subject of intensive investigation in the literature. In particular, planar polynomial dynamic systems given by [1–4]

$$
\dot{\mathfrak{x}} = P(\mathfrak{x}, \mathfrak{y}), \quad \dot{\mathfrak{y}} = Q(\mathfrak{x}, \mathfrak{y}), \tag{1}
$$

where the overdot indicates differentiation with respect to time, and *P* and *Q* are polynomials in *x* and *y*, are widely investigated from the perspective of the existence of isochronous centers, limit cycles, and elementary first integrals. One question that has received special attention is the determination of the maximum number of limit cycles of the polynomial system (1), motivated by the second part of Hilbert's 16th problem [1]. Another important question is the determination of polynomial and rational first integrals that ensure the complete integrability of polynomial dynamic systems of type (1), inspired by the work of Darboux [2–4]. When the first integrals do not explicitly depend on time, the system is said, in the case of autonomous systems, to be conservative and can exhibit periodic solutions. A prototype of dynamic systems that can experience conservative oscillations is the harmonic oscillator described by [5–7]

$$
\ddot{x} + x = \mathbf{0},
\tag{2}
$$

such that

$$\mathbf{x}(t) = \cos t. \tag{3}$$

The harmonic oscillator (2) is characterized by a fixed constant period *T* ¼ 2*π*. Therefore, the system (2) is said to be an isochronous dynamic system in contrast to dynamic systems exhibiting amplitude-dependent frequency oscillations known as nonlinear systems. Nonlinear systems differ from linear systems that exhibit amplitude-independent frequency. A typical example of a nonlinear dynamical system is given by the well-known cubic Duffing equation [5–8]

$$
\ddot{\mathbf{x}} + a\_1 \dot{\mathbf{x}} + a\_2 \mathbf{x} + a\_3 \mathbf{x}^3 = \mathbf{0},
\tag{4}
$$

which can exhibit, for $a\_1 = 0$, an amplitude-dependent period, and experience softening and hardening phenomena under a periodic forcing function, where $a\_1 \ge 0$, $a\_2 > 0$, and $a\_3$ are constants. Linear systems, such as Eq. (2), cannot exhibit softening and hardening, which lead in general to fatigue and failure of material systems [9, 10]. Consequently, the problem of finding dynamic systems, more precisely nonlinear dynamic systems since real-world systems are nonlinear, that preserve the feature of amplitude-independent frequency has become a vital question for modern engineering design. Thus, the design and identification of nonlinear isochronous systems have generated a major and attractive research field in the theory of dynamic systems, such that the well-established qualitative theory of dynamic systems has been widely applied to identify isochronous centers or systems that can exhibit amplitude-independent periods. In this way, theorems for the existence of a center and an isochronous center are established, particularly for the system (1), where $P(x, y)$ and $Q(x, y)$ are not necessarily

polynomials in *x* and *y* [5, 6, 11–17]. Additionally, a multitude of approximation methods for periodic solutions has been developed in the literature on the basis of the predictions of the qualitative theory of differential equations and numerical results, while no exact explicit solutions are known. However, many of these studies are not mathematically consistent, as shown by the recent developments and advances in the theory of differential equations due to the considerable contribution of Monsia and coworkers. Consider as an example of illustration the unusual Lienard equation

$$
\ddot{x} - \frac{a x}{\sqrt{\mu^2 - x^2}}\, \dot{x} = 0,\tag{5}
$$

investigated by Akande et al. [18], where *a* and *μ* are constants. The authors [18] showed that Eq. (5) has the exact isochronous harmonic solution

$$x(t) = \mu \sin\left[-a(t+K)\right],\tag{6}$$

where $a < 0$, $\mu > 0$, $a \ne -\mu$, and $K$ is an arbitrary constant, while Eq. (5) obviously does not satisfy the classical existence theorems for a center for the Lienard equation of the form

$$
\ddot{\mathbf{x}} + f(\mathbf{x})\dot{\mathbf{x}} + \mathbf{g}(\mathbf{x}) = \mathbf{0},
\tag{7}
$$

where $f(x)$ and $g(x)$ are functions of $x$ [5, 11, 14, 16, 17]. The inadequacy of the mentioned theorems can also be shown by considering the following exceptional quadratic Lienard-type equation [19]

$$
\ddot{\boldsymbol{x}} + \frac{\boldsymbol{u}'(\boldsymbol{x})}{\boldsymbol{u}(\boldsymbol{x})} \dot{\boldsymbol{x}}^2 = \mathbf{0},\tag{8}
$$

where $u(x)$ is a function of $x$ and the prime denotes differentiation with respect to $x$. The authors [19] proved that Eq. (8) can exhibit, for example, when $u(x) = \left(\mu^2 - x^2\right)^{-\frac{1}{2}}$, the exact isochronous harmonic solution

$$\mathbf{x}(t) = \mu \sin\left[b(t + K\_1)\right],\tag{9}$$

where $\mu > 0$, $b > 0$, and $K\_1$ are arbitrary parameters such that $b \ne \mu$. Eq. (8) belongs to the general class of Lienard-type equations

$$
\ddot{\mathbf{x}} + \mathfrak{G}(\mathbf{x})\dot{\mathbf{x}}^2 + \mathbf{g}(\mathbf{x}) = \mathbf{0},\tag{10}
$$

where $\vartheta(x)$ is a function of $x$. Eq. (10) can be generalized in the form

$$
\ddot{\mathbf{x}} + h(\mathbf{x}, \dot{\mathbf{x}})\dot{\mathbf{x}} + \mathbf{g}(\mathbf{x}) = \mathbf{0},
\tag{11}
$$

where $h(x, \dot{x})$ is a function of $x$ and $\dot{x}$. Obviously, Eq. (8), where


$$u(\mathfrak{x}) = \left(\mu^2 - \mathfrak{x}^2\right)^{-\frac{1}{2}},\tag{12}$$

does not satisfy the classical theorems for the existence of at least one periodic solution [5, 6] or for the existence of an isochronous center, as stated in Refs. [12, 16, 17]. Other counterexamples to classical existence theorems can be seen in Refs. [20–27]. While some progress has been made through the work of Calogero and coworkers [28], the same can hardly be said of dynamic systems represented by nonlinear differential equations having an exact elementary-function solution, more precisely an exact explicit isochronous sinusoidal solution, before the contribution of Monsia and his group (see Refs. [29–31] and references therein). The work of Monsia and his group revealed not only the inadequacy of the qualitative theory of dynamic systems to predict the effective behavior of nonlinear systems, but also showed the existence of many autonomous and nonautonomous nonlinear dynamic systems of second and higher order with an exact explicit isochronous sinusoidal solution. The present chapter aims to contribute to these recent developments and advances in identifying and generating second-order and higher-order autonomous and nonautonomous nonlinear dynamic systems with an exact isochronous sinusoidal solution. To do so, we study the harmonic oscillator, considered the prototype of isochronous systems (Section 2), the isochronous oscillations of higher-order autonomous nonlinear systems (Section 3), and the isochronous oscillations of higher-order nonautonomous nonlinear systems (Section 4). Finally, we present a conclusion for the chapter.
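The exact solutions quoted above can be checked numerically. The following Python sketch (an addition for illustration, not part of the original text; the parameter values $\mu = 2$, $b = 0.5$, $K_1 = 0.3$ are arbitrary choices) evaluates the residual of Eq. (8) with $u(x) = (\mu^2 - x^2)^{-\frac{1}{2}}$, for which $u'(x)/u(x) = x/(\mu^2 - x^2)$, along the claimed solution (9):

```python
import math

def residual_eq8(t, mu=2.0, b=0.5, K1=0.3):
    """Residual of Eq. (8) with u(x) = (mu^2 - x^2)^(-1/2),
    i.e. x'' + x * x'^2 / (mu^2 - x^2) = 0,
    along the claimed solution x(t) = mu * sin(b * (t + K1))."""
    x = mu * math.sin(b * (t + K1))
    xd = b * mu * math.cos(b * (t + K1))   # x'(t)
    xdd = -b * b * x                        # x''(t) = -b^2 x
    return xdd + x * xd**2 / (mu**2 - x**2)

# The residual is zero (up to rounding error) at arbitrary sample times
checks = [residual_eq8(t) for t in (0.0, 0.7, 1.9, 4.2)]
```

A residual at machine-precision level for arbitrary sample times, independent of the amplitude $\mu$, is consistent with solution (9) being exact and isochronous.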

#### **2. Harmonic oscillator**

The equation of the harmonic oscillator (2) can be rewritten as a dynamical system in the form

$$
\dot{x} = y, \quad \dot{y} = -x, \tag{13}
$$

such that the integral curves are given by

$$\frac{dy}{dx} = -\frac{x}{y}.\tag{14}$$

By separation of variables and integration, we have

$$H(\mathbf{x}, \mathbf{y}) = \frac{1}{2}\mathbf{y}^2 + \frac{1}{2}\mathbf{x}^2,\tag{15}$$

where *H* is a constant of integration known as the Hamiltonian or

$$H(\mathbf{x}, \dot{\mathbf{x}}) = \frac{1}{2}\dot{\mathbf{x}}^2 + \frac{1}{2}\mathbf{x}^2,\tag{16}$$

*Isochronous Oscillations of Nonlinear Systems DOI: http://dx.doi.org/10.5772/intechopen.106354*

so that Eq. (2) is said to be a Hamiltonian system. When $H(x, \dot{x}) = \frac{1}{2}$, the formula

$$x(t) = \cos\left(t + \varphi\right),\tag{17}$$

such that

$$
\dot{x}(t) = -\sin\left(t + \varphi\right),
\tag{18}
$$
$$

where $\varphi$ is an arbitrary constant, verifies the first integral (16). Thus, Eq. (17) is the general solution of the harmonic oscillator (2), which exhibits periodic oscillations of period $T = 2\pi$, independent of the oscillation amplitude, as shown in **Figure 1**.

Such oscillations are said to be isochronous. Since all solutions given by Eq. (17) are periodic with a fixed constant period $T$, the harmonic oscillator is called an isochronous system. Therefore, we can state the following definitions.

**Definition 1**: *A system exhibits isochronous oscillations if the period T is independent of amplitude.*

**Definition 2:** *If the periodic general solution with a fixed constant period T of a system (S) verifies $H(x, \dot{x}) = c$, where c is a constant, then such a system (S) of differential equations corresponding to the Hamiltonian H is an isochronous system.*

On the basis of these definitions, we can investigate the isochronicity of nonlinear systems below.
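Definition 2 can be illustrated with a few lines of Python (a sketch added here, not from the original chapter; the phase $\varphi = 0.8$ and sample times are arbitrary): along solution (17), the Hamiltonian (16) stays at the constant level $c = \frac{1}{2}$ for every $t$.

```python
import math

def H(x, xdot):
    """Hamiltonian (16) of the harmonic oscillator."""
    return 0.5 * xdot**2 + 0.5 * x**2

phi = 0.8  # arbitrary phase
# Along solution (17): x = cos(t + phi), xdot = -sin(t + phi)
values = [H(math.cos(t + phi), -math.sin(t + phi))
          for t in (0.0, 1.3, 2.9, 6.1)]
# Every entry equals 1/2: the constant level set required by Definition 2
```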

**Figure 1.** *Typical behavior of solution (17) when $\varphi = 0$.*

#### **3. Autonomous nonlinear systems**

Recently, Monsia and coworkers introduced a new class of first integrals in the literature [29, 30, 32]. This class contains $(n+1)$ first integrals $H(x, \dot{x})$ such that $H(x, \dot{x}) = c$ when $x(t) = \cos(t + \varphi)$, where $n \geq 0$ is an integer. The corresponding $(n+1)$ second-order autonomous nonlinear differential equations admit the exact sinusoidal general solution $\cos(t + \varphi)$. In this part, we consider such classes of first integrals to secure isochronous oscillations of autonomous nonlinear systems.

#### **3.1 Isochronous nonlinear systems**

Consider a second-order autonomous equation

$$E(\mathbf{x}, \dot{\mathbf{x}}, \ddot{\mathbf{x}}) = \mathbf{0}.\tag{19}$$

Thus, we have the following results. Theorem 1.1: Assume that

$$H\_1(\mathbf{x}, \dot{\mathbf{x}}) = b = \dot{\mathbf{x}}^2 \sum\_{\ell=0}^n \mathbf{x}^{2\ell} + \dot{\mathbf{x}}^3 + \dot{\mathbf{x}}(\mathbf{x}^2 - \mathbf{1}) + \mathbf{x}^{2n+2},\tag{20}$$

is a class of $(n+1)$ first integrals of Eq. (19), where $b$ is a constant and $n \geq 0$ is an integer. Then, Eq. (19) takes the form

$$\frac{d}{dt}H\_1(\mathbf{x}, \dot{\mathbf{x}}) = \ddot{\mathbf{x}} \left( 2\dot{\mathbf{x}} \sum\_{\ell=0}^n \mathbf{x}^{2\ell} + 3\dot{\mathbf{x}}^2 + \mathbf{x}^2 - \mathbf{1} \right) + 2\dot{\mathbf{x}} \left[ \dot{\mathbf{x}}^2 \sum\_{\ell=0}^n \ell \mathbf{x}^{2\ell - 1} + \mathbf{x}\dot{\mathbf{x}} + (n+1)\mathbf{x}^{2n+1} \right] = \mathbf{0},\tag{21}$$

with the exact sinusoidal general solution

$$\mathbf{x}(t) = \cos\left(t + \varphi\right) \tag{22}$$

where *φ* is an arbitrary constant.

**Proof.** Differentiating with respect to time, the first integral (20) immediately yields Eq. (21). To prove that formula (22) is a solution of Eq. (21), it suffices to prove that Eq. (22) verifies Eq. (20). However, it is also possible to give direct proof by substituting Eq. (22) into Eq. (21). From Eq. (22),

$$
\dot{x}(t) = -\sin\left(t + \varphi\right),
\tag{23}
$$

and

$$
\ddot{x}(t) = -\cos\left(t + \varphi\right).
\tag{24}
$$

Inserting Eqs. (22)–(24) and the trigonometric equation

$$
\cos^2(t+\varphi) + \sin^2(t+\varphi) = 1,\tag{25}
$$

into Eq. (21) leads to


(writing $s = \sin(t + \varphi)$ and $c = \cos(t + \varphi)$ for brevity)

$$
\begin{aligned}
&\ddot{x}\left(2\dot{x}\sum_{\ell=0}^{n} x^{2\ell} + 3\dot{x}^{2} + x^{2} - 1\right) + 2\dot{x}\left[\dot{x}^{2}\sum_{\ell=0}^{n}\ell x^{2\ell-1} + x\dot{x} + (n+1)x^{2n+1}\right]\\
&\quad= -c\left(-2s\sum_{\ell=0}^{n} c^{2\ell} + 3s^{2} + c^{2} - 1\right) - 2s\left[s^{2}\sum_{\ell=0}^{n}\ell c^{2\ell-1} - sc + (n+1)c^{2n+1}\right]\\
&\quad= 2s\sum_{\ell=0}^{n} c^{2\ell+1} - 2s\left(1 - c^{2}\right)\sum_{\ell=0}^{n}\ell c^{2\ell-1} - 2(n+1)s\,c^{2n+1}\\
&\quad= 2s\left[\sum_{\ell=0}^{n}(\ell+1)c^{2\ell+1} - (n+1)c^{2n+1} - \sum_{\ell=0}^{n}\ell c^{2\ell-1}\right]\\
&\quad= 2s\left[\sum_{\ell=0}^{n-1}(\ell+1)c^{2\ell+1} - \sum_{\ell=1}^{n}\ell c^{2\ell-1}\right] = 0,
\end{aligned}\tag{26}
$$

since the index shift $\ell \to \ell - 1$ turns the first sum into the second,

such that Theorem 1.1 is proved.
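The cancellation above can also be checked numerically. This Python sketch (an addition for illustration, not from the original text; $\varphi = 0.4$ and the sample times are arbitrary) evaluates the left-hand side of Eq. (21) along $x(t) = \cos(t + \varphi)$ for several values of $n$:

```python
import math

def residual_eq21(t, n, phi=0.4):
    """Left-hand side of Eq. (21), evaluated along x(t) = cos(t + phi)."""
    x = math.cos(t + phi)
    xd = -math.sin(t + phi)  # first derivative of cos(t + phi)
    xdd = -x                 # second derivative of cos(t + phi)
    s1 = sum(x**(2 * l) for l in range(n + 1))             # sum of x^(2l)
    s2 = sum(l * x**(2 * l - 1) for l in range(1, n + 1))  # sum of l x^(2l-1)
    return (xdd * (2 * xd * s1 + 3 * xd**2 + x**2 - 1)
            + 2 * xd * (xd**2 * s2 + x * xd + (n + 1) * x**(2 * n + 1)))

# Vanishes for every n >= 0 and every t, as Theorem 1.1 asserts
worst = max(abs(residual_eq21(t, n))
            for n in range(4) for t in (0.0, 0.9, 2.2, 5.0))
```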

**Remark 1**. *The $(n+1)$ first integrals given by Eq.* (20) *are the $(n+1)$ Hamiltonians of the $(n+1)$ equations given by the class of Eq.* (21)*. Theorem 1.1 shows that $H_1(x, \dot{x})$ is a time-independent constant. One can check that $H_1(x, \dot{x}) = 1$ under $x(t) = \cos(t + \varphi)$. Therefore, the $(n+1)$ systems of differential Eq.* (21) *are isochronous and exactly reproduce the dynamics of the harmonic oscillator. These results are impossible to predict by the qualitative theory of dynamic systems, mainly by the classical existence theorems [5, 6]. Indeed, the class of Eq.* (21) *can be rewritten as*

$$\ddot{x} + \frac{2\left[\dot{x}^2 \sum_{\ell=0}^n \ell x^{2\ell-1} + x\dot{x} + (n+1)x^{2n+1}\right]}{3\dot{x}^2 + x^2 - 1 + 2\dot{x}\sum_{\ell=0}^n x^{2\ell}}\, \dot{x} = 0.\tag{27}$$

Eq. (27) has the form of the mixed Lienard-type differential Eq. (11), where

$$h(\mathbf{x}, \dot{\mathbf{x}}) = \frac{2\left[\dot{\mathbf{x}}^2 \sum\_{\ell=0}^n \ell \mathbf{x}^{2\ell-1} + \mathbf{x}\dot{\mathbf{x}} + (n+1)\mathbf{x}^{2n+1}\right]}{3\dot{\mathbf{x}}^2 + \mathbf{x}^2 - \mathbf{1} + 2\dot{\mathbf{x}}\sum\_{\ell=0}^n \mathbf{x}^{2\ell}},\tag{28}$$

**Figure 2.** *Phase portrait and vector field of Eq. (27) for $n = 1$.*

and

$$\mathbf{g}(\mathbf{x}) = \mathbf{0}.\tag{29}$$

Since $h(0, 0) = 0$ is not negative, $g(x) = 0$ for $x \neq 0$, and $g(x)$ is not odd, Eq. (21) does not satisfy the classical theorems for the existence of at least one periodic solution (see Theorem 11.2 of [5], p. 387, and the Lienard-Levinson-Smith theorem of [6]), in contrast to Theorem 1.1. As an example of illustration, let $n = 0$. Thus, Eq. (27) becomes

$$
\ddot{\mathbf{x}} + \frac{2\mathbf{x}(\mathbf{1} + \dot{\mathbf{x}})}{3\dot{\mathbf{x}}^2 + 2\dot{\mathbf{x}} + \mathbf{x}^2 - 1} \dot{\mathbf{x}} = \mathbf{0} \tag{30}
$$

The phase portrait and vector field of Eq. (27) are shown in **Figures 2** and **3** for $n = 1$ and $n = 2$, respectively. Consider now the class of first-order differential equations

**Figure 3.** *Phase portrait and vector field of Eq. (27) for $n = 2$.*

**Figure 4.** *Phase portrait and vector field of Eq. (36).*

$$H\_2(\mathbf{x}, \dot{\mathbf{x}}) = b = \dot{\mathbf{x}}^2 \sum\_{\ell=0}^n \mathbf{x}^{2\ell} + \dot{\mathbf{x}}^3 \left(\mathbf{1} + \sum\_{\ell=0}^n \mathbf{x}^{2\ell+2}\right) + \dot{\mathbf{x}} \left(\mathbf{x}^{2n+4} - \mathbf{1}\right) + \mathbf{x}^{2n+2},\tag{31}$$

where *n*≥ 0 is an integer and *b* is a constant. Therefore, we have the following theorem. Theorem 1.2: If Eq. (31) is a first integral or Hamiltonian of Eq. (19), then Eq. (19) can be written as

$$\begin{aligned} &\ddot{x}\left[3\dot{x}^{2}\left(1 + \sum_{\ell=0}^{n} x^{2\ell+2}\right) + 2\dot{x}\sum_{\ell=0}^{n} x^{2\ell} + x^{2n+4} - 1\right]\\ &+ \left\{\dot{x}^{3}\sum_{\ell=0}^{n}(2\ell+2)x^{2\ell+1} + \dot{x}^{2}\sum_{\ell=0}^{n}2\ell x^{2\ell-1} + \left[(2n+4)x^{2n+3}\right]\dot{x} + (2n+2)x^{2n+1}\right\}\dot{x} = 0, \end{aligned}\tag{32}$$

with the exact general solution

$$\mathbf{x}(t) = \cos\left(t + \varphi\right). \tag{33}$$

**Proof.** By differentiation with respect to time, Eq. (31) immediately leads to Eq. (32). It suffices to show that Eq. (33) verifies Eq. (31) to prove that formula (33) is a solution of Eq. (32). Substituting Eqs. (22)–(25) into Eq. (31) yields

Writing $s = \sin(t + \varphi)$ and $c = \cos(t + \varphi)$ for brevity, and using the telescoping identity $\left(1 - c^{2}\right)\sum_{\ell=0}^{k} c^{2\ell} = 1 - c^{2k+2}$,

$$
\begin{aligned}
H_2(x, \dot{x}) &= s^{2}\sum_{\ell=0}^{n} c^{2\ell} - s^{3}\left(1 + \sum_{\ell=0}^{n} c^{2\ell+2}\right) + s\left(1 - c^{2n+4}\right) + c^{2n+2}\\
&= \left(1 - c^{2}\right)\sum_{\ell=0}^{n} c^{2\ell} - s^{3}\sum_{\ell=0}^{n+1} c^{2\ell} + s\left(1 - c^{2}\right)\sum_{\ell=0}^{n+1} c^{2\ell} + c^{2n+2}\\
&= 1 - c^{2n+2} - s^{3}\sum_{\ell=0}^{n+1} c^{2\ell} + s^{3}\sum_{\ell=0}^{n+1} c^{2\ell} + c^{2n+2} = 1,
\end{aligned}\tag{34}
$$

proving Theorem 1.2.
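As with Theorem 1.1, the computation can be verified numerically. The sketch below (an illustrative addition, not from the original text; the phase $\varphi = 1.1$ and sample points are arbitrary) evaluates the first integral (31) along $x(t) = \cos(t + \varphi)$ and confirms $H_2 = 1$ for several $n$ and $t$:

```python
import math

def H2(x, xd, n):
    """First integral (31)."""
    return (xd**2 * sum(x**(2 * l) for l in range(n + 1))
            + xd**3 * (1 + sum(x**(2 * l + 2) for l in range(n + 1)))
            + xd * (x**(2 * n + 4) - 1)
            + x**(2 * n + 2))

phi = 1.1  # arbitrary phase
# Along x(t) = cos(t + phi), H2 equals the constant b = 1 for every n and t
vals = [H2(math.cos(t + phi), -math.sin(t + phi), n)
        for n in range(4) for t in (0.0, 0.8, 2.5, 4.7)]
```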

**Remark 2**. *Eq. (32) takes the form*

$$\ddot{\boldsymbol{x}} + \frac{\left\{\dot{\mathbf{x}}^{3} \sum\_{\ell=0}^{n} (2\ell + 2)\mathbf{x}^{2\ell+1} + \dot{\mathbf{x}}^{2} \sum\_{\ell=0}^{n} 2\ell \mathbf{x}^{2\ell-1} + [(2n+4)\mathbf{x}^{2n+3}]\dot{\mathbf{x}} + (2n+2)\mathbf{x}^{2n+1}\right\}}{\left[3\dot{\mathbf{x}}^{2} \left(1 + \sum\_{\ell=0}^{n} \mathbf{x}^{2\ell+2}\right) + 2\dot{\mathbf{x}}\sum\_{\ell=0}^{n} \mathbf{x}^{2\ell} + \mathbf{x}^{2n+4} - 1\right]} \dot{\mathbf{x}} = \mathbf{0},\tag{35}$$

Obviously, Eq. (35) has the form of Eq. (27), so the classical existence theorems cannot predict its general solution (33). As previously stated, Eq. (35) contains $(n+1)$ nonlinear isochronous Hamiltonian oscillators. One can verify that $H_2(x, \dot{x}) = 1$ when $x(t) = \cos(t + \varphi)$. As an example of Eq. (35), put $n = 0$. Then, Eq. (35) becomes

$$
\ddot{x} + \frac{2x\dot{x}^3 + 4x^3\dot{x} + 2x}{3\dot{x}^2(1 + x^2) + 2\dot{x} + x^4 + x^2 - 1}\dot{x} = 0.\tag{36}
$$

The phase diagram and vector field of Eq. (36) are shown in **Figure 4**. **Figures 5** and **6** show the phase portrait and vector field of Eq. (35) for $n = 1$ and $n = 2$, respectively.

**Figure 5.** *Phase portrait and vector field of Eq. (35) for $n = 1$.*

#### **3.2 Higher-order nonlinear equation**

Nonlinear systems have been extensively investigated from the perspective of chaotic behavior. Chaos in higher-order systems has been widely studied in the literature, since such systems are in general subject to a dramatic change in their qualitative behavior under a small change in initial conditions. Consequently, the determination of exact explicit solutions has been less explored in the literature. Hence the high importance of finding higher-order systems that admit a general solution with a regular, predictable behavior when the initial conditions change. In this regard, higher-order systems having a sinusoidal general solution, such as the harmonic oscillator, cannot exhibit chaotic behavior in an analytic way. We therefore focus on these systems in this part. It is obvious [30, 32] that, if

$$H(\mathfrak{x}, \dot{\mathfrak{x}}) = b,\tag{37}$$

where *b* is a constant, and

$$\mathbf{x}(t) = \cos\left(t + \varphi\right) \tag{38}$$

then

$$\frac{d^m}{dt^m}[H(\mathbf{x},\ \dot{\mathbf{x}}) - b] = \mathbf{0},\tag{39}$$

where *m* ≥ 0 is an integer, with the exact solution (38). Indeed

$$\frac{d^m}{dt^m}[H(\mathbf{x},\ \dot{\mathbf{x}}) - b] = \frac{d^{m-1}}{dt^{m-1}}\left\{\frac{d}{dt}[H(\mathbf{x},\ \dot{\mathbf{x}}) - b]\right\} = \mathbf{0}.\tag{40}$$

Therefore, the following theorems have been proven. Theorem 1.3: Consider the Hamiltonian (20). Then, the equation

$$\frac{d^m}{dt^m} \left[ \dot{\mathbf{x}}^2 \sum\_{\ell=0}^n \mathbf{x}^{2\ell} + \dot{\mathbf{x}}^3 + \dot{\mathbf{x}} \left( \mathbf{x}^2 - \mathbf{1} \right) + \mathbf{x}^{2n+2} - \mathbf{1} \right] = \mathbf{0},\tag{41}$$

where $b = 1$, has the general solution

$$\mathbf{x}(t) = \cos\left(t + \varphi\right). \tag{42}$$

**Remark 3**. *Eq. (41) is a class of $(n+1)$ nonlinear $(m+1)$th-order autonomous systems that can exhibit isochronous oscillations.*
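The mechanism behind Theorem 1.3 is that $H_1(x, \dot{x}) - 1$ vanishes identically in $t$ along $x(t) = \cos(t + \varphi)$, so every $m$-th time derivative of it vanishes as well. A small Python sketch (added for illustration, not part of the original text; $n = 2$ and $\varphi = 0.6$ are arbitrary) makes this visible:

```python
import math

def F(t, n=2, phi=0.6):
    """H1(x, xdot) - 1 from Eq. (20), evaluated along x(t) = cos(t + phi)."""
    x = math.cos(t + phi)
    xd = -math.sin(t + phi)
    return (xd**2 * sum(x**(2 * l) for l in range(n + 1))
            + xd**3 + xd * (x**2 - 1) + x**(2 * n + 2)) - 1

# F(t) == 0 for all t along the solution, hence d^m F / dt^m == 0 for
# every m, which is exactly the content of Eq. (41)
samples = [F(0.1 * k) for k in range(60)]
```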

Theorem 1.4: Consider the Hamiltonian or first integral (31), where $b = 1$. Then, the equation

$$\frac{d^m}{dt^m} \left[ \dot{\mathbf{x}}^2 \sum\_{\ell=0}^n \mathbf{x}^{2\ell} + \dot{\mathbf{x}}^3 \left( \mathbf{1} + \sum\_{\ell=0}^n \mathbf{x}^{2\ell+2} \right) + \dot{\mathbf{x}} \left( \mathbf{x}^{2n+4} - \mathbf{1} \right) + \mathbf{x}^{2n+2} \right] = \mathbf{0},\tag{43}$$

possesses the general and harmonic isochronous solution

$$\mathfrak{x}(t) = \cos\left(t + \varphi\right). \tag{44}$$

**Figure 6.** *Phase portrait and vector field of Eq. (35) for $n = 2$.*

**Remark 4**. *Eq. (43) is a class of $(n+1)$ nonlinear $(m+1)$th-order autonomous systems that can exhibit isochronous oscillations. It is interesting to note that the constant $\varphi$ can be determined by using two initial conditions*

$$\boldsymbol{\mathfrak{x}}(t=\mathbf{0})=\boldsymbol{\mathfrak{x}}\_{0}, \quad \dot{\boldsymbol{\mathfrak{x}}}(t=\mathbf{0})=\boldsymbol{\nu}\_{0} \tag{45}$$

whereas the Cauchy initial value problem requires $q$ initial conditions for a $q$th-order system of differential equations. Additionally, we can prove the following results.

Theorem 1.5: Let

$$\frac{d^m}{dt^m} \left\{ \left[ \dot{\mathbf{x}}^2 \sum\_{\ell=0}^n \mathbf{x}^{2\ell} + \dot{\mathbf{x}}^3 + \dot{\mathbf{x}} (\mathbf{x}^2 - \mathbf{1}) + \mathbf{x}^{2n+2} - \mathbf{1} \right] \mathbf{x} \right\} = \mathbf{0}. \tag{46}$$

Then, Eq. (46) has the general and exact isochronous harmonic solution

$$\mathbf{x}(t) = \cos\left(t + \varphi\right). \tag{47}$$

**Proof.** Eq. (46) can be rewritten in the form

$$\begin{aligned} \left[ \dot{\mathbf{x}}^2 \sum\_{\ell=0}^n \mathbf{x}^{2\ell} + \dot{\mathbf{x}}^3 + \dot{\mathbf{x}} (\mathbf{x}^2 - \mathbf{1}) + \mathbf{x}^{2n+2} - \mathbf{1} \right] \frac{d^m \mathbf{x}}{dt^m} \\ + \mathbf{x} \frac{d^m}{dt^m} \left[ \dot{\mathbf{x}}^2 \sum\_{\ell=0}^n \mathbf{x}^{2\ell} + \dot{\mathbf{x}}^3 + \dot{\mathbf{x}} (\mathbf{x}^2 - \mathbf{1}) + \mathbf{x}^{2n+2} - \mathbf{1} \right] = \mathbf{0}. \end{aligned} \tag{48}$$

Since $\dot{x}^2\sum_{\ell=0}^{n} x^{2\ell} + \dot{x}^3 + \dot{x}\left(x^2 - 1\right) + x^{2n+2} - 1 = 0$ when $x(t) = \cos(t + \varphi)$, the first term of Eq. (48) is zero. The second term is also zero under Theorem 1.3. Thus, Theorem 1.5 is proved.

Theorem 1.6: Let

$$\frac{d^m}{dt^m} \left\{ \left[ \dot{\mathbf{x}}^2 \sum\_{\ell=0}^n \mathbf{x}^{2\ell} + \dot{\mathbf{x}}^3 + \dot{\mathbf{x}} \left( \mathbf{x}^2 - \mathbf{1} \right) + \mathbf{x}^{2n+2} - \mathbf{1} \right] e^\mathbf{x} \right\} = \mathbf{0}. \tag{49}$$

Then, Eq. (49) admits the general and exact isochronous sinusoidal solution

$$\mathbf{x}(t) = \cos\left(t + \varphi\right) \tag{50}$$

**Proof.** Writing Eq. (49) as

$$\begin{aligned} &\left[\dot{x}^2 \sum_{\ell=0}^{n} x^{2\ell} + \dot{x}^3 + \dot{x}\left(x^2 - 1\right) + x^{2n+2} - 1\right]\frac{d^m e^x}{dt^m}\\ &+ e^x \frac{d^m}{dt^m}\left[\dot{x}^2 \sum_{\ell=0}^{n} x^{2\ell} + \dot{x}^3 + \dot{x}\left(x^2 - 1\right) + x^{2n+2} - 1\right] = 0, \end{aligned}\tag{51}$$

allows us to note that the second term is zero under Theorem 1.3. Since $\dot{x}^2\sum_{\ell=0}^{n} x^{2\ell} + \dot{x}^3 + \dot{x}\left(x^2 - 1\right) + x^{2n+2} - 1 = 0$ for $x(t) = \cos(t + \varphi)$, the first term of Eq. (51) is equal to zero, so Theorem 1.6 is proved.

Theorem 1.7: Let

$$\frac{d^m}{dt^m} \left\{ \left[ \dot{\mathbf{x}}^2 \sum\_{\ell=0}^n \mathbf{x}^{2\ell} + \dot{\mathbf{x}}^3 \left( \mathbf{1} + \sum\_{\ell=0}^n \mathbf{x}^{2\ell+2} \right) + \dot{\mathbf{x}} \left( \mathbf{x}^{2n+4} - \mathbf{1} \right) + \mathbf{x}^{2n+2} - \mathbf{1} \right] \mathbf{x} \right\} = \mathbf{0} \tag{52}$$

Then, Eq. (52) exhibits the general and exact isochronous harmonic solution

$$\mathbf{x}(t) = \cos\left(t + \varphi\right). \tag{53}$$

**Proof.** Eq. (52) can take the form

$$\begin{split} & \left[ \dot{\mathbf{x}}^{2} \sum\_{\ell=0}^{n} \mathbf{x}^{2\ell} + \dot{\mathbf{x}}^{3} \left( \mathbf{1} + \sum\_{\ell=0}^{n} \mathbf{x}^{2\ell+2} \right) + \dot{\mathbf{x}} \left( \mathbf{x}^{2n+4} - \mathbf{1} \right) + \mathbf{x}^{2n+2} - \mathbf{1} \right] \frac{d^{m} \mathbf{x}}{dt^{m}} \\ & + \mathbf{x} \frac{d^{m}}{dt^{m}} \left[ \dot{\mathbf{x}}^{2} \sum\_{\ell=0}^{n} \mathbf{x}^{2\ell} + \dot{\mathbf{x}}^{3} \left( \mathbf{1} + \sum\_{\ell=0}^{n} \mathbf{x}^{2\ell+2} \right) + \dot{\mathbf{x}} \left( \mathbf{x}^{2n+4} - \mathbf{1} \right) + \mathbf{x}^{2n+2} - \mathbf{1} \right] = \mathbf{0} \end{split} \tag{54}$$

Since $\dot{x}^2\sum_{\ell=0}^{n} x^{2\ell} + \dot{x}^3\left(1 + \sum_{\ell=0}^{n} x^{2\ell+2}\right) + \dot{x}\left(x^{2n+4} - 1\right) + x^{2n+2} - 1 = 0$ when $x(t) = \cos(t + \varphi)$, the first term of Eq. (54) is equal to zero. The second term is equal to zero under Theorem 1.4. Therefore, Theorem 1.7 is proved.

Theorem 1.8: Let

$$\frac{d^m}{dt^m} \left\{ \left[ \dot{\mathbf{x}}^2 \sum\_{\ell=0}^n \mathbf{x}^{2\ell} + \dot{\mathbf{x}}^3 \left( \mathbf{1} + \sum\_{\ell=0}^n \mathbf{x}^{2\ell+2} \right) + \dot{\mathbf{x}} \left( \mathbf{x}^{2n+4} - \mathbf{1} \right) + \mathbf{x}^{2n+2} - \mathbf{1} \right] \mathbf{e}^x \right\} = \mathbf{0}. \tag{55}$$

Then, Eq. (55) admits the general and exact isochronous solution

$$\mathbf{x}(t) = \cos\left(t + \varphi\right). \tag{56}$$

**Proof.** Applying the rule of differentiation of a product of two functions, Eq. (55) can be written as

$$\begin{aligned} &\left[\dot{x}^{2}\sum_{\ell=0}^{n} x^{2\ell} + \dot{x}^{3}\left(1 + \sum_{\ell=0}^{n} x^{2\ell+2}\right) + \dot{x}\left(x^{2n+4} - 1\right) + x^{2n+2} - 1\right]\frac{d^m e^x}{dt^m}\\ &+ e^x \frac{d^m}{dt^m}\left[\dot{x}^{2}\sum_{\ell=0}^{n} x^{2\ell} + \dot{x}^{3}\left(1 + \sum_{\ell=0}^{n} x^{2\ell+2}\right) + \dot{x}\left(x^{2n+4} - 1\right) + x^{2n+2} - 1\right] = 0. \end{aligned}\tag{57}$$

Under Theorem 1.4, the second term of Eq. (57) is equal to zero when $x(t) = \cos(t + \varphi)$. The first term is also zero when $x(t) = \cos(t + \varphi)$, since

$$\dot{x}^2 \sum_{\ell=0}^{n} x^{2\ell} + \dot{x}^3\left(1 + \sum_{\ell=0}^{n} x^{2\ell+2}\right) + \dot{x}\left(x^{2n+4} - 1\right) + x^{2n+2} - 1 = 0.$$

This completes the proof of Theorem 1.8.

**Remark 5**. *If $H(x, \dot{x}) = b$ when $x = \cos(t + \varphi)$, then*

$$\frac{d^m}{dt^m} \{ [H(\mathbf{x}, \ \dot{\mathbf{x}}) - b] Q(\mathbf{x}, \ \dot{\mathbf{x}}) \} = \mathbf{0},\tag{58}$$

has the exact solution $\cos(t + \varphi)$, where $Q(x, \dot{x}) \neq 0$ is a function of its arguments. Now, we can investigate higher-order nonautonomous nonlinear systems.

#### **4. Nonautonomous nonlinear systems**

In recent decades, nonautonomous systems have been the subject of intensive investigation in the literature, given their applications in physics and applied mathematics [33–35]. In particular, these systems have been used to describe time-varying parameter processes in many areas of the physical and life sciences [33, 35]. Nonautonomous systems are generally investigated within the framework of the qualitative theory of differential equations. The Lyapunov method is often used to study the stability, boundedness, and conditions for the existence of periodic solutions of these systems. However, the recent literature shows that the classical existence theorems are not sufficient to predict the behavior of nonlinear dynamic systems. Additionally, qualitative results are not sufficient for engineering and industrial applications [23]. By definition [34, 35], a nonautonomous dynamic system is distinguished from an autonomous system by the fact that the solution of the associated initial value problem depends not only on the elapsed time $t - t_0$ but also on the initial time $t_0$. In this part, we prove the existence of nonautonomous dynamic systems whose solution to the initial value problem does not depend on the initial time $t_0$. To that end, we have the following result.

Theorem 1.9: Consider the Hamiltonian $H(x, \dot{x})$ such that

$$H(\mathbf{x}, \dot{\mathbf{x}}) = b,\tag{59}$$

when $x = \cos(t + \varphi)$, where *b* and *φ* are constants. Then, the nonautonomous equation

$$\frac{d^m}{dt^m}\{[H(\mathbf{x}, \dot{\mathbf{x}}) - b] \mathbf{Q}(t)\} = \mathbf{0},\tag{60}$$

has the general and exact isochronous sinusoidal solution

$$\mathbf{x}(t) = \cos\left(t + \varphi\right),\tag{61}$$

where $Q(t) \neq 0$ is a function of *t*.

**Proof.** Using the rule of differentiation of a product of two functions, we can rewrite Eq. (60) in the form

$$(H - b)\frac{d^m Q(t)}{dt^m} + Q(t)\frac{d^m}{dt^m}(H - b) = 0.\tag{62}$$

From Eq. (59), the first term of Eq. (62) is equal to zero for $x(t) = \cos(t + \varphi)$. Now $\frac{d^m}{dt^m}(H - b) = \frac{d^{m-1}}{dt^{m-1}}\left[\frac{d}{dt}(H - b)\right]$, so that $\frac{d}{dt}(H - b) = \frac{dH}{dt} = 0$, using Eq. (59) when $x(t) = \cos(t + \varphi)$. This completes the proof of Theorem 1.9.

#### **4.1 Examples of illustration**

#### *4.1.1 Example 1*

Let us consider Eq. (20), where $b = 1$. Then, the nonlinear $(m+1)$th-order nonautonomous equation

$$\frac{d^m}{dt^m} \left\{ \left[ \dot{\mathbf{x}}^2 \sum\_{\ell=0}^n \mathbf{x}^{2\ell} + \dot{\mathbf{x}}^3 + \dot{\mathbf{x}} (\mathbf{x}^2 - \mathbf{1}) + \mathbf{x}^{2n+2} - \mathbf{1} \right] \cos t \right\} = \mathbf{0},\tag{63}$$

exhibits isochronous oscillations corresponding to the general and exact sinusoidal solution

$$\mathbf{x}(t) = \cos\left(t + \varphi\right). \tag{64}$$

Solution (64) is also the general and exact isochronous sinusoidal solution of the harmonic oscillator

$$
\ddot{\mathbf{x}} + \mathbf{x} = \mathbf{0}.\tag{65}
$$

Thus, the constant *φ* can be determined using two initial conditions

$$x(t=\mathbf{0}) = x\_0, \quad \dot{x}(t=\mathbf{0}) = v\_0 \tag{66}$$

so that, as is well known, solution (64) does not depend on the initial time $t_0$.
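As a quick sanity check, the bracketed factor of Eq. (63) can be verified numerically: for $x(t) = \cos(t + \varphi)$ it vanishes identically, since $\dot{x}^2 = 1 - x^2$ collapses the sum. A minimal sketch (the choices n = 2 and φ = 0.7 are arbitrary, not from the text):

```python
import math

# Verify numerically that x(t) = cos(t + phi) annihilates the bracketed
# factor of Eq. (63); n = 2 and phi = 0.7 are arbitrary illustrative choices.
n, phi = 2, 0.7
for t in [0.0, 0.5, 1.3, 2.9]:
    x = math.cos(t + phi)
    xd = -math.sin(t + phi)  # dx/dt
    bracket = (xd**2 * sum(x**(2 * l) for l in range(n + 1))
               + xd**3
               + xd * (x**2 - 1)
               + x**(2 * n + 2) - 1)
    assert abs(bracket) < 1e-12  # the factor vanishes along the solution
```

The same check works for any n, since $\dot{x}^2\sum_{\ell=0}^n x^{2\ell} = 1 - x^{2n+2}$ and $\dot{x}^3 + \dot{x}(x^2-1) = \dot{x}(\dot{x}^2 + x^2 - 1) = 0$ on the sinusoidal solution.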

#### *4.1.2 Example 2*

Consider Eq. (31), where $b = 1$. Then, we have the nonlinear $(m+1)$th-order nonautonomous equation

$$\frac{d^m}{dt^m} \left\{ \left[ \dot{\mathbf{x}}^2 \sum\_{\ell=0}^n \mathbf{x}^{2\ell} + \dot{\mathbf{x}}^3 \left( \mathbf{1} + \sum\_{\ell=0}^n \mathbf{x}^{2\ell+2} \right) + \dot{\mathbf{x}} \left( \mathbf{x}^{2n+4} - \mathbf{1} \right) + \mathbf{x}^{2n+2} - \mathbf{1} \right] \cos t \right\} = 0,\tag{67}$$

that can exhibit isochronous sinusoidal oscillations with the general and exact solution $\cos(t + \varphi)$. It is interesting to note that the classes of Eqs. (63) and (67) contain $(n+1)$ nonlinear $(m+1)$th-order nonautonomous systems that reproduce in an exact way the isochronous harmonic oscillations of the harmonic oscillator. In this context, we can present a conclusion for the chapter.

#### **5. Conclusion**

In this chapter, we explicitly proved some results concerning isochronous sinusoidal oscillations of nonlinear systems. These results contribute to recent developments and major advances in the field of second-order and higher-order autonomous and nonautonomous nonlinear dynamic system theory.

#### **Author details**

Jean Akande†, Kolawolé Kêgnidé Damien Adjaï†, Marcellin Nonti† and Marc Delphin Monsia\*† Department of Physics, University of Abomey-Calavi, Cotonou, Benin

\*Address all correspondence to: monsiadelphin@yahoo.fr

† These authors contributed equally.

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

*Isochronous Oscillations of Nonlinear Systems DOI: http://dx.doi.org/10.5772/intechopen.106354*

### **References**

[1] Llibre J, Mereu AC, Teixeira MA. Limit cycles of the generalized polynomial Liénard differential equations. Mathematical Proceedings of the Cambridge Philosophical Society. 2010;**148**(2):363-383. DOI: 10.1017/S0305004109990193

[2] Chavarriga J, Garcia B, Llibre J. Polynomial first integrals of quadratic vector fields. Journal of Differential Equations. 2006;**230**(2): 393-421. DOI: 10.1016/j.jde.2006.07.022

[3] Gine J, Grau MM, Llibre J. Polynomial and rational first integrals for planar homogeneous polynomial differential systems. Publ.Mat. 2014:255-278. DOI: 10.5565/PUBLMAT\_Extra14\_14

[4] Christopher C, Llibre J, Pantazi C, Walcher S. On planar polynomial vector fields with elementary first integrals. Journal of Differential Equations. 2019;**267**(8):4572-4588. DOI: 10.1016/j.jde.2019.05.007

[5] Jordan DW, Smith P. Nonlinear Ordinary Differential Equations: An Introduction for Scientists and Engineers. Fourth ed. New York: Oxford University Press; 2007

[6] Mickens RE. Oscillations in Planar Dynamic Systems, Series on Advances in Mathematics for Applied Sciences. World Scientific; 1996

[7] Mickens RE. Truly Nonlinear Oscillators. Singapore: World Scientific; 2010

[8] Adjaï KKD, Koudahoun LH, Akande J, Kpomahou YJF, Monsia MD. Solutions of the Duffing and Painlevé-Gambier equations by generalized Sundman transformation. Journal of Mathematics and Statistics. 2018;**14**(1):241-252. DOI: 10.3844/jmssp.2018.241.252

[9] Monsia MD, Kpomahou YJF. Simulating nonlinear oscillations of viscoelastically damped mechanical systems. Engineering, Technology & Applied Science Research. 2014;**4**(6):714-723

[10] Kpomahou YJF, Monsia MD. Asymptotic perturbation analysis for nonlinear oscillations in viscoelastic systems with hardening exponent, Int. J. Adv. Appl. Math. and Mech. 2015;**3**(1): 49-56

[11] Sabatini M. On the period function of Lienard systems. Journal of Differential Equations. 1999;**152**(1):467-487

[12] Sabatini M. On the period function of $\ddot{x} + f(x)\dot{x}^2 + g(x) = 0$. Journal of Differential Equations. 2004;**196**(1):151-168

[13] Sabatini M. Characterizing isochronous centres by Lie brackets. Differential Equations and Dynamical Systems. 1997;**5**(1):91-99

[14] Christopher C, Devlin J. On the classification of Lienard systems with amplitude-independent periods. Journal of Differential Equations. 2004;**200**(1): 1-17

[15] Christopher C, Devlin J. Isochronous centers in planar polynomial systems. SIAM J. Math. Anal. 1997;**28**(1):162-177. DOI: 10.1137/S0036141093259245

[16] Guha P, Choudhury GA. The Jacobi last multiplier and isochronicity of Liénard type systems. Reviews in Mathematical Physics. 2013;**25**(6):1330009

[17] Kovacic I, Rand R. About a class of nonlinear oscillators with amplitudeindependent frequency. Nonlinear Dynamics. 2013;**74**(1):455-465

[18] Akande J, Adjaï KKD, Yehossou AVR, Monsia MD. Exact and sinusoidal periodic solutions of Lienard equation without restoring force. Int. J. Anal. Appl. 2022;**20**(4):1-6

[19] Kpomahou YJF, Nonti M, Adjaï KKD, Monsia MD. On the linear harmonic oscillator solution for a quadratic Lienard type equation. Mathematical Physics. 2021. Available online: https://viXra.org/abs/2101.0010v1 (preprint)

[20] Yessoufou AB, Adjaï KKD, Akande J, Monsia MD. Modified Emden type Oscillator Equations with Exact Harmonic solutions. Int. J. Anal. Appl. 2022;**20**(39):1-17

[21] Akplogan ARO, Adjaï KKD, Akande J, Avossevou GYH, Monsia MD. Modified Van der Pol-Helmholtz oscillator equation with exact harmonic solutions. 2021. DOI: 10.21203/rs.3.rs-1229125v1 (preprint)

[22] Adjaï KKD, Nonti M, Akande J, Monsia MD. Unusual non-polynomial Van der Pol oscillator equations with exact harmonic and isochronous solutions. 2021. DOI: 10.13140/RG.2.2. 17308.41606 (preprint)

[23] Adjaï KKD, Akande J, Yehossou AVR, Monsia MD. Periodic solutions and limit cycles of mixed Lienard-type differential equations. AIMS Mathematics. 2022;**7**(8): 15195-15211

[24] Akande J, Adjaï KKD, Nonti M, Monsia MD. Counter-examples to the existence theorems of limit cycles of differential equations. 2021. DOI: 10.13140/RG.2.2.15940.76167 (preprint)

[25] Monsia MD. On the exact periodic solution of a truly nonlinear oscillator equation. 2020. Available online: https://viXra.org/pdf/2009.0057v2 (preprint)

[26] Monsia MD. The non-periodic solution of a truly nonlinear oscillator with power nonlinearity. 2020. Available online: https://viXra.org/pdf/2009.0174v1 (preprint)

[27] Doutetien EA, Yehossou AR, Mallick P, Rath B, Monsia MD. On the general solutions of a nonlinear pseudooscillator equation and related quadratic lienard systems. Proceedings of the Indian National Science. 2020;**86**(4):1361-1365

[28] Françoise JP. Isochronous systems and perturbation theory. Journal of Nonlinear Mathematical Physics. 2005; **12**:315-326

[29] Akande J, Adjaï KKD, Yehossou AVR, Monsia MD. On unusual first integrals. 2022. DOI: 10.13140/ RG.2.2.16734.72006 (preprint)

[30] Adjaï KKD, Akande J, Monsia MD. On certain first integrals. 2022. DOI: 10.13140/RG.2.2.35928.57601v1 (preprint)

[31] Adjaï KKD, Akande J, Monsia MD. Higher-order nonautonomous isochronous dynamical systems. 2022. DOI: 10.13140/RG.2.2.18766.33606 (preprint)

[32] Adjaï KKD, Akande J, Monsia MD. Limit cycles and isochronous systems via first integrals. 2022. DOI: 10.13140/ RG.2.2.14252.54409 (preprint)

[33] Akande J, Adjaï KKD, Monsia MD. On damped Mathieu and periodic Lienard type equations. 2021. DOI: 10.6084/m9.figshare.14547102.v1 (preprint)

[34] Kloeden PE, Rasmussen M. Nonautonomous Dynamical Systems. AMS, Mathematical Surveys and Monographs. New York: American Mathematical Society; 2011

[35] Kloeden PE, Pötzsche C. Nonautonomous Dynamical Systems in the Life Sciences. London: Springer; 2013

#### **Chapter 9**

## Control Configuration Selection for Nonlinear Systems

*Sujatha Vijayaraghavan*

#### **Abstract**

A very popular area of research in modern control engineering is the design of controllers for nonlinear systems. Real-world systems are multivariable and nonlinear in nature, and it is highly challenging to control these nonlinear systems because they exhibit complexity. In addition, all real systems exhibit uncertainty due to slow or sudden changes in process parameters. Hence, a robust nonlinear controller design should be able to handle these uncertainties. The design of controllers for nonlinear systems needs proper selection of appropriate input–output pairings. This book chapter focuses on the conventional and proposed methods of control configuration selection for nonlinear systems.

**Keywords:** input–output pairing, closed loop undesired responses, benchmark nonlinear systems, linearization, control configuration, nonlinear controller

#### **1. Introduction**

Research in nonlinear systems is developing rapidly, and useful results continue to appear. The classical approach to a nonlinear system is linearization followed by the design of linear controllers. This approach is recommended when the nonlinearities are mild, but not when they are pronounced. In the latter case, variable transformation techniques are adopted so that effective controllers can be designed. The philosophy of nonlinear controller design is indicated by the following four schemes: i. local linearization, ii. local linearization with adaptation, iii. linearization using variable transformations and iv. special-purpose procedures.

Most nonlinear systems exhibit sustained oscillations over a wide range of operating points. Investigation of stability for nonlinear systems is based on linearization of the nonlinear system around the steady state. If the linearized system is stable in the vicinity of a point, then it is concluded that the corresponding nonlinear system will also be stable in the vicinity of that point.

There are four methods used to analyse the dynamic behaviour of nonlinear systems. A rigorous analytical approach is used to characterise the dynamics of the nonlinear system qualitatively by solving for its state. Exact linearization by variable transformation first carries out the transformation, then performs the dynamic analysis on the linear transformed version, and finally transforms back to the original variables. Numerical analysis processes the numerical values at specific points in time. Finally, in the approximate linearization method, the nonlinear system is approximated by a linear system.

Many researchers, in applications from 'drive-by-wire' cars to 'fly-by-wire' flight control systems, have shown interest in the analysis and design of nonlinear control strategies. This growing interest in nonlinear control design is due to improvements in linear control systems and the analysis of hard nonlinearities. Hence, researchers need to deal with model uncertainties and robust design.

Modern technology requires high-speed and accurate robots. The inverted pendulum is an example of a nonlinear system that finds application in the positioning of robots and the control of manipulators. As nonlinear systems exhibit limit cycles, it is not easy to use the Kalman test for checking controllability and observability. Also, stability is not simply a matter of pole locations, since the system may have multiple stable/unstable equilibrium points.

In the eighteenth century, nonlinear control was introduced to control the steam engine using the centrifugal flyball governor, as reviewed by Jamshed Iqbal et al. [1]. In 1892, Lyapunov [2] proposed that analysing the stability of the linear approximation of a nonlinear system at an equilibrium point is equivalent to analysing the stability of the nonlinear system in the vicinity of that equilibrium point. Two benchmark nonlinear systems arose from Duffing's research [3] in 1918 on nonlinear vibrations and van der Pol's findings [4] in 1926 on electronic oscillations. The various phenomena in nonlinear systems are jump resonance, limit cycles, subharmonic oscillations and frequency entrainment. In control engineering around 1930 [5], servomechanisms were approximated as second-order systems using Poincaré's phase-plane method. During the Second World War, research in nonlinear control led to the control of guided vehicles for defence. In the years 1940–1960, nonlinear systems were represented analytically using the describing function and phase-plane methods [6]. The modern era of nonlinear control developed from 1960 onwards; its key applications are in the defence sector and the industrial arena. Nonlinear real-world systems are multivariable, high-dimensional and poorly modelled, which places them outside the boundary of classical control theory. Thus, nonlinear control falls under modern control engineering, in which digital controllers are used (Fuller, 1979) [7, 8]. In the 1970s, scientists proposed that a dynamic system can be viewed as an energy transformation mechanism. In 1995, Sontag and Wang [9] proposed input–output stability of nonlinear systems using elementary subsystems, and geometric control theory was introduced by Isidori [10] to analyse the stability of nonlinear systems. The 1990s are considered the decade of the 'activation process' in nonlinear control systems.

The aim of this chapter is to analyse nonlinear systems and nonlinear control. The rest of the chapter is organised as follows: Section 2 discusses the benchmark nonlinear systems. The conventional method of input–output pairing is explained in Section 3. The proposed method of control structure selection and determination of input–output pairs is given in Section 4. At the end, the conclusion is drawn.

#### **2. Process description**

A nonlinear system does not obey the principle of superposition. In such a system, the response to a sum of inputs is not equal to the sum of the individual responses; the response to a step input of magnitude A is not A times the response to a unit step; and the step-down response is not the mirror image of the step-up response. A sinusoidal input to a nonlinear system will not produce a perfectly sinusoidal response.
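This failure of superposition is easy to demonstrate numerically. The sketch below uses the steady-state blending composition relation x = u1/(u1 + u2) from Eq. (20) of Section 3.1; the input values are arbitrary illustrative choices:

```python
def x_out(u1, u2):
    """Steady-state product composition of the blending process, x = u1/(u1 + u2)."""
    return u1 / (u1 + u2)

u2 = 5.0          # arbitrary fixed second input
a, b = 1.0, 2.0   # two arbitrary changes in the first input
response_to_sum = x_out(a + b, u2)                 # response to the summed input
sum_of_responses = x_out(a, u2) + x_out(b, u2)     # sum of the individual responses
# For a linear system these would be equal; here they are not.
superposition_holds = abs(response_to_sum - sum_of_responses) < 1e-12
print(superposition_holds)
```

Running this prints `False`: 3/8 on one side against 1/6 + 2/7 on the other, confirming that the composition relation violates superposition.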

Most chemical processes are nonlinear. Some examples of nonlinear chemical processes [11] are as follows:

1. The blending process

2. Stirred mixing tank process

3. Nonisothermal CSTR process – mildly nonlinear system [12]

4. pH neutralisation process – moderate to highly nonlinear system

5. Distillation column – highly nonlinear system

#### **2.1 The blending process**

In the blending process shown in **Figure 1**, it is required to blend pure material A and pure material B, whose respective flowrates are FA and FB. The objective of the blending process is to control the product flow rate and composition.

Representing the mathematical modelling of blending process: Total mass balance:

$$\mathbf{F} = \mathbf{F}\_\mathbf{A} + \mathbf{F}\_\mathbf{B} \tag{1}$$

Component A mass balance:

$$\mathbf{x} = \frac{\mathbf{F\_A}}{\mathbf{F\_A} + \mathbf{F\_B}} \tag{2}$$

On linearizing, the process transfer function is obtained as:

$$G\_{11}(s) = \frac{R'(s)}{W\_1'(s)} = 10^{-3} \tag{3}$$

$$G\_{12}(s) = \frac{R'(s)}{W\_2'(s)} = 10^{-3} \tag{4}$$

**Figure 1.** *Blending process.*

$$G\_{21}(s) = \frac{X'(s)}{W\_1'(s)} = \frac{-2.5 \ast 10^{-6}}{s + 4 \ast 10^{-4}}\tag{5}$$

$$G\_{22}(s) = \frac{X'(s)}{W\_2'(s)} = \frac{-2.5 \ast 10^{-6}}{s + 4 \ast 10^{-4}}\tag{6}$$

#### **2.2 Stirred mixing tank reactor**

The stirred mixing tank reactor has two input variables, the hot and cold stream flowrates, and two output variables, the liquid level and the temperature of the liquid in the tank.

For the stirred mixing tank reactor in **Figure 2**, assume the cross-sectional area is uniform and the liquid physical properties are constant. The mathematical model of the tank reactor is

$$A\_c \frac{dh}{dt} = F\_H + F\_C + F\_D - k\sqrt{h} \tag{7}$$

$$
\rho C\_p A\_c \frac{d(hT)}{dt} = \rho C\_p \left( F\_H T\_H + F\_C T\_C + F\_D T\_D - k\sqrt{h}\,T \right) \tag{8}
$$

This process is considered nonlinear because of the square-root term and the products of functions of h and T.

Process transfer function for stirred mixing tank reactor is:

$$G(s) = \begin{bmatrix} \frac{0.7}{1+9s} & 0 \\ \frac{2}{1+8s} & \frac{0.4}{1+9s} \end{bmatrix} \tag{9}$$

#### **2.3 CSTR process**

One of the nonlinear chemical processes is the nonisothermal CSTR shown in **Figure 3**. In this process, an irreversible chemical reaction takes place, where the feed material having composition $c\_{Af}$ (moles/volume) enters the reactor at the temperature $T\_f$.

**Figure 2.** *Stirred mixing tank reactor.*

*Control Configuration Selection for Nonlinear Systems DOI: http://dx.doi.org/10.5772/intechopen.107303*

**Figure 3.** *The nonisothermal CSTR.*

Assuming the concentration and temperature are uniform throughout the reactor, the exit temperature and composition are the same as those within the reactor.

The mathematical model for this process is:

$$\frac{dC\_A}{dt} = \frac{-1}{\theta}c\_A - k\_0 e^{-\left(\frac{E}{RT}\right)}c\_A + \frac{1}{\theta}c\_{Af} \tag{10}$$

$$\frac{dT}{dt} = \frac{-1}{\theta} T + \beta k\_0 e^{-\left(\frac{E}{RT}\right)} c\_A + \frac{1}{\theta} T\_f - \chi \tag{11}$$

The CSTR process has two manipulated variables and two controlled variables (reaction temperature and concentration). From the mathematical model of the CSTR process, it is observed that the nonlinearities are due to the nonlinear functions of temperature, involving the exponential of temperature and products of concentration with functions of temperature.

The process transfer function matrix of CSTR process is presented in Eq. (12):

$$
\begin{pmatrix} C\_A \\ T \end{pmatrix} = \begin{pmatrix} \frac{0.022e^{-0.33s}}{15s+1} & \frac{5e^{-0.33s}}{21s+1} \\ \frac{0.0056e^{-6s}}{21s+1} & \frac{5.9e^{-0.33s}}{21s+1} \end{pmatrix} \begin{pmatrix} F \\ F\_f \end{pmatrix} \tag{12}
$$

#### **2.4 pH neutralisation process**

Most pH processes exhibit nonlinear behaviour to a degree that is either mild or high. The pH neutralisation process is shown in **Figure 4**.

The dynamic model of the pH neutralisation system is derived using conservation and equilibrium relations, assuming perfect mixing, constant density and complete solubility of the ions involved.

The following differential equations for the effluent reaction invariants can be derived:

$$A\_1 h\_1 \frac{d W\_{a4}}{dt} = q\_{1\epsilon} (W\_{a1} - W\_{a4}) + q\_2 (W\_{a2} - W\_{a4}) + q\_3 (W\_{a3} - W\_{a4}) \tag{13}$$

$$A\_1 h\_1 \frac{d W\_{b4}}{dt} = q\_{1\mathfrak{e}} (W\_{b1} - W\_{b4}) + q\_2 (W\_{b2} - W\_{b4}) + q\_3 (W\_{b3} - W\_{b4}) \tag{14}$$

**Figure 4.** *pH neutralisation process.*

The pH and level transmitters are modelled as first order transfer functions. The desired flow rates q1 and q3 serve as setpoints.

The process is treated as a square MIMO system in which pH2 and h2 are the controlled variables and Q4 and Q6 are the manipulated variables, with Q1 and Q3 held constant.

The resulting process transfer function matrix is represented in the following Eq. (15).

$$
\begin{pmatrix} \mathrm{pH}\_2 \\ h\_2 \end{pmatrix} = \begin{pmatrix} \frac{-0.32e^{-0.8s}}{2.36s+1} & \frac{0.32e^{-0.4s}}{2.03s+1} \\ \frac{0.42e^{-0.4s}}{3.32s+1} & \frac{0.41e^{-0.1s}}{2.07s+1} \end{pmatrix} \begin{pmatrix} Q\_4 \\ Q\_6 \end{pmatrix} \tag{15}
$$

#### **2.5 Distillation column**

The binary distillation process shown in **Figure 5** is a highly nonlinear chemical process.

The potential manipulated variables of the distillation process are the reflux (FR) and reboiler flow rates (FV), whereas the distillate composition (XD) and bottom composition (XB) are the controlled variables. This nonlinear system has three input variables and two output variables.

The transfer functions obtained for the distillation column are described by the following equations.

$$X\_D(s) = \frac{0.0747e^{-3s}}{12s+1} F\_R(s) - \frac{0.0667e^{-2s}}{15s+1} F\_V(s) + \frac{0.7e^{-5s}}{14.4s+1} X\_F(s) \tag{16}$$

$$X\_B(s) = \frac{0.1173e^{-3.3s}}{11.7s + 1} F\_R(s) - \frac{0.1253e^{-2s}}{10.2s + 1} F\_V(s) + \frac{1.3e^{-5s}}{12s + 1} X\_F(s) \tag{17}$$


**Figure 5.** *Binary distillation tower.*

whose transfer function matrix Eq. (18) is described as:

$$
\begin{pmatrix} X\_D(s) \\ X\_B(s) \end{pmatrix} = \begin{pmatrix} \frac{0.0747e^{-3s}}{12s+1} & \frac{-0.0667e^{-2s}}{15s+1} \\ \frac{0.1173e^{-3.3s}}{11.7s+1} & \frac{-0.1253e^{-2s}}{10.2s+1} \end{pmatrix} \begin{pmatrix} F\_R(s) \\ F\_V(s) \end{pmatrix} \tag{18}
$$

#### **3. Conventional method of loop pairing for nonlinear systems**

For all processes, namely SISO and MIMO, it is required to pair the input and output variables before designing the controller. The controller can then be designed according to any one of these input–output pairs, and one such configuration will lead to better overall system performance.

For linear systems, the Relative Gain Array (RGA) is obtained from transfer function models [11], and interaction analysis using the RGA is based on steady-state information. For nonlinear systems, assuming that a process model is available, two approaches are used to obtain the RGA:

1. Using the steady-state version of the nonlinear model from first principles, it is possible to obtain analytical expressions for the RGA elements.

2. By linearizing the nonlinear model around a steady state, an approximate gain matrix K is obtained and the RGA is computed from it.
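The second approach can be sketched in a few lines: for a 2×2 gain matrix K, the RGA element λ11 = k11k22/(k11k22 − k12k21), and both rows and columns of the RGA sum to 1. The example below applies this to the steady-state gains read off the distillation column's transfer matrix, Eq. (18), at s = 0 (the code itself is an illustrative sketch, not part of the original chapter):

```python
def rga_2x2(K):
    """RGA of a 2x2 steady-state gain matrix K = [[k11, k12], [k21, k22]]."""
    (k11, k12), (k21, k22) = K
    lam = k11 * k22 / (k11 * k22 - k12 * k21)    # lambda_11
    return [[lam, 1.0 - lam], [1.0 - lam, lam]]  # rows/columns each sum to 1

# Steady-state gains of the distillation column, Eq. (18) evaluated at s = 0:
K_dist = [[0.0747, -0.0667],
          [0.1173, -0.1253]]
rga = rga_2x2(K_dist)
# Positive diagonal elements (about 6.09) indicate the diagonal pairing.
print(round(rga[0][0], 2))
```

Since λ11 > 0 while the off-diagonal elements are negative, the diagonal pairing (y1-m1); (y2-m2) is indicated, consistent with Section 3.3.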

#### **3.1 RGA based loop pairing for blending process**

The RGA is computed using the steady-state version of the nonlinear model for this process. For the blending process, the two input variables are u1 and u2 and the two output variables are F and x.

$$\mathbf{F} = \mathbf{u}\_1 + \mathbf{u}\_2 \quad \text{(linear)} \tag{19}$$

$$\mathbf{x} = \frac{\mathbf{u}\_1}{\mathbf{u}\_1 + \mathbf{u}\_2} \quad \text{(nonlinear)} \tag{20}$$

For this 2x2 MIMO process, the RGA element λ is given by:

$$\lambda = \frac{\left(\frac{dF}{du\_1}\right)\_{\text{both loops open}}}{\left(\frac{dF}{du\_1}\right)\_{\text{second loop closed}}}\tag{21}$$

From Eq. (19), when both loops are open, $\left(\frac{dF}{du\_1}\right)\_{\text{both loops open}} = 1$.

Upon closing the second loop, we ask what value $u\_2$ must take so that, for any change in $u\_1$, $x$ is restored to its desired steady-state value $x^\ast$. Solving for $u\_2$ in terms of $u\_1$ and $x^\ast$:

$$
u\_2 = \frac{u\_1}{x^\ast} - u\_1 \tag{22}$$

Thus, when the second loop is closed, substituting Eq. (22) into Eq. (19) gives

$$\mathbf{F} = u\_1 + \frac{u\_1}{x^\ast} - u\_1 = \frac{u\_1}{x^\ast} \tag{23}$$

Differentiating F with respect to $u\_1$ gives $\left(\frac{dF}{du\_1}\right)\_{\text{second loop closed}} = \frac{1}{x^\ast}$. Finally, $\lambda = \frac{1}{1/x^\ast}$, so that

$$
\lambda = x^\ast \tag{24}
$$

Therefore, RGA for blending system is

$$
\lambda = \begin{pmatrix} \mathbf{x} & \mathbf{1} - \mathbf{x} \\ \mathbf{1} - \mathbf{x} & \mathbf{x} \end{pmatrix} \tag{25}
$$

where *x* is the mole fraction of species A in the blend.

The RGA depends only on the steady-state operating point $x^\ast$, whose value lies between 0 and 1.

Loop pairing for the blending process:

1. When the product composition x\* is closer to 1, the recommended pairing is F-u1 and x-u2.

2. When the product composition x\* is closer to 0, the recommended pairing is F-u2 and x-u1.

3. When x\* = 0.5, the RGA gives no guidance on which input variable should be used to control which output variable; either pairing is equivalent.
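The result λ = x\* can also be checked by linearizing Eqs. (19)-(20) at an assumed steady state and forming the RGA from the resulting gains; the operating point u1 = 3, u2 = 7 (so x\* = 0.3) is an arbitrary illustrative choice:

```python
u1, u2 = 3.0, 7.0                   # assumed steady-state inputs
F = u1 + u2
x_star = u1 / F                     # steady-state composition, x* = 0.3
# Analytic steady-state gains of Eqs. (19)-(20):
k11, k12 = 1.0, 1.0                 # dF/du1, dF/du2
k21, k22 = u2 / F**2, -u1 / F**2    # dx/du1, dx/du2
lam = k11 * k22 / (k11 * k22 - k12 * k21)
assert abs(lam - x_star) < 1e-9     # lambda equals x*, matching Eq. (24)
```

The same check at any other operating point again returns λ = x\*, confirming that the blending RGA depends only on the product composition.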

#### **3.2 RGA based loop pairing for stirred mixing tank reactor**

Using the technique of linearizing the nonlinear model around the steady-state value, the RGA is obtained for the stirred mixing tank reactor.

Steady state gain matrix for the reactor is

$$K = G(\mathbf{0}) = \frac{1}{k} \begin{bmatrix} 2\sqrt{h\_s} & 2\sqrt{h\_s} \\ \frac{(T\_H - T\_s)}{\sqrt{h\_s}} & \frac{(T\_C - T\_s)}{\sqrt{h\_s}} \end{bmatrix} \tag{26}$$

The two output variables are y1 (liquid level) and y2 (temperature), and the two input variables are u1 (hot stream flowrate) and u2 (cold stream flowrate). The RGA for this system at the steady-state operating point TS is:

$$A = \begin{bmatrix} \frac{T\_C - T\_s}{T\_C - T\_H} & \frac{-(T\_H - T\_s)}{T\_C - T\_H} \\ \frac{-(T\_H - T\_s)}{T\_C - T\_H} & \frac{T\_C - T\_s}{T\_C - T\_H} \end{bmatrix} \tag{27}$$

For illustration, take the numerical values *TC* = 15°C and *TH* = 65°C.

Condition 1: TS > 40°C (TS = 55°C):

$$A = \begin{bmatrix} \mathbf{0.8} & \mathbf{0.2} \\ \mathbf{0.2} & \mathbf{0.8} \end{bmatrix} \tag{28}$$

The RGA recommends the (u1-y1) and (u2-y2) pairing.

Condition 2: TS < 40°C (TS = 25°C):

$$A = \begin{bmatrix} \mathbf{0.2} & \mathbf{0.8} \\ \mathbf{0.8} & \mathbf{0.2} \end{bmatrix} \tag{29}$$

The RGA recommends the (u2-y1) and (u1-y2) pairing.

Condition 3: TS = 40°C:

$$A = \begin{bmatrix} \mathbf{0.5} & \mathbf{0.5} \\ \mathbf{0.5} & \mathbf{0.5} \end{bmatrix} \tag{30}$$

Here either pairing is equally bad.

Condition 4: TS = TH:

$$A = \begin{bmatrix} \mathbf{1} & \mathbf{0} \\ \mathbf{0} & \mathbf{1} \end{bmatrix} \tag{31}$$

Here the loops are completely decoupled, so perfect control can be achieved.
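The four conditions above follow directly from evaluating the RGA element λ11 = (TC − TS)/(TC − TH) of Eq. (27); a brief numeric check:

```python
T_C, T_H = 15.0, 65.0  # illustrative stream temperatures used above

def lam11(T_s):
    """RGA element lambda_11 of Eq. (27) at operating temperature T_s."""
    return (T_C - T_s) / (T_C - T_H)

# Condition 1: T_s = 55 -> 0.8, pair (u1-y1), (u2-y2)
# Condition 2: T_s = 25 -> 0.2, pair (u2-y1), (u1-y2)
# Condition 3: T_s = 40 -> 0.5, either pairing equally poor
for T_s, expected in [(55.0, 0.8), (25.0, 0.2), (40.0, 0.5)]:
    assert abs(lam11(T_s) - expected) < 1e-12
```

At T_s = T_H the same formula gives λ11 = 1, reproducing the identity RGA of Condition 4.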

#### **3.3 RGA based loop pairing for mild, mild to high and high nonlinear process**

RGA for mild, mild to high and high nonlinear processes are given in **Table 1** based on steady state value of process transfer function matrix.

From the RGA matrices for all the benchmark processes, it is clear that the desirable input–output pair is (y1-m1); (y2-m2).
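For instance, taking the steady-state gains of the CSTR from Eq. (12) at s = 0 (the numerators of the transfer function entries; treated here as assumed values), the RGA element λ11 confirms the diagonal recommendation:

```python
# Steady-state gains of the CSTR transfer matrix, Eq. (12) at s = 0:
k11, k12 = 0.022, 5.0
k21, k22 = 0.0056, 5.9
lam = k11 * k22 / (k11 * k22 - k12 * k21)  # RGA element lambda_11
# lam is positive (about 1.28), while the off-diagonal complement (1 - lam)
# is negative, so the (y1-m1); (y2-m2) pairing is preferred.
print(round(lam, 3))
```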

#### **4. Proposed loop pairing for the nonlinear process**

The proposed method is based on finding the area under the closed-loop undesired response and choosing the pair with the minimum area under the response [13, 14]. The controllers are designed using the method proposed by Panda [15].
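The area criterion can be sketched as follows, with two hypothetical sampled undesired responses standing in for the simulated closed-loop curves of the figures in this section (the response shapes below are invented for illustration only):

```python
import math

def area_under_abs(y, dt):
    """Trapezoidal integral of |y(t)| - the 'area under the undesired response'."""
    return sum(dt * (abs(a) + abs(b)) / 2.0 for a, b in zip(y, y[1:]))

dt = 0.1
ts = [i * dt for i in range(501)]
# Hypothetical interaction responses (illustrative shapes, not from the chapter):
y_diag = [0.6 * math.exp(-t / 5.0) * math.sin(t / 3.0) for t in ts]
y_off = [1.1 * math.exp(-t / 8.0) * math.sin(t / 3.0) for t in ts]

area_diag = area_under_abs(y_diag, dt)
area_off = area_under_abs(y_off, dt)
# The pairing whose undesired response has the smaller area is recommended.
best = "diagonal" if area_diag < area_off else "off-diagonal"
print(best)
```

In practice, each candidate pairing's closed loop is simulated with its designed controller, and the integrated absolute interaction response replaces the synthetic signals above.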


#### **Table 1.**

*Conventional method of loop pairing for nonlinear chemical process.*

#### **4.1 Loop pairing for blending process**

#### **Case 1: Diagonal pairing**

Based on comparing the areas tabulated in **Table 2** under the undesired responses for both diagonal and off-diagonal pairing, shown in **Figures 6**–**9**, the recommended input–output pairing using the proposed method is 1–1/2–2.


#### **Table 2.**

*Comparison of area under undesired response of blending process.*

**Figure 6.** *Closed loop undesired response (y1) for diagonal pairing.*

**Figure 7.** *Closed loop undesired response (y2) for diagonal pairing.*


**Figure 8.**

*Closed loop undesired response (y1) for off diagonal pairing.*

**Figure 9.** *Closed loop undesired response (y2) for off diagonal pairing.*

**Figure 10.** *Stirred mixing tank undesired response (y1) for diagonal pairing.*

#### **4.2 Loop pairing for stirred mixing tank process**

#### **Case 1: Diagonal pairing: Figures 10 and 11**

#### **Case 2: Off-diagonal pairing: Figures 12 and 13**

The closed-loop undesired responses shown in **Figures 10**–**13** and the areas compared in **Table 3** lead to the conclusion that off-diagonal pairing is the recommended input–output pairing.

#### **4.3 Loop pairing of CSTR process**

By assuming both the diagonal and the off-diagonal pairing, responses are obtained and the areas under the responses are compared to find the desired input–output pairing.

#### **Figure 11.**

*Stirred mixing tank undesired response (y2) for diagonal pairing.*


#### **Table 3.**

*Area under the undesired response.*

#### **Figure 12.**

*Stirred mixing tank undesired response (y1) for off diagonal pairing.*

**Figure 13.** *Stirred mixing tank undesired response (y2) for off diagonal pairing.*

#### **Case 1: Diagonal pairing: (y1-m1); (y2-m2) pairing**

**Figure 14** represents the closed-loop undesired response y2 of the CSTR process, assuming diagonal pairing, when there is a change in input m1 while m2 = 0.

The closed-loop undesired response y1 when there is a change in m2, for diagonal pairing of the CSTR process, is represented in **Figure 15**.

#### **Case 2: Off-diagonal pairing: (y2-m1); (y1-m2) pairing**

**Figure 16** shows, for the CSTR process, the closed-loop undesired response y2 for the change in m2, assuming off-diagonal pairing.

**Figure 14.**

*Closed loop undesired response for diagonal pairing (y2/m1).*

**Figure 15.** *Closed loop undesired response for diagonal pairing (y1/m2).*

**Figure 16.** *Closed loop undesired response for off diagonal pairing (y2/m2).*

The closed-loop undesired response y1 for the change in m1, for off-diagonal pairing in the CSTR process, is represented in **Figure 17**.

#### **4.4 Loop pairing of pH process**

For the pH process, closed-loop undesired responses are obtained for both diagonal and off-diagonal pairing, as shown in **Figures 18**–**21**. The areas under these curves are given in **Table 4** and compared to obtain the desired input–output pair based on the minimum value.

**Figure 17.** *Closed loop undesired response for off diagonal pairing (y1/m1).*

**Figure 18.** *Closed loop undesired response for diagonal pairing (y2/m1).*

**Figure 19.** *Closed loop undesired response for diagonal pairing (y1/m2).*


**Figure 20.**

*Closed loop undesired response for off diagonal pairing (y2/m2).*

#### **Figure 21.**

*Closed loop undesired response for off diagonal pairing (y1/m1).*


#### **Table 4.**

*Proposed loop pairing - Comparison of areas under the load responses.*

#### **Case 1: Diagonal pairing: (y1-m1); (y2-m2) pairing**

**Figure 18** represents the closed loop undesired response y2 for the change in m1 for diagonal pairing of pH process.

For diagonal pairing in pH process, closed loop undesired response y1 for change in m2 is shown in **Figure 19**.

#### **Case 2: Off diagonal pairing (y2-m1); (y1-m2) pairing**

For the pH process, the closed loop undesired response y2 for change in m2 is represented in **Figure 20** for off-diagonal pairing.

For off diagonal pairing in pH process, **Figure 21** represents the closed-loop undesired response y1 for the change in m1.

**Figure 22.** *Closed loop undesired response for diagonal pairing (y2/m1).*

**Figure 23.** *Closed loop undesired response for diagonal pairing (y1/m2).*

**Figure 24.** *Closed loop undesired response for off diagonal pairing (y2/m2).*


**Figure 25.** *Closed loop undesired response for off diagonal pairing (y1/m1).*

#### **4.5 Loop pairing of distillation column**

Similar to CSTR and pH process, the pairing for distillation column is also carried out to choose the correct input–output pairing.

#### **Case 1: Diagonal pairing:**

Closed loop undesired response for diagonal pairing of distillation column is represented in **Figure 22**.

**Figure 23** shows the closed-loop undesired response of the distillation column for diagonal pairing.

#### **Case 2: Off- diagonal pairing**

For the distillation column, closed loop undesired response y2 for the change in m2 for off diagonal pairing is shown in **Figure 24**.

**Figure 25** represents the closed-loop undesired response y1 for off-diagonal pairing of the distillation column.

Similarly, **Figures 22**–**25** represent the undesired responses for the distillation column, and the corresponding areas are given in **Table 4**.

From **Table 4**, it is clear that for the CSTR, pH, and distillation column benchmark nonlinear chemical processes, the minimum area is obtained only for the (y1-m1); (y2-m2) pairing. Hence, the desirable pairing for all these processes is **(y1-m1); (y2-m2)**.

#### **5. Conclusion**

As real-world physical systems are nonlinear, it is required to control these nonlinear processes. In order to design a nonlinear controller, one needs to choose the proper input–output pair. This chapter discusses the conventional method of loop pairing for a class of benchmark nonlinear systems. The RGA is calculated based on steady-state information for the nonlinear system. The proposed method of input–output pairing is also applied to the nonlinear benchmark processes and validated against the conventional method. The proposed control configuration selection is based on the closed-loop response, whereas the conventional method of pairing is based on gain. Thus, using the proposed method of control configuration selection, one can design a good nonlinear controller.

#### **Author details**

Sujatha Vijayaraghavan Department of Mechatronics Engineering, SRM Institute of Science and Technology, Chennai, India

\*Address all correspondence to: dr.vijaysuji@gmail.com

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


#### **References**

[1] Iqbal J, Ullah M, Khan SG, Khelifa B, Ćuković S. Nonlinear control systems – A brief overview of historical and recent advances. Nonlinear Engineering. 2017; **6**(4):301-312

[2] Lyapunov AM. The general problem of motion stability. Annals of Mathematics Studies. 1892;**17**

[3] Duffing G. Erzwungene Schwingungen bei veränderlicher Eigenfrequenz und ihre technische Bedeutung: R. Vieweg & Sohn; 1918

[4] Van der Pol B. On "relaxation-oscillations". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 1926;**2**:978-992

[5] Bennett S. A history of Control Engineering 1800–1930. UK: Peter Peregrinus, Stevenage; 1979

[6] Kalman R, Bertram J. Control system analysis and design via the second method of Lyapunov. Transactions of the American Society of Mechanical Engineers. 1960;**1**:394-400

[7] Fuller AT. The early development of control theory. Transactions of the AMSE Journal of Dynamic Systems, Measurement, and Control. 1976a;**98**: 109-118

[8] Fuller A. The early development of control theory II. Transactions of the AMSE Journal of Dynamic Systems, Measurement, and Control. 1976b;**98**: 224-235

[9] Sontag ED, Wang Y. On characterizations of the input-to-state stability property. Systems & Control Letters. 1995;**24**:351-359

[10] Isidori A. Nonlinear Control Systems. Springer Science & Business Media; 2013

[11] Ogunnaike BA, Ray WH. Process Dynamics, Modeling and Control. Oxford/New York: Oxford University Press; 1994

[12] Sujatha V, Panda RC. Time domain modeling and control of complex nonlinear chemical processes using relay feedback test. Transactions of Institute of Measurement and Control. 2020; **42**(15):2885-2907

[13] Panda RC, Sujatha V. Introduction to PID Controllers – Theory, Tuning and Application to Frontier Areas. London, UK: InTech; 2012. p. 105

[14] Sujatha V, Panda RC. Control configuration selection for multi input multi output processes. Journal of Process Control. 2013;**23**:1567-1574

[15] Panda RC. Synthesis of PID controller using desired closed loop criteria. Industrial and Engineering Chemistry Research. 2009;**47**(22): 1684-1692

#### **Chapter 10**

## Feedback Linearization Control of Interleaved Boost Converter Fed by PV Array

*Erdal Şehirli*

#### **Abstract**

One of the powerful methods of nonlinear control is the feedback linearization technique. This technique consists of the input-state and input-output linearization methods. In this chapter, the feedback linearization technique, including the input-state and input-output linearization methods, is described. Then, the input-output linearization method is used for output voltage control of an interleaved boost converter. Firstly, the mathematical model of the interleaved boost converter is derived; after that, the method is applied. Besides, the interleaved boost converter is fed by a PV array under irradiation level and ambient temperature changes. As a result of the simulation study, output voltage control of the interleaved boost converter under reference voltage change is realized as desired.

**Keywords:** feedback linearization, interleaved, boost, PV

#### **1. Introduction**

In nature, most systems are nonlinear. However, the analysis and design of a nonlinear controller require complex mathematical procedures. On the other hand, linear methods provide easy analysis and design of control systems. Nonetheless, when control must be conducted over a wide operating range and with parameter changes, linear control and analysis methods are not so powerful, especially in power electronics systems. Power electronics systems have a highly nonlinear nature because of the switching states of the power switch. So, for designing the proper controller for such systems, the usage of nonlinear control methods is required.

In literature, [1–3] give the fundamental analysis, design, and methods of nonlinear systems. Ramirez and Ortega [4] apply nonlinear control methods to the basic power electronics converters. Kazimierczuk [5] describes the design, analysis, and operation of power electronics converters, including buck, boost, and buck-boost converters. The feedback linearization technique, classified under nonlinear control, is applied to fundamental power electronics converters, including the boost converter in [6, 7], the buck converter in [8], and the buck-boost converter in [9, 10]. Sira-Ramirez et al. [11, 12] present another nonlinear control method, the sliding mode controller, for the boost converter, [13] for the buck converter, and [14] for the buck-boost converter. Furthermore, adaptive-based nonlinear controllers are presented for the boost converter [15], buck converter [16], and buck-boost converter [17]. Robust nonlinear controllers are designed for the buck converter in [18], the boost converter in [19], and the buck-boost converter in [20].

In this chapter, firstly feedback linearization technique, one of the most useful nonlinear methods, is described. Then, input-output linearization method classified under the feedback linearization technique is applied to the interleaved boost converter. Besides, as a power supply of interleaved boost converter, PV array is used with solar irradiation and ambient temperature changes. Furthermore, a nonlinear controller is designed to control the output voltage of the interleaved boost converter. After designing the nonlinear controller of interleaved boost converter, it is compared to a linear controller. As a result of the simulation study, it is concluded that a nonlinear controller for the output voltage of interleaved boost converter gives better results than a linear type controller.

#### **2. Feedback linearization**

Feedback linearization techniques have become very popular in recent years because they provide linear equivalents of nonlinear systems by exact linearization. Feedback linearization transforms nonlinear systems into fully or partly linear systems algebraically, so that linear control techniques can be used. In feedback linearization, linearization is realized by exact state transformation and feedback, which makes these techniques different from conventional linearization aiming at a linear approximation of the system.

The feedback linearization technique can be classified into two methods: input-state linearization and input–output linearization.

#### **2.1 Input-state linearization**

The input-state linearization method aims to linearize the state equation, Eq. (1), completely. In order to cancel the nonlinearities in the original system, a state transformation and an input transformation are used. After applying the proper transformations, the nonlinear system is transformed into a linear system [2].

Consider a nonlinear system given in the form of Eq. (1).

$$
\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, \mathbf{u}) \tag{1}
$$

There should be a state transformation given in Eq. (2) and an input transformation in Eq. (3) in order to apply input-state linearization.

$$\mathbf{z} = \mathbf{z}(\mathbf{x}) \tag{2}$$

$$\mathbf{u} = \mathbf{u}(\mathbf{x}, \mathbf{v}) \tag{3}$$

There are some points to bear in mind about applying input-state control, which are as follows:

• Even though the results obtained by input-state linearization control are valid in a large region, they may not be global. There may also be singularity points; if the initial state lies at such a point, the controller may not bring the system to the equilibrium point.


#### **2.2 Input-output linearization**

Another feedback linearization method is the input-output linearization method. In this method, the main process is to generate a linear differential relation between the system output and a new control input. The method can be summarized in three stages [1, 2]: the output is differentiated until the input appears, a new control input is chosen to cancel the nonlinearities, and stable closed-loop error dynamics are imposed.


In order to explain the input-output linearization method, consider a system given in Eqs. (4) and (5).

$$
\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}) + \mathbf{g}(\mathbf{x})\mathbf{u} \tag{4}
$$

$$\mathbf{y} = \mathbf{h}(\mathbf{x})\tag{5}$$

To obtain an input-output relation, the output should be differentiated until the input appears. After differentiating Eq. (5), Eq. (6) is acquired as in refs. [1, 2, 10].

$$\dot{y} = \frac{\partial h}{\partial x}\left[f(x) + g(x)u\right] = L_f h(x) + L_g h(x)u \tag{6}$$

Lf h and Lg h in Eq. (6) are the Lie derivatives of h(x) along f(x) and g(x), respectively, and are given in Eq. (7).

$$L_f h(x) = \frac{\partial h}{\partial x}f(x), \qquad L_g h(x) = \frac{\partial h}{\partial x}g(x) \tag{7}$$
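As a quick numerical illustration of Eq. (7), the Lie derivatives can be approximated by finite differences. The pendulum-like system below is an illustrative example, not one taken from this chapter; note how repeated Lie differentiation reveals the relative degree:

```python
import numpy as np

def lie_derivative(vec_field, h, x, eps=1e-6):
    """Numerical Lie derivative L_v h(x) = (dh/dx) . v(x), per Eq. (7)."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        grad[i] = (h(x + dx) - h(x - dx)) / (2 * eps)  # central difference
    return grad @ vec_field(x)

# Illustrative pendulum-like system with output y = x1:
f = lambda x: np.array([x[1], -np.sin(x[0])])
g = lambda x: np.array([0.0, 1.0])
h = lambda x: x[0]

x0 = np.array([0.3, -0.2])
Lf_h = lie_derivative(f, h, x0)   # equals x2 = -0.2 here
Lg_h = lie_derivative(g, h, x0)   # 0: input absent after one differentiation
Lg_Lf_h = lie_derivative(g, lambda x: lie_derivative(f, h, x), x0)  # nonzero
print(Lf_h, Lg_h, Lg_Lf_h)        # input appears at the 2nd derivative: r = 2
```
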

After differentiating the output ri times, and considering the condition in Eq. (8), Eq. (9) is acquired.

$$L_{g_j} L_f^{r_i-1} h_i(x) \neq 0 \tag{8}$$

$$y_i^{(r_i)} = L_f^{r_i} h_i + \sum_{j=1}^{n}\left(L_{g_j} L_f^{r_i-1} h_i\right) u_j \tag{9}$$

If there is a multi-input, multi-output system, considering to apply Eq. (9) to all outputs, Eq. (10) is obtained.

$$
\begin{bmatrix} y_1^{(r_1)} \\ \vdots \\ y_n^{(r_n)} \end{bmatrix} = \begin{bmatrix} L_f^{r_1} h_1(x) \\ \vdots \\ L_f^{r_n} h_n(x) \end{bmatrix} + \begin{bmatrix} L_{g_1} L_f^{r_1-1} h_1 & \cdots & L_{g_n} L_f^{r_1-1} h_1 \\ \vdots & \ddots & \vdots \\ L_{g_1} L_f^{r_n-1} h_n & \cdots & L_{g_n} L_f^{r_n-1} h_n \end{bmatrix} \begin{bmatrix} u_1 \\ \vdots \\ u_n \end{bmatrix} = a(x) + E(x)u \tag{10}
$$

After selecting the new control variable v, the input–output linearizing control law is acquired as in Eq. (11).

$$
\begin{bmatrix} u_1 \\ \vdots \\ u_n \end{bmatrix} = -E^{-1}\begin{bmatrix} L_f^{r_1} h_1(x) \\ \vdots \\ L_f^{r_n} h_n(x) \end{bmatrix} + E^{-1}\begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix} \tag{11}
$$

The relation between the system output y and the new control input v is given in Eq. (12). In Eq. (12), the k's are constants chosen to ensure the stability of the system.

$$
\begin{bmatrix} y_1^{(r_1)} \\ \vdots \\ y_n^{(r_n)} \end{bmatrix} = \begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix}, \qquad \begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix} = \begin{bmatrix} -k_{1(r_1-1)} y_1^{(r_1-1)} - \dots - k_{11} y_1^{(1)} - k_{10}\left(y_1 - y_1^*\right) \\ \vdots \\ -k_{n(r_n-1)} y_n^{(r_n-1)} - \dots - k_{n1} y_n^{(1)} - k_{n0}\left(y_n - y_n^*\right) \end{bmatrix} \tag{12}
$$

Also Eq. (13) gives the closed loop error dynamics relating to system output, reference values, and k constants as in Ref. [21].

$$
\begin{bmatrix} e \\ \vdots \\ e^{(r)} \end{bmatrix} = \begin{bmatrix} y - y^* \\ \vdots \\ y^{(r)} - y^{*(r)} \end{bmatrix}, \qquad \begin{bmatrix} e_1^{(r_1)} + k_{1(r_1-1)} e_1^{(r_1-1)} + \dots + k_{11} e_1^{(1)} + k_{10} e_1 \\ \vdots \\ e_n^{(r_n)} + k_{n(r_n-1)} e_n^{(r_n-1)} + \dots + k_{n1} e_n^{(1)} + k_{n0} e_n \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix} \tag{13}
$$
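A brief sketch of how the k constants in Eq. (13) can be chosen: for a relative degree of 2, the error dynamics reduce to a second-order characteristic polynomial, and picking the gains from desired stable pole locations (the values below are illustrative) guarantees that the tracking error decays to zero:

```python
import numpy as np

# For relative degree r = 2, Eq. (13) reduces to e'' + k1 e' + k0 e = 0.
# Choosing the gains from a desired stable polynomial (s + p1)(s + p2),
# with p1, p2 > 0, places the error-dynamics poles in the left half-plane.
p1, p2 = 4.0, 10.0             # illustrative pole locations
k1, k0 = p1 + p2, p1 * p2      # s^2 + k1 s + k0 = (s + p1)(s + p2)

roots = np.roots([1.0, k1, k0])
print(roots)                   # all real parts negative: stable error dynamics
```
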

#### **2.3 Interleaved boost DC-DC converter**

The circuit structure of the interleaved boost DC-DC converter is shown in **Figure 1**. It is seen that two separate boost converters are connected to the DC bus. The interleaved boost converter differs from the boost converter in that the two switches conduct with a time delay between them, so that the input current has less ripple content.

Operation of the interleaved boost converter can be summarized as follows: When S1 is in the switch-on position, S2 is turned off, the L1 current increases linearly, the L2 current decreases, and D2 conducts. When S1 is switched off, S2 is switched on, the L2 current increases linearly, the L1 current decreases, and D1 conducts. While their currents decrease, the inductors transfer their energy to the load. The passive components of the interleaved boost converter can be chosen by using Eqs. (14) and (15) as in Ref. [5].

*Feedback Linearization Control of Interleaved Boost Converter Fed by PV Array DOI: http://dx.doi.org/10.5772/intechopen.106355*

**Figure 1.** *Interleaved boost DC-DC converter.*

**Figure 2.** *(a) Switch-on, (b) switch-off position of the boost converter.*

$$L = \frac{RD(1-D)^2}{2f_s} \tag{14}$$

$$C = \frac{DV_o}{f_s R \Delta V_C} \tag{15}$$
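Eqs. (14) and (15) can be evaluated directly for component sizing. The operating values below are illustrative assumptions, not the chapter's design parameters:

```python
# Hedged sketch: evaluating Eqs. (14) and (15) with illustrative values.
R = 50.0      # load resistance [ohm]
D = 0.5       # duty cycle
f_s = 20e3    # switching frequency [Hz]
V_o = 200.0   # output voltage [V]
dV_C = 2.0    # allowed output voltage ripple [V]

L = R * D * (1 - D) ** 2 / (2 * f_s)   # Eq. (14): inductance [H]
C = D * V_o / (f_s * R * dV_C)         # Eq. (15): capacitance [F]
# yields roughly 0.156 mH and 50 uF for these values
print(L, C)
```
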

While deriving a mathematical model of the interleaved boost DC-DC converter, the model of the boost DC-DC converter can be used. A mathematical model of the boost converter is obtained from the switch-on and switch-off positions shown in **Figure 2**. By applying Kirchhoff's voltage and current laws to both circuits, a mathematical model of the boost converter is written.

At the switch-on interval, applying Kirchhoff's voltage and current laws gives Eqs. (16) and (17). The model for the switch-on interval can be written as Eq. (19), in the state-space representation of Eq. (18).

$$\frac{di_L}{dt} = \frac{V_{in}}{L_1} \tag{16}$$

$$\frac{dV_o}{dt} = -\frac{V_o}{RC} \tag{17}$$

$$
\dot{\mathbf{x}} = \mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{u} \tag{18}
$$

$$
\begin{bmatrix} \dfrac{di_L}{dt} \\ \dfrac{dV_o}{dt} \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & -\dfrac{1}{RC} \end{bmatrix} \begin{bmatrix} i_L \\ V_o \end{bmatrix} + \begin{bmatrix} \dfrac{1}{L_1} \\ 0 \end{bmatrix} V_{in} \tag{19}
$$

At the switch-off interval, Kirchhoff's voltage and current laws are applied to **Figure 2b**, and Eqs. (20) and (21) are obtained. This is also written in the form of Eq. (22).

$$\frac{di_L}{dt} = \frac{V_{in}}{L_1} - \frac{V_o}{L_1} \tag{20}$$

$$\frac{dV_o}{dt} = \frac{i_L}{C} - \frac{V_o}{RC} \tag{21}$$

$$
\begin{bmatrix} \dfrac{di_L}{dt} \\ \dfrac{dV_o}{dt} \end{bmatrix} = \begin{bmatrix} 0 & -\dfrac{1}{L_1} \\ \dfrac{1}{C} & -\dfrac{1}{RC} \end{bmatrix} \begin{bmatrix} i_L \\ V_o \end{bmatrix} + \begin{bmatrix} \dfrac{1}{L_1} \\ 0 \end{bmatrix} V_{in} \tag{22}
$$

The mathematical model of the boost converter can be derived as Eq. (24) by using the state-space averaging technique given in Eq. (23).

$$A = dA_1 + (1-d)A_2, \qquad B = dB_1 + (1-d)B_2 \tag{23}$$

$$
\begin{bmatrix} \dfrac{di_L}{dt} \\ \dfrac{dV_o}{dt} \end{bmatrix} = \begin{bmatrix} 0 & -\dfrac{1-d}{L_1} \\ \dfrac{1-d}{C} & -\dfrac{1}{RC} \end{bmatrix} \begin{bmatrix} i_L \\ V_o \end{bmatrix} + \begin{bmatrix} \dfrac{1}{L_1} \\ 0 \end{bmatrix} V_{in} \tag{24}
$$

The state-space model in Eq. (24) can be reordered as in Eq. (25) in order to apply the input–output linearization technique. In Eq. (25), the duty cycle d is chosen as the system input.

$$
\begin{bmatrix} \dfrac{di_L}{dt} \\ \dfrac{dV_o}{dt} \end{bmatrix} = \begin{bmatrix} -\dfrac{V_o}{L} + \dfrac{V_{in}}{L} \\ \dfrac{i_L}{C} - \dfrac{V_o}{RC} \end{bmatrix} + \begin{bmatrix} \dfrac{V_o}{L} \\ -\dfrac{i_L}{C} \end{bmatrix} d \tag{25}
$$
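The averaged model of Eq. (25) can be checked with a short forward-Euler simulation under a constant duty cycle. The component values below are illustrative assumptions, not the chapter's design; the steady state should approach the ideal boost gain V_in/(1 - d):

```python
# Forward-Euler simulation of the averaged model, Eq. (25), with a
# constant duty cycle (illustrative values, not the chapter's design).
L, C, R, V_in = 1.5e-3, 100e-6, 50.0, 100.0
d = 0.5                        # constant duty cycle
dt, steps = 1e-6, 200_000      # 0.2 s of simulated time

iL, Vo = 0.0, 0.0
for _ in range(steps):
    diL = (-Vo + V_in) / L + (Vo / L) * d        # first row of Eq. (25)
    dVo = iL / C - Vo / (R * C) - (iL / C) * d   # second row of Eq. (25)
    iL += diL * dt
    Vo += dVo * dt

print(Vo)  # settles near the ideal boost gain V_in / (1 - d) = 200 V
```
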

The purpose of the control is to regulate the output voltage Vo, so Vo is chosen as the output variable, as in Eq. (26).

$$y = h(x) = V_o \tag{26}$$

In the input–output linearization technique, the system output is differentiated until the system input appears in the derivative. So, after differentiating the system output Vo, Eq. (27) is obtained.

$$
\dot{y} = \dot{V}_o = \frac{i_L}{C} - \frac{V_o}{RC} - \frac{i_L}{C}d \tag{27}
$$

It is observed in Eq. (27) that the system input appears at the first differentiation. This means that the relative degree of the system is 1. Eq. (27) is rearranged in Eq. (28) with respect to the system input.

$$d = \left(-\dot{V}_o + \frac{i_L}{C} - \frac{V_o}{RC}\right)\frac{C}{i_L} \tag{28}$$

The next stage is to choose a new control input. The new control input is chosen according to the relative degree, as in Eq. (29).


$$y^{(1)} = v_1 \tag{29}$$

After choosing the new control input as in Eq. (30) and replacing it in Eq. (28), Eq. (31) is obtained. Eq. (31) gives the system input with which the nonlinear controller is operated.

$$v_1 = k_1\left(V_o - V_o^*\right) \tag{30}$$

$$d = \left(-k_1\left(V_o - V_o^*\right) + \frac{i_L}{C} - \frac{V_o}{RC}\right)\frac{C}{i_L} \tag{31}$$

In order to provide the operation of the interleaved boost converter, S1 is switched by using the duty cycle calculated in Eq. (31), and S2 is switched by using the same duty cycle having 90° delay.
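The algebra behind Eqs. (27)-(31) can be verified numerically: substituting the duty cycle of Eq. (31) into the Vo row of Eq. (25) must reproduce the linear closed-loop relation dVo/dt = k1 (Vo - Vo*). Note that for this first-order relation to be stable, k1 must be chosen negative (equivalently, Eq. (30) can be written with a minus sign); the values below are illustrative assumptions:

```python
# Sanity check of the input-output linearization algebra of Eqs. (25)-(31).
C, R = 100e-6, 50.0          # illustrative component values
k1, V_ref = -500.0, 150.0    # k1 < 0 so dVo/dt = k1 (Vo - Vo*) is stable

def duty_cycle(iL, Vo):
    """Eq. (31): the linearizing duty cycle."""
    return (-k1 * (Vo - V_ref) + iL / C - Vo / (R * C)) * C / iL

def dVo_dt(iL, Vo, d):
    """Second row of Eq. (25): the Vo dynamics."""
    return iL / C - Vo / (R * C) - (iL / C) * d

# At several operating points the cancellation is exact:
for iL, Vo in [(2.0, 120.0), (4.5, 150.0), (8.0, 180.0)]:
    lhs = dVo_dt(iL, Vo, duty_cycle(iL, Vo))
    rhs = k1 * (Vo - V_ref)
    print(lhs, rhs)  # equal up to floating-point rounding
```
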

#### **3. Simulations**

The interleaved boost DC-DC converter fed by a PV array is controlled by the input–output linearization technique in simulation. The simulation study is realized with Matlab/Simulink software. The circuit diagram of the study is shown in **Figure 3**. It is seen in the figure that there is a nonlinear controller based on Eq. (31) derived in Section 2.

It is seen that the interleaved boost converter is connected to the output of the PV array. Because of the interleaved nature of the converter, the input current has a lower

**Figure 3.** *PV-fed interleaved boost DC-DC converter with nonlinear control.*

ripple than the classical boost converter. The simulation diagram of the circuit is shown in **Figure 4**.

Parameters used in the simulation are given in **Table 1**.

**Figure 4.** *PV-fed interleaved boost DC-DC converter simulation diagram.*


**Table 1.** *Parameter values used in the study.*

**Figure 5.** *Irradiation level and ambient temperature change of PV array.*


Simulation studies are realized under irradiation and ambient temperature change of the PV array; these changes are sketched in **Figure 5**.

The output voltage under reference change is obtained as in **Figure 6** by the nonlinear controller. The reference voltage is changed from 150 V to 200 V at 1 s, from 200 V to 250 V at 2 s, from 250 V to 200 V at 3 s, and from 200 V to 150 V at 4 s. Under the reference changes, the output voltage is achieved as desired, with a steady-state error of 0.2 V at the 150 V reference, 0.5 V at the 200 V reference, and 1.1 V at the 250 V reference. Also, the settling time is obtained as 0.015 s at the 150 V reference, 0.005 s at the 200 V

**Figure 6.** *Output voltage of interleaved boost DC-DC converter.*

**Figure 7.** *Output voltage of interleaved boost DC-DC converter with PI and nonlinear controller.*

reference, 0.008 s at the 250 V reference, 0.07 s at the second 200 V reference, and 0.125 s at the second 150 V reference.

In order to compare the performance of the nonlinear controller, the same system is controlled by the PI controller. In **Figure 7**, the results obtained by both controllers are sketched.

**Figure 7** shows that the desired reference voltages are not acquired by the PI controller, whereas the nonlinear controller provides the desired reference voltages.

#### **4. Conclusions**

In this chapter, firstly feedback linearization techniques, including input-state and input–output linearization, are described. Then input–output linearization technique is applied to interleaved boost converter that is connected to the output of the PV array. Besides, solar irradiation and ambient temperature of PV array are changed during the simulation study.

The result obtained by the nonlinear controller is compared to the linear PI controller. It is determined by the study that the nonlinear controller ensures the desired output voltage with a maximum 1.1 V steady-state error and 0.125 s settling time, whereas the linear PI controller could not provide the reference voltage as desired.

In future work, the implementation of the study is targeted to be carried out.

#### **Conflict of interest**

The authors declare no conflict of interest.

#### **Author details**

Erdal Şehirli Kastamonu University, Kastamonu, Turkey

\*Address all correspondence to: esehirli@kastamonu.edu.tr

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

*Feedback Linearization Control of Interleaved Boost Converter Fed by PV Array DOI: http://dx.doi.org/10.5772/intechopen.106355*

#### **References**

[1] Khalil H. Nonlinear Systems. 3rd ed. New Jersey: Pearson Education; 2000. p. 505

[2] Slotine JJE, Weiping L. Applied Nonlinear Control. 1st ed. New Jersey: Prentice Hall; 1990. p. 207

[3] Isidori A. Nonlinear Control Systems: An Introduction. 2nd ed. Heidelberg: Springer; 1995. p. 178. DOI: 10.1007/BFb0006368

[4] Ramirez HS, Ortega R. Control Design Techniques in Power Electronics Devices. 1st ed. Germany: Springer; 2010. p. 235

[5] Kazimierczuk MK. Pulse-Width Modulated DC-DC Power Converters. 1st ed. Singapore: Wiley; 2008. p. 23

[6] Zheng H, Shuai D. Nonlinear control of boost converter by state feedback exact linearization. In: Proceedings of the IEEE 24th Chinese Control and Decision Conference (CCDC). Taiyuan, China: IEEE; 2012. pp. 3502-3506

[7] Arora S, Balsara P, Bhatia D. Input– output linearization of a boost converter with mixed load (constant voltage load and constant power load). IEEE Transactions on Power Electronics. 2019; **34**:815-825. DOI: 10.1109/ TPEL.2018.2813324

[8] Salimi M, Siami S. Closed-loop control of DC-DC buck converters based on exact feedback linearization. In: Proceedings of the IEEE 4th International Conference on Electric Power and Energy Conversion Systems (EPECS). Sharjah: IEEE; 2015. p. 1

[9] Ding-xin S. State feedback exact linearization control of buck-boost converter. In: Proceedings of the IEEE International Power Electronics and Application Conference and Exposition. Shanghai, China: IEEE; 2014. p. 1

[10] Sehirli E, Altınay M. Input-output linearization control of single-phase buck-boost power factor corrector. In: Proceedings of the IEEE 47th International Universities Power Engineering Conference (UPEC); 2012; UK: IEEE; 2012. p. 1-6

[11] Sira-Ramirez H, Rios-Bolivar M. Sliding mode control of dc-to-dc power converters via extended linearization. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications. 1994;**41**:652-661. DOI: 10.1109/81.329725

[12] Oucheriah S, Guo R. PWM-based adaptive sliding-mode control for boost DC–DC converters. IEEE Transactions on Industrial Electronics. 2013;**60**: 3291-3294. DOI: 10.1109/ TIE.2012.2203769

[13] Wang Z, Li S, Li Q. Discrete-time fast terminal sliding mode control design for DC–DC Buck converters with mismatched disturbances. IEEE Transactions on Industrial Informatics. 2020;**16**:1204-1213. DOI: 10.1109/ TII.2019.2937878

[14] Lineas-Flores J, Juarez-Abad A, Hernandez-Mendez A, Castro-Heredia O, Guerrero-Castellanos JF, Heredia-Barba R, et al. Sliding mode control based on linear extended state observer for DC-to-DC Buck–Boost power converter system with mismatched disturbances. IEEE Transactions on Industry Applications. 2022;**58**:940-950. DOI: 10.1109/TIA.2021.3130017

[15] Bellinaso L, Figueira H, Basquera MF, Vieira RP, Grundling HA, Michels L. Cascade control with adaptive voltage controller applied to photovoltaic boost converters. IEEE Transactions on Industry Applications. 2019;**55**:1903-1912. DOI: 10.1109/ TIA.2018.2884904

[16] Wei Z, Bao-bin L. Analysis and design of DC-DC buck converter with nonlinear adaptive control. In: Proceedings of the IEEE 7th International Conference on Computer Science & Education (ICCSE). Melbourne, Australia: IEEE; 2012. pp. 1036-1038

[17] Soriano-Rangel C, He W, Mancilla-David F, Ortega R. Voltage regulation in Buck–Boost converters feeding an unknown constant power load: An adaptive passivity-based control. IEEE Transactions on Control Systems Technology. 2021;**29**:395-402. DOI: 10.1109/TCST.2019.2959535

[18] Song P, Cui C, Bai Y. Robust output voltage regulation for DC-DC Buck converters under load variations via sampled-data Sensorless control. IEEE Access. 2018;**6**:10688-10698. DOI: 10.1109/ACCESS.2018.2794458

[19] Mummamdi V, Mohan BK. Robust digital voltage-mode controller for fifth-order boost converter. IEEE Transactions on Industrial Electronics. 2011;**58**:263-277. DOI: 10.1109/TIE.2010.2044130

[20] Buso S. Design of a Robust Voltage Controller for a Buck-boost converter using μ-synthesis. IEEE Transactions on Control Systems Technology. 1999;**7**: 222-229. DOI: 10.1109/87.748148

[21] Lee T. Input output linearization and zero-dynamics control of three-phase AC/DC voltage-source converters. IEEE Transactions on Power Electronics. 2003;**18**:11-22. DOI: 10.1109/ TPEL.2002.807145

#### **Chapter 11**

## Nonlinear Intelligent Predictive Control for the Yaw System of Large-Scale Wind Turbines

*Dongran Song, Ziqun Li, Jian Yang, Mi Dong, Xiaojiao Chen and Liansheng Huang*

#### **Abstract**

This chapter presents a nonlinear intelligent predictive control using multi-step prediction model for the electrical motor-based yaw system of an industrial wind turbine. The proposed method introduces a finite control set under constraints for the demanded yaw rate, predicts the multi-step yaw error using the control set element and the prediction wind directions, and employs an exhaustive search method to search the control output candidate giving the minimal value of the objective function. As the objective function is designed for a joint power and actuator usage optimization, the weighting factor in the objective function is optimally determined by the fuzzy regulator that is optimized by an intelligent algorithm. Finally, the proposed method is demonstrated by simulation tests using real wind direction data.

**Keywords:** nonlinear model predictive, intelligent algorithm, yaw control, wind turbine

#### **1. Introduction**

With the increase in social demand, the scale of wind power generation continues to expand. The total installed capacity of global wind power reached 743 GW in 2020, which means that wind turbines (WTs) are moving towards large scale and high capacity. Typical WT controls include pitch, torque, and yaw control, of which about 80% of the research is on the first two, while yaw control has received limited attention [1]. Meanwhile, with the large-scale development of WTs [2], problems such as power reduction and load increase caused by yaw misalignment can no longer be ignored. According to investigation results, the potential energy loss due to yaw misalignment is about 2.7%, and the failure rate of the yaw system accounts for approximately 12.5% of the total failure rate of WTs [3]. Therefore, there is an urgent need to improve the yaw control performance of WTs.

Yaw control changes the direction of the blade rotating surface by turning the nacelle horizontally. Traditional yaw control methods include logic control [4], PID, fuzzy PID [5], and so on. Yet, the wind direction sensor suffers from the disturbance of rotor rotation, which makes it difficult to accurately measure the incoming wind direction. To avoid measurement error, several wind direction estimation methods have been proposed, including the hill-climbing search algorithm [6]. However, these improved methods have limited effect on large-scale WTs and are rarely applied in industry. In short, as WTs develop towards large scale, traditional yaw control methods show shortcomings that motivate the development of new methods.

With the development of advanced prediction technologies such as LiDAR [7], more recent research has concentrated on advanced predictive controls [8]. Model predictive control (MPC) is a typical representative of predictive control, which has been proposed for the torque and yaw control of WTs and has achieved good control performance [9–11]. The MPC for the yaw system involves performance indicators such as energy capture efficiency and yaw actuator usage [12]. By adding weight coefficients, the performance indicators can be combined into a single objective function. Obviously, the setting of the weight coefficient influences the control performance. To uncover the underlying relationships, Pareto optimization theory is used in [13], which suggests that weight coefficients should be regulated according to the wind characteristics. However, how to effectively adjust the weight coefficients in real time remains unsolved.

Fuzzy logic (FL) is a potential solution to regulate the weight coefficient for model-predictive yaw control. FL is an abstraction of the approximate reasoning characteristics of human decision-making, which has been applied in many fields [14]. Yet, the excessive dependence of FL on expert experience leads to artificially set membership functions (MFs) and fuzzy rules that could limit the control performance. To this end, the optimization of FL is proposed to enhance its effect [15]. In summary, the advantages of FL and the potential room for optimization make it possible to effectively regulate the weight coefficient of MPC.

Motivated by the above observations, in this study, a nonlinear MPC (NMPC) method with multi-step prediction models is proposed for the yaw control system. Specifically, an "ideal" NMPC controller that employs perfectly previewed wind directions in the prediction model is used, and the NMPC problem is solved by an exhaustive search method based on the sequential diagram. Further, a novel method that uses fuzzy logic to dynamically regulate the weight coefficient of the NMPC is proposed, called the fuzzy inference weight coefficient regulator (FIWR). The fuzzy rules and MFs of the FIWR are simultaneously optimized by an advanced intelligent algorithm, so as to fully exert its advantage. By doing so, a deep optimization of the NMPC performance for the yaw system is achieved.

#### **2. Model and methodology**

This study aims at proposing and studying the NMPC method for the yaw control system on a horizontal-axis WT. The yaw model for WTs will be introduced first, followed by the design of finite-set NMPC. On this basis, a weight coefficient regulator based on fuzzy inference is proposed, and the multi-objective optimization problem of fuzzy inference is summarized. Then a proposed solution strategy for this problem is introduced, and a multi-objective intelligent optimization algorithm is improved to solve it.

#### **2.1 Yaw system modeling**

The yaw system can be modeled according to the three types of yaw dynamics: rigid yaw, flexible yaw, and controlled yaw torque. Since the yaw rate of large WTs is very slow, the yaw rate is set to a fixed value of 0.5 deg./s. There are three yaw situations for WTs: no deflection (0 deg./s), clockwise deflection (0.5 deg./s) and counterclockwise deflection (−0.5 deg./s). In realistic operation, considering the safety requirement of the yaw actuator, the yaw rate at the current moment is affected by the yaw rate at the previous moment and meets the following constraint:

$$\dot{\theta}_{np}(k+1) = \begin{cases} 0.5\ \text{deg/s} & \text{if } \dot{\theta}_{np}(k) \in \{0, 0.5\ \text{deg/s}\} \\ 0\ \text{deg/s} & \text{if } \dot{\theta}_{np}(k) \in \{0, 0.5, -0.5\ \text{deg/s}\} \\ -0.5\ \text{deg/s} & \text{if } \dot{\theta}_{np}(k) \in \{0, -0.5\ \text{deg/s}\} \end{cases} \tag{1}$$

where $\dot{\theta}_{np}(k)$ is the yaw rate in the *k*-th control period, and $\dot{\theta}_{np}(k+1)$ is the yaw rate in the (*k* + 1)-th control period. According to Eq. (1), given the state at the current moment, the control action of the system at the next moment is an element of a finite set. This yaw action mode provides the basis for the finite-set NMPC.
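The finite control set of Eq. (1) can be sketched as a simple lookup table. The chapter's experiments run in MATLAB/SIMULINK; the following is an illustrative Python sketch with an assumed function name:

```python
# Admissible next-step yaw rates (deg/s) given the current yaw rate, per Eq. (1).
# The 0.5 deg/s magnitude follows the chapter's fixed yaw-rate value.
def admissible_yaw_rates(current_rate):
    """Return the finite set of yaw rates allowed in the next control period."""
    allowed = {
        0.0:  (0.5, 0.0, -0.5),   # from standstill, any action is allowed
        0.5:  (0.5, 0.0),         # no direct reversal from +0.5 to -0.5
        -0.5: (0.0, -0.5),        # no direct reversal from -0.5 to +0.5
    }
    return allowed[current_rate]
```

The key property encoded here is that the yaw drive must pass through standstill before reversing direction, which keeps the per-step decision space to at most three elements.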

#### **2.2 Nonlinear model predictive control**

The NMPC for the yaw system aims at maximizing energy capture through tracking the wind direction while avoiding over-usage of the yaw actuator. Accordingly, the yaw error and the yaw actuator usage are used to form the overall objective function. In the following, the proposed NMPC method will be specified in terms of the prediction model, the objective function, and the finite-set NMPC solver.

#### *2.2.1 Multi-step prediction model*

The primary control objective of the yaw control system is to minimize the yaw error. Thus, the yaw error *θye* is selected as the state variable, and its one-step prediction model in the form of the discrete equation can be given as follows:

$$
\theta_{ye}(k+1|k) = \theta_{wd}(k+1|k) - \theta_{np}(k+1|k) \tag{2}
$$

where *k* is the *k*-th control period; $\theta_{ye}(k+1|k)$, $\theta_{wd}(k+1|k)$ and $\theta_{np}(k+1|k)$ are the next-step prediction values of yaw error, wind direction, and nacelle position, respectively.

Since the nacelle is rotated by the yaw control system at a certain yaw rate, the predicted nacelle position $\theta_{np}(k+1|k)$ can be obtained by:

$$
\theta_{np}(k+1|k) = \theta_{np}(k) + \dot{\theta}_{np}(k+1) \cdot T_s \tag{3}
$$

where $\theta_{np}(k)$ is the nacelle position at the *k*-th control period, $T_s$ is the control period, and $\dot{\theta}_{np}(k+1)$ is the yaw rate at the (*k* + 1)-th control period.

By using Eqs. (2) and (3), the *m*-step prediction model of the yaw error $\theta_{ye}(k+m|k)$ can be obtained by:

$$
\theta_{ye}(k+m|k) = \theta_{wd}(k+m|k) - \theta_{np}(k) - \sum_{i=1}^{m} \dot{\theta}_{np}(k+i) \cdot T_s \tag{4}
$$

where $\theta_{wd}(k+m|k)$ is the variable that needs to be predicted, which can be predicted by LiDAR or other prediction methods.
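The recursion of Eqs. (2)–(4) can be sketched as follows. This is an illustrative Python sketch (not the chapter's MATLAB code); the function name and argument layout are assumptions, and the previewed wind directions are taken as given:

```python
# Sketch of the m-step yaw-error prediction of Eq. (4), assuming the future
# wind directions theta_wd(k+i|k) are supplied (e.g., by LiDAR preview).
def predict_yaw_error(theta_wd_pred, theta_np_k, yaw_rates, Ts=1.0):
    """Return predicted yaw errors theta_ye(k+i|k) for i = 1..m.

    theta_wd_pred : predicted wind directions over the horizon, length m (deg)
    theta_np_k    : nacelle position at period k (deg)
    yaw_rates     : candidate yaw rates for periods k+1..k+m (deg/s)
    Ts            : control period (s)
    """
    errors, pos = [], theta_np_k
    for wd, rate in zip(theta_wd_pred, yaw_rates):
        pos += rate * Ts          # Eq. (3): nacelle position update
        errors.append(wd - pos)   # Eq. (2)/(4): predicted yaw error
    return errors
```

For example, with a constant previewed wind direction of 10 deg., a nacelle starting at 0 deg., and two clockwise steps at 0.5 deg./s, the predicted errors shrink step by step.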

#### *2.2.2 Objective function*

The first goal of yaw control is to improve the energy capture of the WT, and the second goal is to reduce the yaw actuator usage time. Considering that the energy capture of the WT has a cosine-squared relation to the yaw error, the two objectives can be expressed as:

$$E\_{cap} = \sum\_{i=k+1}^{k+m} \frac{1}{2} \rho A\_r C\_p V\_0^3 \cos^2 \left(\theta\_{\text{ye}}(i)\right) \tag{5}$$

$$t\_{yaw} = \sum\_{i=k+1}^{k+m} \left( |\dot{\theta}\_{np}(i)| > 0 \right) \cdot T\_s \tag{6}$$

where *ρ* is air density, *Ar* is rotor area, *Cp* is aerodynamic power coefficient, *V*<sup>0</sup> is the effective wind speed. Considering the dimensional difference, the two control objectives after normalization can be expressed as follows:

$$\xi = \frac{E_{ideal} - E_{cap}}{E_{ideal}} = 1 - \left(\sum_{n=k+1}^{k+m} \cos^2\left(\theta_{ye}(n)\right)\right) \Big/ m \tag{7}$$

$$\zeta = \frac{t\_{yaw}}{t\_{tol}} = \left(\sum\_{n=k+1}^{k+m} \left( |\dot{\theta}\_{np}(n)| > 0 \right) \right) / m \tag{8}$$

where *ξ* is the energy capture loss ratio caused by the yaw error, *Ecap* is the energy capture considering yaw error, *Eideal* is the energy capture in an ideal state; *ζ* is the yaw actuator usage ratio, *tyaw* is the yaw time, *ttol* is the running time of the WT.

By adding a weight coefficient between *ξ* and *ζ*, the objective function *QF* of the NMPC can be written as:

$$\begin{split} QF &= (1 - \omega) \cdot \xi + \omega \cdot \zeta \\ &= (1 - \omega) \cdot \left( 1 - \left( \sum_{n=k+1}^{k+m} \cos^2\left(\theta_{ye}(n)\right) \right) \Big/ m \right) + \omega \cdot \left( \left( \sum_{n=k+1}^{k+m} \left( |\dot{\theta}_{np}(n)| > 0 \right) \right) \Big/ m \right) \end{split} \tag{9}$$

where *ω* is the weight coefficient, which balances the energy capture loss ratio *ξ* and the yaw actuator usage ratio *ζ*.
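The normalized objectives of Eqs. (7)–(9) can be sketched compactly. This is an illustrative Python sketch with an assumed function name; yaw errors are taken in degrees and the indicator term of Eq. (8) counts nonzero yaw rates:

```python
import math

# Sketch of the NMPC objective of Eqs. (7)-(9): energy-capture loss ratio xi,
# actuator-usage ratio zeta, and their omega-weighted combination QF.
def objective(yaw_errors_deg, yaw_rates, omega):
    m = len(yaw_errors_deg)
    # Eq. (7): loss relative to the ideal cos^2 = 1 at zero yaw error
    xi = 1.0 - sum(math.cos(math.radians(e)) ** 2 for e in yaw_errors_deg) / m
    # Eq. (8): fraction of horizon steps with an active yaw actuator
    zeta = sum(1 for r in yaw_rates if abs(r) > 0) / m
    return (1.0 - omega) * xi + omega * zeta   # Eq. (9)
```

With zero yaw error and no yaw action the cost is zero, and it grows towards one as the turbine stays fully misaligned, matching the normalization in the text.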

#### *2.2.3 Finite-set NMPC solver*

So far, the NMPC problem for the yaw system has been formulated by Eq. (1)–(9), which is a nonlinear optimization issue under constraints. To facilitate the problem solver, the control horizon is selected to be equal to the prediction horizon. According to Eq. (1), under the finite prediction horizon, the control law of NMPC always belongs to a finite set. Thus, the designed optimal problem can be effectively solved by using an exhaustive search (ES) method.

*Nonlinear Intelligent Predictive Control for the Yaw System of Large-Scale Wind Turbines DOI: http://dx.doi.org/10.5772/intechopen.105484*

**Figure 1** illustrates the case of the NMPC with the three-step prediction model, from which the ES method is explained as follows:

During the initialization period (*m* = 0), the yaw control system is inactive, and thus the total yaw state is 1.
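The ES procedure above can be sketched as a recursive enumeration of every yaw-rate sequence admissible under the transition constraint of Eq. (1). This is an illustrative Python sketch (the chapter's implementation follows the sequential diagram in MATLAB); function and argument names are assumptions, and the objective function is passed in as a callable:

```python
# Sketch of the finite-set exhaustive-search (ES) solver: enumerate every
# admissible yaw-rate sequence over the m-step horizon and keep the one
# minimizing the supplied objective function.
def exhaustive_search(theta_wd_pred, theta_np_k, rate_k, objective_fn, Ts=1.0):
    transitions = {0.0: (0.5, 0.0, -0.5), 0.5: (0.5, 0.0), -0.5: (0.0, -0.5)}
    m = len(theta_wd_pred)
    best_cost, best_seq = float("inf"), None

    def recurse(i, rate, pos, errors, seq):
        nonlocal best_cost, best_seq
        if i == m:                       # full sequence built: evaluate it
            cost = objective_fn(errors, seq)
            if cost < best_cost:
                best_cost, best_seq = cost, tuple(seq)
            return
        for nxt in transitions[rate]:    # only admissible next yaw rates (Eq. (1))
            p = pos + nxt * Ts           # Eq. (3): nacelle position update
            recurse(i + 1, nxt, p, errors + [theta_wd_pred[i] - p], seq + [nxt])

    recurse(0, rate_k, theta_np_k, [], [])
    return best_cost, best_seq
```

Because reversals must pass through standstill, the tree has at most 3 branches per step, so the search stays tractable for the short horizons (*m* ≤ 6) used later in the chapter.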


#### **2.3 Intelligent fuzzy inference weight coefficient regulator**

There is a contradiction between increasing energy capture and reducing yaw actuator usage. In Eq. (9), *ω* is used to balance *ξ* and *ζ* in *QF*, so the choice of the weight coefficient affects the performance of the NMPC to a large extent. Even if an optimal constant *ω* is selected according to Pareto theory, it cannot ensure that the best control performance is always provided. Therefore, the FIWR is proposed to dynamically adjust *ω* according to the predicted wind direction in each control period. Moreover, to fully exploit the effect of the FIWR and avoid the subjectivity of manual tuning, an intelligent optimization of the FIWR is also necessary.

#### *2.3.1 Design of FIWR*

#### **Figure 1.** *Sequential diagram of the ES for a three-step prediction model.*

**Figure 2.** *The scheme of FIWR.*

The proposed FIWR scheme is shown in **Figure 2**. This is a fuzzy inference system with two inputs and one output. Inputs *WDav* and *WDstd* have three and five linguistic values, respectively, and output *ω* has five linguistic values. The initial membership functions (IMFs) adopt equally divided triangular MFs. Fuzzy inference adopts the Mamdani-type algorithm, and the center of area (COA) is utilized in the defuzzification process.

#### *2.3.1.1 Design of input/output*

The yaw error is the core part of the input, since it directly determines the action of the yaw system. Therefore, the adjustment of *ω* takes the yaw error as a reference. Because the future yaw action is the control law to be solved, which is unknown, the predicted yaw error here refers to the difference between the predicted average wind direction of the (*k* + *i*)-th control period and the first sampled value of the nacelle position of the current *k*-th control period, expressed by $\theta_{ye}(k+i \to k)$, i.e., the difference between the future wind direction obtained by LiDAR and the nacelle position at the current moment.

Different from the error and error-derivative inputs used by ordinary two-input fuzzy inference, the designed inputs of the FIWR are related to the statistical characteristics of the predicted yaw error. The two inputs are designed as the weighted average and standard deviation of the yaw error over the prediction horizon *m*, named *WDav* and *WDstd*, respectively. The calculation of *WDav* is:

$$WD_{av} = \sum_{i=1}^{m} (m+1-i) \cdot \theta_{ye}(k+i \to k) \Big/ \sum_{i=1}^{m} i \tag{10}$$

where *m* is the prediction step, and $\theta_{ye}(k+i \to k)$ can be calculated by setting the yaw rate to zero in Eq. (4). Based on *WDav*, *WDstd* is calculated by:

$$WD_{std} = \sqrt{\sum_{i=1}^{m} \left( \theta_{ye}(k+i \to k) - WD_{av} \right)^{2} \Big/ m} \tag{11}$$

In practice, the yaw error might be affected by some subtle factors, so a moving average filter is applied to the wind direction data. In this study, the filtered value of each sample is the mean of the *N* sample values in the sliding window. For wind direction, *N* is usually taken as 12.
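Eqs. (10)–(11) and the moving-average filter can be sketched as below. This is an illustrative Python sketch with assumed function names; note that the weights (*m* + 1 − *i*) give the nearest-future prediction the largest weight:

```python
import math

# Sketch of the FIWR inputs of Eqs. (10)-(11): weighted average WDav and
# standard deviation WDstd of the predicted yaw errors, plus the N-point
# moving-average filter used on the wind-direction signal (N = 12 in the text).
def fiwr_inputs(pred_errors):
    m = len(pred_errors)
    weights = [m + 1 - i for i in range(1, m + 1)]   # nearest step weighted most
    wd_av = sum(w * e for w, e in zip(weights, pred_errors)) / sum(range(1, m + 1))
    wd_std = math.sqrt(sum((e - wd_av) ** 2 for e in pred_errors) / m)
    return wd_av, wd_std

def moving_average(signal, n=12):
    # Causal sliding-window mean; the window is shortened at the start.
    return [sum(signal[max(0, i - n + 1): i + 1]) / (i - max(0, i - n + 1) + 1)
            for i in range(len(signal))]
```

Since the weight sum equals the normalizing sum in Eq. (10), a constant error sequence yields exactly that constant as *WDav* with zero *WDstd*.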


**Figure 3.** *Initial membership functions.*


**Table 1.** *Initial fuzzy rules.*

#### *2.3.1.2 Design of MFs and fuzzy rules*

The IMFs corresponding to inputs *WDav*, *WDstd* and output *ω* are illustrated in **Figure 3**. The IMFs are all selected as sensitive and simple triangular MFs, with equal base widths and a 50% overlap with adjacent IMFs. The universe of discourse (UOD) of *WDav* and *WDstd* is defined as [0 deg., 15 deg.] and [0 deg., 25 deg.], respectively. The best value range of *ω* is [0, 0.1], so the UOD of the output is defined on [0, 0.1]. The linguistic values VS, S, M, L, and VL represent very small, small, medium, large, and very large, respectively. *WDav* and *WDstd* are mapped to 3 and 5 linguistic values, respectively, so the fuzzy rule table contains 15 different rules. The initial fuzzy rules of the proposed FIWR are listed in **Table 1**. The derivation of the fuzzy rules is based on expert experience, that is, a larger yaw error and a smaller standard deviation will lead the yaw action towards improving energy capture.
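The equally divided triangular IMFs with 50% overlap can be sketched as follows. This is an illustrative Python sketch (function name assumed); it builds *n* triangles with vertices at equal spacing over the UOD, so each triangle crosses its neighbour at membership 0.5:

```python
# Sketch of equally divided triangular membership functions over a universe
# of discourse [lo, hi]: n triangles with equally spaced peaks, each
# overlapping its neighbour by 50% (adjacent MFs cross at membership 0.5).
def triangular_mfs(lo, hi, n):
    step = (hi - lo) / (n - 1)
    centers = [lo + i * step for i in range(n)]

    def mu(c):
        def f(x):
            return max(0.0, 1.0 - abs(x - c) / step)
        return f

    return centers, [mu(c) for c in centers]
```

For the *WDav* input (UOD [0, 15] with 3 linguistic values) the peaks fall at 0, 7.5, and 15 deg., and at the midpoint between two peaks both neighbouring MFs return 0.5, reproducing the 50% overlap stated above.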

#### *2.3.2 Intelligent optimization of FIWR*

The advantage of fuzzy inference is that it can fully incorporate expert experience. However, when the expert experience is insufficient or wrong, the result of fuzzy inference will no longer be reliable; and the fuzzy relationship under complex input sometimes cannot be directly given by the expert experience. Therefore, the optimization of FIWR is proposed.

#### *2.3.2.1 Fuzzy optimization problem formulation*

The goal of the proposed FIWR is to reduce the yaw actuator usage and the energy capture loss, so the optimization problem can be expressed as:

$$\begin{aligned} \min F(x_{membership}, x_{rule}) &= \left( \frac{\int \xi \, dt}{\int dt}, \frac{\int \zeta \, dt}{\int dt} \right) \\ \text{s.t.} \quad & \begin{cases} x_{membership} \in \Omega_m \\ x_{rule} \in \Omega_r \end{cases} \end{aligned} \tag{12}$$

where $x_{membership}$ and $x_{rule}$ are the optimization vectors of the MFs and fuzzy rules, respectively; $\Omega_m$ and $\Omega_r$ are the feasible regions of the two optimization vectors, respectively.

For *Ωm*, there are three kinds of constraints: the number constraint of MF, the type constraint of MF, and the position constraint of MF. As for this study, in order to simplify the optimization problem, the optimization variables corresponding to the first two constraints are fixed, that is, the number and type of MF do not need to be optimized. Assuming that the position of each MF is uniquely determined by a certain vertex of the triangle, the optimization dimension of MF is further reduced. Obviously, the optimization of position is subject to constraints, that is, a small linguistic value cannot exceed a large linguistic value and each MF must be changed within the UOD.

For *Ωr*, it is affected by the number of inputs and outputs. For the fuzzy inference using Mamdani model, the consequence of the fuzzy rule is a certain fuzzy set of output. If there are *s* inputs and *h* outputs in fuzzy inference, the feasible region of the fuzzy rule can be expressed as:

$$|\Omega_r| = \left( \prod_{i=1}^{h} num_i \right)^{\prod_{j=1}^{s} num_j} \tag{13}$$

where *numj* is the number of linguistic values of *j*-th input, and *numi* is the number of linguistic values of *i*-th output.

#### *2.3.2.2 Fuzzy optimization problem simplification*

Although Eq. (12) has been simplified, it is still difficult to solve directly. For this complex problem, it is necessary to simplify it as much as possible while ensuring a certain solution accuracy. Therefore, a solution strategy is proposed to simplify the complex optimization problem so that the FIWR parameters can be solved quickly and reliably.

As mentioned earlier, the MFs are subject to order constraints: the MF with a smaller linguistic value must lie before the MF with a larger linguistic value. This constraint avoids repeated searches and ensures the logical accuracy of fuzzy inference. However, if the output MFs are no longer so constrained, the resulting linguistic value sequence of the output can be used as a reference for the optimization of the fuzzy rules. Therefore, the fuzzy rules are associated with the MFs, which greatly reduces the complexity of the optimization problem while preserving the search ability.

Specifically, the output linguistic value sequence after the sequence change can be expressed as:

$$B = \mathbf{A} \cdot \mathbf{S} \tag{14}$$


where *A* is the original sequence, and *S* is the identity matrix after elementary transformation, called the transformation matrix.

For example, in a certain change, the output linguistic value sequence is transformed into [S M L VS VL], then it can be expressed as:

$$[\text{S M L VS VL}] = [\text{VS S M L VL}] \begin{bmatrix} 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \tag{15}$$

After each iteration, *A* and *B* are known, so *S* can be calculated according to Eq. (14). Then the updated fuzzy rule table can be calculated according to *S*:

$$R_{new}[3 \times 5] = R_{old}[3 \times 5] \cdot S[5 \times 5] \tag{16}$$

where $R_{new}[3 \times 5]$ and $R_{old}[3 \times 5]$ are the fuzzy rule tables after and before the update, respectively, each of size 3 × 5. To preserve the exploitation capability of the solution process, each linguistic value in the fuzzy rule table is additionally transformed with a certain probability. For example, the S linguistic value is changed to VS with a small probability *p*. This probabilistic processing improves the ability to find the optimal MF under the current fuzzy rule.
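One plausible reading of Eqs. (14)–(16) is that a rule consequence keeps its linguistic value while its index is remapped to that value's position in the reordered sequence *B*. The following Python sketch implements that reading (function name, index-based storage, and the interpretation itself are assumptions, not the authors' code):

```python
# Sketch of the rule-table update of Eqs. (14)-(16), under the assumption that
# rule consequences are stored as indices into the ordered output sequence.
# After the output MF order changes from A to B, each consequence keeps its
# linguistic value but points at that value's new position in B.
def update_rule_indices(rule_table, A, B):
    """rule_table: rows of indices into A; returns rows of indices into B."""
    pos_in_B = {label: j for j, label in enumerate(B)}
    return [[pos_in_B[A[i]] for i in row] for row in rule_table]
```

With *A* = [VS S M L VL] and *B* = [S M L VS VL] as in Eq. (15), a consequence that pointed at VS (index 0) now points at index 3, where VS sits in *B*.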

#### *2.3.2.3 Improved AGA-MOPSO solver*

Although Eq. (12) is simplified by the solution strategy, its objective function is complex, which makes it difficult to solve with ordinary multi-objective optimization algorithms. Therefore, an improved multi-objective particle swarm optimization algorithm based on the adaptive grid algorithm (AGA-MOPSO) is designed. AGA-MOPSO is an efficient variant of PSO for multi-objective problems. By combining with the adaptive grid algorithm (AGA), it achieves a good balance between exploration and exploitation [16].

For a two-dimensional multi-objective problem, AGA-MOPSO first calculates the search ranges $[\min f_1^k, \max f_1^k]$ and $[\min f_2^k, \max f_2^k]$ of the objective space after the *k*-th iteration, then calculates the grid number of the *i*-th particle according to the following equation:

$$\left(x_1^i, x_2^i\right) = \left(\text{Int}\left(\frac{f_1^i - \min f_1^k}{\left(\max f_1^k - \min f_1^k\right)/M}\right) + 1,\ \text{Int}\left(\frac{f_2^i - \min f_2^k}{\left(\max f_2^k - \min f_2^k\right)/M}\right) + 1\right) \tag{17}$$

where $x_1^i$ and $x_2^i$ are the grid numbers of the particle, *M* is the number of grids, $f_1^i$ and $f_2^i$ are the fitness values of the two objectives, respectively, and Int denotes rounding down.

*M* is adaptively increased with the iterations to balance computational cost and accuracy. The density information of each grid can be obtained from the grid numbers. According to the density information, the globally optimal particles are selected and the Archive set is clipped.
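The grid-indexing step of Eq. (17) can be sketched directly. This is an illustrative Python sketch (function name assumed); a clamp is added so that a particle sitting exactly on the upper range boundary lands in the last cell rather than in a nonexistent (*M* + 1)-th cell:

```python
# Sketch of the adaptive-grid index of Eq. (17): map a particle's two fitness
# values to an (M x M) grid cell over the current objective-space ranges.
def grid_index(f1, f2, f1_rng, f2_rng, M):
    g1 = int((f1 - f1_rng[0]) / ((f1_rng[1] - f1_rng[0]) / M)) + 1
    g2 = int((f2 - f2_rng[0]) / ((f2_rng[1] - f2_rng[0]) / M)) + 1
    return min(g1, M), min(g2, M)   # clamp boundary particles into the last cell
```

Counting how many particles fall into each cell then gives the density information used for *gbest* selection and Archive clipping.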

Considering the complexity of the optimization problem, the following improvements are proposed for the selection of the global optimal particle and the truncation of Archive set in AGA-MOPSO:

(1) The first improvement uses two methods to determine the corresponding *gbest* for each particle: 1) select the *gbest* with the smallest grid density, which focuses on exploration of the search space to improve the spread of the Pareto front (PF); 2) select the globally optimal particle as *gbest* according to the technique for order preference by similarity to an ideal solution (TOPSIS):

$$d = \frac{\sqrt{\left(f\_1 - f\_1^{\min}\right)^2 + \left(f\_2 - f\_2^{\min}\right)^2}}{\sqrt{\left(f\_1 - f\_1^{\min}\right)^2 + \left(f\_2 - f\_2^{\min}\right)^2} + \sqrt{\left(f\_1 - f\_1^{\max}\right)^2 + \left(f\_2 - f\_2^{\max}\right)^2}}\tag{18}$$

where *d* is the deviation between a certain point and the ideal point; the smaller *d* is, the smaller the deviation from the ideal point. This method focuses on exploitation of the search space to push the PF closer to the true optimal solution. In the early stage of the algorithm, the probability of choosing *gbest* by the first method is greater, so as to find as many non-dominated solutions as possible; in the later stage, the probability of choosing the second method is greater, to approximate the true solution.
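Eq. (18) can be sketched in a few lines. This is an illustrative Python sketch (function name assumed): the distance to the ideal point (both objectives at their minima) is normalized by the sum of the distances to the ideal and anti-ideal points:

```python
import math

# Sketch of the TOPSIS ratio of Eq. (18): relative closeness of a candidate
# point (f1, f2) to the ideal point, used to pick gbest in the later stage.
def topsis_d(f1, f2, f1_min, f1_max, f2_min, f2_max):
    d_ideal = math.hypot(f1 - f1_min, f2 - f2_min)   # distance to ideal point
    d_anti = math.hypot(f1 - f1_max, f2 - f2_max)    # distance to anti-ideal point
    return d_ideal / (d_ideal + d_anti)
```

By construction *d* = 0 at the ideal point and *d* = 1 at the anti-ideal point, so the particle with the smallest *d* is the TOPSIS choice for *gbest*.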

(2) The second improvement truncates the Archive set by an adaptive dynamic threshold, calculated using:

$$\text{Th} \cdot \text{M} = \text{C} \tag{19}$$

**Figure 4.** *The flowchart of the improved AGA-MOPSO.*

where *Th* is the threshold and *C* is a constant. When the number of particles in a grid exceeds *Th*, the grid is truncated; *Th* is reduced as *M* increases. This ensures that the number of particles on the PF remains relatively stable.
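The truncation rule of Eq. (19) can be sketched as follows. This is an illustrative Python sketch with assumed names; grid occupancy is represented as a dict from grid cell to particle count:

```python
# Sketch of the Archive truncation of Eq. (19): the per-grid particle cap
# Th = C / M shrinks as the grid count M grows, keeping the number of
# archived particles on the Pareto front roughly stable.
def truncate_archive(grid_counts, M, C):
    th = C / M
    return {cell: min(count, int(th)) for cell, count in grid_counts.items()}
```

For instance, with *C* = 30 and *M* = 10, each grid cell keeps at most 3 archived particles; doubling *M* halves the cap.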

The main procedure of the improved AGA-MOPSO solver is shown in **Figure 4**.

#### **3. Validation and discussion**

The experiments are based on MATLAB/SIMULINK. First, the optimized parameters of the FIWR are obtained by the solution strategy running in MATLAB. Then, the proposed FIWR-NMPC controller is simulated in SIMULINK. The common experimental parameters in **Table 2** are set to the same values across all tests. The UOD is set to a uniform range [0, 10] to facilitate handling the constraints of the optimization variables, so the universe conversion scale coefficients corresponding to *WDav*, *WDstd*, and *ω* are 1.5, 2.5, and 0.01, respectively.

The wind direction data used in the experiment is from the actual wind direction data of a wind farm in operation for one day with a sampling period of 1 s, as shown in **Figure 5(a)**. The wind direction after sliding average filtering is shown in **Figure 5(b)**.

#### **3.1 Optimization results**

The optimization results are analyzed and discussed first. Taking the case of prediction step *m* = 6 as an example, the optimization results of AGA-MOPSO are shown in **Figure 6**, where the horizontal axis is the yaw actuator usage ratio and the vertical axis is the energy capture loss ratio. After 50 iterations, the particles finally converge to the PF. The particles are evenly distributed in the search space near the PF, which indicates that the association idea does not lead to a harmful coupling between the PF and the fuzzy rules during the search.

The optimization results of the design variables are shown in **Table A** in the Appendix. For space reasons, only ten points on the PF are randomly selected for discussion. The simplification strategy associates the fuzzy rules with the MFs during optimization, so as to reduce the optimization complexity. The MF of *ω* does not strictly follow the linguistic value order; this random information is utilized to adjust the fuzzy rules, thereby simultaneously optimizing the fuzzy rules and the MFs.

**Table 2.** *Common parameters in the experiment.*

**Figure 5.** *24-hour wind direction used in the experiment: (a) actual wind direction; (b) filtered wind direction.*

**Figure 6.** *Iterative results.*

The prediction step *m* in the proposed FIWR-NMPC is variable and has a large impact on the controller's performance. Therefore, the influence of *m* on the optimization effect of the FIWR is discussed. **Figure 7** shows the optimization results under *m* = 1–6. In **Figure 7**, the PF for *m* = 1 differs significantly from the PFs for *m* = 2–6; as *m* increases, the yaw actuator usage ratio and energy capture loss ratio show a downward trend. This is because the horizon for *m* = 1 is very short and the predicted wind information is very limited, while the actual wind direction fluctuates greatly due to turbulence and the yaw system has a large time lag, so it is difficult for the yaw system to track the change of wind direction. When *m* increases, the predicted wind direction information increases, and the controller can start yawing several control periods in advance to compensate for the time lag.


**Figure 7.** *Optimization results under different m.*

Furthermore, in the enlarged part, the PFs of *m* = 5 and *m* = 6 are very close, and increasing *m* further gradually reduces the improvement of the FIWR-NMPC control performance. These results show that increasing *m* can provide better performance for the FIWR-NMPC, but there is a limit to the improvement. Among the six FIWR-NMPCs with different *m*, the controller with *m* = 6 provides the best performance, which is very close to the ultimate performance.

#### **3.2 Simulation results**

A specific FIWR-NMPC controller and a baseline NMPC controller are designed based on the foregoing discussion. The MFs and fuzzy rules in the FIWR are shown in **Figure 8** and **Table 3**; they are derived from the optimal solution obtained through TOPSIS. The remaining parameters of the FIWR-NMPC and the baseline NMPC are shown in **Table 2**, with *m* = 6.

**Figure 8.** *MFs used by FIWR.*

**Table 3.** *Fuzzy rules used by FIWR.*

The simulation results are shown in **Figure 9**. **Figure 9(a)** and **(b)** respectively present the energy capture loss ratio and the yaw actuator usage ratio of the WT within 24 hours. Compared with the baseline NMPC, the proposed FIWR-NMPC increases the energy capture by about 0.3% while reducing the yaw actuator usage ratio by about 1%. This improvement benefits from the dynamically adjusted weight coefficient, that is, dynamically weighing the two control objectives based on the predicted wind information to determine the yaw action.

#### **4. Conclusions**

In this study, an advanced nonlinear model predictive control solution including multi-step prediction models has been proposed and investigated for the yaw control system of a horizontal-axis wind turbine. The noticeable feature of the proposed solution is the use of a finite control set under constraints for the possible demanded yaw rate; thus, the optimal control demand for the yaw system has been conveniently solved using an exhaustive search method based on the sequential diagram. On the other hand, the weighting coefficient in the objective function of the NMPC has been dynamically tuned by the fuzzy inference regulator, as the proposed solution is designed for a joint energy capture and actuator usage optimization, which is essentially a two-objective tradeoff depending on the selection of the weighting factor. To give full play to the ability of the regulator, its parameter tuning is cast as an optimization problem, a solution strategy is designed to simplify it, and the optimal fuzzy rules and membership functions are then solved by the improved AGA-MOPSO algorithm. The final optimized FIWR-NMPC achieves a deep optimization of wind turbine yaw performance. The important investigation findings include:


#### **Acknowledgements**

This work is supported by the National Natural Science Foundation of China under Grant 52177204; the Natural Science Foundation of Hunan Province (No. 2020JJ4744); the Innovation-Driven Project of Central South University (No. 2020CX031).

### **Conflict of interest**

The authors declare no conflict of interest.

### **A. Appendix**



#### (b) Optimization results of fuzzy rules.


### **Table A.**

*Design variable optimization results.*


### **Author details**

Dongran Song<sup>1</sup>\*, Ziqun Li<sup>1</sup>, Jian Yang<sup>1</sup>, Mi Dong<sup>1</sup>, Xiaojiao Chen<sup>2</sup> and Liansheng Huang<sup>2</sup>

1 School of Automation, Central South University, Changsha, China

2 Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, China

\*Address all correspondence to: humble\_szy@163.com

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Yang J, Fang L, Song D, et al. Review of control strategy of large horizontal-axis wind turbines yaw system. Wind Energy. 2020;**24**(2):97-115. DOI: 10.1002/we.2564

[2] Song D, Shanmin X, Huang L, et al. Multi-site and multi-objective optimization for wind turbines based on the design of virtual representative wind farm. Energy. 2022;**252**:123995. DOI: 10.1016/j.energy.2022.123995

[3] Perez JMP, Marquez FPG, Tobias A, et al. Wind turbine reliability analysis. Renewable & Sustainable Energy Reviews. 2013;**23**:463-472. DOI: 10.1016/j.rser.2013.03.018

[4] Song D, Fan X, Yang J, et al. Power extraction efficiency optimization of horizontal-axis wind turbines through optimizing control parameters of yaw control systems using an intelligent method. Applied Energy. 2018;**224**: 267-279. DOI: 10.1016/j.apenergy. 2018.04.114

[5] Campos-Mercado E, Cerecero-Natale LF, Garcia-Salazar O, et al. Mathematical modeling and fuzzy proportional-integral-derivative scheme to control the yaw motion of a wind turbine. Wind Energy. 2020;**24**(4):379-401. DOI: 10.1002/we.2579

[6] Guo F, Jiang W, Shao H, et al. Research on the wind turbine yaw system based on PLC. In: Presented at the 2017 29th Chinese Control and Decision Conference. IEEE; 2017

[7] Zhao H, Zhou L, Liang Y, et al. A 2.5 MW wind turbine TL-EMPC yaw strategy based on ideal wind measurement by LiDAR. IEEE Access. 2021;**9**:89866-89877. DOI: 10.1109/access.2021.3089513

[8] Song D, Yang Y, Zheng S, et al. New perspectives on maximum wind energy extraction of variable-speed wind turbines using previewed wind speeds. Energy Conversion and Management. 2020;**206**:112496. DOI: 10.1016/j. enconman.2020.112496

[9] Song D, Li Z, Wang L, et al. Energy capture efficiency enhancement of wind turbines via stochastic model predictive yaw control based on intelligent scenarios generation. Applied Energy. 2022;**312**:118773. DOI: 10.1016/j. apenergy.2022.118773

[10] Song D, Yanping T, Wang L, et al. Coordinated optimization on energy capture and torque fluctuation of wind turbines via variable weight NMPC with fuzzy regulator. Applied Energy. 2022; **312**:118821. DOI: 10.1016/j.apenergy. 2022.118821

[11] Song D, Liu J, Yang Y, et al. Maximum wind energy extraction of large-scale wind turbines using nonlinear model predictive control via yin-Yang grey wolf optimization algorithm. Energy. 2021;**221**:119866. DOI: 10.1016/j.energy.2021.119866

[12] Song D, Chang Q, Zheng S, et al. Adaptive model predictive control for yaw system of variable-speed wind turbines. Journal of Modern Power Systems and Clean Energy. 2021;**9**(1): 219-224. DOI: 10.35833/mpce.2019. 000467

[13] Song D, Li Q, Cai Z, et al. Model predictive control using multi-step prediction model for electrical yaw system of horizontal-Axis wind turbines. IEEE Transactions on Sustainable Energy. 2019;**10**(4):2084-2093. DOI: 10.1109/tste.2018.2878624



### *Edited by Bo Yang and Dušan Stipanović*

In mathematics and science, a nonlinear system is one in which the change of the output is not proportional to the change of the input. Nonlinear control systems are at the core of automation control technology worldwide and are widely applied in fields such as economic management, industrial production, technology research and development, and ecological prevention and control. In contrast to linear control systems, nonlinear control systems exhibit characteristic behaviors such as input-dependent stability, zero-input system responses, self-excited oscillations (limit cycles), and more complex structures, all of which increase the difficulty of their theoretical analysis and technical development. Nonlinear systems are common in real life and cannot be ignored. Their analysis and study are therefore important: researchers need to clarify their characteristics, explore scientific and effective application measures, and ultimately enhance their control quality. This book comprehensively investigates the main principles, core mechanisms, typical problems, and relevant solutions involved in nonlinear systems. In general, it aims to provide advanced research on nonlinear systems and control schemes for researchers and engineers working in related fields, and thus to promote future study in this research area.
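The defining property stated above — that the change of the output is not proportional to the change of the input — is equivalent to the failure of the superposition principle. The following minimal Python sketch (function names are illustrative, not from the book) makes this concrete by checking superposition numerically for one linear and one nonlinear map:

```python
def linear(x):
    return 3.0 * x          # output scales proportionally with the input

def nonlinear(x):
    return x * x            # quadratic map: doubling the input quadruples the output

def satisfies_superposition(f, x=1.0, y=2.0, a=2.0, b=-1.0, tol=1e-12):
    # A linear map obeys f(a*x + b*y) == a*f(x) + b*f(y) for all inputs;
    # a nonlinear map violates this identity for generic inputs.
    return abs(f(a * x + b * y) - (a * f(x) + b * f(y))) < tol

print(satisfies_superposition(linear))     # True
print(satisfies_superposition(nonlinear))  # False
```

A single pair of test points suffices to demonstrate the violation, although proving linearity would of course require the identity to hold for all inputs.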

Published in London, UK © 2023 IntechOpen. Cover image © Vladimir Zotov / iStock.

### *Nonlinear Systems - Recent Developments and Advances*
