
## Meet the editors

Dr. Kamal Shah is a senior researcher in the Department of Mathematics and Sciences at Prince Sultan University, Riyadh, Saudi Arabia. His previous appointment was as an associate professor in the Department of Mathematics at the University of Malakand, Pakistan. His research interests are fractional calculus, nonlinear analysis, and numerical solutions of differential equations. He has published hundreds of research articles in internationally recognized journals and serves as an academic editor for many journals of high repute. He has supervised several doctoral and master's students in the field of applied mathematics.

Bruno Carpentieri obtained a Laurea degree in applied mathematics from Bari University in 1997, and a Ph.D. in computer science from the Institut National Polytechnique de Toulouse (INPT), France. After holding various post-doctoral positions, he became an assistant professor at the Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, University of Groningen, and then a Reader in applied mathematics at Nottingham Trent University, UK. Since May 2017, he has been an associate professor in applied mathematics at the Faculty of Computer Science, Free University of Bozen-Bolzano. His research interests are in the fields of applied mathematics, numerical linear algebra, and high-performance computing. Bruno Carpentieri has served as a member of several scientific advisory boards in computational mathematics. He is an editorial board member of the *Journal of Applied Mathematics*, an editorial committee member of *Mathematical Reviews* (American Mathematical Society), and a reviewer for about 30 numerical analysis journals. He has co-authored about 50 publications in peer-reviewed scientific journals.

Dr. Arshad Ali comes from the beautiful valley of Swat, Khyber Pakhtunkhwa, Pakistan, and is currently teaching in a government school in Swat. He obtained his master's degree in mathematics from the Government Post-graduate Jehan Zeb College Swat, Khyber Pakhtunkhwa, in 2009, and his MPhil in 2017. His Ph.D. in applied mathematics was supervised by Dr. Kamal Shah. He has published several research articles in highly reputed journals.

### Contents


**Chapter 8** Electrical Circuits as Dynamical Systems *by Alexandru G. Gheorghe and Mihai E. Marin*

**Chapter 9** Computation of Numerical Solution via Non-Standard Finite Difference Scheme *by Eiman Ijaz, Johar Ali, Abbas Khan, Muhammad Shafiq and Taj Munir*

## Preface

Dynamical systems play an essential role in describing various real-world processes and phenomena that evolve over time. In the analysis of dynamical systems, several important theoretical and computational aspects need to be addressed, from the basic and more advanced theory to the development of efficient numerical methods, and finally their application to the simulation of practical real-world problems. The book discusses these ideas, covering basic techniques, including some review material, and more advanced topics such as the fuzzy concept and the application of discrete calculus to the study of specific problems.

The book contains nine chapters. Chapter 1 reviews the Laplace transform and its applications in the study of solutions of dynamical systems. In Chapter 2, numerical methods based on variants of the fourth-order Runge‒Kutta and Euler algorithms are discussed. Chapter 3 is devoted to the development and analysis of an efficient region-merging algorithm in raster space. Chapter 4 investigates the application of discrete mathematics to programming discrete calculations. Chapter 5 presents a criticality study of fast critical experimental benchmarks using the MCNP code to qualify different evaluations. Chapters 6 and 7 concern the use of fuzzy concepts to investigate problems of dynamical systems. In Chapter 8, a case study of dynamical systems arising in circuit analysis is proposed. Finally, in Chapter 9, a dynamical system of infectious disease is studied numerically using a non-standard finite difference method.

**Kamal Shah**

Department of Mathematics and Sciences, Prince Sultan University, Riyadh, Saudi Arabia

**Bruno Carpentieri**

Faculty of Computer Science, Free University of Bozen-Bolzano, Bolzano, Italy

**Arshad Ali**

Department of Mathematics, University of Malakand, Chakdara Dir(L), KPK, Pakistan


#### **Chapter 1**

## A Review Note on Laplace Transform and Its Applications in Dynamical Systems

*Shivram Sharma, Praveen Kumar Sharma and Jitendra Kaushik*

#### **Abstract**

The Laplace transform is one of the essential transform techniques, with many applications in engineering and science. Laplace transform techniques can be used to solve various partial differential equations and ordinary differential equations that cannot be resolved using conventional techniques. The Laplace transform approach is arguably the most practical operational method for engineers. The Laplace transform and variations such as the fuzzy Laplace transform are advantageous because they directly solve problems such as initial value problems, fuzzy initial value problems, and nonhomogeneous differential equations without first solving the corresponding homogeneous equation. This chapter applies the Laplace transform and its variations to dynamical systems.

**Keywords:** Laplace transform, inverse Laplace transform, fuzzy Laplace transform, properties, applications, initial value problem, electrical circuits

#### **1. Introduction**

An integral transform allows a problem to be solved more easily in another domain; the solution is then carried back to the original domain by an inverse transform. The Laplace transformation is one such transformation, introduced by Pierre-Simon Laplace around 1785. It is a popular integral transform in mathematics with many uses in science and engineering. The Laplace transform maps the time domain, where the inputs and outputs are functions of time, to the frequency domain, where the inputs and outputs are functions of complex angular frequency. It is frequently used to convert a system of differential equations into a system of algebraic equations and to turn convolution into multiplication [1–5]. The differential system is reduced to a set of linear equations, which are solved, and the inverse Laplace transform then returns the answer to the time domain. In many circumstances, the result can be decomposed into standard "patterns" whose inverse transforms are known.

**Definition 1.1.** Suppose that *f* is a real- or complex-valued function of time *t* > 0 and *p* is a real or complex parameter. Then the Laplace transform of *f*(*t*) is defined as


$$\mathcal{L}[f(t)] = \int\_0^\infty e^{-pt} f(t) \, dt = F(p) \tag{1}$$

(Provided the integral defined in (1) exists)

**Example 1.1.** If *f*(*t*) = 1 for *t* ≥ 0, then the Laplace transform of this function can be obtained using (1) as:

$$\mathcal{L}[1] = \int_0^\infty e^{-pt} \cdot 1 \, dt = \int_0^\infty e^{-pt} \, dt = \lim_{T \to \infty} \left[\frac{e^{-pt}}{-p}\right]_0^T = \frac{1}{p}$$

Thus, the Laplace transform of $f(t) = 1$ is $\mathcal{L}[1] = \frac{1}{p}$ (for $\operatorname{Re} p > 0$).

Applying the inverse Laplace transform $\mathcal{L}^{-1}$ to this formula, we get $\mathcal{L}^{-1}\left[\frac{1}{p}\right] = 1$.

Similarly, following the above procedure, we can find the Laplace transform and inverse Laplace transforms of other valuable functions.
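As an illustrative numerical check (not part of the original text), the integral in (1) for $f(t)=1$ can be approximated by a truncated trapezoidal rule and compared against $1/p$; the truncation point `T` and step count `n` below are arbitrary choices.

```python
import math

def laplace_of_one(p, T=60.0, n=200000):
    # trapezoidal approximation of ∫_0^T e^{-pt} dt; for p > 0 the tail beyond T is negligible
    h = T / n
    s = 0.5 * (1.0 + math.exp(-p * T))            # endpoint terms of the trapezoidal rule
    s += sum(math.exp(-p * h * k) for k in range(1, n))
    return s * h

# L[1] = 1/p, as derived above
for p in [0.5, 1.0, 2.0]:
    assert abs(laplace_of_one(p) - 1.0 / p) < 1e-5
```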

Some essential formulae of the LT and ILT are given in **Table 1** [5]; they are very useful for obtaining solutions of problems/systems by the Laplace transform method.


**Table 1.**
*Formulae of LT and ILT.*

*A Review Note on Laplace Transform and Its Applications in Dynamical Systems. DOI: http://dx.doi.org/10.5772/intechopen.108251*

#### **2. A sufficient condition for the existence of the Laplace transform of a function**

The existence of the LT of a function *f*(*t*) depends on the piecewise continuity of the function on the interval [0, ∞) and on the exponential order of the function.

If a function is piecewise continuous on [0, ∞) and of exponential order *α* (that is, |*f*(*t*)| ≤ *Me*<sup>*αt*</sup> for *t* ≥ 0 and some constant *M*), then the LT of *f*(*t*) exists for *p* > *α*.

These conditions are sufficient but not necessary for the existence of the Laplace transform of a function: a function may have a Laplace transform even if it violates them.

For example, the function $f(t) = t^{-1/2}$ is not piecewise continuous at $t = 0$ (it is unbounded there), yet its Laplace transform exists.
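This claim can be checked numerically (an illustrative sketch, not part of the chapter). Substituting $u = \sqrt{t}$ removes the singularity: $\int_0^\infty e^{-pt}\,t^{-1/2}\,dt = 2\int_0^\infty e^{-pu^2}\,du = \sqrt{\pi/p}$. The truncation bound `U` and step count `n` are arbitrary.

```python
import math

def laplace_t_inv_sqrt(p, U=40.0, n=400000):
    # after u = sqrt(t), the integrand 2 e^{-p u^2} is smooth; the trapezoidal rule applies
    h = U / n
    s = 0.5 * (1.0 + math.exp(-p * U * U))
    s += sum(math.exp(-p * (h * k) ** 2) for k in range(1, n))
    return 2.0 * s * h

# the transform exists and equals sqrt(pi/p), even though f(t) = t^(-1/2) blows up at t = 0
for p in [1.0, 4.0]:
    assert abs(laplace_t_inv_sqrt(p) - math.sqrt(math.pi / p)) < 1e-5
```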

Moreover, when the analysis depends on a mathematical model of differential equations, uncertainty about future behavior requires mathematical tools for anticipating risk. Uncertainty can be divided into two categories: (1) vagueness, handled by possibility or fuzzy sets; and (2) randomness, handled by probability models.

The publication of a seminal paper by Zadeh [6], a computer scientist at the University of California, USA, was a crucial turning point in developing the modern concept of uncertainty. In that paper he introduced fuzzy set theory as a new way to represent vagueness in daily life, defining "fuzzy sets" as sets with ill-defined boundaries. This idea has been employed and found superior for solving problems across many fields.

Recently, the analysis of dynamical systems has received more attention due to mathematical models of fuzzy applications through Laplace transforms. Hence, preeminence and optimal solutions are sought through fuzzy concepts of differentiability: the H-derivative and SGH-derivative, as well as the gH-derivative, g-derivative, and gr-derivative. Researchers such as Najariyan and Farahi [7, 8], Najariyan and Zhao [9, 11], and Mazandarani and Pariz [10] have pursued optimal solutions through dynamic systems under the fuzzy approach.

Allahviranloo and Ahmadi [12] recently proposed a fuzzy Laplace transform for first-order fuzzy differential equations; their work focused on the fuzzy Laplace transform concept while the fuzzy-valued conditions remained to be explained. Salahshour and Allahviranloo [13] then studied the fuzzy Laplace transform of first- and second-order derivatives with respect to linearity, continuity, uniformity, and convergence, considering FIVPs, H-differentiability, and second-order derivatives. Large fuzzy-valued functions can be handled by the fuzzy Laplace transformation.

Using fuzzy differential equations (FDEs) to model dynamic systems with uncertainty makes sense. First-order linear fuzzy differential equations are among the most fundamental FDEs in several applications. Chang and Zadeh initially presented the fuzzy derivative concept in 1972 [14]. Kandel and Byatt [15, 16] analyzed fuzzy dynamical issues using the idea of FDEs. The fuzzy Laplace transform method solves FDEs and their fuzzy initial and boundary value problems: it reduces an FDE to an algebraic problem, which facilitates its solution.

In this chapter, we aim to study the applications of the Laplace transform in dynamical systems, so we present our study in four subsections, 3.1, 3.2, 3.3, and 3.4. At the end of the chapter, we give a brief conclusion about the proposed research.

#### **3. Main results/discussion: application of the Laplace transform in a dynamic system**

In this chapter, our main aim is to apply the Laplace transform in a dynamic system. **A dynamical system** [17] is one in which something evolves over time, or in which a function describes the time dependence of a point in the surrounding space. Examples include mathematical representations of the pendulum of a clock, the flow of water through a pipe, the number of fish in a lake each spring, population growth, and so forth.

A dynamic system can be described either on a continuous timeline or in discrete time increments.

**Discrete-time dynamical system [17]**

$$\mathbf{x}\_t = F(\mathbf{x}\_{t-1}, t) \tag{2}$$

This type of model is called a difference equation, a recurrence equation, or an iterative map (if the right-hand side does not depend explicitly on *t*).

**Continuous-time dynamical system [17]**

$$\frac{d\mathbf{x}}{dt} = F(\mathbf{x}, t) \tag{3}$$

This type of model is called a differential equation.

In both scenarios, the system's state variable at time *t* is $x_t$ or $x$, which may take a scalar or vector value. The rule by which the system changes its state over time is determined by a function *F*.

Differential equations often model dynamical systems.
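To make the two formulations (2) and (3) concrete, here is a minimal sketch; the logistic map and the logistic ODE are illustrative choices, not taken from the chapter. The discrete rule is iterated directly, while the continuous rule is integrated by forward Euler.

```python
import math

# discrete-time system (Eq. 2): the logistic map x_t = F(x_{t-1}) -- an illustrative choice
def iterate_map(x0, r=2.5, steps=100):
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# continuous-time system (Eq. 3): dx/dt = r x (1 - x), integrated by forward Euler
def integrate_ode(x0, r=2.5, T=20.0, dt=1e-3):
    x, t = x0, 0.0
    while t < T:
        x += dt * r * x * (1.0 - x)
        t += dt
    return x

# each flow settles on a fixed point of its own rule: the map at 1 - 1/r, the ODE at 1
assert abs(iterate_map(0.2) - (1.0 - 1.0 / 2.5)) < 1e-9
assert abs(integrate_ode(0.2) - 1.0) < 1e-3
```

Note that the same right-hand side produces different long-run states in the two settings, which is precisely why the discrete/continuous distinction in (2) and (3) matters.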

So, here we discuss applications of the Laplace transform and its variants (e.g., the fuzzy Laplace transform) to solving differential equations, electrical circuits, mechanical systems, and fuzzy differential equations.

#### **3.1 Solution of differential equations (including IVP and system of simultaneous differential equations) by using Laplace transform technique**

**Example 2.1.1.** Consider an IVP

$$\frac{dy}{dt} + y = \sin t; \quad y(0) = 1$$

Taking the Laplace transform of both sides of the above equation, we have

$$
\mathcal{L}\left[\frac{dy}{dt}\right] + \mathcal{L}[y] = \mathcal{L}[\sin t]
$$

$$
p\mathcal{L}[y(t)] - y(0) + \mathcal{L}[y(t)] = \frac{1}{p^2 + 1}
$$

$$
(p+1)\mathcal{L}[y(t)] = 1 + \frac{1}{p^2 + 1}
$$

$$
\mathcal{L}[y(t)] = \frac{1}{p+1} + \frac{1}{(p^2 + 1)(p+1)}
$$


Taking the Inverse Laplace transform of both sides of the above equation, we have

$$y(t) = \mathcal{L}^{-1}\left[\frac{1}{p+1}\right] + \mathcal{L}^{-1}\left[\frac{1}{(p^2+1)(p+1)}\right]$$

$$y(t) = e^{-t} + \frac{1}{2}\mathcal{L}^{-1}\left[\frac{1}{(p+1)} - \frac{p}{p^2+1} + \frac{1}{p^2+1}\right]$$

$$y(t) = e^{-t} + \frac{1}{2}[e^{-t} - \cos t + \sin t]$$

$$y(t) = \frac{3}{2}e^{-t} + \frac{1}{2}[\sin t - \cos t]$$

This is the required solution of the given IVP.
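The closed-form answer can be sanity-checked numerically (an illustrative sketch, not part of the chapter): it should satisfy both the ODE and the initial condition.

```python
import math

def y(t):
    # solution obtained above: y(t) = (3/2) e^{-t} + (1/2)(sin t - cos t)
    return 1.5 * math.exp(-t) + 0.5 * (math.sin(t) - math.cos(t))

def dy(t, h=1e-6):
    # central finite difference as an independent estimate of dy/dt
    return (y(t + h) - y(t - h)) / (2.0 * h)

assert abs(y(0.0) - 1.0) < 1e-12                     # initial condition y(0) = 1
for t in [0.2, 1.0, 3.0]:
    assert abs(dy(t) + y(t) - math.sin(t)) < 1e-8    # residual of dy/dt + y = sin t
```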

**Remark 2.1.1.** Mathematical modeling of a system is essential for analyzing or predicting the nature and behavior of any physical system. Many physical systems lead to an IVP, for example, the Malthus growth and decay population model, $\frac{dy}{dt} = ky$, where $y(t)$ is the population at time $t$ and the constant of proportionality $k$ is the growth constant; the same model describes bacteria in a culture, the decay of a radioactive substance, and Newton's law of cooling. Solutions to these problems can be obtained by following the procedure of Example 2.1.1 using Laplace transform techniques.

**Example 2.1.2.** Consider a system of simultaneous equations

$$\begin{cases} \frac{d\mathbf{x}}{dt} + \mathbf{y} = \sin t\\ \frac{d\mathbf{y}}{dt} + \mathbf{x} = \mathbf{1} + \cos t \end{cases}; \text{with } \mathbf{x}(\mathbf{0}) = \mathbf{0} \& \mathbf{y}(\mathbf{0}) = \mathbf{0} \tag{4}$$

Using Laplace transform, we have

$$\mathcal{L}\left[\frac{d\mathbf{x}}{dt} + \mathbf{y}\right] = \mathcal{L}[\sin t], \quad \mathcal{L}\left[\frac{d\mathbf{y}}{dt} + \mathbf{x}\right] = \mathcal{L}[1 + \cos t]$$

$$p\,\mathcal{L}[\mathbf{x}(t)] - \mathbf{x}(0) + \mathcal{L}[\mathbf{y}(t)] = \frac{1}{p^2 + 1}, \quad p\,\mathcal{L}[\mathbf{y}(t)] - \mathbf{y}(0) + \mathcal{L}[\mathbf{x}(t)] = \frac{1}{p} + \frac{p}{p^2 + 1}$$

$$p\,\mathcal{L}[\mathbf{x}(t)] + \mathcal{L}[\mathbf{y}(t)] = \frac{1}{p^2 + 1}, \quad p\,\mathcal{L}[\mathbf{y}(t)] + \mathcal{L}[\mathbf{x}(t)] = \frac{1}{p} + \frac{p}{p^2 + 1}$$

On solving the above pair of equations for $\mathcal{L}[y(t)]$ (multiply the second equation by $p$ and subtract the first), we have

$$\left(p^2 - 1\right)\mathcal{L}[\mathbf{y}(t)] = 1 + \frac{p^2 - 1}{p^2 + 1}, \quad \text{i.e.,} \quad \mathcal{L}[\mathbf{y}(t)] = \frac{1}{p^2 - 1} + \frac{1}{p^2 + 1}$$

Taking the inverse Laplace transform of both sides of the above equation, we have

$$y(t) = \mathcal{L}^{-1}\left[\frac{1}{p^2 - 1}\right] + \mathcal{L}^{-1}\left[\frac{1}{p^2 + 1}\right]$$

$$y(t) = \sinh t + \sin t \tag{5}$$

Putting the above value of $y(t)$ in the second equation of Eq. (4):

$$\mathbf{x}(t) = 1 + \cos t - \frac{d}{dt}[\sinh t + \sin t]$$

$$\mathbf{x}(t) = 1 + \cos t - [\cosh t + \cos t]$$

$$\mathbf{x}(t) = 1 - \cosh t \tag{6}$$

Equations (5) and (6) together give the solution of the given system of simultaneous differential equations.
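A numerical sanity check (illustrative, using central finite differences) confirms that $x(t) = 1 - \cosh t$ and $y(t) = \sinh t + \sin t$ satisfy system (4) together with the zero initial conditions.

```python
import math

# closed-form solution of system (4)
def x(t): return 1.0 - math.cosh(t)
def y(t): return math.sinh(t) + math.sin(t)

def deriv(f, t, h=1e-6):
    # central finite difference as an independent check of the derivatives
    return (f(t + h) - f(t - h)) / (2.0 * h)

assert x(0.0) == 0.0 and y(0.0) == 0.0               # initial conditions x(0) = y(0) = 0
for t in [0.25, 1.0, 2.0]:
    assert abs(deriv(x, t) + y(t) - math.sin(t)) < 1e-7          # dx/dt + y = sin t
    assert abs(deriv(y, t) + x(t) - (1.0 + math.cos(t))) < 1e-7  # dy/dt + x = 1 + cos t
```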

**Remark 2.1.2**. In many physical problems/situations like particle movement along any curve at time t, two tanks in mixing problems, and two circuits in electrical networks, etc., we get a system of ordinary differential equations. The solution to all these problems can also be obtained by following the procedure in example 2.1.2 using Laplace transform techniques.

#### **3.2 Solution/response/current in electrical circuits by using Laplace transform technique**

This section finds the response/solution/current $i(t)$ of electrical circuits using Laplace transform techniques. Many authors have worked in this field, but their results differ and were proved using different methodologies. Some basic rules and principles of the area are required to prove the results in this section; although many advanced books cover these basic rules/definitions/principles, we refer to [5] for this purpose.

**Definition 2.2.1 [5]. Kirchhoff's Laws**


### **Example 2.2.1. (Response/Solution/Current $i(t)$ in the L-R series circuit)**

A simple R-L series circuit is shown in **Figure 1**, where a resistance (R) and an inductance (L) are in series with a voltage source E(t). Let $i(t)$ be the current flowing in the circuit at time $t$.

**Figure 1.** *R-L series circuit.*


Then, the model of the L-R series circuit is given by $Ri + L\frac{di}{dt} = E(t)$ (by Kirchhoff's voltage law), i.e.,

$$\frac{di}{dt} + \frac{R}{L}i = \frac{E}{L} \tag{7}$$

It is assumed that the EMF $E$ is constant and that the current is initially zero, i.e., $i(0) = 0$. Taking the Laplace transform of both sides of the above equation, we get

$$
\mathcal{L}\left[\frac{di}{dt} + \frac{R}{L}i\right] = \mathcal{L}\left[\frac{E}{L}\right]
$$

$$
\mathcal{L}\left[\frac{di}{dt}\right] + \frac{R}{L}\mathcal{L}[i] = \frac{E}{L}\mathcal{L}[\mathbf{1}]
$$

$$
p\,\mathcal{L}[i(t)] - i(0) + \frac{R}{L}\mathcal{L}[i(t)] = \frac{E}{L}\left[\frac{\mathbf{1}}{p}\right]
$$

$$
\mathcal{L}[i(t)] = \frac{E}{L}\left[\frac{\mathbf{1}}{p\left(p + \frac{R}{L}\right)}\right] \tag{8}
$$

Taking the inverse Laplace transform of both sides of the above equation, we get

$$i(t) = \frac{E}{L} \mathcal{L}^{-1} \left[ \frac{1}{p\left(p + \frac{R}{L}\right)} \right]$$

$$i(t) = \frac{E}{R} \mathcal{L}^{-1} \left[ \frac{1}{p} - \frac{1}{p + \frac{R}{L}} \right]$$

$$i(t) = \frac{E}{R} \left[ 1 - e^{-\frac{R}{L}t} \right] \tag{9}$$

This is the required solution of the given L-R series circuit.
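The closed-form response (9) can be cross-checked against a direct numerical integration of Eq. (7); the component values below are hypothetical, chosen only for illustration.

```python
import math

E, R, L = 5.0, 10.0, 0.5   # hypothetical constant source and components: volts, ohms, henries

def i_exact(t):
    # closed-form response from the Laplace solution, Eq. (9)
    return (E / R) * (1.0 - math.exp(-(R / L) * t))

# forward-Euler integration of di/dt = (E - R*i)/L as an independent check
dt, i, t = 1e-5, 0.0, 0.0
while t < 0.2:
    i += dt * (E - R * i) / L
    t += dt

assert abs(i_exact(0.0)) < 1e-12          # the current starts at zero
assert abs(i - i_exact(t)) < 1e-3         # Euler and closed form agree
```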

### **Example 2.2.2. (Response/Solution/Current $i(t)$ in the RLC circuit)**

In **Figure 2**, an RLC circuit is shown. An RLC circuit is obtained from an RL circuit by adding a capacitor; the RL circuit was modeled in Example 2.2.1.

**Figure 2.** *RLC circuit.*

We obtain the RLC circuit simply by adding the voltage drop Q/C across the capacitor.

Here, the current is $i(t) = \frac{dQ}{dt}$, or equivalently $Q(t) = \int i(t)\,dt$.

Assume an EMF $E(t)$ as in the figure. Thus, the model of the RLC circuit is given by:

$$L\frac{di}{dt} + iR + \frac{1}{C}\int i(t)\,dt = E(t) \tag{10}$$

Differentiating Eq. (10) with respect to $t$ (taking the derivative $E'(t)$ to be a constant $E$), we have

$$L\frac{d^2 i}{dt^2} + R\frac{di}{dt} + \frac{1}{C}i = E'(t) = E \tag{11}$$

The solution of Eq. (11) gives the current $i$ in the RLC circuit. Taking the Laplace transform of both sides of Eq. (11) (and multiplying through by $C$), we have

$$L\,\mathcal{L}\left[\frac{d^2 i}{dt^2}\right] + R\,\mathcal{L}\left[\frac{di}{dt}\right] + \frac{1}{C}\mathcal{L}[i] = E\,\mathcal{L}[1]$$

$$(LC)\left[p^2\mathcal{L}\{i(t)\} - p\,i(0) - i'(0)\right] + (RC)\left[p\,\mathcal{L}\{i(t)\} - i(0)\right] + \mathcal{L}[i(t)] = EC\left[\frac{1}{p}\right]$$

$$\left(LCp^2 + RCp + 1\right)\mathcal{L}[i(t)] = EC\,\frac{1}{p}$$

(As the current and capacitor charge are zero at $t = 0$, we have $i(0) = 0$ and $i'(0) = 0$.) This implies that

$$\mathcal{L}[i(t)] = EC\left[\frac{1}{p\left(\delta p^2 + \gamma p + 1\right)}\right] \tag{12}$$

(where $\gamma = RC$ and $\delta = LC$)

Taking the inverse Laplace transform of both sides, we use the partial-fraction decomposition

$$\frac{1}{p\left(1 + \gamma p + \delta p^2\right)} = \frac{1}{p} - \frac{\gamma + \delta p}{1 + \gamma p + \delta p^2}$$

so that

$$i(t) = EC - EC\,\mathcal{L}^{-1}\left[\frac{\gamma + \delta p}{1 + \gamma p + \delta p^2}\right]$$

Completing the square, $1 + \gamma p + \delta p^2 = \delta\left[\left(p + \frac{\gamma}{2\delta}\right)^2 + \omega^2\right]$ with $\omega = \frac{\sqrt{4\delta - \gamma^2}}{2\delta}$, and applying the first shifting theorem to each term.


This implies that

$$i(t) = EC - EC\,e^{-\frac{\gamma}{2\delta}t}\left[\cos\left(\frac{\sqrt{4\delta - \gamma^2}}{2\delta}\,t\right) + \frac{\gamma}{\sqrt{4\delta - \gamma^2}}\,\sin\left(\frac{\sqrt{4\delta - \gamma^2}}{2\delta}\,t\right)\right] \tag{13}$$

(where $\gamma = RC$ and $\delta = LC$, assuming the underdamped case $\gamma^2 < 4\delta$)

This is the required solution of the given RLC circuit.
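As with the RL circuit, the closed-form response can be cross-checked by integrating Eq. (11) directly; the component values below are hypothetical and chosen to give an underdamped response.

```python
import math

E, R, L, C = 5.0, 2.0, 0.5, 0.01         # hypothetical values (SI units), underdamped case
g, d = R * C, L * C                       # gamma = RC, delta = LC
w = math.sqrt(4 * d - g * g) / (2 * d)    # damped angular frequency

def i_exact(t):
    # closed-form response: i(t) = EC - EC e^{-gt/(2d)} [cos(wt) + (g/sqrt(4d-g^2)) sin(wt)]
    k = g / math.sqrt(4 * d - g * g)
    return E * C * (1.0 - math.exp(-g * t / (2 * d)) * (math.cos(w * t) + k * math.sin(w * t)))

# explicit-Euler integration of L i'' + R i' + i/C = E with i(0) = i'(0) = 0
dt, t, i, v = 1e-6, 0.0, 0.0, 0.0         # v = di/dt
while t < 0.05:
    a = (E - R * v - i / C) / L
    i, v, t = i + dt * v, v + dt * a, t + dt

assert abs(i_exact(0.0)) < 1e-12          # zero initial current
assert abs(i - i_exact(t)) < 1e-4         # numerical and closed-form solutions agree
```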

#### **3.3 Solution of mechanical systems by using Laplace transform technique**

Some basic rules and principles of the field are required to prove our main result in this section. We refer to [5] for this purpose.

**Example 2.3.1. (Torsional Pendulum Experiment)**

In **Figure 3**, a Torsional Pendulum is demonstrated.

It consists of a disk or rod suspended at the end of a wire. When the end of the wire is twisted through an angle $\phi$, a restoring torque $\tau$ arises. By Hooke's law, $\tau = -k\phi$, where $k$ is called the torsional constant. If the wire is twisted and released, the oscillating system is called a torsional pendulum.

By Newton's second law, $\tau = I\alpha$, where $I$ is the moment of inertia and $\alpha = \frac{d^2\phi}{dt^2}$.

Combining the two laws, we have

$$-k\phi = I\,\frac{d^2\phi}{dt^2}$$

which can be rewritten as

$$\frac{d^2\phi}{dt^2} + \frac{k}{I}\,\phi = 0$$

or

$$\frac{d^2\phi}{dt^2} + \omega^2\phi = 0 \tag{14}$$

This is the equation of a simple harmonic oscillator, with $\omega^2 = k/I$, angular frequency $\omega = \sqrt{k/I}$, and period $T = 2\pi\sqrt{I/k}$.

**Figure 3.** *A torsional pendulum.*

Now, taking the Laplace transform of both sides of Eq. (14), we have

$$\mathcal{L}\left[\frac{d^2\phi}{dt^2}\right] + \omega^2\,\mathcal{L}[\phi] = 0$$

$$\left[p^2\,\mathcal{L}\{\phi(t)\} - p\,\phi(0) - \phi'(0)\right] + \omega^2\,\mathcal{L}[\phi(t)] = 0$$

[Let $\phi(0) = A$ and $\phi'(0) = B$.]

$$\left(p^2 + \omega^2\right)\mathcal{L}[\phi(t)] = pA + B$$

$$\mathcal{L}[\phi(t)] = \frac{pA + B}{p^2 + \omega^2} \tag{15}$$

Now taking the inverse Laplace transform of both sides of Eq. (15):

$$\phi(t) = \mathcal{L}^{-1}\left[\frac{pA + B}{p^2 + \omega^2}\right]$$

$$\phi(t) = A\,\mathcal{L}^{-1}\left[\frac{p}{p^2 + \omega^2}\right] + B\,\mathcal{L}^{-1}\left[\frac{1}{p^2 + \omega^2}\right]$$

$$\phi(t) = A\cos\omega t + B\,\frac{\sin\omega t}{\omega} \tag{16}$$

This is the required solution of the given mechanical system.
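Eq. (16) can be verified numerically with a second-difference approximation of $\phi''$; the ratio $k/I$ and the initial data below are hypothetical, chosen only for illustration.

```python
import math

k_over_I = 4.0                 # hypothetical ratio k/I, so omega = sqrt(k/I) = 2
w = math.sqrt(k_over_I)
A, B = 0.3, 0.1                # initial angle phi(0) = A (rad) and angular velocity phi'(0) = B

def phi(t):
    # Eq. (16): phi(t) = A cos(wt) + (B/w) sin(wt)
    return A * math.cos(w * t) + (B / w) * math.sin(w * t)

assert abs(phi(0.0) - A) < 1e-12       # initial condition

# second-difference check that phi'' + (k/I) phi = 0 at sample times
h = 1e-4
for t in [0.1, 0.7, 1.3]:
    phi_dd = (phi(t + h) - 2.0 * phi(t) + phi(t - h)) / (h * h)
    assert abs(phi_dd + k_over_I * phi(t)) < 1e-5
```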

**Remark 2.3.1.** The balance wheel in a clock or wristwatch is an example of a torsional pendulum. All simple harmonic oscillators satisfy a differential equation of the form (14). Hence, the general solution (displacement) of any body moving with simple harmonic motion can be obtained by following the procedure of the above example using the Laplace transform technique.

#### **3.4 Solution of fuzzy differential equations (FDEs) by using fuzzy Laplace transform**

The chapter by Sharma et al. [17], titled "Applications of fuzzy set and fixed point theory in Dynamical systems" and published in the open-access book *Qualitative and Computational Aspects of Dynamical Systems* (ISBN 978-1-80356-567-5), contains a detailed discussion of this material (one can refer to Definition 2.1, Theorem 2.2, Formulae 2.3, and Remark 2.5 of that chapter to understand the concepts of this subsection).

**Example 2.4.1.** Consider the initial value problem

$$\begin{cases} y'(t) = -y(t), & 0 \le t \le T \\ y(0) = \left(\underline{y}(0, a),\ \bar{y}(0, a)\right) \end{cases}$$

Using the fuzzy Laplace transform method, we have $\mathcal{L}[y'(t)] = \mathcal{L}[-y(t)]$. If $y(t)$ is (i)-differentiable, then by Case (i), $\mathcal{L}[y'(t)] = s\,\mathcal{L}[y(t)] \ominus y(0)$, which in terms of the endpoint functions reads

$$s\,\mathcal{L}\left[\underline{y}(t, a)\right] - \underline{y}(0, a) = -\mathcal{L}\left[\bar{y}(t, a)\right]$$

$$s\,\mathcal{L}\left[\bar{y}(t, a)\right] - \bar{y}(0, a) = -\mathcal{L}\left[\underline{y}(t, a)\right] \tag{17}$$

Solving this pair of equations, we obtain

$$\mathcal{L}\left[\underline{y}(t, a)\right] = \underline{y}(0, a)\left(\frac{s}{s^2 - 1}\right) - \bar{y}(0, a)\left(\frac{1}{s^2 - 1}\right)$$

$$\mathcal{L}\left[\bar{y}(t, a)\right] = \bar{y}(0, a)\left(\frac{s}{s^2 - 1}\right) - \underline{y}(0, a)\left(\frac{1}{s^2 - 1}\right)$$

Taking the inverse Laplace transform of both sides (using $\mathcal{L}^{-1}\left[\frac{s}{s^2 - 1}\right] = \cosh t$ and $\mathcal{L}^{-1}\left[\frac{1}{s^2 - 1}\right] = \sinh t$), we get

$$\underline{y}(t, a) = e^{-t}\left(\frac{\underline{y}(0, a) + \bar{y}(0, a)}{2}\right) + e^{t}\left(\frac{\underline{y}(0, a) - \bar{y}(0, a)}{2}\right)$$

$$\bar{y}(t, a) = e^{-t}\left(\frac{\underline{y}(0, a) + \bar{y}(0, a)}{2}\right) + e^{t}\left(\frac{\bar{y}(0, a) - \underline{y}(0, a)}{2}\right)$$

$$l[\bar{\boldsymbol{y}}(t,\ a)] = \boldsymbol{s} \, l[\bar{\boldsymbol{y}}(t,\ a)] - \bar{\boldsymbol{y}}(0,\ a)$$

$$l\begin{bmatrix} \boldsymbol{y}(t,\ a) \\ \text{\\_} \\ \text{\\_} \end{bmatrix} = \boldsymbol{s}l\begin{bmatrix} \boldsymbol{y}(t,\ a) \\ \text{\\_} \end{bmatrix} - \boldsymbol{y}(0,\ a) \dots \text{\\_} \tag{18}$$

$$l[\bar{\mathbf{y}}(t,\ a)] = -\bar{\mathbf{y}}(\mathbf{0},a) \left(\frac{\mathbf{1}}{\mathbf{1}+s}\right)$$

$$l\begin{bmatrix} \mathbf{y}(t,\ a) \\ - \end{bmatrix} = -\mathbf{y}(\mathbf{0},a) \left(\frac{\mathbf{1}}{\mathbf{1}+s}\right)$$

$$\bar{\jmath}(t,a) = -\bar{\jmath}(0,a)l^{-1}\left(\frac{1}{1+s}\right)$$

*Qualitative and Computational Aspects of Dynamical Systems*

$$\underset{-}{\nu}(t,a) = -\underset{-}{\gamma}(0,a)l^{-1}\left(\frac{1}{1+s}\right).$$

Finally, we have:

$$\begin{aligned} \bar{\jmath}(t, a) &= -\bar{\jmath}(\mathbf{0}, a)e^{-t} \\ \jmath(t, a) &= -\jmath(\mathbf{0}, a)e^{-t} \\ \hfil- &\hfil- \end{aligned}$$
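As a sanity check on the Case (i) solution, the endpoint functions should satisfy the coupled crisp system $\underline{y}'(t, a) = -\bar{y}(t, a)$ and $\bar{y}'(t, a) = -\underline{y}(t, a)$. The following Python sketch (the function names are ours, not from the chapter) verifies this numerically with a central finite difference:

```python
import math

# Endpoint solutions of Example 2.4.1 under (i)-differentiability,
# written for generic lower/upper initial values yl0 <= yu0.
def y_upper(t, yl0, yu0):
    return math.exp(-t) * (yl0 + yu0) / 2 + math.exp(t) * (yu0 - yl0) / 2

def y_lower(t, yl0, yu0):
    return math.exp(-t) * (yl0 + yu0) / 2 - math.exp(t) * (yu0 - yl0) / 2

# Central differences should satisfy y_upper' = -y_lower and y_lower' = -y_upper.
t, yl0, yu0, eps = 1.0, 0.5, 1.5, 1e-6
d_upper = (y_upper(t + eps, yl0, yu0) - y_upper(t - eps, yl0, yu0)) / (2 * eps)
d_lower = (y_lower(t + eps, yl0, yu0) - y_lower(t - eps, yl0, yu0)) / (2 * eps)
assert abs(d_upper + y_lower(t, yl0, yu0)) < 1e-6
assert abs(d_lower + y_upper(t, yl0, yu0)) < 1e-6
```

Note that the two endpoints exchange roles under differentiation in Case (i), which is exactly why the transforms couple into the $s/(s^2-1)$ form.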

#### **4. Conclusion**

This chapter discussed the Laplace transform and fuzzy Laplace transform in dynamic systems. Using the Laplace transform and the fuzzy Laplace transform methods, we provided solutions to first- and second-order ordinary differential equations and first- and second-order fuzzy ordinary differential equations. We demonstrated how the Laplace transform method could determine the solution (current flow) of first- and second-order electrical circuits and a mechanical system's solution (vibration frequency).

#### **Author details**

Shivram Sharma<sup>1</sup> , Praveen Kumar Sharma<sup>2</sup> \* and Jitendra Kaushik<sup>3</sup>

1 Department of Mathematics, Govt. PG College, Guna, MP, India

2 Department of Mathematics, SVIS, Shri Vaishnav Vidyapeeth Vishwavidyalaya, Indore, MP, India

3 MIT Art, Design and Technology University, Pune, India

\*Address all correspondence to: praveen\_jan1980@rediffmail.com

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

*A Review Note on Laplace Transform and Its Applications in Dynamical Systems DOI: http://dx.doi.org/10.5772/intechopen.108251*

#### **References**

[1] Dass HK, Verma R. Higher Engineering Mathematics. New Delhi: S. Chand Publication

[2] Duffy DG. Transform Methods for Solving Partial Differential Equations. USA: CRC Press; 1994

[3] Franklin G, Powell D, Emami-Naeini A. Feedback Control of Dynamic Systems. USA: Prentice-Hall; 2002

[4] Rosenbrock HH. State-Space and Multivariable Theory. UK: Nelson; 1970

[5] Ramanna BV. Higher Engineering Mathematics. India: Tata McGraw Hill Publication

[6] Zadeh LA. Fuzzy sets. Information and Control. 1965;**8**:338-353

[7] Najariyan M, Farahi MH. A new approach for the optimal fuzzy linear time-invariant controlled system with fuzzy coefficients. Journal of Computational and Applied Mathematics. 2014;**259**:682-694

[8] Najariyan M, Farahi MH. A new approach for solving a class of fuzzy optimal control systems under generalized Hukuhara differentiability. Journal of the Franklin Institute. 2015; **352**(5):1836-1849

[9] Najariyan M, Zhao Y. Fuzzy fractional quadratic regulator problem under granular fuzzy fractional derivatives. IEEE Transactions on Fuzzy Systems. 2017;**26**(4):2273-2288

[10] Mazandarani M, Pariz N. Sub-optimal control of fuzzy linear dynamical systems under granular differentiability concept. ISA Transactions. 2018;**76**: 1-17

[11] Najariyan M, Zhao Y. On the stability of fuzzy linear dynamical systems. Journal of the Franklin Institute. 2020;**357**(9):5502-5522

[12] Allahviranloo T, Abbasbandy S, Salahshour S, Hakimzadeh A. A new method for solving fuzzy linear differential equations. Computing. 2011; **59**(3):181-197

[13] Salahshour S, Allahviranloo T. Applications of fuzzy Laplace transforms. Soft Computing. 2013;**17**(1): 145-158

[14] Chang SSL, Zadeh LA. On fuzzy mapping and control. IEEE Transactions on Systems, Man, and Cybernetics. 1972;**2**:30-34

[15] Kandel A. Fuzzy dynamical systems and the nature of their solutions. In: Wang PP, Chang SK, editors. Fuzzy Sets Theory and Application to Policy Analysis and Information Systems. New York: Plenum Press; 1980. pp. 93-122

[16] Kandel A, Byatt WJ. Fuzzy differential equations. In: Proceedings of the International Conference on Cybernetics and Society. Tokyo; 1978. pp. 1213-1216

[17] Sharma PK, Sharma S, Kaushik J, Goyal P. Applications of fuzzy set and fixed point theory in dynamical systems. In: Qualitative and Computational Aspects of Dynamical Systems. UK: IntechOpen; 2022

#### **Chapter 2**

## Numerical Methods: Euler and Runge-Kutta

*Victor Akinsola*

#### **Abstract**

Most real-life phenomena change with time and are therefore dynamic. Differential equations are used in the mathematical modeling of such scenarios. Linear differential equations can be solved analytically, but most real-life applications are nonlinear, and numerical solutions of nonlinear differential equations are approximate. The Euler method and the Runge-Kutta method of order four are derived, explained, and illustrated as useful numerical methods for solving single and systems of linear and nonlinear differential equations. The accuracy of a numerical method depends on the step size used and on the degree of nonlinearity of the equations. Stiffness is another challenge in the numerical solution of nonlinear differential equations. Although better accuracy can be obtained with a smaller step size, this takes more computational effort and time. Algorithms and codes can be written using available computer programming software to overcome this challenge and to avoid computational error. The Runge-Kutta method is the more widely applicable and accurate of the two for diverse classes of differential equations.

**Keywords:** numerical solution, Euler method, Runge-Kutta method of order four (RK4), accuracy

#### **1. Introduction**

In numerical analysis, mathematical methods are employed to produce numerical answers to mathematical expressions. It entails developing, examining, and putting into practice computer algorithms to solve continuous mathematics problems numerically [1]. These problems may originate from real-life applications in the natural sciences, social sciences, engineering, medical sciences and business where variables which vary continuously are involved.

Differential equations are the foundation of the majority of mathematical models used in the natural sciences and engineering. The numerical technique finds the solution function at a discrete set of points, approximating the derivatives or integrals in the relevant equation.

In an effort to obtain solutions that are more precise, more affordable, or more resilient to instabilities in the problem data, a number of techniques have been developed. Many of the differential equations that arise in practical problems are nonlinear, and nonlinear differential equations can rarely be solved exactly [2]. Even when an exact solution is possible, the computational labour of solving a system of first-order equations may be formidable [3]. It is for these reasons that techniques have been developed for computing, to any desired degree of accuracy, the numerical solution of almost any such problem.

These techniques typically fall into "families" of increasing order of accuracy, with Euler's method (or a close relative) frequently making up the lowest order. Adams-Bashforth techniques are "multistep" methods, whereas Runge-Kutta methods are "single-step" methods. These methods have been very successful and reliable.

In this chapter, the derivations of the Euler and Runge-Kutta methods are explained, and illustrative examples are given to show how they are used. Algorithms/codes for the examples, written with the Maple mathematical programming software, are also included for better comprehension and further practice.

#### **2. The Euler method**

The popular Euler method, published in 1768, is attributed to Leonhard Euler (1707–1783). It is one of the most straightforward and fundamental methods for solving single ordinary differential equations and systems of them [1]. The basic idea is as follows.

Consider initial value system:

$$\frac{dy}{dt} = f(t, x, y), \quad y(0) = y_0 \tag{1}$$

$$\frac{dx}{dt} = g(t, x, y), \quad x(0) = x_0 \tag{2}$$

By the definition of a derivative,

$$y'(t) = \lim_{h \to 0} \frac{y(t+h) - y(t)}{h} \tag{3}$$

$$x'(t) = \lim_{h \to 0} \frac{x(t+h) - x(t)}{h} \tag{4}$$

For small $h > 0$, Eqs. (3) and (4) imply that reasonable difference approximations for $y'(t)$ and $x'(t)$ [4] are

$$y'(t) \approx \frac{y(t+h) - y(t)}{h} \tag{5}$$

$$x'(t) \approx \frac{x(t+h) - x(t)}{h} \tag{6}$$

Substituting Eq. (5) and Eq. (6) into Eq. (1) and Eq. (2) respectively yield the difference equations

$$\frac{y(t+h) - y(t)}{h} = f(t, x, y) \tag{7}$$

$$\frac{x(t+h) - x(t)}{h} = g(t, x, y) \tag{8}$$

which approximates the differential Eqs. (1) and (2).

According to [1], the Euler method approximates a solution $(x, y)$ with step size $h$ by:

$$\mathcal{Y}\_{k+1} = \mathcal{Y}\_k + h f\left(\mathbf{x}\_k, \mathcal{Y}\_k\right) \tag{9}$$

$$\mathbf{x}\_{k+1} = \mathbf{x}\_k + h\mathbf{g}\left(\mathbf{x}\_k, \mathbf{y}\_k\right) \tag{10}$$

Expressing Eq. (1) and Eq. (2) in vector notation:

$$\frac{dY}{dt} = F(Y), \quad Y(0) = Y_0 \tag{11}$$

where $Y = (y, x)$, $Y_0 = (y_0, x_0)$, $F(Y) = \left(f(x, y),\ g(x, y)\right)$ and $\frac{dY}{dt} = \left(\frac{dy}{dt}, \frac{dx}{dt}\right)$. The Euler method approximates a solution $(x, y)$ by:

$$(\mathbf{x}\_{k+1}, \mathbf{y}\_{k+1}) = (\mathbf{x}\_k, \mathbf{y}\_k) + hF(\mathbf{x}\_k, \mathbf{y}\_k) \tag{12}$$

#### **Illustration 1**

Given the initial value problem in [5]:

$$\mathcal{y}' = \frac{\mathcal{y} - \mathfrak{x}}{\mathcal{y} + \mathfrak{x}}, \mathcal{y}(0) = 1.$$

Determine $y$ for $x = 0.1$, using the Euler method with step size $h = 0.02$.

**Solution**

The Euler formula is

$$y_{n+1} = y_n + h f(x_n, y_n),$$

where $n$ is the iteration index. For the first iteration, $n = 0$, and the Euler formula becomes

$$\mathcal{y}\_1 = \mathcal{y}\_0 + h f(\mathfrak{x}\_0, \mathfrak{y}\_0)$$

with $x_0 = 0$ and $y_0 = 1$ given by the initial condition.

$$x_1 = x_0 + h, \qquad x_n = x_0 + nh$$

$$n = \frac{x_n - x_0}{h} = \frac{0.1 - 0}{0.02} = 5$$

$$\mathbf{y}\_1 = \mathbf{1} + \mathbf{0}.\mathbf{0}\mathbf{2}\mathbf{f}(\mathbf{0}, \mathbf{1}) = \mathbf{1} + \mathbf{0}.\mathbf{0}\mathbf{2}\left(\frac{\mathbf{1} - \mathbf{0}}{\mathbf{1} + \mathbf{0}}\right) = \mathbf{1} + \mathbf{0}.\mathbf{0}\mathbf{2} = \mathbf{1}.\mathbf{0}\mathbf{2}$$

When $n = 1$, $x_1 = x_0 + h = 0 + 0.02 = 0.02$ and

$$y_2 = y_1 + 0.02 f(x_1, y_1) = 1.02 + 0.02\left(\frac{1.02 - 0.02}{1.02 + 0.02}\right) = 1.039230769$$

When $n = 2$, $x_2 = x_0 + 2h = 0 + 2(0.02) = 0.04$

$$y\_3 = y\_2 + 0.02f(\mathbf{x}\_2, y\_2) = 1.039230769 + 0.02 \left( \frac{1.039230769 - 0.04}{1.039230769 + 0.04} \right) = 1.057748232$$
 
$$\text{When } n = 3, \mathbf{x}\_3 = \mathbf{x}\_2 + h = 0.04 + 0.02 = 0.06$$


#### **Table 1.**

*Numerical solution of Example on Euler's method.*

$$y\_4 = y\_3 + 0.02f(x\_3, y\_3) = 1.057748232 + 0.02 \left(\frac{1.057748232 - 0.06}{1.057748232 + 0.06}\right) = 1.075601058$$

$$\text{When } n = 4, \varkappa\_4 = \varkappa\_3 + h = 0.06 + 0.02 = 0.08$$

$$y\_5 = y\_4 + 0.02f(x\_4, y\_4) = 1.075601058 + 0.02 \left(\frac{1.075601058 - 0.08}{1.075601058 + 0.08}\right) = 1.092831936$$

**Table 1** gives the numerical solution of Illustration 1 using different step sizes. A smaller step size requires more iterations and hence more computational effort, but it gives a more accurate result.

The algorithm/ code given below can be adapted to various single differential equations for the Euler method (**Figure 1**).

#### **Algorithm/ Codes using Maple programming software**

```
## Illustration on Euler method ##
ode := diff(y(x),x) = (y(x)-x)/(y(x)+x);
sol := dsolve({ode, y(0)=1}, y(x));

restart:
y := array(0..200): x := array(0..200):
n := 5.0: A := 0.0: h := 0.02:
x[0] := 0.0: y[0] := 1.0:
for m from 0 to n do x[m] := A + m*h: y[0] := 1.0: od:
for m from 0 to n do
  y[m+1] := y[m] + h*((y[m] - x[m])/(y[m] + x[m])):
od:
for m from 0 to n do print(m, x[m], y[m]); od:
```

The printed output is:

```
0, 0.,   1.0
1, 0.02, 1.020000000
2, 0.04, 1.039230769
3, 0.06, 1.057748232
4, 0.08, 1.075601058
5, 0.10, 1.092831936
```
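For readers without Maple, the same Euler iteration can be sketched in Python; the helper name `euler` is ours, not part of the chapter:

```python
# Illustrative Python version of the Euler iteration used in Illustration 1.
def euler(f, x0, y0, h, n):
    """Advance y' = f(x, y) from (x0, y0) through n steps of size h."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return xs, ys

# y' = (y - x)/(y + x), y(0) = 1, h = 0.02; five steps reach x = 0.1.
xs, ys = euler(lambda x, y: (y - x) / (y + x), 0.0, 1.0, 0.02, 5)
print(round(ys[-1], 9))  # 1.092831936
```

The printed value matches the last row of the Maple output above.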

For more on Maple programming codes and algorithms, see [6].

**Illustration 2**

Consider the Lorenz dynamical system given as.

$$\frac{dx}{dt} = \sigma(y(t) - x(t)), \quad x(0) = 0.1$$

#### **Figure 1.**

*Graph of numerical solution of Illustration 1 for six iterations.*

$$\frac{dy}{dt} = -x(t)z(t) + r x(t) - y(t), \quad y(0) = 0.1$$

$$\frac{dz}{dt} = x(t)y(t) - b z(t), \quad z(0) = 0.1$$

where $\sigma = 10$, $r = 28$ and $b = 8/3$. With step size 0.01, determine the solution at $t = 10$ using the Euler method.

#### **Solution**

The Lorenz system is a classical nonlinear dynamical system. Due to the high degree of nonlinearity of the system and the number of iterations required, its numerical solution is only easily tractable using computer programming. The algorithm/Code in Maple mathematical software is given below after the table of numerical solution (**Figures 2**–**4**) and (**Table 2**).

**Figure 2.**

*Graphical representation of numerical solution of Lorenz system X(t) using Euler method for 1000 iterations as in Table 2.*

**Figure 3.**

*Graphical representation of numerical solution of Lorenz system Y(t) using Euler method for 1000 iterations as in Table 2.*

**Figure 4.**

*Graphical representation of numerical solution of Lorenz system Z(t) using Euler method for 1000 iterations as in Table 2.*


#### **Table 2.**

*Numerical solution of the Lorenz system using Euler method at step size 0.01.*

$$t_n = t_0 + nh$$

$$n = \frac{t_n - t_0}{h} = \frac{10 - 0}{0.01} = 1000$$

#### **Algorithm/Maple programming Code of the numerical solution of Lorenz system using Euler's method**

```
## This is the numerical solution of the Lorenz system using the Euler method ##
sys := diff(x(t),t) = sigma*(y(t)-x(t)),
       diff(y(t),t) = -x(t)*z(t) + r*x(t) - y(t),
       diff(z(t),t) = x(t)*y(t) - b*z(t);

restart:
X := array(0..2000000): Y := array(0..2000000):
Z := array(0..2000000): T := array(0..2000000):
A := 0.0: h := 0.01: N := 1000:
sigma := 10.0: b := (8/3): r := 28.0:
for m from 0 to N do
  T[m] := A + m*h: X[0] := 0.1: Y[0] := 0.1: Z[0] := 0.1:
od:
for m from 0 to N do
  X[m+1] := X[m] + h*(sigma*(Y[m]-X[m])):
  Y[m+1] := Y[m] + h*(-X[m]*Z[m] + r*X[m] - Y[m]):
  Z[m+1] := Z[m] + h*(X[m]*Y[m] - b*Z[m]):
od:
for m from 0 by 100 to N do print(m, T[m], X[m], Y[m], Z[m]); od:

with(plots):
func1 := listplot([seq(X[m], m=0..N)], style=line, color=black, thickness=1):
display(func1, labels=["t","X(t)"], axes=boxed, caption="Figure 1: Euler solution for X(t)");
func2 := listplot([seq(Y[m], m=0..N)], style=line, color=blue, thickness=1):
display(func2, labels=["t","Y(t)"], axes=boxed, caption="Figure 2: Euler solution for Y(t)");
func3 := listplot([seq(Z[m], m=0..N)], style=line, color=red, thickness=1):
display(func3, labels=["t","Z(t)"], axes=boxed, caption="Figure 3: Euler solution for Z(t)");
```
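The same Euler iteration can be sketched in Python; this is an illustrative translation of the Maple loop above (the function name `lorenz_euler` is ours, not from the chapter):

```python
# Illustrative Python translation of the Maple Euler loop above.
# sigma, r, b and the initial state (0.1, 0.1, 0.1) follow Illustration 2.
def lorenz_euler(sigma=10.0, r=28.0, b=8.0 / 3.0, h=0.01, n=1000):
    x, y, z = 0.1, 0.1, 0.1
    for _ in range(n):
        # All three updates use the values from the previous step.
        x, y, z = (x + h * sigma * (y - x),
                   y + h * (-x * z + r * x - y),
                   z + h * (x * y - b * z))
    return x, y, z

x10, y10, z10 = lorenz_euler()  # approximate state at t = 10
```

Note the simultaneous tuple assignment: each component update must use the previous step's values of all three variables, exactly as the Maple loop does with `X[m]`, `Y[m]`, `Z[m]`.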

For more on Lorenz system see [7].

#### **3. The Runge-Kutta method of order four**

One of the well-known numerical techniques for solving differential equations is the Runge-Kutta method. Runge-Kutta methods form a family distinguished by their order of accuracy. The Runge-Kutta method of order four is the classical form, in which four slope evaluations of $f$ are combined in each iteration [1].

#### **3.1 Derivation of Runge-Kutta method of order four**

The Taylor series of functions of one variable is:

$$y(x) = y(x_0) + y'(x_0)(x - x_0) + \frac{y''(x_0)}{2!}(x - x_0)^2 + \frac{y'''(x_0)}{3!}(x - x_0)^3 + \cdots \tag{13}$$

$$y(x) = \sum_{n=0}^{\infty} \frac{y^{(n)}(x_0)}{n!}(x - x_0)^n \tag{14}$$

When $x = x_0 + h$, so that $x - x_0 = h$, Eq. (13) becomes


$$y(x) = y(x_0) + h y'(x_0) + h^2 \frac{y''(x_0)}{2!} + h^3 \frac{y'''(x_0)}{3!} + \cdots \tag{15}$$

Taylor series of functions of two variables is:

$$f(x, y) = \sum_{i=0}^{\infty} \frac{1}{i!} \left\{ (x - x_0) \frac{\partial}{\partial x} + (y - y_0) \frac{\partial}{\partial y} \right\}^i f(x_0, y_0) \tag{16}$$

When $x - x_0 = mh$ and $y - y_0 = nh$,

$$f(x, y) = \sum_{i=0}^{\infty} \frac{1}{i!} \left\{ mh \frac{\partial}{\partial x} + nh \frac{\partial}{\partial y} \right\}^i f(x_0, y_0) \tag{17}$$

$$f(x, y) = f(x_0, y_0) + f'(x_0, y_0)(mh + nh) + \frac{f''(x_0, y_0)(mh + nh)^2}{2!} + \frac{f'''(x_0, y_0)(mh + nh)^3}{3!} + \cdots \tag{18}$$

$$\begin{aligned} f(x, y) ={}& f(x_0, y_0) + mh f'(x_0, y_0) + nh f'(x_0, y_0) + \frac{f''(x_0, y_0)\left((mh)^2 + 2(mh)(nh) + (nh)^2\right)}{2!} \\ &+ \frac{f'''(x_0, y_0)\left((mh)^3 + 3(mh)^2(nh) + 3(mh)(nh)^2 + (nh)^3\right)}{3!} + \cdots \end{aligned} \tag{19}$$

$$\begin{aligned} f(x, y) ={}& f(x_0, y_0) + h\left[m f_x + n f_y\right] + \frac{h^2}{2!}\left[m^2 f_{xx} + 2mn f_{xy} + n^2 f_{yy}\right] \\ &+ \frac{h^3}{3!}\left[m^3 f_{xxx} + 3m^2 n f_{xxy} + 3mn^2 f_{xyy} + n^3 f_{yyy}\right] + \cdots \end{aligned} \tag{20}$$

where the partial derivatives are all evaluated at the point $(x_0, y_0)$. Consider the differential equation

$$\frac{dy}{d\mathbf{x}} = y' = f(\mathbf{x}, y) \tag{21}$$

$$df = \frac{\partial f}{\partial \mathbf{x}} d\mathbf{x} + \frac{\partial f}{\partial \mathbf{y}} dy \tag{22}$$

$$\frac{df}{dx} = f' = \frac{\partial f}{\partial x}\frac{dx}{dx} + \frac{\partial f}{\partial y}\frac{dy}{dx} \tag{23}$$

$$y'' = f' = \frac{\partial f}{\partial x} + \frac{\partial f}{\partial y}\frac{dy}{dx} \tag{24}$$

$$y'' = f' = \frac{\partial f}{\partial x} + \frac{\partial f}{\partial y} f \tag{25}$$

$$\mathbf{y''} = \mathbf{f'} = \mathbf{f\_x} + \mathbf{f}\mathbf{f\_y} \tag{26}$$

$$y''' = f'' = \left[f_x + f f_y\right]' = \left[f_x\right]' + \left[f f_y\right]' \tag{27}$$

*Numerical Methods: Euler and Runge-Kutta DOI: http://dx.doi.org/10.5772/intechopen.108533*

$$(f_x)' = f_{xx} + f f_{xy} \tag{28}$$

$$(f f_y)' = f (f_y)' + f_y f' \tag{29}$$

$$y''' = f'' = f_{xx} + 2f f_{xy} + f^2 f_{yy} + f_x f_y + f f_y^2 \tag{30}$$

From Eq. (15).

$$y(x) - y(x_0) = h y'(x_0) + h^2 \frac{y''(x_0)}{2!} + h^3 \frac{y'''(x_0)}{3!} + \cdots \tag{31}$$

Hence

$$y(x) - y(x_0) = h f(x_0, y_0) + \frac{h^2}{2!}\left(f_x + f f_y\right) + \frac{h^3}{3!}\left(f_{xx} + 2f f_{xy} + f^2 f_{yy} + f_x f_y + f f_y^2\right) + \cdots \tag{32}$$

The main idea is to select several evaluation points so that the Taylor series expansion, Eq. (19), coincides with the terms on the right-hand side of Eq. (32).

$$\text{Suppose } m\_1 = hf(\mathbf{x}\_0, \mathbf{y}\_0) \tag{33}$$

$$m\_2 = hf\left(\varkappa\_0 + nh, \mathcal{y}\_0 + nm\_1\right) \tag{34}$$

$$m\_3 = hf(x\_0 + ph, \, y\_0 + pm\_2) \tag{35}$$

$$m\_4 = hf\left(\varkappa\_0 + qh, \mathcal{Y}\_0 + qm\_3\right) \tag{36}$$

Expanding each by the Taylor series, Eq. (20), these values become:

$$m\_1 = hf \tag{37}$$

$$m\_2 = h \left[ f + nh \left( f\_x + f f\_y \right) + \frac{\left( nh \right)^2}{2} \left( f\_{xx} + 2f f\_{xy} + f^2 f\_{yy} \right) + \dotsb \right] \tag{38}$$

$$m_3 = h\left[f + ph\left(f_x + f f_y\right) + \frac{(ph)^2}{2}\left(f_{xx} + 2f f_{xy} + f^2 f_{yy}\right) + nph^2\left(f_x f_y + f f_y^2\right) + \cdots\right] \tag{39}$$

$$m_4 = h\left[f + qh\left(f_x + f f_y\right) + \frac{(qh)^2}{2}\left(f_{xx} + 2f f_{xy} + f^2 f_{yy}\right) + pqh^2\left(f_x f_y + f f_y^2\right) + \cdots\right] \tag{40}$$

where all functions are evaluated at the point (x0, y0). Considering an expression of the form

$$am\_1 + bm\_2 + cm\_3 + dm\_4\tag{41}$$

Substituting Eq. (37) through Eq. (40) into this expression and equating it with the right-hand side of Eq. (32) gives four equations in the seven unknowns.

$$\text{Coefficient of } hf: a+b+c+d=1\tag{42}$$

$$\text{Coefficient of } h^2\left(f_x + f f_y\right) : bn + cp + dq = \frac{1}{2} \tag{43}$$

$$\text{Coefficient of } h^3\left(f_{xx} + 2f f_{xy} + f^2 f_{yy}\right) : bn^2 + cp^2 + dq^2 = \frac{1}{3} \tag{44}$$

$$\text{Coefficient of } h^3\left(f_x f_y + f f_y^2\right) : cnp + dpq = \frac{1}{6} \tag{45}$$

Since there are four equations in seven unknowns, three of the unknowns may be chosen arbitrarily.

Let *<sup>n</sup>* <sup>¼</sup> *<sup>p</sup>* <sup>¼</sup> <sup>1</sup> <sup>2</sup> and *q* ¼ 1 in Eq. (42) through Eq. (45). Then

$$a + b + c + d = \mathbf{1} \tag{46}$$

$$b + c + \mathcal{2}d = \mathbf{1} \tag{47}$$

$$\mathbf{3}b + \mathbf{3}c + \mathbf{12}d = 4\tag{48}$$

$$3c + 6d = 2 \tag{49}$$

Solving Eq. (46) through Eq. (49) simultaneously produces

$$a = d = \frac{1}{6}, b = c = \frac{1}{3}$$
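These coefficients can be checked mechanically. The following Python sketch (our own, not from the chapter) solves the linear system (46)–(49) by Gaussian elimination in exact rational arithmetic:

```python
from fractions import Fraction

# Augmented matrix for Eqs. (46)-(49) in the unknowns (a, b, c, d).
M = [[Fraction(v) for v in row] for row in [
    [1, 1, 1, 1, 1],    # a + b + c + d  = 1
    [0, 1, 1, 2, 1],    # b + c + 2d     = 1
    [0, 3, 3, 12, 4],   # 3b + 3c + 12d  = 4
    [0, 0, 3, 6, 2],    # 3c + 6d        = 2
]]

# Gauss-Jordan elimination with partial pivoting on exact fractions.
n = 4
for i in range(n):
    p = next(r for r in range(i, n) if M[r][i] != 0)  # find a usable pivot row
    M[i], M[p] = M[p], M[i]
    for r in range(n):
        if r != i and M[r][i] != 0:
            factor = M[r][i] / M[i][i]
            M[r] = [x - factor * y for x, y in zip(M[r], M[i])]

a, b, c, d = (M[i][4] / M[i][i] for i in range(n))
print(a, b, c, d)  # 1/6 1/3 1/3 1/6
```

The exact rationals confirm $a = d = \frac{1}{6}$ and $b = c = \frac{1}{3}$.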

Substituting these coefficients into the expression (41) and connecting it with Eq. (32):

$$y(\mathbf{x}) - y(\mathbf{x}\_0) = am\_1 + bm\_2 + cm\_3 + dm\_4 \tag{50}$$

$$y(x) - y(x_0) = \frac{m_1}{6} + \frac{m_2}{3} + \frac{m_3}{3} + \frac{m_4}{6} \tag{51}$$

$$y(x) = y(x_0) + \frac{1}{6}(m_1 + 2m_2 + 2m_3 + m_4) \tag{52}$$

In general

$$\mathcal{Y}\_{n+1} = \mathcal{Y}\_n + \frac{1}{6}(m\_1 + 2m\_2 + 2m\_3 + m\_4) \tag{53}$$

Where

$$m\_1 = hf\left(\mathbf{x}\_n, \boldsymbol{y}\_n\right)$$

$$m\_2 = hf\left(\mathbf{x}\_n + \frac{1}{2}h, \boldsymbol{y}\_n + \frac{1}{2}m\_1\right)$$

$$m\_3 = hf\left(\mathbf{x}\_n + \frac{1}{2}h, \boldsymbol{y}\_n + \frac{1}{2}m\_2\right)$$

$$m\_4 = hf\left(\mathbf{x}\_n + h, \boldsymbol{y}\_n + m\_3\right)$$

This is the Runge-Kutta formula of order four for a single first order differential equation.

The formula can be generalized for system of differential equations.

Consider the system of two initial-value differential equation:


$$\frac{dx}{dt} = f(t, x, y), \quad x(0) = x_0 \tag{54}$$

$$\frac{dy}{dt} = g(t, x, y), \quad y(0) = y_0 \tag{55}$$

The Runge-Kutta formula of order four for a system of two initial-value differential equations of the form of Eqs. (54) and (55) is

$$\mathbf{x}\_{n+1} = \mathbf{x}\_n + \frac{1}{6}(k\_1 + 2k\_2 + 2k\_3 + k\_4) \tag{56}$$

$$\mathcal{Y}\_{n+1} = \mathcal{Y}\_n + \frac{1}{6}(m\_1 + 2m\_2 + 2m\_3 + m\_4) \tag{57}$$

Where

$$k\_1 = hf\left(t\_n, \mathbf{x}\_n, \mathbf{y}\_n\right)$$

$$m\_1 = hg\left(t\_n, \mathbf{x}\_n, \mathbf{y}\_n\right)$$

$$k\_2 = hf\left(t\_n + \frac{1}{2}h, \mathbf{x}\_n + \frac{1}{2}k\_1, \mathbf{y}\_n + \frac{1}{2}m\_1\right)$$

$$m\_2 = hg\left(t\_n + \frac{1}{2}h, \mathbf{x}\_n + \frac{1}{2}k\_1, \mathbf{y}\_n + \frac{1}{2}m\_1\right)$$

$$k\_3 = hf\left(t\_n + \frac{1}{2}h, \mathbf{x}\_n + \frac{1}{2}k\_2, \mathbf{y}\_n + \frac{1}{2}m\_2\right)$$

$$m\_3 = hg\left(t\_n + \frac{1}{2}h, \mathbf{x}\_n + \frac{1}{2}k\_2, \mathbf{y}\_n + \frac{1}{2}m\_2\right)$$

$$k\_4 = hf\left(t\_n + h, \mathbf{x}\_n + k\_3, \mathbf{y}\_n + m\_3\right)$$

$$m\_4 = hg\left(t\_n + h, \mathbf{x}\_n + k\_3, \mathbf{y}\_n + m\_3\right)$$

Generalizations of the formula to systems of more than two equations follow a similar pattern.

Further insights and resources on the Runge-Kutta method can be found in [8–10].

**Illustration 3**

Given the initial value problem

$$y' = x + y, \quad y(0) = 1,$$

determine $y$ for $x = 0.2$ using the Runge-Kutta method of order four (RK4) with step size $h = 0.1$.

**Solution**

The RK4 formula is:

$$\mathcal{y}\_{n+1} = \mathcal{y}\_n + \frac{1}{6}(m\_1 + 2m\_2 + 2m\_3 + m\_4),$$

Where

$$m\_1 = hf\left(\mathbf{x}\_n, \boldsymbol{\mathcal{y}}\_n\right)$$

$$m\_2 = hf\left(\mathbf{x}\_n + \frac{1}{2}h, \boldsymbol{\mathcal{y}}\_n + \frac{1}{2}m\_1\right)$$

$$m\_3 = hf\left(\mathbf{x}\_n + \frac{1}{2}h, \boldsymbol{\mathcal{y}}\_n + \frac{1}{2}m\_2\right)$$

$$m\_4 = hf\left(\mathbf{x}\_n + h, \boldsymbol{\mathcal{y}}\_n + m\_3\right)$$

$$\mathbf{x}\_n = \mathbf{x}\_0 + nh$$

$$n = \frac{\boldsymbol{\mathcal{x}}\_n - \mathbf{x}\_0}{h} = \frac{0.2 - 0}{0.1} = 2$$

Hence two iterations are needed. When $n = 0$:

$$\mathcal{Y}\_1 = \mathcal{Y}\_0 + \frac{1}{6}(m\_1 + 2m\_2 + 2m\_3 + m\_4)$$

$$m_1 = hf(x_0, y_0) = 0.1f(0, 1) = 0.1(0 + 1) = 0.1$$

$$m\_2 = hf\left(x\_0 + \frac{1}{2}h, y\_0 + \frac{1}{2}m\_1\right) = 0.1f\left(0 + \frac{1}{2}(0.1), 1 + \frac{1}{2}(0.1)\right)$$

$$= 0.1f(0.05, 1.05) = 0.1(0.05 + 1.05) = 0.1(1.1) = 0.11$$

$$m\_3 = hf\left(x\_0 + \frac{1}{2}h, y\_0 + \frac{1}{2}m\_2\right) = 0.1f\left(0 + \frac{1}{2}(0.1), 1 + \frac{1}{2}(0.11)\right)$$

$$= 0.1f(0.05, 1.055) = 0.1(0.05 + 1.055) = 0.1(1.105) = 0.1105$$

$$m\_4 = hf(x\_0 + h, y\_0 + m\_3) = 0.1f(0 + 0.1, 1 + 0.1105)$$

$$= 0.1f(0.1, 1.1105) = 0.1(0.1 + 1.1105) = 0.1(1.2105) = 0.12105$$

$$\mathcal{Y}\_1 = \mathcal{y}\_0 + \frac{1}{6}(m\_1 + 2m\_2 + 2m\_3 + m\_4)$$

$$= 1 + \frac{1}{6}(0.1 + 2(0.11) + 2(0.1105) + 0.12105) = 1.110342$$

For the second iteration, when n = 1:

$$y_2 = y_1 + \frac{1}{6}(m_1 + 2m_2 + 2m_3 + m_4)$$

Where

$$\mathbf{x}\_1 = \mathbf{x}\_0 + h = \mathbf{0} + \mathbf{0.1} = \mathbf{0.1}$$

$$\begin{aligned} m\_1 &= hf(\mathbf{x}\_1, \mathbf{y}\_1) = \mathbf{0.1}f(\mathbf{0.1}, \mathbf{1.110342}) = \mathbf{0.1}(\mathbf{0.1} + \mathbf{1.110342}) \\ &= \mathbf{0.1}(\mathbf{1.210342}) = \mathbf{0.1210342} \\ m\_2 &= hf\left(\mathbf{x}\_1 + \frac{1}{2}h, \mathbf{y}\_1 + \frac{1}{2}m\_1\right) = \mathbf{0.1}f\left(\mathbf{0.1} + \frac{1}{2}(\mathbf{0.1}), \mathbf{1.110342} + \frac{0.1210342}{2}\right) \\ &= \mathbf{0.1}f(\mathbf{0.15}, \mathbf{1.1708591}) = \mathbf{0.13208591} \\ m\_3 &= \mathbf{0.1}f(\mathbf{0.15}, \mathbf{1.176385}) = \mathbf{0.1326385} \\ m\_4 &= \mathbf{0.1}f(\mathbf{0.2}, \mathbf{1.2429805}) = \mathbf{0.144298049} \end{aligned}$$


$$\begin{aligned} y\_2 &= 1.110342 + \frac{1}{6}(0.1210342 + 2(0.13208591) + 2(0.1326385) + 0.144298049) \\ &= 1.242805512 \end{aligned}$$
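Carrying full precision through both iterations (rather than the rounded intermediates used above) gives $y(0.2) \approx 1.2428051$. A short Python sketch (the helper name `rk4` is ours) reproduces the computation:

```python
# Illustrative Python version of the RK4 recursion for Illustration 3.
def rk4(f, x0, y0, h, n):
    """Advance y' = f(x, y) from (x0, y0) through n steps of size h."""
    x, y = x0, y0
    for _ in range(n):
        m1 = h * f(x, y)
        m2 = h * f(x + h / 2, y + m1 / 2)
        m3 = h * f(x + h / 2, y + m2 / 2)
        m4 = h * f(x + h, y + m3)
        y += (m1 + 2 * m2 + 2 * m3 + m4) / 6
        x += h
    return y

y = rk4(lambda x, y: x + y, 0.0, 1.0, 0.1, 2)
print(round(y, 7))  # 1.2428051
```

For comparison, the exact solution is $y = 2e^x - x - 1$, so $y(0.2) = 2e^{0.2} - 1.2 \approx 1.2428055$; the RK4 value agrees with it to six decimal places.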

**Illustration 4**

Consider the Lorenz dynamical system given as.

$$\frac{dx}{dt} = \sigma(y(t) - x(t)), \quad x(0) = 0.1$$

$$\frac{dy}{dt} = -x(t)z(t) + r x(t) - y(t), \quad y(0) = 0.1$$

$$\frac{dz}{dt} = x(t)y(t) - b z(t), \quad z(0) = 0.1$$

where $\sigma = 10$, $r = 28$ and $b = 8/3$. With step size 0.1, determine the numerical solution at $t = 10$ using the RK4 method (**Figures 5**–**7**).

**Figure 5.**

*Graphical representation of numerical solution of Lorenz system X(t) using Runge-Kutta method of order four for 100 iterations as in Table 3.*

**Figure 6.**

*Graphical representation of numerical solution of Lorenz system Y(t) using Runge-Kutta method of order four for 100 iterations as in Table 3.*

**Figure 7.**

*Graphical representation of numerical solution of Lorenz system Z(t) using Runge-Kutta method of order four for 100 iterations as in Table 3.*


**Table 3.**

*Numerical solution of the Lorenz system using RK4 method at step size 0.1.*

#### **Solution**

The number of iterations needed is determined thus:

$$t\_n = t\_0 + nh$$

$$n = \frac{t\_n - t\_0}{h} = \frac{10 - 0}{0.1} = 100$$

The numerical solution is generated using Maple mathematical programming software. The codes and solution are presented below (**Table 3**).

#### **Algorithm/Maple programming Code of the numerical solution of Lorenz system using RK4 method.**

```
## This is the numerical solution of the Lorenz system using the RK4 method ##
sys1 := diff(x(t),t) = sigma*(y(t)-x(t)),
        diff(y(t),t) = -x(t)*z(t) + r*x(t) - y(t),
        diff(z(t),t) = x(t)*y(t) - b*z(t);

restart:
X := array(0..20000): Y := array(0..20000):
Z := array(0..20000): T := array(0..20000):
A := 0.0: h := 0.1: N := 100:
sigma := 10.0: b := (8/3): r := 28.0:
for m from 0 to N do
  T[m] := A + m*h: X[0] := 0.1: Y[0] := 0.1: Z[0] := 0.1:
od:
for m from 0 to N-1 do
  K1 := h*(sigma*(Y[m]-X[m])):
  M1 := h*(-X[m]*Z[m] + r*X[m] - Y[m]):
  N1 := h*(X[m]*Y[m] - b*Z[m]):
  K2 := h*(sigma*((Y[m]+M1/2)-(X[m]+K1/2))):
  M2 := h*(-(X[m]+K1/2)*(Z[m]+N1/2) + r*(X[m]+K1/2) - (Y[m]+M1/2)):
  N2 := h*((X[m]+K1/2)*(Y[m]+M1/2) - b*(Z[m]+N1/2)):
  K3 := h*(sigma*((Y[m]+M2/2)-(X[m]+K2/2))):
  M3 := h*(-(X[m]+K2/2)*(Z[m]+N2/2) + r*(X[m]+K2/2) - (Y[m]+M2/2)):
  N3 := h*((X[m]+K2/2)*(Y[m]+M2/2) - b*(Z[m]+N2/2)):
  K4 := h*(sigma*((Y[m]+M3)-(X[m]+K3))):
  M4 := h*(-(X[m]+K3)*(Z[m]+N3) + r*(X[m]+K3) - (Y[m]+M3)):
  N4 := h*((X[m]+K3)*(Y[m]+M3) - b*(Z[m]+N3)):
  X[m+1] := X[m] + (K1+2*K2+2*K3+K4)/6:
  Y[m+1] := Y[m] + (M1+2*M2+2*M3+M4)/6:
  Z[m+1] := Z[m] + (N1+2*N2+2*N3+N4)/6:
od:
for m from 0 by 10 to N do print(m, T[m], X[m], Y[m], Z[m]); od:
```
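An equivalent Python sketch of the classical RK4 step for the Lorenz system, mirroring the Maple loop above (the helper names `lorenz` and `rk4_step` are ours):

```python
# Illustrative Python translation of the Maple RK4 loop above.
def lorenz(state, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), -x * z + r * x - y, x * y - b * z)

def rk4_step(f, state, h):
    """One classical RK4 step for an autonomous system Y' = f(Y)."""
    k1 = f(state)
    k2 = f(tuple(s + h / 2 * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + h / 2 * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + h / 6 * (a + 2 * p + 2 * q + w)
                 for s, a, p, q, w in zip(state, k1, k2, k3, k4))

state = (0.1, 0.1, 0.1)
for _ in range(100):  # 100 steps of h = 0.1 reach t = 10
    state = rk4_step(lorenz, state, 0.1)
```

Writing the step as a reusable function makes the four-slope structure of Eqs. (56)–(57) explicit: `k1` through `k4` here bundle the Maple variables `K`, `M`, `N` for all three components at once.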

For further insight on programming with Maple see [6].

#### **4. Conclusions**

This chapter discusses the numerical solution of differential equations using the Euler and Runge-Kutta methods. The formulas were derived, and illustrations were given to aid understanding of their applications. Algorithms written in the Maple computational environment were provided for better understanding and further practice.

#### **Author details**

Victor Akinsola Computational Laboratory, Department of Mathematics, Adeleke University, Ede, Nigeria

\*Address all correspondence to: akinsolaolajide@adelekeuniversity.edu.ng; solajide123@gmail.com

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### **References**

[1] Akinsola VO, Oluyo TO. Mathematical analysis with numerical solutions of the mathematical model for the complications and control of diabetes mellitus. Journal of Statistics and Management Systems. 2019;**22**(5): 845-869. DOI: 10.1080/ 09720510.2018.1556409

[2] Bronshtein IN, Semendyayev KA, Musiol G, Mühlig H. Handbook of Mathematics. 6th ed. London: Springer; 2015

[3] Baumann G. Mathematics for Engineers IV: Numerics. München: Oldenbourg Wissenschaftsverlag; 2010

[4] Lynch S. Dynamical Systems with Applications using Maple. 2nd ed. Basel: Birkhäuser Verlag AG, Springer; 2010

[5] Shah NH. Numerical Methods with C++ Programming. New Delhi: PHI Learning; 2009. p. 243

[6] Richard H. Computer Algebra Recipes for Mathematical Physics. Boston: Birkhauser; 2005

[7] Montoya CA, Sánchez RD, Castaño LF. Approach to the numerical solution of Lorenz system on SoC FPGA. In: 2016 XXI Symposium on Signal Processing, Images and Artificial Vision (STSIVA). Colombia: Universidad Pontificia Bolivariana (UPB) Seccional Bucaramanga; 2016. pp. 1-4. DOI: 10.1109/ STSIVA.2016.7743305

[8] Enns RH, McGuire GC. Nonlinear Physics with Maple for Scientists and Engineers. 2nd ed. Massachusetts, USA: Birkhäuser Boston, Springer Science +Business Media; 2000

[9] Steele WA, Vallauri R. Computer simulations of pair dynamics in molecular fluids. Molecular Physics. 1987;**61**(4):1019-1030. DOI: 10.1080/ 00268978700101621

[10] Zou Y. Multi-Variable Calculus, A First Step. Berlin/Boston: Walter de Gruyter GmbH; 2020

#### **Chapter 3**

## An Efficient Region Merging Algorithm in Raster Space

*Borut Žalik, David Podgorelec, Niko Lukač, Krista Rizman Žalik and Domen Mongus*

#### **Abstract**

This work introduces a new region merging algorithm operating in raster space represented by a 4-connected graph. Necessary definitions are introduced first to derive a new merging function formally. An implementation is described after that, which consists of two steps: a determination of the shared trails of the input cycles, and construction of the resulting merged region. The cycles defining the regions are represented by the Freeman crack chain code in four directions. The algorithm works in linear time $O(n)$, where $n$ is the total number of graph vertices, i.e. pixels. However, the expected time complexity for one merging operation performed by the algorithm is $O(1)$.

**Keywords:** computer science, algorithms, 4-connected graph, merging function, chain code

#### **1. Introduction**

Region merging is one of the most commonly performed tasks in image processing that enables Object-Based Image Analysis (OBIA). Early approaches to OBIA performed image segmentation by the classical split-and-merge approach. Here, a meaningful partition was defined by applying a split process to define a set of elementary (homogeneous) regions that are then merged under certain conditions [1]. The latter may be based on geometric attributes like area, texture attributes like statistical moments of the intensity distribution, shape attributes like shape factors, or any of their combinations [2–4]. On the other hand, more recent approaches to OBIA focus on hierarchical image segmentations that are based on a scale-space representation, i.e. a set of image segmentations at different detail levels, in which the segmentations at finer levels are nested with respect to those at coarser levels [1, 5]. Some popular examples of such hierarchies include the max-tree [6], the *α*-tree [7, 8], and watershed hierarchies [9]. Unfortunately, hierarchical segmentation results in a huge number of nested partitions, which have to be merged efficiently. Region merging thus becomes one of the most critical parts of the segmentation process.

Region merging can be considered from different theoretical aspects. The set merging problem, which has a long history in computing, is the first of them. Hopcroft and Ullman [10] proposed two algorithms based on quadtrees, both working in $O(n \log n)$ time, where $n$ is the number of elements in the sets. In the first algorithm, the elements

can be placed only in the leaves, while, in the second algorithm, the elements can exist in any of the tree vertices. Another tree-based algorithm was proposed by Tarjan [11] with the time complexity of $O(m\,\alpha(m, n))$, where $\alpha(m, n)$ is related to the inverse Ackermann function, while $m$ and $n$ correspond to the numbers of elements in both sets. His algorithm was also used by Najman et al. [9] for hierarchical watershed cuts. Tarjan and van Leeuwen [12] performed a worst-case analysis of the algorithms and concluded that a linear-time set-merging algorithm remains an open problem. Cormen et al. [13] also considered merging of disjoint sets using either linked lists or trees. Another solution for merging regions was introduced by Horowitz and Pavlidis [14]. This method is also based on quadtrees, with, as pointed out by Brun and Domenger, considerable limitations [15]. They recognised that regions differ substantially from the classical understanding of sets. Namely, the elements of regions also have spatial attributes (i.e. raster coordinates), and, therefore, it is possible to determine the border of a region uniquely. Brun and Domenger developed a method by placing the image in the Khalimsky plane [16]. The region is considered as a set of topological maps which are mapped into the Euclidean plane. Another approach is based on the theory of geometric and solid modelling [17, 18], where merging is considered a special case of the Boolean union. The so-called regularised Boolean operations were introduced to preserve the dimensional homogeneity of the resulting object [19]. The solution is typically found in two steps: first, the intersection points between the involved geometric objects are determined, and second, the resulting shape is determined by the so-called walkabout.
In 2D, the first part is solved in the expected time $O((n + m)\log(n + m) + I)$, where $n$ and $m$ are the numbers of vertices determining the input polygons, and $I$ is the number of actual intersections [20]. If a proper data structure is used, the second step is realised in linear time. Such data structures have been proposed by Greiner and Hormann [21], Vatti [22], and Liu et al. [23]. Rivero and Feito [24] proposed an approach for Boolean operations on polygons based on the theory of simplices. Their idea was later improved in ref. [25]. Very recently, an algorithm for Boolean operations on rasterised shapes was presented in ref. [26]. A space-filling curve was applied for the determination of the intersected pixels, while the walkabout was performed with a Greiner–Hormann-like data structure. The proposed geometric approaches, however, cannot be applied in OBIA, as they are based on the theory of regularised Boolean operations, which strictly preserves the dimensional homogeneity of the resulting objects. Consequently, this approach cannot handle all possible cases which may appear during region growth.

In this chapter, a new solution is proposed for a general region merging problem suitable for hierarchical OBIA. The main contributions are a theoretical derivation of the merging function in the raster space, represented by a 4-connected graph, and a proposal of an efficient implementation based on chain codes that ensure compact region representation.

The chapter is structured in five sections. Section 2 introduces the problem and formalises it. Brief implementation hints are given in Section 3. Section 4 presents empirical results, while Section 5 concludes the chapter.

#### **2. Definitions**

The key terms, needed to present the problem and to derive its formal solution, are defined in this section. Among other concepts, the region, raster space, and region merging are defined, which appeared in the title of this chapter.

**Directed graph.** $G = (V, E)$, defined by a vertex set $V = \{v_i\}$ and an edge set $E = \{e_{i,j}\}$, is a directed graph if $E$ is given by ordered pairs (directed edges) of vertices $e_{i,j} = (v_i, v_j)$.

**Raster space.** Let $G = (V, E)$ be a directed graph. If $V$ is determined by regularly spaced vertices $v_i = (x_i, y_i)$ with integer coordinates $x_i \in [0, X]$ and $y_i \in [0, Y]$, and $E$ imposes 4-connectivity on them, then $G$ defines the raster space. In other words, for each pair of adjacent vertices $v_i, v_j$ linked by edge $e_{i,j} \in E$, there exists either relation $e_{i,j} \rightarrow (x_j, y_j) = (x_i \pm 1, y_i)$ or $e_{i,j} \rightarrow (x_j, y_j) = (x_i, y_i \pm 1)$. Each vertex can also be linked to itself, thus $\forall v_i \in V \rightarrow e_{i,i} \in E$.

Intuitively, a region is a group of connected raster cells (grid cells or pixels). It may be represented either as a collection of the pixels themselves or by its boundary. This second possibility is used in this work. It is based on the concepts of trail and cycle which must, therefore, be introduced first.

**Trail.** Trail $t_{i_0,i_L} = \langle v_{i_0}, v_{i_1}, \ldots, v_{i_L}\rangle$ in $G$ with length $L$ is a sequence of adjacent vertices where for each pair $v_{i_l}, v_{i_{l+1}} \in t_{i_0,i_L} \rightarrow e_{i_l,i_{l+1}} \in E$. Trails $t_{i_0,i_L}$ and $t_{j_0,j_K}$ are connected if they share at least one subtrail, i.e. if the set of subtrails $T = t_{i_0,i_L} \cap t_{j_0,j_K} \neq \emptyset$.

**Figure 1** shows two cases of connected trails. Trails $t_{0,6}$ and $t_{7,8}$ are connected through subtrail $t_{4,4} = \langle v_4, v_4\rangle$ in **Figure 1a**, while trails $t_{0,6}$ and $t_{9,7}$ in **Figure 1b** share two subtrails, namely $T = t_{0,6} \cap t_{9,7} = \{t_{1,1}, t_{3,5}\}$, where $t_{1,1} = \langle v_1, v_1\rangle$ and $t_{3,5} = \langle v_3, v_4, v_5\rangle$. The shared subtrails will be hereinafter referred to as the intersection trails.

Trail $t_{i_0,i_L}$ can be split into two trails $t_{i_0,i_l}$ and $t_{i_{l+1},i_L}$ at any $v_{i_l} \in t_{i_0,i_L}$. $t_{i_0,i_L}$ is, therefore, a concatenation of $t_{i_0,i_l}$ and $t_{i_{l+1},i_L}$, as formally shown in Eq. (1):

$$t_{i_0,i_L} = t_{i_0,i_l} \frown t_{i_{l+1},i_L}. \tag{1}$$

**Cycle.** Trail $t_{i_0,i_{L+1}} = \langle v_{i_0}, v_{i_1}, \ldots, v_{i_{L+1}}\rangle$ is cycle $c_{i_0,i_L} = \langle v_{i_0}, v_{i_1}, \ldots, v_{i_L}\rangle$ if $i_0 = i_{L+1}$. As each vertex can be linked to itself, the smallest cycle $c_{i_0,i_0} = \langle v_{i_0}\rangle$ is defined by trail $t_{i_0,i_0} = \langle v_{i_0}, v_{i_0}\rangle$. Contrary to the traditional definition of a cycle, we do not require that all vertices, except the end vertices, are distinct in $c_{i_0,i_L}$. Any cycle can, because of this, be composed of more than one cycle, where intermediate vertices are contained more than once.

**Figure 1.**

*Connected trails: (a) $t_{0,6} = \langle v_0, v_1, v_2, v_3, v_4, v_5, v_6\rangle$, $t_{7,8} = \langle v_7, v_4, v_8\rangle$; (b) $t_{0,6} = \langle v_0, v_1, v_2, v_3, v_4, v_5, v_6\rangle$, $t_{9,7} = \langle v_9, v_1, v_8, v_3, v_4, v_5, v_7\rangle$.*

**Figure 2.**

*Cycles: (a) $c_{0,7} = \langle v_0, v_1, v_2, v_3, v_4, v_5, v_6, v_7\rangle$; (b) $c_{0,10} = \langle v_0, v_1, v_2, v_3, v_4, v_5, v_6, v_7, v_4, v_3, v_8, v_9, v_2, v_{10}\rangle$.*

**Figure 2** shows examples of two cycles. The one in **Figure 2a** contains each vertex exactly once, while vertices $v_2$, $v_3$, and $v_4$ are contained twice in the cycle in **Figure 2b**. Note that any cycle can be rotated by any number of vertices $0 < l \le L$, i.e. $c_{i_0,i_L} = \langle v_{i_0}, v_{i_1}, \ldots, v_{i_L}\rangle = \langle v_{i_l}, v_{i_{l+1}}, \ldots, v_{i_L}, v_{i_0}, v_{i_1}, \ldots, v_{i_{l-1}}\rangle = c_{i_l,i_{l-1}}$. Its decomposition can then be described as the concatenation in Eq. (2):

$$c_{i_0,i_L} = t_{i_l,i_k} \frown t_{i_{k+1},i_{l-1}}. \tag{2}$$

As shown in **Figure 3**, any subtrail $t_{i_l,i_k} \subseteq c_{i_0,i_L}$, $0 \le l, k \le L$, can be removed from cycle $c_{i_0,i_L}$ according to Eq. (3). The obtained result is also a subtrail.

$$t_{i_{k+1},i_{l-1}} = c_{i_0,i_L} \setminus t_{i_l,i_k}. \tag{3}$$
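Since a cycle may be rotated arbitrarily, the removal in Eq. (3) reduces to modular index arithmetic on the vertex list. A minimal Python sketch (the function name is ours) illustrates this, with the subtrail given by its inclusive positions $l$ and $k$:

```python
def remove_subtrail(cycle, l, k):
    """Remove subtrail t_{i_l, i_k} (positions l..k inclusive, possibly
    wrapping around) from a cycle given as a list of vertices; the
    remainder t_{i_{k+1}, i_{l-1}} is itself a trail, as in Eq. (3)."""
    n = len(cycle)
    # walk forward from position k+1 up to position l-1, modulo the length
    return [cycle[(k + 1 + s) % n] for s in range((l - k - 1) % n)]

cycle = ["v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7"]   # c_{0,7}
# removing t_{2,4} leaves the trail t_{5,1} = <v5, v6, v7, v0, v1>
assert remove_subtrail(cycle, 2, 4) == ["v5", "v6", "v7", "v0", "v1"]
```

The wrapping case works the same way: removing the subtrail at positions 6..1 leaves the vertices at positions 2..5.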

Let $c_{i_0,i_3} = \langle v_{i_0}, v_{i_1}, v_{i_2}, v_{i_3}\rangle$ be an elementary clockwise oriented cycle, where $v_{i_0} = (x_i, y_i)$, $v_{i_1} = (x_i, y_i + 1)$, $v_{i_2} = (x_i + 1, y_i + 1)$, and $v_{i_3} = (x_i + 1, y_i)$. This elementary cycle defines a grid cell, with its interior on the right side of each edge $e_{i_l,i_{l+1}} = (v_{i_l}, v_{i_{l+1}})$, $0 \le l \le 3$, as shown in **Figure 4**.

**Region.** The region $R$ is either a grid cell defined by an elementary clockwise oriented cycle, or a group of grid cells bounded by the resulting cycle(s) of the region merging function defined below. For simplicity, a region will be equated with its boundary in the continuation, i.e. $R$ will be treated as a cycle or a set of cycles.

**Figure 5** shows the result of merging two elementary cycles $c_{i_0,i_L}$ and $c_{j_0,j_K}$ ($L = K = 3$), which share either an edge (**Figure 5a**) or a vertex (**Figure 5b**). It indicates that the resulting merged region is defined by a concatenation of both

**Figure 3.**

*Removing trail $t_{i_l,i_k}$ (on the right) from cycle $c_{i_0,i_L}$ results in subtrail $t_{i_{k+1},i_{l-1}}$ (on the left).*

*An Efficient Region Merging Algorithm in Raster Space DOI: http://dx.doi.org/10.5772/intechopen.102679*

#### **Figure 5.**

*Merging two elementary cycles with a shared edge (a) and vertex (b); the resulting cycles are in violet.*

elementary cycles without their intersection $c_{i_0,i_L} \cap c_{j_0,j_K}$. Note, however, that $c_{i_0,i_L}$ and $c_{j_0,j_K}$ are both oriented in the clockwise direction, and, therefore, the orientation of the shared edges is opposite. To make them equal and, thus, to make their intersection non-empty, the orientation-changing operation is defined here, such that $\overline{c}_{i_0,i_L} = \langle v_{i_L}, v_{i_{L-1}}, \ldots, v_{i_0}\rangle$. **Figure 6** shows an example of merging two non-elementary cycles which still share a single, but longer, intersection trail. A similar conclusion as above may be made. The region merging function may be formally defined now.

**Region merging function.** Two cycles $c_{i_0,i_L}$ and $c_{j_0,j_K}$, which share a single intersection trail $t_{i_l,i_k}$, can be merged into a region $R$ by a merging function $\mathcal{M}$ defined by Eq. (4):

$$\begin{aligned} \mathcal{M}\big(c_{i_0,i_L}, c_{j_0,j_K}, t_{i_l,i_k}\big) &= \big(c_{i_0,i_L} \setminus \big(c_{i_0,i_L} \cap \overline{c}_{j_0,j_K}\big)\big) \frown \langle v_{i_l}\rangle \frown \big(\overline{c}_{j_0,j_K} \setminus \big(\overline{c}_{j_0,j_K} \cap c_{i_0,i_L}\big)\big) \frown \langle v_{i_k}\rangle \\ &= \big(c_{i_0,i_L} \setminus t_{i_l,i_k}\big) \frown \langle v_{i_l}\rangle \frown \big(\overline{c}_{j_0,j_K} \setminus t_{j_m,j_n}\big) \frown \langle v_{i_k}\rangle \\ &= t_{i_{k+1},i_{l-1}} \frown \langle v_{i_l}\rangle \frown t_{j_{n+1},j_{m-1}} \frown \langle v_{i_k}\rangle \\ &= t_{i_{k+1},i_{l-1}} \frown \langle v_{j_m}\rangle \frown t_{j_{n+1},j_{m-1}} \frown \langle v_{j_n}\rangle. \end{aligned} \tag{4}$$

In a general case, where the intersection of the input cycles consists of more than one trail, the merged region $R$ is defined by Eq. (5):

$$R = \bigcap_{t_{i_l,i_k}^{(h)} \in T_i} \mathcal{M}\Big(c_{i_0,i_L}, c_{j_0,j_K}, t_{i_l,i_k}^{(h)}\Big). \tag{5}$$

Eq. (4) is thus applied when $|T_i| = |T_j| = 1$, where $T_i = \{t_{i_l,i_k}\} = c_{i_0,i_L} \cap \overline{c}_{j_0,j_K}$ and $T_j = \{t_{j_m,j_n}\} = \overline{c}_{j_0,j_K} \cap c_{i_0,i_L}$. On the other hand, $|T_i| = |T_j| > 1$ implies utilisation of

Eq. (5). Each intersection trail then results in a new cycle. $|T_i|$ cycles are, therefore, constructed, and the resulting region is described by the intersection of these cycles. Note that $h$, such that $0 \le h < |T_i|$, is the index of the intersection trail in Eq. (5). Obviously, Eq. (5) is valid also when $|T_i| = 1$.

**Figure 7.**

*Applying Eq. (5) to merge two cycles whose intersection consists of two intersection trails, i.e. $|T_i| = 2$.*

**Figure 7** shows an illustrative example. The set of intersection trails is $T_i = \{t_{i_5,i_6}, t_{i_8,i_9}\}$. Applying Eq. (4) on the intersection trail $t_{i_5,i_6} \in T_i$ results in **Figure 7b**. Similarly, using Eq. (4) on the second intersection trail $t_{i_8,i_9} \in T_i$ gives **Figure 7c**. The final result is then obtained as an intersection (Eq. (5)) between the cycles from **Figure 7b** and **c**. As seen in **Figure 7d**, the resulting region $R$ consists of two cycles, i.e. exactly $|T_i|$ of them.
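As an illustration only (not the authors' chain-code implementation, which is described in Section 3), the shared-edge case of the merging function can be sketched in Python: shared edges of two clockwise cycles appear with opposite orientations and cancel, and the surviving edges are stitched back into one closed walk. All names are ours, and the dictionary-based stitching assumes a single intersection trail of at least one edge (the shared-vertex case of Figure 5b would need a multimap):

```python
def directed_edges(cycle):
    """Closed walk given as a vertex list -> list of directed edges."""
    return [(cycle[i], cycle[(i + 1) % len(cycle)]) for i in range(len(cycle))]

def merge_cycles(ca, cb):
    """Merge two clockwise cycles sharing a single intersection trail of
    at least one edge, in the spirit of Eq. (4)."""
    ea, eb = directed_edges(ca), directed_edges(cb)
    shared = set(ea) & {(v, u) for (u, v) in eb}      # cancelling edge pairs
    nxt = {u: v for (u, v) in ea + eb
           if (u, v) not in shared and (v, u) not in shared}
    start = next(iter(nxt))
    merged, v = [start], nxt[start]
    while v != start:                                  # walk the survivors
        merged.append(v)
        v = nxt[v]
    return merged

# two clockwise unit cells sharing the edge between (1, 0) and (1, 1)
cell_a = [(0, 0), (0, 1), (1, 1), (1, 0)]
cell_b = [(1, 0), (1, 1), (2, 1), (2, 0)]
merged = merge_cycles(cell_a, cell_b)   # clockwise 1x2 rectangle, 6 vertices
```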

#### **3. Implementation**

The concept of chain codes is used to represent regions $R \subseteq G$ in the presented method. The chain code, introduced by Freeman [27], consists of a few simple commands by which navigation through the edges of $G$ is made possible. Freeman proposed two chain codes, known as the Freeman chain code in eight (*F*8) and four (*F*4) directions. Other chain codes were proposed by Bribiesca (Vertex Chain Code, *VCC*) [28], Sánchez-Cruz and Rodríguez-Dagnino (Three-Orthogonal chain code, 3*OT*) [29], Žalik et al. (Unsigned Manhattan Chain Code, *UMCC*) [30], and Dunkelberger and Mitchell (Mid-crack Chain Code) [31]. In general, there are two types of chain codes: those operating on raster pixels and those working with raster edges. The latter are known as crack chain codes [32, 33]. *F*4 is the only chain code which can be used in both contexts, and its crack interpretation is used in this algorithm.

The *F*4 alphabet consists of four commands/symbols $\sigma_i \in \Sigma_{F4}$, $\Sigma_{F4} = \{0, 1, 2, 3\}$, shown in **Figure 8**. Let $\langle\sigma_i\rangle$ be a sequence of *F*4 commands. To embed the chain code in $G$, the position of the chain code's starting vertex $v_0$ is needed, while the positions of the remaining vertices are determined from the *F*4 commands according to Eq. (6):

$$v_{i+1} = \begin{cases} (x_i + 1, y_i), & \text{if } \sigma_i = 0; \\ (x_i, y_i - 1), & \text{if } \sigma_i = 1; \\ (x_i - 1, y_i), & \text{if } \sigma_i = 2; \\ (x_i, y_i + 1), & \text{if } \sigma_i = 3. \end{cases} \tag{6}$$

**Figure 8b** shows the elementary cycle $c_{i_0,i_3}$ determined as $v_{i_0}\langle 1, 0, 3, 2\rangle$, where $v_{i_0} = (x_{i_0}, y_{i_0})$ are the coordinates of the cycle's starting vertex.
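Eq. (6) can be sketched as a small decoder in Python (the names `MOVES` and `decode_f4` are ours); feeding it the elementary cycle $\langle 1, 0, 3, 2\rangle$ returns to the starting vertex:

```python
# displacement per F4 symbol, following Eq. (6)
MOVES = {0: (1, 0), 1: (0, -1), 2: (-1, 0), 3: (0, 1)}

def decode_f4(start, codes):
    """Recover the vertex sequence from a starting vertex and F4 symbols."""
    x, y = start
    vertices = [(x, y)]
    for s in codes:
        dx, dy = MOVES[s]
        x, y = x + dx, y + dy
        vertices.append((x, y))
    return vertices
```

For example, `decode_f4((0, 0), [1, 0, 3, 2])` visits `(0, 0)`, `(0, -1)`, `(1, -1)`, `(1, 0)` and closes back at `(0, 0)`.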

**Figure 8.** *F*4 *chain code symbols (a); the elementary cycle described by F*4 *chain code symbols (b).*

Any region in $G$ can be represented in this way. For example, regions containing more than one cycle are shown in **Figure 9**, where, for better presentation, the inner cells of the region are shadowed. According to the definitions from Section 2, a vertex $v_i \in R$ can be part of two cycles at the same time, or $v_i$ can be passed twice within a single cycle. In the continuation, the cycle corresponding to the outer border of $R$ is considered a *loop*, while cycles representing holes are named *rings* [19]. The orientation of the loop is clockwise, while that of the rings is counter-clockwise (**Figure 9**).

A data structure for representing $R$ is shown schematically in **Figure 10**. It consists of an array of starting points and an array of *F*4 chain code sequences. The loop is always located at index 0, and $k$ rings, $k \ge 0$, follow in an arbitrary order. The algorithm, which implements Eq. (5), consists of two main steps:


#### **Figure 9.**

*Region with two rings: The rings' vertices (green, black) can be shared with the loop (red); the vertices within the loop can be used twice.*

#### **Figure 10.**

*Data structure of region R: Array of starting points (left) of individual cycles and the corresponding sequences of F*4 *chain codes (right).*

#### **3.1 Determining the intersection trails**

Determination of intersection points between edges of regions can be related to the problem of finding intersections between polygon edges. As the naive implementation of the latter works with $O(n^2)$ time complexity, various approaches were suggested to reduce it [13, 34–36]. The presented solution exploits the fact that regions $R_i$ and $R_j$ are embedded into the common directed graph $G$, i.e. $R_i \in G \wedge R_j \in G$. The following data are associated with each vertex $v_i \in G$:


Let us consider the example in **Figure 11**, where the loop's edges of $R_i$ are plotted in red, the edges of its ring in black, and the edges of region $R_j$ in cyan. The content of the data structures for both regions is given in **Table 1**. **Table 2** shows the information stored at some characteristic vertices in $G$. Vertex $v_a$, for example, belongs to $R_i$; its pointer $P_{i1}$ points to index 1 in the array of *F*4 chain codes, and the vertex belongs to the loop ($L_{Ri2} = 0$). Vertex $v_f$ is met twice by the edges of the $R_i$ loop and, therefore, two pointers point to the 3rd and 15th positions. Vertex $v_h$ is the most interesting, as three cycles meet in it. Pointer $P_{i1}$ points to the 7th $R_i$ loop

**Figure 11.**

*Regions Ri (red and black) and Rj (cyan) embedded into G, and some characteristic vertices considered in Table 2.*


#### **Table 1.**

*Data structures for regions Ri and Rj from Figure 11.*


#### **Table 2.**

*The content of G at specific vertices marked in Figure 11.*

position, and $P_{i2}$ points to the 3rd position of the first $R_i$ ring. The loop of $R_j$ is accessed by the chain code command stored at position 9 in the *F*4 array.

Having marked the vertices in $G$ properly, it is easy to determine the intersection trails. The region with the smaller number of edges is found (let us suppose it is $R_j$), and all its vertices are visited. Sequences of edges marked with pointers of both regions are identified as parts of the intersection trails.
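Under the simplifying assumption that region borders are available as lists of directed edges (a flattening of the vertex-marking scheme above; all names are ours), the core lookup can be sketched in Python. Hashing the edges of one region makes each reverse-edge test $O(1)$ expected, which underlies the linear-time bound of Section 4:

```python
def intersection_edges(edges_small, edges_other):
    """Return the edges of the smaller region whose reversal occurs in the
    other region; maximal runs of such edges form the intersection trails."""
    reversed_other = {(v, u) for (u, v) in edges_other}   # O(k) to build
    return [e for e in edges_small if e in reversed_other]  # O(1) per test

# two clockwise unit cells sharing the edge between (1, 0) and (1, 1)
cell_a = [((0, 0), (0, 1)), ((0, 1), (1, 1)), ((1, 1), (1, 0)), ((1, 0), (0, 0))]
cell_b = [((1, 0), (1, 1)), ((1, 1), (2, 1)), ((2, 1), (2, 0)), ((2, 0), (1, 0))]
```

Here `intersection_edges(cell_b, cell_a)` flags exactly the shared edge, oriented as it appears in `cell_b`.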

#### **3.2 Performing the walkabout**

Those trails which were not labelled as intersection trails are united into the new region by the algorithm, which consists of the following steps:

1. Mark all edges from intersection trails as *visited* and the remaining edges as *not visited*.

2. Find an arbitrary non-visited edge $e \in R_j$. If no such edge exists, jump to step 9.


These steps are highlighted in Algorithm 1. The decision whether a cycle defines a loop or a ring depends on its orientation. It can be determined by Eq. (7), where $Q = \langle\sigma_i\rangle$ (*F*4 chain code commands $\sigma_i$ are treated as integers for this purpose).

$$o = \sum_{i=0}^{|Q|-1} \begin{cases} -1, & \text{if } \sigma_i = 0 \wedge \sigma_{(i+1) \bmod |Q|} = |\Sigma_{F4}| - 1; \\ 1, & \text{if } \sigma_i = |\Sigma_{F4}| - 1 \wedge \sigma_{(i+1) \bmod |Q|} = 0; \\ \sigma_{(i+1) \bmod |Q|} - \sigma_i, & \text{otherwise}. \end{cases} \tag{7}$$

The equation evaluates each right turn with $-1$ and each left turn with $1$. Clockwise oriented cycles result in $o = -4$, while counter-clockwise cycles achieve $o = 4$.
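Eq. (7) can be sketched in Python as follows (the function name is ours); the elementary clockwise cycle $\langle 1, 0, 3, 2\rangle$ from above yields $-4$:

```python
def orientation(codes, n_symbols=4):
    """Sum the signed turns between consecutive F4 symbols, as in Eq. (7):
    a clockwise cycle sums to -4, a counter-clockwise one to +4."""
    o = 0
    for i, s in enumerate(codes):
        t = codes[(i + 1) % len(codes)]   # cyclic successor symbol
        if s == 0 and t == n_symbols - 1:
            o -= 1                         # right turn across the 0/3 boundary
        elif s == n_symbols - 1 and t == 0:
            o += 1                         # left turn across the 3/0 boundary
        else:
            o += t - s
    return o
```

For example, `orientation([1, 0, 3, 2])` gives `-4` (a loop) and `orientation([2, 3, 0, 1])` gives `4` (a ring).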



#### **4. Analysis of the algorithm**

#### **4.1 Time and space complexity estimation**

The proposed algorithm consists of three parts: finding the intersection trails, performing the walkabout, and determining the loop and rings.

Let the 4-connected graph $G$ consist of $n$ vertices. Let $k_i$ and $k_j$ represent the numbers of *F*4 edges defining the cycles of $R_i$ and $R_j$, respectively. $k_i = k_j = k$ may be assumed without loss of generality. At first, both regions are embedded into $G$. This is done in $T_e(k) = T(k_i) + T(k_j) = 2k$ time. One of the regions is walked about and the

edges of the intersection trails are determined in time $T_w(k) = k$. The first part of the algorithm is, therefore, executed in $T_1(k) = T_e(k) + T_w(k) = 3k$ time.

During the walkabout, all edges of both regions not being part of the intersection trails are visited exactly once. In the worst case, all $2k$ edges must be visited, in time $T_2(k) = 2k$.

The orientation of the cycles of the merged region is determined in the last step. This task is also terminated in $T_3(k) = 2k$ time, as the merged region cannot have more than $2k$ edges.

The proposed merging algorithm is, therefore, realised in $T(k) = T_1(k) + T_2(k) + T_3(k) = 3k + 2k + 2k = 7k = O(k)$ time. $k$ cannot be greater than $n$, and, therefore, one merging operation terminates in the worst case in $O(n)$ time. However, in the majority of cases, $k \ll n$. In such an expected case, one merging operation terminates in constant time $O(1)$.

The algorithm needs memory for $G$ with $n$ vertices, i.e. $S(G) = n$. In addition, two regions need to be stored. In the worst case, both regions require additional $S(R) = n$ memory space. The algorithm, therefore, works in $S(n) = 2n = O(n)$ space.

#### **4.2 Experiment**

The standard benchmark images shown in **Figure 12** were used in the experiment at different resolutions. The criterion for merging two neighbouring regions was colour similarity. By iteratively doubling the size of the image in both directions, the number of pixels $n$ increases by a factor of 4 per iteration, and the number of required merging operations follows this growth; actually, the number of merging operations is $n - 1$.

**Table 3** shows the CPU time spent while performing merging from single pixels up to the entire image. A personal computer with a 3.5 GHz Intel® Core™ i5-6600K processor and 32 GB of RAM was used in the experiment. The program was implemented in C++ and compiled with Visual Studio 2019 under the Windows 10 operating system. As can be seen, the actual CPU time spent depends on the colour characteristics of the images. The image Lenna has large parts of very similar colours; the regions, therefore, grow rapidly, which is reflected in the shortest CPU time. The image Peppers shows a similar characteristic. On the other hand, the image Baboon consists of very small homogeneous regions, which is reflected in a longer CPU time.

**Figure 12.** *Images used in the experiments: (a) Lenna, (b) peppers, and (c) baboon.*


**Table 3.**

*Spent CPU time for images Lenna (L), peppers (P), and baboon (B) at different resolutions.*


#### **Table 4.**

*Spent CPU time with the referenced approach.*

We implemented the set-based version of the merging operation for comparison. In this case, a region is represented by the C++ Standard Library container *std::unordered\_map*. The obtained results are shown in **Table 4**. As can be seen, the set-based approach performs considerably slower. In addition, it obviously consumes more memory, as the images with the highest resolutions could not be processed any more.

#### **5. Conclusion**

A new region merging algorithm, suitable for hierarchical object-based image analysis, is proposed in this chapter. The raster space is represented by a 4-connected graph, and a merging function is derived formally upon it. The implementation follows the theoretical investigation strictly. The edges forming the border of the region embedded in the 4-connected graph are represented by the Freeman crack chain code in four directions. The implementation works in two main steps: a determination of the common vertices and edges of the regions being merged, and a walkabout, which realises the theoretically derived merging function. A classification of the obtained region's edges to those representing the holes and those defining the outer border, may be done at the end.

The algorithm's worst-case time complexity is $O(n)$ for one merging operation, where $n$ is the number of graph vertices. However, as the number of edges defining the two regions being merged is typically much smaller than $n$, the expected time complexity is actually independent of $n$, i.e. the expected time complexity of the proposed algorithm is $O(1)$.

In addition to the methodical implementation of the region merging procedure, the proposed chain code-based approach enables efficient extraction of various essential shape descriptors [3]. The approaches for extracting these descriptors can be divided roughly into the region- and contour-based approaches, and the latter are known as


being computationally demanding for traditional hierarchical segmentation and region growing. Namely, they require the boundary to be extracted after each region merging operation. Because of this, they are rarely used, e.g. as stopping criteria during region growing, or as thresholds for hierarchical cuts. On the other hand, chain codes by themselves allow for an efficient description of shapes.

#### **Acknowledgements**

The authors acknowledge the financial support from the Slovenian Research Agency (Research Core Funding No. P2-0041 and Project No. N2-0181), and the Company IGEA, d.o.o., for co-financing this research.

#### **Thanks**

Thanks to Anže Ferčec, who implemented the referenced merging algorithm.

#### **Nomenclature**


#### **Author details**

Borut Žalik\*†, David Podgorelec†, Niko Lukač†, Krista Rizman Žalik† and Domen Mongus† Faculty of Electrical Engineering and Computer Science, University of Maribor, Maribor, Slovenia

\*Address all correspondence to: borut.zalik@um.si

† These authors contributed equally.

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


#### **References**

[1] Guimarães SJF, Cousty J, Kenmochi Y, Najman L. A Hierarchical Image segmentation algorithm based on an observation scale. In: Gimel'farb G, Hancock E, Imiya A, Kuijper A, Kudo M, Omachi S, Windeatt T, Yamada K, editors. Structural, Syntactic, and Statistical Pattern Recognition. Lecture Notes in Computer Science;7626. Berlin, Heidelberg: Springer; 2012. pp. 116-125. DOI: 10.1007/978-3-642-34166-3\_13

[2] Materka A, Strzelecki M. Texture analysis methods – a review. Lodz, Poland: COST B11 report, Institute of Electronics, Technical University of Lodz; 1998. Available from: https:// www.researchgate.net/publication/ 249723259\_Texture\_Analysis\_Methods\_- \_A\_Review [Accessed: 2021-12-22]

[3] Zhang D, Lu G. Review of shape representation and description techniques. Pattern Recognition. 2004; **37**(1):1-19. DOI: 10.1016/j. patcog.2003.07.008

[4] Kurnianggoro L, Jo KH. A survey of 2D shape representation: Methods, evaluations, and future research directions. Neurocomputing. 2018;**300**: 1-16. DOI: 10.1016/j.neucom.2018.02.093

[5] Xu Y, Carlinet E, Géraud T, Najman L. Efficient computation of attributes and saliency maps on treebased image representations. In: Benediktsson J, Chanussot J, Najman L, Talbot H, editors. Mathematical Morphology and Its Applications to Signal and Image Processing. Lecture Notes in Computer Science; 9082. Berlin, Heidelberg: Springer; 2015. pp. 693-704. DOI: 10.1007/978-3-319-18720-4\_58

[6] Salembier P, Oliveras A, Garrido L. Antiextensive connected operators for image and sequence processing. IEEE Transactions on Image Processing. 1998;**7**(4):555-570. DOI: 10.1109/83.663500

[7] Ouzounis GK, Soille P. Pattern spectra from partition pyramids and hierarchies. In: Soille P, Pesaresi M, Ouzounis GK, editors. Mathematical Morphology and Its Applications to Image and Signal Processing. Lecture Notes in Computer Science; 6671. Berlin, Heidelberg: Springer; 2011. pp. 108-119. DOI: 10.1007/978-3-642-21569-8\_10

[8] Ouzounis GK, Soille P. The alpha-tree algorithm: Theory, algorithms and applications. Luxembourg: Technical Report JRC74511, European Commission, Joint Research Centre; 2012. DOI: 10.2788/48773

[9] Najman L, Cousty J, Perret B. Playing with Kruskal: Algorithms for morphological trees in edge-weighted graphs. In: Hendriks CLL, Borgefors G, Strand R, editors. Mathematical Morphology and Its Applications to Signal and Image Processing. Lecture Notes in Computer Science; 7883. Berlin, Heidelberg: Springer; 2013. pp. 135-146. DOI: 10.1007/978-3- 642-38294-9\_12

[10] Hopcroft JE, Ullman JD. Set merging algorithms. SIAM Journal on Computing. 1973;**2**(4):294-303. DOI: 10.1137/0202024

[11] Tarjan RE. Efficiency of a good but not linear set union algorithm. Journal of the ACM. 1975;**22**(2):215-225. DOI: 10.1145/321879.321884

[12] Tarjan RE, van Leeuwen J. Worst-case analysis of set union algorithms. Journal of the ACM. 1984;**31**(2):245-281. DOI: 10.1145/62.2160

[13] Cormen TH, Leiserson CE, Rivest RL, Stein C. Introduction to Algorithms. 3rd ed. Cambridge, MA: MIT Press; 2009. p. 1292

[14] Horowitz SL, Pavlidis T. Picture segmentation by a tree traversal algorithm. Journal of the ACM. 1976;**23**(2):368-388. DOI: 10.1145/321941.321956

[15] Brun L, Domenger JP. A new split and merge algorithm with topological maps. In: Proceedings of the 5th International Conference in Central Europe on Computer Graphics and Visualization (WSCG'97); 10–14 February 1997; Plzen, Czech Republic: University of West Bohemia. pp. 21-31. Available from: https://www.researchgate.net/publication/2658919\_A\_New\_Split\_and\_Merge\_Algorithm\_with\_Topological\_Maps [Accessed: 2021-12-22]

[16] Khalimsky E, Kopperman R, Meyer PR. Boundaries in digital planes. Journal of Applied Mathematics and Stochastic Analysis. 1990;**3**(1):27-55. DOI: 10.1155/S1048953390000041

[17] Hoffmann CM. Geometric and solid modeling: An introduction. San Francisco: Morgan Kaufmann; 1989. 338 p. Available from: https://dl.acm.org/ doi/book/10.5555/74803 [Accessed: 2021-12-23]

[18] Mäntylä M. An introduction to solid modeling. New York: Computer Science Press; 1987. 401 p. Available from: https://dl.acm.org/doi/book/10.5555/ 39278 [Accessed: 2021-12-23]

[19] Mortenson ME. Geometric Modeling. 2nd ed. New York: Wiley; 1997. 523 p. Available from: https://dl. acm.org/doi/book/10.5555/248381 [Accessed: 2021-12-23]

[20] Mairson HG, Stolfi J. Reporting and counting intersections between two sets of line segments. In: Earnshaw R, editor. Theoretical Foundations of Computer Graphics and CAD. NATO ASI Series; F40. Berlin, Heidelberg: Springer; 1988. pp. 307-325. DOI: 10.1007/978-3-642- 83539-1\_11

[21] Greiner G, Hormann K. Efficient clipping of arbitrary polygons. ACM Transaction on Graphics. 1998;**17**(2):71-83. DOI: 10.1145/274363. 274364

[22] Vatti BR. A generic solution to polygon clipping. Communications of ACM. 1992;**35**(7):56-63. DOI: 10.1145/ 129902.129906

[23] Liu YK, Wang XQ, Bao SZ, Gomboši M, Žalik B. An algorithm for polygon clipping, and for determining polygon intersections and unions. Computers & Geosciences. 2007;**33**(5): 589-598. DOI: 10.1016/j.cageo.2006. 08.008

[24] Rivero M, Feito FR. Boolean operations on general planar polygons. Computers & Graphics. 2000;**24**(6): 881-896. DOI: 10.1016/S0097-8493(00) 00090

[25] Peng Y, Yong JH, Dong WM, Zhang H, Sun JG. A new algorithm for Boolean operations on general polygons. Computers & Graphics. 2005; **29**(1):57-70. DOI: 10.1016/j.cag.2004. 11.001

[26] Žalik B, Mongus D, Rizman Žalik K, Lukač N. Boolean operations on rasterized shapes represented by chain codes using space filling curves. Journal of Visual Communication and Image Representation. 2017;**49**:420-432. DOI: 10.1016/j.jvcir.2017.10.003


[27] Freeman H. On the encoding of arbitrary geometric configurations. IRE Transactions on Electronic Computers. 1961;**EC10**(2):260-268. DOI: 10.1109/ TEC.1961.5219197

[28] Bribiesca E. A new chain code. Pattern Recognition. 1999;**32**(2): 235-251. DOI: 10.1016/S0031-3203(98) 00132-0

[29] Sánchez-Cruz H, Rodríguez-Dagnino RM. Compressing bi-level images by means of a 3-bit chain code. Optical Engineering. 2005;**44**(9): 097004. DOI: 10.1117/1.2052793

[30] Žalik B, Mongus D, Liu YK, Lukač N. Unsigned Manhattan chain code. Journal of Visual Communication and Image Representation. 2016;**38**:186-194. DOI: 10.1016/j.jvcir.2016.03.001

[31] Dunkelberger KA, Mitchell OR. Contour tracing for precision measurements. In: IEEE International Conference on Robotics and Automation (ICRA); 25–28 March 1985; St. Louis, USA. pp. 22-27. DOI: 10.1109/ROBOT.1985.1087356

[32] Wilson GR. Properties of contour codes. IEE Proceedings – Vision, Image and Signal Processing. 1997;**144**(3):145-149. DOI: 10.1049/ip-vis:19971159

[33] Kabir S. A compressed representation of Mid-Crack code with Huffman code. International Journal on Image, Graphics and Signal Processing. 2015;**7**(10):11-18. DOI: 10.5815/ ijigsp.2015.10.02

[34] de Berg M, Cheong O, van Kreveld M, Overmars M. Computational geometry: Algorithms and applications. 3rd ed. Berlin Heidelberg: Springer; 2008. p. 386. DOI: 10.1007/978-3-540-77974-2

[35] Chazelle B, Edelsbrunner H. An optimal algorithm for intersecting line segments in the plane. Journal of the ACM. 1992;**39**(1):1-54. DOI: 10.1145/147508.147511

[36] Žalik B. Two efficient algorithms for determining intersection points between simple polygons. Computers & Geosciences. 2000;**26**(2):137-151. DOI: 10.1016/S0098-3004(99)00071-0

#### **Chapter 4**

## Application of Discrete Mathematics for Programming Discrete Mathematics Calculations

*Carlos Rodriguez Lucatero*

#### **Abstract**

Discrete mathematics courses cover topics such as calculating the element at an arbitrary position of a sequence of numbers generated by some recurrence relation, calculating multiplicative inverses in algebraic ring structures modulo a number *n*, and obtaining the complete list of combinations without repetition. For all of these, one can take advantage of the computing power of computers and perform such calculations using programs written in some programming language. The implementations of these calculations can be carried out in many ways, and their algorithmic performance can therefore vary widely. In this chapter, I propose to illustrate, by means of some Matlab programs, how the use of results from discrete mathematics itself allows the algorithmic performance of such computer programs to be improved. Another topic addressed in regular discrete mathematics courses, where calculations can become very expensive in both time and space if implemented directly from the definitions, is modular arithmetic. Such calculations can be carried out much more efficiently by making use of results from discrete mathematics and number theory. The application of these ideas will be developed in the following sections of this chapter.

**Keywords:** recurrence relations, algorithms, generating functions, modular arithmetic, Matlab

#### **1. Introduction**

Discrete mathematics provides very useful calculation tools to enumerate mathematical objects of a certain type, such as the number of graphs that satisfy a particular property, or the number of regions formed within a circle as a growing number of points on it are joined by secant lines [1]. The methods and tools of discrete mathematics are of enormous relevance in computer science when one wants to compare different algorithmic solutions to a problem. Judging an algorithmic solution must take into account its use of runtime and memory resources, since both are limited. For this, it is necessary to carry out an analysis of the algorithm and count how many execution steps the processing takes and how many memory locations it will occupy. When carrying out this analysis,

mathematical expressions known as recurrence relations are obtained, which reflect the running time or the memory space occupied by the algorithm. Once the recurrence relation is obtained, it can be evaluated to estimate, for example, how many execution steps an algorithm will take as the number of input data increases. This generates a succession of values that can be plotted to give us an idea of the time complexity of such an algorithm. The evaluation can be done by hand, or a function can be implemented in some available programming language. Normally, the direct translation of the mentioned recurrences into a program produces compact and clear recursive routines. The evaluation of such routines is inefficient, however, since many of the recursive calls repeat calculations, and it is in this case that it is worth looking for more efficient iterative versions of these calculations. Even better, if possible, we can solve the recurrences to obtain a closed mathematical expression that describes the growth of the algorithm's running time. That is where the tools of discrete mathematics acquire special relevance. In algorithm analysis books, one can find many techniques to solve certain types of recurrences, and we illustrate some of them with examples in this chapter. There are also more general discrete mathematics tools for this purpose, which we will likewise try to illustrate with examples. This process of solving recurrences can be seen as a generalization of the solution of classical problems, such as finding the next element in a sequence of numbers, for which there are techniques such as the method of divided differences. Sometimes obtaining the closed solution of a recurrence is not possible; however, discrete mathematics and mathematical analysis allow us to define upper and lower bounds, as well as approximate calculations of very good quality.

Another topic addressed in discrete mathematics courses, which requires the performance of many calculations, is that of modular arithmetic. Such calculations can be carried out much more efficiently by making use of results from discrete mathematics and number theory. The application of these ideas will be developed in the following sections of this chapter.

#### **2. Body of the manuscript**

The chapter has the following structure. In Section 3, we make a quick review of some calculation problems in combinatorics. After that, in Section 4, we discuss where recurrence relations arise and present some methods, as well as mathematical tools, that are frequently used to solve them. In each example, we begin by obtaining the mathematical relationship that describes the behavior of the problem to be solved. Once the mathematical relationship is obtained, we use it to get an idea of the behavior it describes by evaluating it by means of some function programmed in Matlab. Later, we obtain the associated closed-form expression, or a good approximation of it, which we also evaluate by implementing a Matlab function. This allows us to compare the performance of evaluating a closed expression against the direct application of a recurrence, implementing both in Matlab.

In Section 5, we describe the properties of ring algebraic structures on the set of positive integers modulo some number *n*, and obtain the tables of the two ring operations directly from the definitions. Later, we obtain the same tables more efficiently by applying properties and theoretical results of modular arithmetic, to illustrate how such results make these calculations algorithmically more efficient in both time and space.

*Application of Discrete Mathematics for Programming Discrete Mathematics Calculations DOI: http://dx.doi.org/10.5772/intechopen.102990*

#### **3. Combinatorics calculation**

One of the most relevant topics in discrete mathematics is combinatorial analysis, where, among other things, it is important to calculate the number of all possible linear arrangements of a given size of objects from a set, with or without repetitions. In some cases, the order of appearance of these objects must be taken into account. When the order of the elements is important and repetition is allowed, we speak of permutations with repetition. If the repetition of elements is forbidden, then we speak of permutations without repetition. To clarify ideas, suppose that we have a set *A* = {*a*, *b*, *c*, *d*} and that we want to calculate all possible linear arrangements of 3 positions, allowing repetitions of elements in each of the positions. The cardinality of the set of objects *A* is 4, and as each of the three positions can be occupied by any of the elements of *A*, the total number of possible dispositions is 4 × 4 × 4, that is to say 4<sup>3</sup> = 64. The resulting list of possibilities appears in **Table 1**.

This table was generated by the following Matlab function.

```
function y = enumeraPCR(A)
% Autor: Carlos Rodriguez Lucatero
[m,n]=size(A);
cuenta=0;
  for i=1:n
    for j=1:n
         for k=1:n
           fprintf('%s,%s,%s \n',A(1,i),A(1,j),A(1,k));
           cuenta=cuenta+1;
         end
    end
  end
y=cuenta;
end
```
If repetitions of elements of *A* = {*a*, *b*, *c*, *d*} are not allowed, the linear dispositions of three positions of those elements have four possibilities for the first position, three for the second, and two for the third, giving a total number of linear dispositions of 4 × 3 × 2 = 24. The listing of those dispositions can be seen in **Table 2**.


#### **Table 1.**

*Complete listing of permutation with repetitions.*


#### **Table 2.**

*Complete listing of the permutations without repetitions.*
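Both counts can be reproduced with a quick cross-check, sketched here in Python (a language-neutral sanity check using the standard `itertools` module; the chapter's own implementations are in Matlab):

```python
from itertools import product, permutations

A = ['a', 'b', 'c', 'd']

# Dispositions with repetition of length 3: 4 * 4 * 4 = 4^3
with_rep = list(product(A, repeat=3))
print(len(with_rep))     # 64

# Dispositions without repetition of length 3: 4 * 3 * 2
without_rep = list(permutations(A, 3))
print(len(without_rep))  # 24
```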

This last table was generated by the following Matlab function.

```
function [y,lista] = ListaPSR(A,k)
% Autor: Carlos Rodriguez Lucatero
[m,n]=size(A);
r=factorial(n)/factorial(n-k);
y(1,1:k)=A(1,1:k);
s=1;
while r-1 > 0
    j=n-1;
    while ((j > 0) && (A(1,j) >= A(1,j+1)))  % short-circuit guard before indexing
         j=j-1;
    end
    l=n;
    while ((l > j) && (A(1,j) >= A(1,l)))  % short-circuit guard before indexing
         l=l-1;
    end
    temp=A(1,j);
    A(1,j)=A(1,l);
    A(1,l)=temp;
    t=j+1;
    l=n;
    while (t<l)
         temp=A(1,t);
         A(1,t)=A(1,l);
         A(1,l)=temp;
         t=t+1;
         l=l-1;
    end
    s=s+1;
    y(s,1:k)=A(1,1:k);
    r=r-1;
end
lista=y;
end
```
In general, if the cardinality of the set of objects is *n* and the number of possible positions within the linear arrangement is *k*, then the total number of possible permutations with repetition equals *n<sup>k</sup>*. In the event that repetitions of elements are not allowed in the arrangement, the first position can be occupied by any of the *n* elements of the set of objects. Once an element is assigned to position 1, the next position can be filled by any of the remaining *n* − 1 elements, and so on until the *k* positions of the linear arrangement are filled. In the general case, the total number of possible dispositions under these conditions is *n* × (*n* − 1) × … × (*n* − *k* + 1). As can be seen, this calculation can involve an excessive number of factors, and for that reason the symbol *n*! is introduced, which equals *n* × (*n* − 1) × (*n* − 2) × … × 3 × 2 × 1. Thus, the total number of permutations without repetition of *n* objects taken *k* at a time is calculated with the following formula

$$\frac{n!}{(n-k)!}. \tag{1}$$
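For the example above, with *n* = 4 and *k* = 3, the product and the factorial expression agree with the 24 dispositions listed in **Table 2**:

$$4 \times 3 \times 2 = \frac{4!}{(4-3)!} = \frac{24}{1} = 24.$$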

The factorial can be defined as a recurrence relation. Every recurrence relation has a stop condition and a recursive step, so both parts must be defined. We agree that 0! = 1, which constitutes the stop condition, and since *n*! = *n* × (*n* − 1) × (*n* − 2) × … × 3 × 2 × 1 = *n* × (*n* − 1)!, we can rewrite *n*! as a recurrence as follows

$$\begin{cases} 0! = 1 & \text{for } n = 0. \\ n! = n \cdot (n-1)! & \text{for } n > 0. \end{cases} \tag{2}$$

Recurrence (2) for calculating the factorial is one of the first recurrence relations to appear in discrete mathematics. A direct Matlab translation of recurrence (2) could be the following.

```
function y=fact(n)
if n<0
    disp('argument should be positive');
    y=-1;
elseif n==0
    y=1;
  else
    y=n*fact(n-1);
  end
end
```
Such an implementation is compact and elegant but has the disadvantage that it can easily overwhelm the system's recursive call stack, which is why it is worth looking for an equivalent iterative implementation. Fortunately, it is known that recursive routines whose last instruction is a recursive call can always be translated into an iterative version. Therefore, we propose the following iterative implementation.

```
function y= factI( n )
acum=1;
for i=1:n
    acum=acum*i;
end
y=acum;
end
```
Sometimes when calculations have to be made, it may be enough to use good approximations. The calculation of *n*! can be done in an approximate and efficient way using the Stirling approximation formula whose proof can be consulted in ref. [2] and whose mathematical expression is the following

$$n! \approx \sqrt{2 \cdot \pi} \cdot n^{n + \frac{1}{2}} e^{-n}. \tag{3}$$

This approximation turns out to be quite good and this allows us to calculate *n*! very efficiently with a computer, avoiding the problem of excessive recursive calls in the case that *n* is very large. A possible implementation of this approach using Matlab would be the following.

```
function y = factStirling(n)
% Stirling approximation of the factorial
% Feller Vol. I pag 70
% Autor: Carlos Rodriguez Lucatero
y=sqrt(2*pi)*n^(n+1/2)*exp(-1*n);
end
```

We can compare the algorithmic complexities of these three different functions for calculating *n*!: both the recursive and iterative versions have a time complexity of *O*(*n*), while the Stirling approximation version is *O*(1), which is more efficient, but at the price of the calculation being approximate. This example of the different ways of calculating *n*! is the first example of how we can benefit from mathematical results to improve the temporal performance of the programs that we implement.
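As a quick numerical cross-check (a Python sketch; the chapter's code is in Matlab), the relative error of the Stirling formula already falls below one percent around *n* = 10, consistent with its known behavior of roughly 1/(12*n*):

```python
import math

def fact_stirling(n):
    # Stirling's approximation: n! ~ sqrt(2*pi) * n^(n + 1/2) * e^(-n)
    return math.sqrt(2 * math.pi) * n ** (n + 0.5) * math.exp(-n)

for n in (5, 10, 20):
    exact = math.factorial(n)
    rel_err = abs(fact_stirling(n) - exact) / exact
    print(n, rel_err)
```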

#### **4. Recurrences calculation and analysis of algorithms**

With regard to algorithm efficiency, it is in this topic that recurrence relations, and methods for solving them, can be found to determine the time behavior of algorithms in terms of the number of input data. One of the design techniques that allows building efficient algorithms is known as *Divide and Conquer*. A well-known problem in computing is that of sorting a set of numerical data in ascending order. In regular algorithm analysis courses, various algorithmic solutions for this problem are studied, and some of the most efficient ones are designed using the *Divide and Conquer* approach. One of the most famous Divide and Conquer algorithms is Merge-Sort. Merge-Sort starts by dividing the array in half, dividing each subarray again in the same way, and proceeding recursively until the subarrays cannot be subdivided further; it then sorts by merging the subarrays in interleaved fashion until the ordered subarrays of the first partition are merged. A possible Matlab implementation of this algorithm is shown below.

```
function y = mergesort(A,p,r)
if (p < r)
   q=floor((p+r)/2);
   y1=mergesort(A,p,q);
   y2=mergesort(A,q+1,r);
   y=merge(A,y1,y2,p,q,r);
else
   y=A;
end
end
function y = merge(A,y1,y2,p,q,r)
L=y1(1,p:q);
[m1,n1]=size(L);
L(1,n1+1)=99999999;  % sentinel value (assumes all inputs are smaller)
R=y2(1,q+1:r);
[m2,n2]=size(R);
R(1,n2+1)=99999999;  % sentinel value (assumes all inputs are smaller)
i=1;
j=1;
for k=p:r
   if L(i) <= R(j)
      A(k)=L(i);
      i=i+1;
   else
      A(k)=R(j);
      j=j+1;
   end
end
y=A;
end
```
When analyzing Merge-Sort, one obtains a recurrence for the total time, or number of steps, it takes as a function of the input size, which we will denote as *T*(*n*). The recurrence obtained from the analysis of this algorithm has the following form

$$\begin{cases} T(1) = 1 & \text{for } n = 1. \\ T(n) = 2T\left(\frac{n}{2}\right) + n & \text{for } n > 1. \end{cases} \tag{4}$$

To solve this recurrence relation, several methods can be applied, such as substitution, the recursion tree, or iteration of the recurrence [3]. Here we will use the iteration method. To simplify the problem, we will assume that the size of the array is *n* = 2<sup>*m*</sup>, that is, a power of 2. Taking the above into account, we can apply a change of variable in recurrence (4), which is rewritten as follows

$$\begin{cases} T(2^0) = 1 & \text{for } m = 0. \\ T(2^m) = 2T(2^{m-1}) + 2^m & \text{for } m > 0. \end{cases} \tag{5}$$

Iterating recurrence (5), we obtain the following expression

$$T(2^m) = \underbrace{2^m + 2^m + \dots + 2^m}_{m\ \text{times}} = m \cdot 2^m. \tag{6}$$

We know that *n* = 2<sup>*m*</sup> and therefore *m* = log<sub>2</sub>(*n*). We can then substitute back into expression (6) and obtain the solution to recurrence (4), which has the following form

$$T(n) = n \log_2(n). \tag{7}$$
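Evaluating recurrence (4) directly confirms this growth rate; strictly, the unrolled value is *m*·2<sup>*m*</sup> plus a lower-order 2<sup>*m*</sup> contributed by the base case, which the asymptotic statement (7) ignores. A small check (an illustrative Python sketch; the recurrence itself is from the chapter):

```python
import math

def T(n):
    # Recurrence (4): T(1) = 1, T(n) = 2*T(n/2) + n, for n a power of two
    return 1 if n == 1 else 2 * T(n // 2) + n

for m in (3, 5, 10):
    n = 2 ** m
    # exact unrolled value: m*2^m + 2^m; asymptotic estimate: n*log2(n) = m*2^m
    print(n, T(n), m * 2 ** m + 2 ** m, n * math.log2(n))
```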

Recurrence relations are present in undergraduate courses on algorithm analysis, mathematical thinking, and discrete mathematics. One of the topics usually addressed in mathematical thinking courses is that of sequences of numbers and the detection of patterns in their behavior, to illustrate the mathematical process of discovering the properties of number sequences. Typical examples consist of presenting a sequence of integer values and inferring the next number in the sequence. In classic mathematical thinking textbooks, such as [4], systematic methods are presented for solving this type of puzzle, such as the method of successive differences. Normally these exercises stop at discovering the next element in the sequence, but one can go further and try to discover the mathematical expression that yields the term of the sequence at an arbitrary position. It is there that we can resort to recurrence relations, from which a recursive program can be implemented to obtain the element of the sequence at any given position, or, even better, use the tools of discrete mathematics to solve those recurrences and obtain elements at arbitrary positions in a computationally efficient way.
Next, we will illustrate these ideas with an example of a numerical sequence. We will obtain the next element of the sequence by the method of successive differences; we will then derive a recurrence relation, from which we will implement a Matlab function that returns any element of the sequence given its position; and finally, we will solve the recurrence to obtain a mathematical formula for the *n*-th term of the sequence, from which a Matlab function will be implemented that evaluates this expression at a given position.

#### **4.1 Numerical sequences and recurrences**

There are sequences of numbers known as figurate numbers, since they are related to figures of a certain type formed by joining a certain number of points with lines. These points and lines can form, for example, triangles, squares, pentagons, or heptagons, and the associated sequences are called triangular numbers, square numbers, pentagonal numbers, or heptagonal numbers, respectively. In **Figure 1**, we can see how pentagons can be formed from the number of points given in each case.

The sequence associated with the pentagonal numbers is shown in **Table 3**.

Suppose you want to obtain the next element of the numerical sequence, that is, the element *a*<sub>5</sub>. For this, we can apply the method of successive differences, which consists of taking the first differences between successive elements of the sequence, then the differences of those first differences, and so on.

When the successive differences become constant, the process stops, and we perform a backward calculation, adding the last difference of each level, until we reach the next element of the original sequence.
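The backward calculation described above can be sketched as a small program (shown in Python for brevity; it is an illustrative sketch, not part of the chapter's Matlab code):

```python
def next_by_differences(seq):
    # Build rows of successive differences until a row becomes constant,
    # then add the last entry of every row to extend the sequence by one term.
    rows = [list(seq)]
    while len(set(rows[-1])) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    return sum(row[-1] for row in rows)

# Pentagonal numbers 1, 5, 12, 22, 35, ... -> next term
print(next_by_differences([1, 5, 12, 22, 35]))  # 51
```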

**Figure 1.** *Pentagonal numbers sequence.*



#### **Table 4.**

*Successive differences on the pentagonal sequence.*

The steps performed by this procedure appear in **Table 4**.

The numbers that appear in boldface are generated by the backward calculation mentioned before. This procedure, although correct, can become very cumbersome if, instead of the next element, we want to know the value of the element at, say, position 30 of the sequence. It is in this case that it is worthwhile to obtain a mathematical relationship that allows us to implement a program, in whatever programming language is available, to obtain the element of the numerical sequence at any given position. One way to achieve this goal is to discover the recurrence relation that generates the elements of the sequence until the desired position is reached. This is what we will do next. From the calculations carried out in the successive differences procedure, we can obtain the following first-difference relationships between elements of the sequence

$$a_1 - a_0 = 4. \tag{8}$$

$$a_2 - a_1 = 7. \tag{9}$$

$$a_3 - a_2 = 10. \tag{10}$$

$$a_4 - a_3 = 13. \tag{11}$$

$$a_5 - a_4 = 16. \tag{12}$$

From the first differences, we can obtain the following relationships corresponding to the second differences

$$(a_2 - a_1) - (a_1 - a_0) = 3. \tag{13}$$

$$(a_3 - a_2) - (a_2 - a_1) = 3. \tag{14}$$

$$(a_4 - a_3) - (a_3 - a_2) = 3. \tag{15}$$

$$(a_5 - a_4) - (a_4 - a_3) = 3. \tag{16}$$

We can observe in the relations for the second differences a regular behavior that allows us to establish the following generalization

$$(a_n - a_{n-1}) - (a_{n-1} - a_{n-2}) = 3. \tag{17}$$

From Eq. (17), we can obtain the following difference equation

$$
\mathfrak{a}\_n - 2\mathfrak{a}\_{n-1} + \mathfrak{a}\_{n-2} = \mathfrak{3}.\tag{18}
$$

As can be seen, this is a non-homogeneous, linear, second-order difference equation with constant coefficients, and for this reason it requires two initial conditions, which are taken from the numerical sequence in question. These initial conditions are *a*<sub>0</sub> = 1 and *a*<sub>1</sub> = 5. Thus, the complete difference equation is expressed in the following form

$$a_n - 2a_{n-1} + a_{n-2} = 3, \quad a_0 = 1, \; a_1 = 5. \tag{19}$$

From Eq. (19), we can obtain the following recurrence

$$\begin{cases} a_0 = 1 & \text{for } n = 0. \\ a_1 = 5 & \text{for } n = 1. \\ a_n = 2a_{n-1} - a_{n-2} + 3 & \text{for } n > 1. \end{cases} \tag{20}$$

Recurrence (20) allows us to directly implement the following recursive function in Matlab.

```
function y = recpentagR(n)
%Author: Carlos Rodriguez Lucatero
if (n==0)
   y=1;
else
   if (n==1)
     y=5;
   else
     y=2*recpentagR(n-1)-recpentagR(n-2)+3;
   end
end
end
```
We can test the routine from Matlab by first calculating an element of the sequence whose value we know, for example *a*<sub>5</sub> = 51. The result of calling the function from the Matlab prompt is the following.

```
>> z=recpentagR(5)
z =
51
```
Now let us try to calculate with this same routine the element of the numerical sequence at position 30, that is, *a*<sub>30</sub>. The result is shown below.

```
>> z=recpentagR(30)
z =
    1426
```

If we wanted to calculate *a*<sub>50</sub> with this recursive routine, we would realize that the calculation time increases a lot, because many of the calculations are recalculated, and there is also a risk of overflowing the system stack due to the large number of recursive calls. Fortunately, we can reprogram an iterative version of this function, shown below.

```
function y = recpentagI(n)
%Author: CRL
a0=1;
a1=5;
if (n==0)
   y=a0;
else
   if (n==1)
       y=a1;
   else
       an=0;
       for i=2:n
          an=2*a1-a0+3;
          a0=a1;
          a1=an;
       end
       y=an;
   end
end
end
```
We will test this routine by first executing it for the known value *a*<sub>5</sub> = 51, then for *a*<sub>30</sub>, whose value obtained with the recursive version was 1426, and finally we will obtain the values *a*<sub>50</sub> and *a*<sub>100</sub>.

```
>> z=recpentagI(5)
z =
    51
>> z=recpentagI(30)
z =
    1426
>> z=recpentagI(50)
z =
    3876
>> z=recpentagI(100)
z =
    15251
```
The runtime improvement of the iterative version over the recursive version is truly impressive. However, the execution time can be improved even further by resorting to discrete mathematics tools: we can solve the recurrence and implement a routine whose only task is to evaluate, at the given position, the mathematical formula obtained from that solution. Discrete mathematics provides many methods for solving non-homogeneous linear difference equations such as this one, associated with recurrence relations. Solutions can be obtained by methods such as iteration of the recurrence, the characteristic polynomial method, or the generating function method. In this subsection, we will solve the recurrence by applying several of these methods for illustrative purposes. For more details on solving recurrences using these methods, we recommend consulting [3, 5–8].
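To see why the iterative routine wins, we can count how many calls the naive recursion performs; below is a small Python sketch of the same two strategies (the chapter's routines are in Matlab; the function names here are ours):

```python
def pentagonal_calls(n):
    # Naive recursion mirroring recpentagR; returns (a_n, number of calls made)
    if n == 0:
        return 1, 1
    if n == 1:
        return 5, 1
    v1, c1 = pentagonal_calls(n - 1)
    v2, c2 = pentagonal_calls(n - 2)
    return 2 * v1 - v2 + 3, 1 + c1 + c2

def pentagonal_iter(n):
    # Iterative version mirroring recpentagI: n - 1 loop iterations, O(n) time
    a0, a1 = 1, 5
    if n == 0:
        return a0
    for _ in range(2, n + 1):
        a0, a1 = a1, 2 * a1 - a0 + 3
    return a1

print(pentagonal_calls(20), pentagonal_iter(20))
```

For *n* = 20 the recursion already makes more than 20,000 calls to compute a value that the loop reaches in 19 iterations; memoization would also fix the recursive version, but the loop is simpler.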

#### **4.2 Solving a recurrence by iteration**

We begin by applying the iteration method to the recurrence. For this purpose, we will use the third row of the recurrence relation (20), which is the following equation

$$a\_n = 2a\_{n-1} - a\_{n-2} + 3.\tag{21}$$

We apply Eq. (21) to the case of *n* − 1, which gives the following equation

$$
a\_{n-1} = 2a\_{n-2} - a\_{n-3} + 3.\tag{22}
$$

We substitute Eq. (22) into Eq. (21) and obtain the following equation

$$a\_n = 3a\_{n-2} - 2a\_{n-3} + 2 \cdot 3 + 3. \tag{23}$$

We apply Eq. (21) to the case of *n* − 2, which gives the following equation

$$
a\_{n-2} = 2a\_{n-3} - a\_{n-4} + 3.\tag{24}
$$

We substitute Eq. (24) into Eq. (23) and obtain the following equation

$$a\_n = 4a\_{n-3} - 3a\_{n-4} + 3 \cdot 3 + 2 \cdot 3 + 3.\tag{25}$$

We apply Eq. (21) to the case of *n* − 3, which gives the following equation

$$a\_{n-3} = 2a\_{n-4} - a\_{n-5} + 3.\tag{26}$$

We substitute Eq. (26) into Eq. (25) and obtain the following equation

$$a\_n = 5a\_{n-4} - 4a\_{n-5} + 4 \cdot 3 + 3 \cdot 3 + 2 \cdot 3 + 3.\tag{27}$$

Continuing with this procedure until reaching the base cases and noting that certain regularities appear, such as the presence of a sum of successive natural numbers, we arrive at the following expression

$$a\_n = n \cdot a\_1 - (n - 1) \cdot a\_0 + 3 \cdot \sum\_{i=1}^{n-1} i. \tag{28}$$

Substituting *a*<sub>0</sub> = 1 and *a*<sub>1</sub> = 5 and applying Gauss's formula to the summation in Eq. (28), we obtain the following solution

$$a\_n = \frac{3}{2} \cdot n^2 + \frac{5}{2} \cdot n + 1. \tag{29}$$
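Before moving on, we can check solution (29) against recurrence (20) for the first hundred terms; here is a brief Python sketch using exact rational arithmetic (function names are illustrative, not from the chapter):

```python
from fractions import Fraction

def closed_form(n):
    # a_n = (3/2)*n^2 + (5/2)*n + 1, computed exactly with rationals (Eq. 29)
    return Fraction(3, 2) * n**2 + Fraction(5, 2) * n + 1

# Rebuild the sequence from recurrence (20): a_n = 2*a_{n-1} - a_{n-2} + 3
seq = [1, 5]
for n in range(2, 101):
    seq.append(2 * seq[-1] - seq[-2] + 3)

# Every term produced by the recurrence matches the closed form
assert all(closed_form(n) == seq[n] for n in range(101))
print(seq[5], seq[30], seq[50])
```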

#### **4.3 Solving a recurrence by the characteristic polynomial method**

We can try another method of solving the recurrence to illustrate another tool provided by discrete mathematics. In this case, we will use the characteristic polynomial method. The difference equation that we are going to solve is linear and non-homogeneous with constant coefficients, which makes it amenable to this method.

These methods are closely related to the methods of solving differential equations of the same type. The difference equation that we are going to solve is the following

$$a\_n - 2a\_{n-1} + a\_{n-2} = 3, \quad a\_0 = 1, \quad a\_1 = 5.\tag{30}$$

The theory of non-homogeneous linear difference equations says that the general solution is composed of a solution of the homogeneous equation plus a particular solution related to the non-homogeneous part. The homogeneous equation associated with Eq. (30) would be the following

$$
a\_n - 2a\_{n-1} + a\_{n-2} = 0.\tag{31}
$$

A characteristic polynomial is associated with the homogeneous Eq. (31); it is obtained by substituting a trial solution into the difference equation. This trial solution has the form *a*<sub>*n*</sub> = *c* · *r*<sup>*n*</sup>, where *c* is an arbitrary constant. If we substitute it into Eq. (31), we obtain the following polynomial

$$r^2 - 2r + 1 = 0.\tag{32}$$

The roots of the characteristic polynomial (32) are *r*<sub>1</sub> = 1 and *r*<sub>2</sub> = 1; that is, they are real and repeated. Since the solutions of a difference equation must be linearly independent, the solution of the homogeneous difference equation takes the following form

$$a\_n = c\_1 \cdot 1^n + c\_2 \cdot n \cdot 1^n. \tag{33}$$

Now we proceed to find the particular solution, which must be linearly independent of the solutions of the associated homogeneous equation. We note that the right-hand side of the difference Eq. (30) is *f*(*n*) = 3, which is equivalent to *f*(*n*) = 3 · 1<sup>*n*</sup>. Since 1 is a double root of the characteristic polynomial, both a constant and a multiple of *n* already solve the homogeneous equation, so the form of the particular solution would be the following

$$
a\_n = A \cdot n^2. \tag{34}
$$

If we substitute the trial solution (34) into the difference Eq. (30), we obtain the following expression

$$A \cdot n^2 - 2 \cdot A \cdot (n - 1)^2 + A \cdot (n - 2)^2 = 3 \cdot 1^n. \tag{35}$$

After algebraically simplifying Eq. (35), the left-hand side reduces to 2*A*, so *A* = 3/2 and the general solution would have the form

$$a\_n = c\_1 \cdot 1^n + c\_2 \cdot n \cdot 1^n + \frac{3}{2} \cdot n^2. \tag{36}$$

Applying *a*<sub>0</sub> = 1 to Eq. (36), we deduce that *c*<sub>1</sub> = 1. Applying *a*<sub>1</sub> = 5, we get *c*<sub>2</sub> = 5/2. In this way, we obtain the general solution

$$a\_n = 1 + \frac{5}{2} \cdot n + \frac{3}{2} \cdot n^2. \tag{37}$$

We can verify that solution (37) coincides with solution (29). Finally, we can use this solution to implement a Matlab function that, given a position, returns the value of the corresponding element of the sequence. The Matlab function would be the following.

```
function y = recpentag(n)
% Author: Carlos Rodriguez Lucatero
y=1+(5/2)*n+(3/2)*(n^2);
end
```
To be convinced of the correctness of the solution, we can evaluate this Matlab function at the same values used before; we obtain the following results.

```
>> y = recpentag(5)
y =
    51
>> y = recpentag(10)
y =
    176
>> y = recpentag(30)
y =
    1426
>> y = recpentag(50)
y =
    3876
>> y = recpentag(100)
y =
    15251
```
#### **4.4 Solving a recurrence by generating function method**

Another powerful method of discrete mathematics for solving the difference Eq. (19) associated with the recurrence (20) is that of ordinary generating functions. This method consists, in the simplest case, of converting a numerical sequence into an infinite polynomial in a single variable whose coefficients are precisely the elements of the numerical sequence. The reason for this transformation is that the algebraic manipulation of these infinite polynomials is relatively simple. We therefore define below the concept of the ordinary generating function of a single variable.

**Definition** 1.1 Given a numerical sequence *a*0, *a*1, *a*2, … , *ak*, … the function

$$A(\mathbf{z}) = \sum\_{k \ge 0} a\_k \mathbf{z}^k. \tag{38}$$

is called the ordinary generating function (OGF) of the sequence. The notation [*z*<sup>*k*</sup>]*A*(*z*) will be used to refer to the coefficient *a*<sub>*k*</sub> of the *k*-th term of the infinite polynomial.

By means of generating functions, we can map sequences of numbers, which are often integers, to power series. The coefficient of the *n*-th term will, therefore, be related to the *n*-th element of the associated numerical sequence. For example, in the generating function $\sum_{i \geq 0} z^i = 1 + z + z^2 + z^3 + z^4 + \dots$, we observe that it is the geometric series, which converges if ∣*z*∣ < 1 and can be expressed as $\frac{1}{1-z}$. We can also observe that all the terms of the sum have a coefficient of 1, so this generating function is associated with the numerical sequence 1, 1, 1, 1, … . On the other hand, since generating functions are infinite polynomials, it is easy to apply the derivative operation to them. Thus, the derivative of the geometric series would be expressible as follows

$$\frac{d}{dz}\left(\frac{1}{1-z}\right) = \frac{1}{\left(1-z\right)^2} = \frac{d}{dz}\left(1+z+z^2+z^3+z^4+\dots\right) = 1+2z+3z^2+4z^3+\dots \tag{39}$$

As can be seen from (39), the derivative of the geometric series can be related to the sequence of natural numbers.

It is also possible to perform certain algebraic operations on ordinary generating functions that have a corresponding effect on the numerical sequence associated with the given generating function. For example, if we multiply the generating function $\frac{1}{(1-z)^2}$ by *z*, we obtain the following effect on the numerical sequence

$$\frac{z}{\left(1-z\right)^{2}} = 0 + z + 2z^{2} + 3z^{3} + 4z^{4} + \dots \leftrightarrow 0, 1, 2, 3, 4, 5, \dots \tag{40}$$

that is to say, we obtain a shift to the right in the sequence of numbers. If we apply the derivative to Eq. (40), we obtain the following relationship

$$\frac{d}{dz}\left(\frac{z}{\left(1-z\right)^{2}}\right) = \frac{d}{dz}\left(0 + z + 2z^{2} + 3z^{3} + 4z^{4} + \dots\right) = \frac{z+1}{\left(1-z\right)^{3}}\tag{41}$$

$$= 1 + 2^{2}z + 3^{2}z^{2} + 4^{2}z^{3} + \dots$$

then we have

$$\frac{z+1}{\left(1-z\right)^{3}} = 1 + 2^{2}z + 3^{2}z^{2} + 4^{2}z^{3} + 5^{2}z^{4} + \dots \leftrightarrow 1^{2}, 2^{2}, 3^{2}, 4^{2}, \dots \tag{42}$$

We have exemplified the following relationships between numerical sequences and generating functions

$$1, 1, 1, 1, \dots \leftrightarrow \frac{1}{1 - z} \tag{43}$$

$$1,2,3,4,\dots \leftrightarrow \frac{1}{\left(1-z\right)^2} \tag{44}$$

$$0,1,2,3,4,\dots \leftrightarrow \frac{z}{\left(1-z\right)^2} \tag{45}$$

$$1^2, 2^2, 3^2, 4^2, \dots \leftrightarrow \frac{z+1}{\left(1-z\right)^3} \tag{46}$$

$$0^2, 1^2, 2^2, 3^2, 4^2, \dots \leftrightarrow \frac{z(z+1)}{\left(1-z\right)^3} \tag{47}$$

After all those examples, we can be convinced that it is possible to establish relationships between sequences of numbers and generating functions, as shown in **Table 5**.


#### **Table 5.**

*Table of sequences and ordinary generating functions.*

For more detailed information on recurrences that appear in the analysis of algorithms as well as generating functions, we recommend the excellent book [5].
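The correspondences (43)–(47) can be verified numerically using the standard negative binomial expansion [*z*<sup>*m*</sup>] 1/(1 − *z*)<sup>*k*</sup> = C(*m* + *k* − 1, *k* − 1); a short Python sketch (the helper name is ours):

```python
from math import comb

def coeff_inv_pow(m, k):
    # [z^m] 1/(1-z)^k = C(m + k - 1, k - 1), the negative binomial expansion
    return comb(m + k - 1, k - 1)

for n in range(10):
    assert coeff_inv_pow(n, 1) == 1           # 1/(1-z)   <-> 1, 1, 1, ...
    assert coeff_inv_pow(n, 2) == n + 1       # 1/(1-z)^2 <-> 1, 2, 3, ...
    # z/(1-z)^2 shifts the coefficients right: coefficient of z^n is n
    assert (coeff_inv_pow(n - 1, 2) if n >= 1 else 0) == n
    # (z+1)/(1-z)^3 <-> the squares 1, 4, 9, ...
    lhs = coeff_inv_pow(n, 3) + (coeff_inv_pow(n - 1, 3) if n >= 1 else 0)
    assert lhs == (n + 1) ** 2
print("all OGF correspondences check out")
```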

After this brief summary on ordinary generating functions, we can solve the difference Eq. (19) using generating functions and that is just what we will do next. The difference equation to solve is

$$a\_n - 2a\_{n-1} + a\_{n-2} = 3, \quad a\_0 = 1, \quad a\_1 = 5.\tag{48}$$

We know from Eq. (38) that a generating function is expressed mathematically as

$$A(\mathbf{z}) = \sum\_{k \ge 0} a\_k \mathbf{z}^k. \tag{49}$$

We multiply the difference Eq. (48) by *z*<sup>*n*</sup> and sum over *n* ≥ 2, keeping the same summation index for all the terms, obtaining the following expression

$$\sum\_{n\geq 2} a\_n z^n - 2\sum\_{n\geq 2} a\_{n-1} z^n + \sum\_{n\geq 2} a\_{n-2} z^n = 3\sum\_{n\geq 2} z^n. \tag{50}$$

Because of the index shift, the first summation is *A*(*z*) − *a*<sub>0</sub> − *a*<sub>1</sub>*z*. In the second term of the left-hand side of Eq. (50), the index of the coefficient and the power of *z* differ, so we can factor out *z*, take the shift in the summation index into account, and obtain the term 2*z*(*A*(*z*) − *a*<sub>0</sub>). The mismatch between the summation index and the power of *z* in the third term of the left-hand side of (50) can be arranged by factoring out *z*<sup>2</sup>, in which case we obtain the term *z*<sup>2</sup>*A*(*z*). The right-hand side of Eq. (50) is a geometric series, but given the shift in the summation index, it is necessary to subtract its first two terms, giving as a result the term $3\left(\frac{1}{1-z} - 1 - z\right)$. Taking these remarks into account, we obtain the next equation

$$A(z) - 1 - 5z - 2z(A(z) - 1) + z^2A(z) = 3\left(\frac{1}{1-z} - 1 - z\right). \tag{51}$$

Simplifying Eq. (51) we get the next equation

$$A(z)\left(1 - 2z + z^2\right) - 1 - 3z = 3\left(\frac{z^2}{1 - z}\right). \tag{52}$$

Since 1 − 2*z* + *z*<sup>2</sup> = (1 − *z*)<sup>2</sup>, the left-hand side of (52) can be algebraically simplified, obtaining the following expression

$$A(z)(1-z)^2 = 1 + 3z + 3\left(\frac{z^2}{1-z}\right). \tag{53}$$

Dividing both sides of Eq. (53) by (1 − *z*)<sup>2</sup>, we get the equation

$$A(z) = \frac{1}{\left(1 - z\right)^2} + 3\frac{z}{\left(1 - z\right)^2} + 3\left(\frac{z^2}{\left(1 - z\right)^3}\right). \tag{54}$$

We apply the [*z*<sup>*n*</sup>] operator to Eq. (54) with the purpose of obtaining the coefficient of the *n*-th term of the sum, which corresponds to the element in position *n* of the sequence of numbers studied, arriving at the equation

$$a\_n = [z^n]A(z) = [z^n] \left(\frac{1}{\left(1 - z\right)^2}\right) + 3[z^n] \left(\frac{z}{\left(1 - z\right)^2}\right) + 3[z^n] \left(\frac{z^2}{\left(1 - z\right)^3}\right). \tag{55}$$

and then to the final result

$$a\_n = (n+1) + 3n + 3\binom{n}{2} = 4n + 1 + \frac{3n^2}{2} - \frac{3n}{2} = 1 + \frac{5n}{2} + \frac{3n^2}{2}.\tag{56}$$

As can be seen, the expressions (29), (37), and (56) are the same.
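The coefficient extraction in (55) and (56) can also be checked mechanically; here is a Python sketch using exact rational arithmetic (the function names are ours):

```python
from fractions import Fraction
from math import comb

def a_from_gf(n):
    # a_n = [z^n]A(z) = (n+1) + 3n + 3*C(n,2), as in Eq. (56)
    return (n + 1) + 3 * n + 3 * comb(n, 2)

def a_closed(n):
    # a_n = 1 + (5/2)n + (3/2)n^2, the solution found by the other two methods
    return Fraction(3, 2) * n**2 + Fraction(5, 2) * n + 1

# The generating-function answer agrees with the closed form everywhere tested
assert all(a_from_gf(n) == a_closed(n) for n in range(200))
print(a_from_gf(5), a_from_gf(30), a_from_gf(100))
```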

#### **5. Modular arithmetical calculations**

In discrete mathematics courses at the undergraduate level, topics of modern algebra are addressed where the most elementary algebraic structures are defined, such as the group structure and the ring structure. The algebraic structure to which we are going to devote our attention in this section is the ring structure in the context of modular arithmetic.

This algebraic structure is often used, informally, in arithmetic courses when learning to add and multiply. Formally speaking, we can say that this structure is composed of a set of objects, which in the case of arithmetic is the set ℤ of integers, together with two operations, addition and multiplication, denoted as (ℤ, +, ·). The two operations must be closed, that is to say, the results they give must belong to the same set from which the operands are taken. Additionally, the operations must respect certain properties that we will define below.

**Definition** 1.2 (Ring) Let *R* be a nonempty set with two closed binary operations denoted + and ·. Then (*R*, +, ·) is a ring if ∀*a*, *b*, *c* ∈ *R* the following conditions are met:

a) *a* + *b* = *b* + *a* (commutative law of +).

b) *a* + (*b* + *c*) = (*a* + *b*) + *c* (associative law of +).

c) ∃*z* ∈ *R* such that *a* + *z* = *z* + *a* = *a*, ∀*a* ∈ *R* (existence of an identity element for +).

d) For each *a* ∈ *R*, ∃*b* ∈ *R* such that *a* + *b* = *b* + *a* = *z* (existence of inverses under +).

e) *a* · (*b* · *c*) = (*a* · *b*) · *c* (associative law for ·).

f) *a* · (*b* + *c*) = *a* · *b* + *a* · *c* and (*b* + *c*) · *a* = *b* · *a* + *c* · *a* (distributive laws of · over +).

In order to simplify the notation, the operation *a* · *b* can be written as *ab*. The associative properties of both operations, as well as the distributivity of one operation over the other, can be generalized to the case of more than two operands.

In Definition 1.2, the + operation of the ring is commutative, but the · operation need not be. If we also require commutativity of the · operation, we obtain the commutative ring structure. Formally, the definition of a commutative ring is as follows.

**Definition** 1.3 (Commutative ring) Let (*R*, +, ·) be a ring. Then:

a) If *ab* = *ba*, ∀*a*, *b* ∈ *R*, then *R* is a commutative ring.

b) The ring *R* is said to have no proper divisors of zero if for any *a*, *b* ∈ *R*, *ab* = *z* ⇒ *a* = *z* ∨ *b* = *z*.

c) If there exists an element *u* ∈ *R* such that *u* ≠ *z* and *ua* = *au* = *a*, ∀*a* ∈ *R*, we say that *u* is a unity or multiplicative identity element and that *R* is a ring with unity.

Another interesting property that can be added to the list of ring properties is the existence of multiplicative inverses. The definition is the following.

**Definition** 1.4 (Ring with multiplicative inverse) Let *R* be a ring with unity *u*. If *a*, *b* ∈ *R* and *ab* = *ba* = *u*, then *b* is called the multiplicative inverse of *a*, and *a* is called a unit of *R*. (The element *b* is then also a unit of *R*.)

Taking into account the existence of the unity property, we can add it to the commutative ring and obtain a commutative ring with unity.

**Definition** 1.5 (Commutative ring with unity) Let *R* be a commutative ring with unity. Then:

a) *R* is said to be an integral domain if *R* has no proper divisors of zero.

b) *R* is said to be a field if each non-zero element of *R* is a unit.

Rings can have subsets that themselves satisfy the ring properties of Definition 1.2, in which case we speak of subring structures. These algebraic structures can have additive as well as multiplicative cancelation properties, and therefore symmetric (inverse) elements for addition as well as for multiplication. Once we have understood the theoretical part of rings, we are better positioned to tackle the construction and use of special finite rings and fields. We will start by presenting some results of modular arithmetic needed for the notion of operations in modular rings. These notions will be explained below.

#### **5.1 Integers mod** *n*

In this section, we will review some notions of modular arithmetic that usually appear in introductory courses on number theory and that allow us to understand important concepts such as divisibility, primality, and the greatest common divisor, as well as Euclid's algorithm for calculating the latter efficiently. Let us start by establishing the meaning of some mathematical symbols that appear in this context. The expression *d*∣*a* is read "*d* divides *a*", which is the same as saying ∃*k* ∈ ℤ such that *a* = *kd*; here *d* is called a divisor of *a*. This means that if we divide *a* by *d*, the remainder will be 0. From here we can state the following properties and definitions.

**Definition** 1.6 Every integer divides 0. That is, ∀*x* ∈ ℤ, *x*∣0.

**Proposition** 1.7 If *a*> 0 and *d*∣*a* then ∣*d*∣≤∣*a*∣.

**Proposition** 1.8 *d*∣*a* if and only if −*d*∣*a*.

**Definition** 1.9 A number *x* is said to be composite if it has divisors other than 1 and *x*.

**Example** 1.10 The number 39 is composite since 3∣39 holds, with 3 ≠ 1 and 3 ≠ 39.

**Example** 1.11 The divisors of 24 are {1, 2, 3, 4, 6, 8, 12, 24}. The numbers 1 and 24 are known as trivial divisors.

**Definition** 1.12 The set of divisors of a number *a* that are neither 1 nor *a* are called factors of *a*.

**Definition** 1.13 A prime number is here one that has no factors.

**Example** 1.14 The numbers {2, 3, 5, 7, 11, 13} are prime.

**Definition** 1.15 A number that is not prime is said to be a composite number.

**Theorem** 1.16 (Division Theorem) For any *a*, *n* ∈ ℤ with *n* > 0, there exist unique *q*, *r* ∈ ℤ such that *a* = *qn* + *r*, where 0 ≤ *r* < *n*.

*<sup>q</sup>* <sup>¼</sup> ⌊*<sup>a</sup> <sup>n</sup>*⌋ is called the quotient and *r* ¼ *a*mod*n* is called the remainder. Then *n*∣*a* if *a*mod*n* is equal to 0. From this, in conjunction with theorem 16, it can be stated that *<sup>a</sup>* <sup>¼</sup> ⌊*<sup>a</sup> <sup>n</sup>*⌋*<sup>n</sup>* <sup>þ</sup> *<sup>a</sup>*mod*<sup>n</sup>* or equivalently that *<sup>a</sup>* � ⌊*<sup>a</sup> <sup>n</sup>*⌋*n* ¼ *a*mod*n*.

**Definition** 1.17 If (*a* mod *n*) = (*b* mod *n*), then we say that *a* ≡ *b* (mod *n*); that is, *a* and *b* leave the same remainder when divided by *n*, which is to say *a* ≡ *b* (mod *n*) ⇔ *n*∣(*b* − *a*).

When (*a* mod *n*) = (*b* mod *n*) holds, we denote it as *a* ≡ *b* (mod *n*), and the equivalence classes that this relation generates are expressed as [*a*]<sub>*n*</sub> = {*a* + *kn* ∣ *k* ∈ ℤ}. In general, we can say that the congruence relation modulo a number *n* partitions the set of integers into equivalence classes according to the remainder left upon division by *n*, which we express as ℤ<sub>*n*</sub> = {[*a*]<sub>*n*</sub> : 0 ≤ *a* ≤ *n* − 1}.
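Definition 1.17 is easy to experiment with; a short Python sketch (the function name is ours):

```python
def congruent(a, b, n):
    # a ≡ b (mod n)  ⇔  n | (b - a)
    return (b - a) % n == 0

# Same remainder on division by n characterizes membership in the class [a]_n
assert congruent(17, 2, 5) and 17 % 5 == 2 % 5
assert congruent(-7, -49, 6)      # both lie in the class [5]_6
assert not congruent(3, 4, 5)

# The classes [0]_n, ..., [n-1]_n partition the integers
n = 4
classes = {r: [a for a in range(-8, 9) if congruent(a, r, n)] for r in range(n)}
assert sorted(sum(classes.values(), [])) == list(range(-8, 9))
print(classes[1])
```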

#### **5.2 Common divisors and greatest common divisor**

Let us start by defining the concept of a common divisor.

**Definition** 1.18 *d* is a common divisor of the numbers *a* and *b* if *d*∣*a* and *d*∣*b* hold.

Common divisors have the following properties.

**Proposition** 1.19 If *d*∣*a* and *d*∣*b*, then *d*∣(*a* + *b*) and *d*∣(*a* − *b*).

**Proposition** 1.20 If *d*∣*a* and *d*∣*b*, then *d*∣(*ax* + *by*), ∀*x*, *y* ∈ ℤ. That is, *d* divides every linear combination of *a* and *b*.

**Proposition** 1.21 If *a*∣*b* then ∣*a*∣≤∣*b*∣ or *b* ¼ 0.

**Proposition** 1.22 If *a*∣*b* and *b*∣*a* hold, then *a* = +*b* or *a* = −*b*.

**Definition** 1.23 The greatest common divisor of *a* and *b*, denoted gcd(*a*, *b*), is: gcd(*a*, *b*) = max{*d* : *d*∣*a* and *d*∣*b*}.

**Example** 1.24 gcd(24, 30) = 6, gcd(5, 7) = 1, gcd(0, 9) = 9.

**Proposition** 1.25 If *a* and *b* are not both equal to 0, then gcd(*a*, *b*) is an integer between 1 and min(∣*a*∣, ∣*b*∣).

**Definition** 1.26 gcd(0, 0) = 0.

**Definition** 1.27 *a* and *b* are relatively prime if gcd(*a*, *b*) = 1.

**Proposition** 1.28 gcd(*a*, *b*) = gcd(*b*, *a*).

**Proposition** 1.29 gcd(*a*, *b*) = gcd(−*a*, *b*).

**Proposition** 1.30 gcd(*a*, *b*) = gcd(∣*a*∣, ∣*b*∣).

**Proposition** 1.31 gcd(*a*, 0) = ∣*a*∣.

**Proposition** 1.32 gcd(*a*, *ka*) = ∣*a*∣, ∀*k* ∈ ℤ.

**Theorem** 1.33 If *a* and *b* are any integers, not both equal to 0, then gcd(*a*, *b*) is the smallest positive element of the set {*ax* + *by* : *x*, *y* ∈ ℤ} of linear combinations of *a* and *b*.

**Proof:** Let *s* be the smallest positive element of the set of such linear combinations of *a* and *b*; then *s* = *ax* + *by* for some *x*, *y* ∈ ℤ. Let *q* = ⌊*a*/*s*⌋. The equation *a* mod *s* = *a* − ⌊*a*/*s*⌋*s* implies *a* mod *s* = *a* − *qs* = *a* − *q*(*ax* + *by*) = *a*(1 − *qx*) + *b*(−*qy*), so *a* mod *s* is itself a linear combination of *a* and *b*. Since 0 ≤ *a* mod *s* < *s* and *s* is the smallest positive linear combination of *a* and *b*, we must have *a* mod *s* = 0, which means that *s*∣*a*. Reasoning in a similar way, we can prove that *s*∣*b*. Then *s* is a common divisor of *a* and *b*, so gcd(*a*, *b*) ≥ *s*. On the other hand, gcd(*a*, *b*)∣*a* and gcd(*a*, *b*)∣*b* imply that gcd(*a*, *b*) divides any linear combination of *a* and *b*; since *s* is such a linear combination, gcd(*a*, *b*)∣*s*, and with *s* > 0 this implies gcd(*a*, *b*) ≤ *s*. Combining gcd(*a*, *b*) ≥ *s* and gcd(*a*, *b*) ≤ *s*, we conclude that gcd(*a*, *b*) = *s*. □

**Corollary** 1 For any *a*, *b* ∈ ℤ, if *d*∣*a* and *d*∣*b*, then *d*∣gcd(*a*, *b*).

**Proof:** If *d*∣*a* and *d*∣*b*, then ∀*x*, *y* ∈ ℤ, *d*∣(*ax* + *by*), and since gcd(*a*, *b*) is a linear combination of *a* and *b*, we can conclude. □

**Corollary** 2 ∀*a*, *b*, *n* ∈ ℤ with *n* ≥ 0, gcd(*an*, *bn*) = *n* gcd(*a*, *b*).

**Corollary** 3 ∀*a*, *b*, *n* ∈ ℤ with *a*, *b*, *n* ≥ 0, if *n*∣*ab* and gcd(*a*, *n*) = 1, then *n*∣*b*.

**Theorem** 1.34 ∀*a*, *b* ∈ ℤ<sup>+</sup>, gcd(*a*, *b*) = gcd(*b*, *a* mod *b*).

**Proof:**

• (Proof sketch.)

• We first prove that gcd(*a*, *b*)∣gcd(*b*, *a* mod *b*).

• Let *d* = gcd(*a*, *b*); then *d*∣*a* and *d*∣*b*.

• We know that (*a* mod *b*) = *a* − *qb*, where *q* = ⌊*a*/*b*⌋.

• Hence (*a* mod *b*) is a linear combination of *a* and *b*.

• From the above and the fact that *d*∣*a* and *d*∣*b*, it follows that *d*∣(*a* mod *b*); since *d*∣*b* and *d*∣(*a* mod *b*), Corollary 1 gives gcd(*a*, *b*)∣gcd(*b*, *a* mod *b*).

• As a second part, it is proved in a similar way that gcd(*b*, *a* mod *b*)∣gcd(*a*, *b*).

• Since both gcd(*a*, *b*)∣gcd(*b*, *a* mod *b*) and gcd(*b*, *a* mod *b*)∣gcd(*a*, *b*) hold, we conclude that ∀*a*, *b* ∈ ℤ<sup>+</sup>, gcd(*a*, *b*) = gcd(*b*, *a* mod *b*). □


Theorem 1.34 provides us with the mathematical elements needed to define a recursive algorithm for the calculation of the greatest common divisor, which is the following.

```
Euclid(a, b)
1) if b = 0
2)    then return a
3)    else return Euclid(b, a mod b)
```

The following is an example of how Euclid's algorithm works.

**Example** 1.35 *We will calculate* gcd(30, 21) *applying the Euclidean algorithm just explained. The execution steps are displayed in* **Table 6**.
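The pseudocode translates almost line for line into Python; a sketch:

```python
def euclid(a, b):
    # Recursive Euclid's algorithm, justified by Theorem 1.34:
    # gcd(a, b) = gcd(b, a mod b), with gcd(a, 0) = a as the base case
    if b == 0:
        return a
    return euclid(b, a % b)

# Example 1.35: gcd(30, 21) -> gcd(21, 9) -> gcd(9, 3) -> gcd(3, 0) = 3
print(euclid(30, 21))
```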

#### **5.3 Modular rings**

**Definition** 1.36 Let *n* ∈ ℤ<sup>+</sup>, *n* > 1. For *a*, *b* ∈ ℤ, we say that *a* is congruent to *b* modulo *n*, written *a* ≡ *b* (mod *n*), if *n*∣(*a* − *b*), or equivalently, if *a* = *b* + *kn* for some *k* ∈ ℤ.

**Example** 1.37 For example:

a) 17 ≡ 2 (mod 5).

b) −7 ≡ −49 (mod 6).

**Theorem** 1.38 Congruence modulo *n* is an equivalence relation on ℤ.

**Proof:** (Do it as an exercise.)

Since an equivalence relation on a set produces a partition of that set, for *n* ≥ 2 the congruence relation modulo *n* produces a partition of the set ℤ into the following *n* equivalence classes:

$$[0] = \{\dots, -2n, -n, 0, n, 2n, \dots\} = \{0 + n\mathbf{x} | \mathbf{x} \in \mathbb{Z}\}$$

$$[1] = \{\dots, -2n + 1, -n + 1, 1, n + 1, 2n + 1, \dots\} = \{1 + n\mathbf{x} | \mathbf{x} \in \mathbb{Z}\}$$

$$[2] = \{\dots, -2n + 2, -n + 2, 2, n + 2, 2n + 2, \dots\} = \{2 + n\mathbf{x} | \mathbf{x} \in \mathbb{Z}\}$$

$$\vdots$$

$$[n - 1] = \{\dots, -n - 1, -1, n - 1, 2n - 1, 3n - 1, \dots\} = \{(n - 1) + n\mathbf{x} | \mathbf{x} \in \mathbb{Z}\}$$

By the division algorithm, we know that ∀*t* ∈ ℤ, *t* = *qn* + *r* where 0 ≤ *r* < *n*, so *t* ∈ [*r*], which also means that [*t*] = [*r*]. We use ℤ<sub>*n*</sub> to denote {[0], [1], [2], … , [*n* − 1]}, or, when there is no ambiguity, we also write ℤ<sub>*n*</sub> = {0, 1, 2, … , *n* − 1}.

We can define closed addition and multiplication operations on the set of equivalence classes of ℤ<sub>*n*</sub> as [*a*] + [*b*] = [*a* + *b*] and [*a*] · [*b*] = [*ab*].

**Example** 1.39 If *n* = 7, then [2] + [6] = [8] = [1] and [2] · [6] = [12] = [5].


**Table 6.**

*Execution steps of Euclid's algorithm.*

Before accepting the definitions of the modular operations [*a*] + [*b*] = [*a* + *b*] and [*a*] · [*b*] = [*ab*], we must convince ourselves that they are well defined, that is, if [*a*] = [*c*] and [*b*] = [*d*], then [*a*] + [*b*] = [*c*] + [*d*] and [*a*] · [*b*] = [*c*] · [*d*]. We will prove that these operations are independent of the choice of representatives within a class. So [*a*] = [*c*] ⇒ *a* = *c* + *sn* for some *s* ∈ ℤ, and [*b*] = [*d*] ⇒ *b* = *d* + *tn* for some *t* ∈ ℤ. Then *a* + *b* = (*c* + *sn*) + (*d* + *tn*) = (*c* + *d*) + (*s* + *t*)*n*, so that (*a* + *b*) ≡ (*c* + *d*) (mod *n*), that is, [*a* + *b*] = [*c* + *d*].

In the same way, *ab* = (*c* + *sn*)(*d* + *tn*) = *cd* + (*sd* + *ct* + *stn*)*n*, so that *ab* ≡ *cd* (mod *n*), that is, [*ab*] = [*cd*]. This leads us to establish the following theorem.

**Theorem** 1.40 For *n* ∈ ℤ<sup>+</sup>, *n* > 1, under the closed binary operations just defined, ℤ<sub>*n*</sub> is a commutative ring with unity [1].

**Proof:** The proof is left as an exercise for the student. One would have to verify that the ring properties hold for the definitions of the addition and multiplication operations on ℤ<sub>*n*</sub>, using the properties of the ring (ℤ, +, ·).

Before continuing with more theoretical results, let us look at the tables of the + and · operations for ℤ<sub>5</sub> and ℤ<sub>6</sub>. The ℤ<sub>5</sub> operation tables are displayed in **Table 7**, and the operation tables corresponding to the ℤ<sub>6</sub> ring are shown in **Table 8**.

We can see in the tables of the + and · operations of ℤ<sub>5</sub> that all elements other than 0 have a multiplicative inverse, which is why ℤ<sub>5</sub> is a field. In the case of the tables of ℤ<sub>6</sub>, as opposed to those of ℤ<sub>5</sub>, only 1 and 5 are elements with inverses (units), while 2, 3, and 4 are proper divisors of 0. If we obtained the corresponding tables for ℤ<sub>9</sub>, we would see that 3 · 3 = 3 · 6 = 0; that is, there are also proper divisors of 0, which means that it is not enough for *n* to be an odd number for ℤ<sub>*n*</sub> to be a field. The latter leads us to establish the following theorem.
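A Python sketch that scans the multiplication table of ℤ<sub>*n*</sub> for these units and proper divisors of zero (the chapter implements this idea in Matlab; the function name here is ours):

```python
def units_and_zero_divisors(n):
    """Classify the nonzero elements of Z_n by scanning the multiplication table."""
    units = {a for a in range(1, n) if any(a * b % n == 1 for b in range(1, n))}
    zdivs = {a for a in range(1, n) if any(a * b % n == 0 for b in range(1, n))}
    return units, zdivs

# Z_5 is a field: every nonzero element is a unit, and there are no zero divisors
print(units_and_zero_divisors(5))
# In Z_6 only 1 and 5 are units; 2, 3, 4 are proper divisors of zero
print(units_and_zero_divisors(6))
# In Z_9, 3 and 6 are zero divisors (3*3 = 3*6 = 0 mod 9), so Z_9 is not a field
print(units_and_zero_divisors(9))
```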

**Theorem** 1.41 ℤ<sub>*n*</sub> is a field if and only if *n* is a prime number.

**Proof:** Let *n* be prime, and suppose 0 < *a* < *n*. Then gcd(*a*, *n*) = 1, and since gcd(*a*, *n*) is a linear combination of *a* and *n*, ∃*s*, *t* ∈ ℤ such that *as* + *tn* = 1. Therefore *as* ≡ 1 (mod *n*), i.e., [*a*][*s*] = [1], which implies that every nonzero *a* is an element with an inverse (a unit), making ℤ<sub>*n*</sub> a field. Conversely, if *n* is not prime, then it can be represented as a product of two numbers, that is, *n* = *n*<sub>1</sub>*n*<sub>2</sub>, where 1 < *n*<sub>1</sub>, *n*<sub>2</sub> < *n*.


#### **Table 7.**

*Operation tables of the ring* (ℤ<sub>5</sub>, +, ·)*.*



#### **Table 8.**

*Operation tables of the ring* (ℤ<sub>6</sub>, +, ·)*.*

In that case, [*n*<sub>1</sub>] ≠ [0] and [*n*<sub>2</sub>] ≠ [0], but [*n*<sub>1</sub>][*n*<sub>2</sub>] = [*n*] = [0], which means that ℤ<sub>*n*</sub> is not an integral domain, since it has proper divisors of 0, and therefore cannot be a field. □

In $\mathbb{Z}_6$, the class $[5]$ has an inverse (it is a unit), while $[3]$ is a proper divisor of 0. It is useful and necessary to be able to detect which elements have an inverse (i.e., are units) when $n$ is composite. For this, the following theorem is established.

**Theorem** 1.42 In $\mathbb{Z}_n$, $[a]$ has an inverse (is a unit) if and only if $\gcd(a, n) = 1$. **Proof:** If $\gcd(a, n) = 1$, the argument is the same as in the proof of the previous theorem. In the reverse direction, let $[a] \in \mathbb{Z}_n$ with $[a]^{-1} = [s]$. Then $[a][s] = [1]$, so $as \equiv 1 \pmod{n}$ and $as = 1 + tn$ for some $t \in \mathbb{Z}$; but $1 = as + n(-t)$ implies $\gcd(a, n) = 1$.
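Theorem 1.42 also gives a table-free test for unithood: just compute $\gcd(a, n)$. A small Python sketch of this criterion:

```python
from math import gcd

def units(n):
    """Units of Z_n by Theorem 1.42: [a] is invertible iff gcd(a, n) = 1."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

print(units(5))  # [1, 2, 3, 4]: every nonzero class, so Z_5 is a field
print(units(6))  # [1, 5]: exactly the units observed in Table 8
```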

In the following example, we illustrate the application of Euclid's algorithm to obtain the multiplicative inverse of an element of a modular ring.

**Example** 1.43 Find $[25]^{-1}$ in $\mathbb{Z}_{72}$. Since $\gcd(25, 72) = 1$, applying Euclid's gcd algorithm we get the following:

$$72 = 2(25) + 22, 0 < 22 < 25$$

$$25 = 1(22) + 3, 0 < 3 < 22$$

$$22 = 7(3) + 1, 0 < 1 < 3$$

Since 1 is the last non-zero remainder, we have

$$1 = 22 - 7(3) = 22 - 7[25 - 22] = -7(25) + 8(22) = -7(25) + 8[72 - 2(25)] = 8(72) - 23(25)$$

but

$1 = 8(72) - 23(25)$ implies $1 \equiv (-23 + 72)(25) \pmod{72}$, so $[1] = [49][25]$, and therefore $[49] = [25]^{-1}$ in $\mathbb{Z}_{72}$.
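In Python (3.8+), the built-in `pow` can confirm the worked example directly:

```python
# pow(a, -1, n) returns the inverse of a modulo n and raises ValueError
# when gcd(a, n) != 1, i.e. when no inverse exists.
inv = pow(25, -1, 72)
print(inv)             # 49, matching the hand computation
print((25 * inv) % 72) # 1, confirming [49][25] = [1] in Z_72
```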

For a more detailed exposition of the previous results on modular arithmetic, the greatest common divisor, and Euclid's algorithm, the reader may consult the bibliographical references [3, 6, 9, 10].

After this brief overview of the properties of the ring structure, as well as how to work with it in the context of modular arithmetic, we are ready to discuss the advantage of applying results from discrete mathematics to perform modular arithmetic calculations efficiently. As explained in the previous paragraphs of this subsection, from the tables of the modular ring in which we are working we can obtain all the multiplicative inverses of the ring, as well as all the proper divisors of zero in the event that the ring is not a field. We can do this by generating the sum and product tables of the ring and scanning the multiplication table for entries equal to 1 (multiplicative inverses) or equal to 0 (proper divisors of zero). This procedure for obtaining the proper divisors of zero can be carried out with the following Matlab program.
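The helper `TabOpMod` called below is defined elsewhere in the chapter; assuming it simply tabulates both operations modulo $n$, an equivalent Python sketch would be:

```python
def tab_op_mod(n):
    """Addition and multiplication tables of Z_n; the row/column indices
    are the ring elements 0..n-1 themselves."""
    S = [[(i + j) % n for j in range(n)] for i in range(n)]
    P = [[(i * j) % n for j in range(n)] for i in range(n)]
    return S, P

S, P = tab_op_mod(6)
print(P[2][3])  # 0: the entry revealing that 2 and 3 are proper divisors of zero
```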

```
function Fz = FacPropZero(n)
   % Modulo n: [S,P] = TabOpMod(n)
   % Search table P for all factor pairs whose product equals 0
   % CRL 29/sep/2021
   [S,P] = TabOpMod(n);
   [r,c] = size(P);
   k=1;
   for i=1:r
      for j=1:r
          if (P(i,j)==0)
            Fz{1,k}=[i-1,j-1];
            k=k+1;
          end
       end
   end
end
```

The procedure for obtaining the ring elements that have a multiplicative inverse can be carried out with the following Matlab program.

```
function U = unitsZn(n)
% Modulo n [S,P] = TabOpMod(n)
% Search on the table P all those factors whose product is equal to 1
% CRL 22/sep/2021
[S,P] = TabOpMod(n);
[r,c] = size(P);
k=1;
for i=1:r
    for j=1:r
         if (P(i,j)==1)
            U{1,k}=[i-1,j-1];
            k=k+1;
         end
    end
end
end
```

The results of the execution of the Matlab function for obtaining the list of elements of the modular ring that have multiplicative inverse for the case of *n* ¼ 5 and *n* ¼ 6 are the following:

```
>> U=unitsZn(5)
U =
  1×4 cell array
    {1×2 double} {1×2 double} {1×2 double} {1×2 double}
>> U{1,1}
ans =
      1 1
>> U{1,2}
ans =
    2 3
>> U{1,3}
ans =
    3 2
>> U{1,4}
ans =
    4 4
>> U=unitsZn(6)
U =
    1×2 cell array
       {1×2 double} {1×2 double}
>> U{1,1}
ans =
    1 1
>> U{1,2}
ans =
    5 5
```
The above Matlab routines have both a good and a bad side from an algorithmic perspective. The good side is that, since they are a direct translation of the operation-table generation of a modular ring, their programming is relatively simple. The negative aspect is that the arrays holding the tables can become very large as the modulus $n$ grows: the memory occupied is of the order of $O(n^2)$. It is at this point that the properties and results of discrete mathematics on modular arithmetic come to the rescue and allow us to obtain more efficient algorithms for finding the proper divisors of zero, as well as the list of elements of the modular ring that have a multiplicative inverse. The efficient algorithm for obtaining the list of elements of a modular ring with a multiplicative inverse, and its Matlab implementation, will make use of results from discrete mathematics, such as Euclid's algorithm for obtaining the greatest common divisor and its extended version, called from a routine that solves linear modular equations of the type $ax = b \bmod n$. The multiplicative inverse $x$ of a number $a$ modulo $n$ corresponds to the solution of the modular linear equation $ax = 1 \bmod n$. Based on the results above, we can implement Euclid's algorithm in Matlab as follows.

```
function y = mcdEuclides(a,b)
% Recursive function for the GCD using Euclid's algorithm
% If a < b, the arguments are swapped so that the first is the larger
if a < b
  temp=a;
  a=b;
  b=temp;
end
if b == 0
  y=a;
else
    y=mcdEuclides(b, mod(a,b));
end
end
```
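For readers without Matlab, the same recursion transcribes directly to Python (an illustrative sketch, not part of the original chapter):

```python
def mcd_euclides(a, b):
    """Recursive gcd by Euclid's algorithm; arguments may come in either order."""
    if a < b:
        a, b = b, a  # make the first argument the larger one
    return a if b == 0 else mcd_euclides(b, a % b)

print(mcd_euclides(25, 72))  # 1, so [25] is a unit in Z_72 (Example 1.43)
print(mcd_euclides(72, 22))  # 2
```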
Euclid's algorithm can be extended to obtain the coefficients $x$ and $y$ in the relation $d = \gcd(a, b) = ax + by$. This will be used for obtaining multiplicative inverses in a modular ring. The Matlab implementation of the extended Euclid algorithm is as follows.

```
function [d,x,y] = mcdEuclidesExtend(a,b)
% Extended GCD Euclid's algorithm
if a < b
  temp=a;
  a=b;
  b=temp;
end
if b == 0
  d=a;
  x=1;
  y=0;
else
  [d1,x1,y1]=mcdEuclidesExtend(b, mod(a,b));
  d=d1;
  x=y1;
  y=x1-floor(a/b)*y1;
end
end
```
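A Python transcription helps make the returned coefficients explicit: because of the initial swap, the routine returns $(d, x, y)$ with $d = x\,\max(a,b) + y\,\min(a,b)$ (an illustrative sketch):

```python
def mcd_euclides_extend(a, b):
    """Extended Euclid: (d, x, y) with d = x*hi + y*lo,
    where hi = max(a, b) and lo = min(a, b)."""
    if a < b:
        a, b = b, a
    if b == 0:
        return a, 1, 0
    d, x1, y1 = mcd_euclides_extend(b, a % b)
    return d, y1, x1 - (a // b) * y1

d, x, y = mcd_euclides_extend(25, 72)
print(d, x, y)           # 1 8 -23, i.e. 1 = 8(72) - 23(25) as in Example 1.43
print(x * 72 + y * 25)   # 1
```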
As already mentioned, obtaining the elements of a modular ring that have a multiplicative inverse can be reduced to the problem of calculating the elements $x$ that satisfy the modular equation $ax = 1 \bmod n$. The Matlab implementation of the linear modular equation solver is the following.

```
function S = ModLinEqSolv(a,b,n)
% Modular linear equation solver
% for equations of the form ax = b mod n
% Author: Carlos Rodriguez Lucatero 29/sep/2021
[d,x,y] = mcdEuclidesExtend(a,n);
if (mod(b,d)==0)
   x0=mod((y*(b/d)),n);
   for i=0:d-1
       S(i+1)=mod(x0+(i*(n/d)),n);
   end
else
   S=-1;
end
end
```

If we call this routine with the parameter $b = 1$, it gives the value of $x$ that is the inverse of $a$ in the relation $ax = b \bmod n$, if it exists, or it returns $-1$ to indicate that the element $a$ of the modular ring does not have a multiplicative inverse. The following routine calls the linear modular equation solver to get the list of elements of the modular ring that have an inverse.
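The solver also transcribes to Python; this self-contained sketch bundles the extended gcd so it can be run on its own (illustrative, mirroring the Matlab logic):

```python
def mcd_euclides_extend(a, b):
    # Extended Euclid: d = x*max(a,b) + y*min(a,b).
    if a < b:
        a, b = b, a
    if b == 0:
        return a, 1, 0
    d, x1, y1 = mcd_euclides_extend(b, a % b)
    return d, y1, x1 - (a // b) * y1

def mod_lin_eq_solv(a, b, n):
    """Solve ax ≡ b (mod n): the d = gcd(a, n) solutions, or -1 when none exist."""
    d, x, y = mcd_euclides_extend(a, n)
    if b % d != 0:
        return -1
    x0 = (y * (b // d)) % n  # y is the Bezout coefficient of a (note the swap)
    return [(x0 + i * (n // d)) % n for i in range(d)]

print(mod_lin_eq_solv(25, 1, 72))  # [49]: the inverse from Example 1.43
print(mod_lin_eq_solv(3, 1, 6))    # -1: [3] is a proper divisor of zero in Z_6
```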

```
function L = unitsZnV3(n)
% Get the list of units of a modular ring mod n
% calling the routine that solves linear modular equations
% Author: Carlos Rodriguez Lucatero 14/Ene/2022
L=[];
k=1;
for i=1:n
   S = ModLinEqSolv(i,1,n);
   if (S ~= -1)
      L(1,k)=S;
   end
   k=k+1;
end
if (length(L)==0)
   L=-1;
end
end
```
The entries of the list equal to 0 correspond to elements that have no inverse, and we are therefore talking about the proper divisors of zero of the modular ring in question. The nonzero entries of the list give the inverse of the element of the modular ring represented by the position in the list. For instance, in $(\mathbb{Z}_5, +, \cdot)$, we have that $2 \cdot 3 = 1$ and $4 \cdot 4 = 1$.

```
>> L = unitsZnV3(5)
L =
     1     3     2     4
>> L = unitsZnV3(6)
L =
     1     0     0     0     5
```

### **6. Conclusions**

In this chapter, we addressed some calculation problems that arise in discrete mathematics and showed how the theoretical results provided by discrete mathematics itself can be used to carry out these calculations more efficiently. In particular, we were able to verify the great utility of generating functions for carrying out some calculations more efficiently. Some interesting applications of generating functions to counting graphs with a given property can be found in refs. [11, 12]. These counting techniques can later be applied in the famous probabilistic method [13–15]. Generating functions are also very useful for counting the number of possible partitions of integers; an excellent text that addresses this interesting topic is [16].

It is true that computers have not stopped increasing their storage capacity and the speed of their processing units. However, these resources are not infinite, and there are calculation-intensive problems in topics such as combinatorics or modular arithmetic that can quickly exhaust them. That is why it is worth taking advantage, when possible, of the theoretical results that discrete mathematics itself provides to carry out the required calculations efficiently. For this, we took two specific topics of discrete mathematics where that is possible: one is the obtainment of elements at arbitrary positions of a numerical sequence, and the other is the identification of the elements with a multiplicative inverse and of the proper divisors of zero in a modular ring. We illustrated this by programming functions in Matlab. I hope that the chapter will convince you of the value of taking advantage of the results that discrete mathematics offers to perform efficient calculations in the area of discrete mathematics itself.

#### **Acknowledgements**

I would like to thank the Universidad Autonoma Metropolitana, Unidad Cuajimalpa for the support they gave me to make and publish this book chapter.

#### **Author details**

Carlos Rodriguez Lucatero Universidad Autónoma Metropolitana Unidad Cuajimalpa, CDMX, Mexico

\*Address all correspondence to: crodriguez@cua.uam.mx

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


#### **References**

[1] Rodríguez-Lucatero C. The Moser's formula for the division of the circle by chords problem revisited. 2017. Available from: https://arxiv.org/abs/1701.08155v1

[2] Feller W. An Introduction to Probability Theory and its Applications. Vol. I. New York, USA: Wiley and Sons Inc; 1968. pp. 52-53

[3] Cormen TH, Leiserson CE, Rivest RL, Stein C. Introduction to Algorithms. 3d ed. Massachusetts, USA: The MIT Press; 2009

[4] Miller CD, Heeren VE, Hornsby J. Matematica, razonamiento y aplicaciones. USA: Pearson; 2013

[5] Sedgewick R, Flajolet P. An Introduction to the Analysis of Algorithms, Second Printing. USA: Addison-Wesley; 2001

[6] Grimaldi RP. Discrete and Combinatorial Mathematics: An Applied Introduction. 3rd ed. USA: Addison-Wesley; 1994

[7] Graham RL, Knuth DE, Patashnik O. Concrete Mathematics, 6th Printing. USA: Addison-Wesley; 1990

[8] Wilf HS. Generatingfunctionology. 3rd ed. Massachusetts, USA: A. K. Peters Ltd.; 2006

[9] Hardy GH, Wright EM. Introduction to the Theory of Numbers. 5th ed. Oxford, UK: Oxford Science Publications, reprinted; 1998

[10] Vinográdov I. Fundamentos de la Teoría de números. Moscow, USSR: Editorial MIR; 1977

[11] Harary F, Palmer EM. Graphical Enumeration. New York, NY, USA; London, UK: Academic Press; 1973

[12] Rodríguez-Lucatero C. Combinatorial Enumeration of Graphs. Rijeka: IntechOpen; 2019

[13] Erdös P. Graph theory and probability. Canadian Journal of Mathematics. 1959;**11**:34-38

[14] Alon N, Spencer JH. The Probabilistic Method. 2nd ed. New York: Wiley-Interscience; 2000

[15] Rodríguez-Lucatero C, Alarcón L. Use of enumerative combinatorics for proving the applicability of an asymptotic stability result on discretetime SIS epidemics in complex networks. MDPI Mathematics Open access Journal. 2019;**7**(1). DOI: 10.3390/math7010030

[16] Andrews GE. In: Rota GC, editor. The Theory of Partitions Encyclopedia of Mathematics and its Applications. Vol. 2. USA: Addison-Wesley; 1976

#### **Chapter 5**

## A Criticality Study of Fast Critical Experimental Benchmarks Using MCNP Code to Qualifying Different Evaluations

*Sanae El Ouahdani, Hamid Boukhal, El Mahjoub Chakir, Ahmed Gaga, Houda Elyaakoubi, Mustapha Makhloul, Abdelaziz Ahmed, Abdessamad Didi and Mohamed Bencheikh*

#### **Abstract**

In this chapter we present our MCNP modeling of fast critical experimental benchmarks, aimed at qualifying our cross-section libraries deduced from the evaluations ENDF/B-VII, JEFF-3.1, JENDL-3.3, and JENDL-4 processed by the NJOY code. The benchmarks analyzed are characterized by simple geometries, which helps to obtain a precise calculation. In our neutron calculations, we used the MCNP code (version 5), the reference code for neutron transport calculation with the Monte Carlo method; it is also very efficient for criticality calculation. The cross-section data for all the isotopes that make up the materials of the studied benchmarks were processed into ACE format at a temperature of 300 K using the NJOY 99.9 modular system. A detailed comparison of the criticality results of our simulations was carried out to highlight the influence of these evaluations on the keff calculations.

**Keywords:** benchmark, MCNP code, NJOY code, ENDF/B-VII, multiplication factor, criticality

#### **1. Introduction**

The effective multiplication factor keff is an important parameter in the design, control and safety of reactors. For safety considerations, the keff is desired to be very close to one throughout the core life. The calculation of the keff is rather a complicated problem due to the contributions of different physical phenomena related to the neutron's population change. That is why it is important to validate any reactor calculation tool and any nuclear data library with an accurate prediction of this parameter.

The main objective of the present work is to perform the qualification and analysis of the most recent nuclear data libraries available to the scientific community, in particular ENDF/B-VII [1], JENDL-4.0 [2], JENDL-3.3 [3], and JEFF-3.1 [4], to check the accuracy of cross-section libraries for criticality calculations. For this objective,

a set of fast critical benchmarks with highly enriched uranium, 233U, and 239Pu fuels was used, chosen to cover as closely as possible all types of geometries for simulating the criticality coefficient of interest. The continuous-energy cross sections necessary for the present work were processed by the NJOY system (version 99.9, update 364) [5] in the ACE format. The analysis and interpretation of the results were reinforced by a comparison of the keff parameter with the experimental values excerpted from the literature [6]. These experiments have already been analyzed with the Monte Carlo code MCNP using the American continuous-energy nuclear data ENDF/B-V [6], as well as with other codes.

The first part of the chapter explains the methodology and materials used, namely the MCNP, NJOY, and JANIS codes. The second part describes the characteristics of the different benchmarks selected for the present study. In the third part, we develop the results obtained from the simulation of the keff parameter and their interpretation concerning the qualification of the libraries used. We finish with a conclusion.

#### **2. Methods and materials**

#### **2.1 MCNP code**

The MCNP code [7] (Monte Carlo N-Particle transport) is a code that deals with the transport of neutrons, photons, and electrons, or coupled neutron/photon/electron transport, by the Monte Carlo method, including the possibility of calculating eigenvalues for critical systems. The code deals with arbitrary three-dimensional configurations of materials in geometric cells delimited by surfaces.

For neutrons, the code takes into account, from the processed cross sections, all the reactions proposed in a particular data evaluation (for example, ENDF/B-VI); thermal neutron scattering is treated both by the free-gas model and by S(α, β) tables. For photons, the code takes into account incoherent and coherent scattering, the possibility of fluorescence emission after the photoelectric effect, and absorption in pair production with local emission of the annihilation radiation. It can also treat the bremsstrahlung emitted. In this way, MCNP is qualified as a three-dimensional, continuous-energy code, and it has been proven to simulate physical phenomena correctly. Among the important features that make MCNP a very flexible and easy-to-use code are a powerful general source, criticality source, and surface source; geometry and output plotters; a rich collection of variance-reduction techniques; a flexible result structure called a "tally"; and a large collection of cross-section data.

#### **2.2 NJOY code**

The NJOY nuclear data processing code [5] is a system developed at the Los Alamos laboratory in the USA since 1974. It is a modular code that allows pointwise or multigroup parameters (multigroup cross sections, fission spectra, etc.) to be created from evaluations of so-called basic nuclear data, because the information contained in these evaluated files cannot, as such, be exploited directly by the various transport codes (MCNP, WIMS, APOLLO, EPRI-CELL, etc.). The role of the NJOY system is to process this information and make it usable by these codes. The basic data processed by this system are stored in files in the standardized ENDF (Evaluated Nuclear Data File) format.

*A Criticality Study of Fast Critical Experimental Benchmarks Using MCNP Code to Qualifying… DOI: http://dx.doi.org/10.5772/intechopen.102449*

#### **2.3 JANIS**

The enormous amount of data stored in the standard ENDF format files as well as the different versions or evaluations do not always allow easy access to the information desired by the user for a particular application. JANIS (Java-based Nuclear Information Software) [8] is a program designed to facilitate the visualization and manipulation of nuclear data. It was developed by the "OECD Nuclear Energy Agency", the "CSNSM-Orsay" and the University of Birmingham as an extension of the JEF-PC program. The main objective of this program is to allow the user to access the numerical values and the graphic representation of the various data without any prior information on the ENDF format. It gives maximum flexibility for the comparison of different types of nuclear data.

#### **3. The fast critical benchmarks**

#### **3.1 The benchmarks**

The benchmarks are fixed points of reference used to test the results of modeling and theoretical calculations and to validate nuclear data.

There are two types of benchmarks:


#### **3.2 Characteristics of the fast benchmarks used**

The benchmarks analyzed cover different and simple geometries (spherical, cylindrical, and parallelepiped), with or without reflector, and concern the three main fissile nuclei 235U, 233U, 239Pu in metallic form.

Fast benchmarks use a fast neutron spectrum that covers the energy range greater than 100 keV, and are therefore characterized by very high fission and capture percentages in the fast energy domain.

By way of example, **Tables 1**–**3** give the average percentages of the flux as well as the fission and capture rates in the following three energy intervals [6]:



**Table 1.**

*Average percentages of flux in the three energy intervals.*


#### **Table 2.**

*Average percentages of fissions caused by neutrons in the three energy fields.*


#### **Table 3.**

*Average percentages of neutron capture in the three energy domains.*

#### **3.3 Description of the fast benchmarks studied**

As we mentioned before, to qualify our cross-section libraries as well as the modeling method, we have chosen a series of fast critical experimental benchmarks that cover different geometries and relate to the three main fissile nuclei 235U, 239Pu, and 233U. These benchmarks are derived from the International Handbook of Critical Benchmarks published by the Nuclear Energy Agency (NEA) [6].

#### *3.3.1 Fast benchmarks highly enriched in U-235 (HEU-MET-FAST)*

We have processed a series of 20 carefully chosen highly enriched benchmarks, known as HEU-MET-FAST, with simple geometries. It includes GODIVA, TOPSY, FLATTOP, and other HEU-MET-FAST-xxx assemblies.

**HEU-MF-001**: GODIVA (1950–1959, LANL, USA), sphere containing metallic uranium highly enriched in the isotope 235U (93.71% wt\*).

\*wt = mass fraction.

**HEU-MF-002**: TOPSY-8 (1950, LANL, USA), assemblages of different geometry (depending on the case) containing uranium highly enriched in the 235U isotope (93.55% wt) reflected by natural uranium, (6 cases).

**HEU-MF-003**: ORALLOY (1950, LANL, USA), spherical assemblies containing metallic uranium highly enriched in 235U (93.5% wt), reflected by reflectors of different types and thicknesses depending on the case (12 cases): seven spheres are reflected by 5.08, 7.62, 10.16, 12.7, 17.78, 20.32, 27.94 cm of natural uranium, four

*A Criticality Study of Fast Critical Experimental Benchmarks Using MCNP Code to Qualifying… DOI: http://dx.doi.org/10.5772/intechopen.102449*

spheres are reflected by 4.826, 7.366, 11.43, 16.51 cm of Tungsten carbon, one sphere is reflected by 20.32 cm of nickel.

**HEU-MF-028**: FLATTOP-25 (1964–1966, LANL, USA), a sphere containing metallic uranium highly enriched in the 235U isotope (93.24% wt) reflected by natural uranium.

#### *3.3.2 Fast benchmarks in U-233 (U233-MET-FAST)*

**U233-MF-001:** JEZEBEL-23 (1961, LANL, USA), sphere containing metallic uranium highly enriched in the 233U isotope (98.11% wt).

**U233-MF-002:** (1958, LANL, USA), sphere containing metallic uranium highly enriched in the 233U isotope (50.59% wt) reflected by a layer of 235U (2 cases; in both cases the critical mass varies).

**U233-MF-003:** (1958, LANL, USA), sphere containing metallic uranium highly enriched in the 233U isotope (98.89% wt) reflected by natural uranium, (2 cases, in both cases the critical mass varies).

**U233-MF-004:** (1958, LANL, USA), sphere containing metallic uranium highly enriched in the 233U isotope (98.2% wt) reflected by tungsten (2 cases; in both cases the critical mass and the reflector thickness vary).

**U233-MF-005:** (1958, LANL, USA), sphere containing metallic uranium highly enriched in the 233U isotope (98.2% wt) reflected by beryllium (2 cases; the critical mass varies).

**U233-MF-006:** FLATTOP-23 (1964, LANL, USA), sphere containing metallic uranium highly enriched in the 233U isotope (98.13% wt) reflected by natural uranium.

#### *3.3.3 Fast benchmarks in Pu-239 (Pu-MET-FAST)*

**Pu-MF-001:** JEZEBEL-39 (1950, LANL, USA), metallic plutonium sphere enriched in the 239Pu isotope (95.17%), (4.5 at% 240Pu, 1.02 wt% Ga), without a reflector.

**Pu-MF-002:** JEZEBEL-40 (1964, LANL, USA), metallic plutonium sphere enriched in the 239Pu isotope (20.1 at% 240Pu, 1.01 wt% Ga), without a reflector.

**Pu-MF-005:** (1958, LANL, USA), metallic plutonium sphere enriched in the 239Pu isotope (94.76%), reflected by tungsten.

**Pu-MF-006:** FLATTOP-39 (1964–1966, LANL, USA), metallic plutonium sphere highly enriched in the 239Pu isotope (94.84% wt) reflected by natural uranium.

**Pu-MF-008:** THOR (1960–1961, LANL, USA), metallic plutonium sphere highly enriched in the 239Pu isotope (94.54% wt), reflected by thorium.

**Pu-MF-009:** (1960, LANL, USA) plutonium metallic sphere highly enriched in the 239Pu isotope (94.8% wt), reflected by aluminum.

**Pu-MF-010:** DELTA-PHASE (1958, LANL, USA): metallic plutonium sphere highly enriched in the 239Pu isotope (94.76% wt), reflected by natural uranium.

**Pu-MF-011:** ALPHA-PHASE (1968, LANL, USA), metallic plutonium sphere highly enriched in 239Pu (94.4% wt) reflected by light water.

**Pu-MF-018:** DELTA-PHASE (1958, LANL, USA), metallic plutonium sphere highly enriched in the 239Pu isotope (94.7% wt) reflected by beryllium.

**Pu-MF-023:** (1962, VNIIEF, Russia), metallic plutonium sphere highly enriched in the 239Pu isotope (98.19%), reflected by the graphite.

**Pu-MF-024:** (1964, VNIIEF, Russia), metallic plutonium sphere highly enriched in the 239Pu isotope (98.19%), reflected by polyethylene.

**Pu-MF-025:** (1964, VNIIEF, Russia), metallic plutonium sphere highly enriched in the 239Pu isotope (98.19%), reflected by stainless steel (1.55 cm).

**Pu-MF-026:** (1962, VNIIEF, Russia), metallic plutonium sphere highly enriched in the 239Pu isotope (98.19%), reflected by stainless steel (11.9 cm).

**Pu-MF-027:** (1965, VNIIEF, Russia), metallic plutonium sphere highly enriched in the 239Pu isotope (89.66%), reflected by polyethylene.

**Pu-MF-028:** (1965, VNIIEF, Russia), spherical assembly in metallic plutonium highly enriched in the 239Pu isotope (89% wt) reflected by stainless steel.

**Pu-MF-029:** (1965, VNIIEF, Russia), spherical assembly in metallic plutonium highly enriched in the 239Pu isotope (88% wt), without a reflector.

**Pu-MF-030:** (1965, VNIIEF, Russia), spherical assembly in metallic plutonium highly enriched in the 239Pu isotope (88% wt) reflected by the graphite.

**Pu-MF-031:** (1965, VNIIEF, Russia), spherical assembly in metallic plutonium highly enriched in the 239Pu isotope (88% wt) reflected by polyethylene.

**Pu-MF-032:** (1965, VNIIEF, Russia), spherical assembly in metallic plutonium highly enriched in the 239Pu isotope (88% wt) reflected by stainless steel.

#### **4. Results and interpretations**

For the calculation of the keff parameter, we used the MCNP code based on the Monte Carlo method, which solves the transport equation in its integral form. The method is based on the random sampling of several variables followed by the estimation of their mathematical expectation, which is equal to the value of the physical quantity sought. It simulates the history of each neutron through the different interactions it can undergo in the media where it propagates.

In the present calculation, we simulated 1500 cycles of 30,000 neutrons each; the first 50 cycles are used to ensure the homogeneity of the source distribution. With this number of simulated histories, all keff results are obtained with a standard deviation between +/− 9 and +/− 12 pcm.
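For readers unfamiliar with how such a pcm-level uncertainty arises, the sketch below shows the standard cycle-statistics computation on synthetic keff values (the numbers are invented; this is not MCNP output or the chapter's data):

```python
import math

def keff_statistics(cycle_keffs, inactive=50):
    """Mean keff and its standard deviation (in pcm) over the active cycles."""
    active = cycle_keffs[inactive:]
    m = len(active)
    mean = sum(active) / m
    var = sum((k - mean) ** 2 for k in active) / (m - 1)
    return mean, 1e5 * math.sqrt(var / m)  # 1 pcm = 1e-5 in keff

# Synthetic example: 50 inactive cycles, then 1450 active cycles.
cycles = [0.95] * 50 + [0.999, 1.001] * 725
mean, sigma_pcm = keff_statistics(cycles)
print(round(mean, 4), round(sigma_pcm, 1))  # 1.0 2.6
```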

### **5. Case of fast benchmarks highly enriched in 235U**

The values of the effective multiplication factor keff obtained for the various benchmarks, as well as the average deviations from experiment, are shown and compared to experiment in **Figures 1** and **2**.

**Figure 1** represents the variation of keff according to the cases for the fast benchmarks very highly enriched in 235U. From this figure, we notice that for the majority of the cases studied, the ENDF/B-VII and JEFF-3.1 evaluations give results in good agreement with experiment. The average deviation from experiment is on the order of 0.42% for ENDF/B-VII and 0.39% for JEFF-3.1. All the libraries keep the same difference between themselves and the same behavior for the benchmarks reflected by natural uranium, except for the benchmarks HEU-MF-008 to HEU-MF-011, which are reflected by tungsten carbide, and HEU-MF-012, reflected by nickel. We also note that the JENDL-3.3 evaluation underestimates the criticality in most cases, with an average deviation from experiment equal to 0.6%. There is, however, a marked improvement when upgrading the evaluation from JENDL-3.3 to JENDL-4, although we still have an underestimation compared to the other evaluations. We notice an overestimation of keff for all the evaluations


**Figure 1.** *keff depending on the case for the fast benchmarks very highly enriched in 235U.*

**Figure 2.** *The |C-E|/E ratio of keff for each evaluation.*

concerning the benchmarks reflected by tungsten carbide, which contains carbon; the problem probably stems from a poor estimation of the carbon capture cross-section, especially in the 5 keV to 5 MeV energy interval, where JENDL-4 overestimates it relative to ENDF/B-VII and JEFF-3.1.
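The average deviations quoted in this section are of the form mean of |C−E|/E over the benchmark cases (C = calculated keff, E = experimental keff); a minimal sketch with invented numbers:

```python
def mean_deviation_pct(calculated, experimental):
    """Average |C - E| / E over a set of benchmark cases, in percent."""
    devs = [abs(c - e) / e for c, e in zip(calculated, experimental)]
    return 100.0 * sum(devs) / len(devs)

C = [1.0042, 0.9958, 1.0031]  # hypothetical calculated keff values
E = [1.0000, 1.0000, 1.0000]  # corresponding experimental benchmarks
print(round(mean_deviation_pct(C, E), 2))  # 0.38
```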

#### **5.1 Fast benchmarks in U-233**

**Figures 3** and **4** represent the variation of keff according to the cases for the fast benchmarks in 233U isotope as well as the average deviations from the keff experiment.

**Figure 3.** *keff depending on the case for the fast benchmarks in 233U.*

#### **Figure 4.** *The |C-E|/E ratio of keff for the fast benchmarks in 233U.*

From **Figures 3** and **4**, we find that the best criticality results are given by JENDL-4, with a deviation from experiment of 0.16%; second, we find ENDF/B-VII, with a deviation from experiment of 0.26%. We note an improvement in the transition from JENDL-3.3 to JENDL-4. We also notice that JEFF-3.1 overestimates the criticality, with a deviation from experiment of 0.39%.

#### **5.2 Fast benchmarks in Pu-239**

**Figures 5** and **6** represent the variation of keff according to the cases for the fast benchmarks in 239Pu isotope as well as the average deviations from the experience.

In **Figures 5** and **6**, the variation of keff shows that the processing based on ENDF/B-VII and JEFF-3.1 gives results in good agreement with the experimental values; the deviations from experiment are 0.34% and 0.33%, respectively, with


**Figure 5.** *keff depending on the case for the fast benchmarks in Pu-239.*

the exception of JENDL-3.3 and JENDL-4, which have deviations from criticality greater than the other libraries, of 0.39% and 0.38% respectively. JENDL-3.3 gives keff values far from 1 compared to the other libraries for the Pu-MF-006, Pu-MF-008, and Pu-MF-010 benchmarks; we notice that the problem is corrected in JENDL-4. JENDL-4 is far from 1 compared to the other libraries for the Pu-MF-026, Pu-MF-028, and Pu-MF-032 benchmarks, and we note a deterioration in the transition from JENDL-3.3 to JENDL-4 for these three benchmarks. For the Pu-MF-011, Pu-MF-027, and Pu-MF-031 benchmarks, all the libraries give comparable criticality estimates.

The Pu-MF-026, 028, and 032 benchmarks use stainless steel as a reflector, so JENDL-4's underestimation of criticality compared to the other libraries appears to be due to JENDL-4's overestimation, relative to the other libraries, of the carbon capture cross-section (stainless steel contains carbon).

**Figure 6.** *The |C-E|/E ratio of keff in the case of fast benchmarks in Pu-239.*

### **6. Conclusions**

In this work, we modeled fast critical benchmarks based on the main fissile nuclei, namely 235U, 233U, and 239Pu. We previously generated cross sections using the NJOY code; these cross sections come from the main evaluations ENDF/B-VII, JEFF-3.1, JENDL-3.3, and JENDL-4.

The Monte Carlo calculation that we carried out consisted in determining the keff parameter. The difference between calculation and experiment depends mainly on the type of evaluation used as well as on the fissile core of the benchmark considered, but this difference remains acceptable. We can therefore say that our results are in good agreement with those obtained experimentally.

### **Author details**

Sanae El Ouahdani<sup>1</sup>\*, Hamid Boukhal<sup>2</sup>, El Mahjoub Chakir<sup>3</sup>, Ahmed Gaga<sup>1</sup>, Houda Elyaakoubi<sup>2</sup>, Mustapha Makhloul<sup>2</sup>, Abdelaziz Ahmed<sup>4</sup>, Abdessamad Didi<sup>5</sup> and Mohamed Bencheikh<sup>6</sup>

1 Polydisciplinary Faculty, LRPSI Laboratory, Physics Department, Sultan Moulay Slimane University, Beni Mellal, Morocco

2 Faculty of Sciences, ERSN, Abdelmalek Essaadi University, Tetouan, Morocco

3 Faculty of Sciences, LHESIR, Ibn Tofail University, Kenitra, Morocco

4 Faculty of Lawder, Physics Department, University of Abyan, Abyan, Yemen

5 National Center for Energy Sciences and Nuclear Techniques, Rabat, Morocco

6 Faculty of Sciences and Technologies, Physics Department, Mohammedia Hassan II University of Casablanca, Mohammedia, Morocco

\*Address all correspondence to: selouahdani@gmail.com

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


#### **References**

[1] Chadwick MB et al. Nuclear Data Sheets. 2006;**107**(12):2931-3060

[2] The JEFF-3.1 Nuclear Data Library. JEFF Report 21. ISBN 92-64-02314-3

[3] Shibata K et al. Japanese evaluated nuclear data library version 3 revision-3: JENDL-3.3. Journal of Nuclear Science and Technology. 2002;**39**:1125

[4] Shibata K et al. JENDL-4.0: A new library for nuclear science and engineering. Journal of Nuclear Science and Technology. 2011;**48**(1):1-30

[5] NJOY 99.9. A Code System for Producing Pointwise and Multigroup Neutron and Photon Cross Sections from ENDF/B Evaluated Nuclear Data. New Mexico: Los Alamos National Laboratory; 1999

[6] International Handbook of Evaluated Criticality Safety Benchmark Experiments. NEA/NSC/DOC (95). Vol. I-VII. 2003

[7] MCNPX™ User's Manual. Version 2.7.0. LA-CP-11-00438. Los Alamos National Laboratory; 2011

[8] JANIS 3.0 user's guide. 2007

#### **Chapter 6**

## Applications of Fuzzy Set and Fixed Point Theory in Dynamical Systems

*Praveen Kumar Sharma, Shivram Sharma, Jitendra Kaushik and Palash Goyal*

#### **Abstract**

This chapter discusses various applications of fixed point theory and fuzzy set theory. Fixed point theory and fuzzy set theory are very useful tools that are applicable in almost all branches of mathematical analysis. Many problems that cannot be solved by applying other existing theories can be solved easily using the concepts of fuzzy set theory and fixed point theory. In this chapter, we therefore introduce fuzzy set theory and fixed point theory together with their applications in existing branches of science, engineering, mathematics, and dynamical systems.

**Keywords:** fixed point, fuzzy set, dynamical systems, stability, fuzzy differential equations, integral equations

#### **1. Introduction**

Fixed point theory is an area of mathematics linked to functional analysis and topology, and it is an important subject in the fast-growing domains of nonlinear analysis and nonlinear operators; it is still developing rapidly. In topics as diverse as differential equations, topology, economics, game theory, dynamics, optimal control, and functional analysis, fixed points and fixed point theorems have always been important theoretical tools. Furthermore, with the development of accurate and efficient techniques for computing fixed points, the concept's relevance for applications has expanded dramatically, making fixed point methods a vital weapon in the arsenal of the applied mathematician.

Set theory, general topology, algebraic topology, and functional analysis are just a few of the major fields of mathematics that give natural settings for fixed point theorems, which in turn are used to solve problems in approximation theory, potential theory, game theory, mathematical economics, the theory of differential equations, and other disciplines. It is possible to evaluate various problems from science and engineering using fixed point approaches when one is concerned with a system of differential, integral, or functional equations. This method is particularly beneficial when dealing with control system issues and the theory of elasticity.

Fixed point theorems are the most important tools for proving the existence and uniqueness of solutions to various mathematical models (differential, integral, and partial differential equations, variational inequalities, and so on), which represent phenomena arising in multiple fields such as steady-state temperature distributions, chemical reactions, neutron transport theories, economic theories, epidemics, and fluid flow. They are also employed to study the problem of determining optimal controls for these systems.

Let $F : X \to X$ represent a function on the set $X$. A point $x \in X$ is called a fixed point of $F$ if $F(x) = x$; that is, a point that remains invariant under the transformation $F$ is called a fixed point, and fixed point theorems are theorems that deal with the attributes and existence of fixed points. If $F$ is a function defined on the real numbers by $F(x) = x + 2$, then it has no fixed points, since $x$ is never equal to $x + 2$ for any real number.

Let $F : [0, 1] \to [0, 1]$ be defined by $F(x) = x/10$; then $F(0) = 0$. Hence 0 is a fixed point of $F$.
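Numerically, a fixed point of such a map can be located by simple iteration. A minimal Python sketch (the helper `fixed_point_iterate` and its tolerance are our own illustrative choices):

```python
def fixed_point_iterate(F, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = F(x_n) until successive values agree within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = F(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# F(x) = x/10 maps [0, 1] into itself; its only fixed point is 0.
root = fixed_point_iterate(lambda x: x / 10, 0.7)
```

Starting from any point of $[0, 1]$, the iterates shrink by a factor of 10 per step toward the fixed point 0.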

Poincaré [1] was the first to establish fixed point theory, in 1886. He arrived at the first result on a fixed point using a continuous function.

Brouwer [2] proved the following useful theorem in 1912. Brouwer's fixed point theorem is fundamental in fixed point theory and its applications. It states: "If $C$ is the unit ball in $E^n$ (Euclidean $n$-dimensional space) and $T : C \to C$ is a continuous function, then $T$ has a fixed point in $C$; that is, $Tx = x$ has a solution."

The particular case of this theorem on the real line can be stated in the following way.

Let $T : [0, 1] \to [0, 1]$ be a continuous function. Then $T$ has a fixed point.

Schauder proved the following theorem for compact maps.

Let $X$ be a Banach space and let $C$ be a closed, bounded, convex subset of $X$. Let $T : C \to C$ be a compact map. Then $T$ has at least one fixed point in $C$.

This theorem is important in the numerical treatment of equations in analysis.

Banach [3] investigated the concept of contraction-type mappings in metric spaces in 1922. Using the contraction condition, he established an interesting conclusion, the Banach contraction principle: "Every contraction mapping of a complete metric space into itself has a unique fixed point."

A contraction mapping is continuous, but a continuous map is not necessarily a contraction.

For example, the translation map $T : R \to R$ defined by $Tx = x + p$, $p > 0$, is continuous but not a contraction.
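The Banach principle is constructive: for a contraction, the Picard iterates $x_{n+1} = T(x_n)$ converge from any starting point. A small Python sketch (the example map $\cos$ and the tolerances are illustrative assumptions, not part of the text):

```python
import math

def banach_iterate(T, x0, tol=1e-12, max_iter=10000):
    """Picard iteration x_{n+1} = T(x_n); converges when T is a contraction."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# cos is a contraction on [0, 1] (|cos'| <= sin(1) < 1 there), so the
# iteration converges to the unique solution of x = cos(x) (about 0.739).
p = banach_iterate(math.cos, 0.5)

# By contrast, the translation T(x) = x + p with p > 0 preserves distances
# exactly, is therefore not a contraction, and has no fixed point at all.
```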

The Banach contraction principle has many uses, but it has one major flaw: It forces the function to be continuous throughout the space. Kannan [4, 5] proved an improved conclusion in fixed point theory that avoids this flaw. He proved that "let $F$ be a self-mapping of a complete metric space $X$ satisfying the following inequality

$$d(Fx, Fy) \le \alpha \left[ d(x, Fx) + d(y, Fy) \right] \text{ for all } x, y \in X, \quad 0 < \alpha < 1/2.$$

Then *F* has a unique fixed point."

#### *Applications of Fuzzy Set and Fixed Point Theory in Dynamical Systems DOI: http://dx.doi.org/10.5772/intechopen.105830*

Jungck [6] generalized Banach's fixed point theorem by proving a common fixed point theorem for commuting maps. The Banach fixed point theorem has many applications, but it has one flaw: The definition necessitates function continuity.

In 1982, Sessa [7] refined Jungck's result and proposed the concept of weakly commuting mappings in metric space, demonstrating that "two commuting mappings also commute weakly, but two weakly commuting mappings are not certainly commuting."

Jungck [8] achieved a breakthrough when he introduced the new concept of "compatibility" of mappings and demonstrated its utility in obtaining a common fixed point of mappings. Every weakly commuting pair of mappings is compatible, according to Jungck [9], but the converse need not be true. In his study [10], Singh points out that commutativity does not entail the existence of a sequence of points satisfying the compatibility criterion.

Jungck et al. [11] introduced the concept of compatible mappings of type (A) in 1993 and proved some common fixed point theorems.

Common fixed-point theorems for Mann-type iterations are useful for common fixed point theorems and their applications to the best approximation.

In 1999, Popa [12] proved some fixed point theorems for compatible mappings satisfying an implicit relation. Some other results of Popa have been targeted by many authors and common fixed point theorems in different spaces on using implicit relations.

The notion of convex metric spaces was initially introduced by Takahashi [13]. He and others gave some fixed point theorems for non-expansive mappings in convex metric spaces.

In the metric space setting, a strict contractive condition does not ensure the existence of a common fixed point unless the space is assumed compact or the contractive conditions are replaced by stronger ones, as in [14, 15]. In 1986, Jungck [8] introduced the notion of compatible mappings. This concept has frequently been used to prove existence theorems in common fixed point theory. However, the study of common fixed points of non-compatible mappings is also very interesting.

In an attempt to study fixed points of non-self-mappings, Assad and Kirk [16] gave sufficient conditions for such mappings to have a fixed point by proving a result for multivalued mappings in convex metric spaces. Naimpally et al. [17] proved fixed point theorems in convex metric spaces.

Gähler [18] introduced the concept of 2-metric space and further studied 2-metric and other spaces in [19, 20]. He defined a 2-metric as a real-valued function of point triples on a set X, whose abstract properties were suggested by the area function in Euclidean space. It is natural to expect a 3-metric space, indicated by the volume function.

In 1998, Pant [21] introduced the notion of R-weakly commuting maps and point-wise R-weakly commuting maps in metric spaces. He has observed that two self-maps on a metric space can fail to be point-wise R-weakly commuting only if they possess a coincidence point at which they do not commute.

The systematic study of fixed points of multivalued mappings started with the work of Nadler [22] in 1969, who proved that any multivalued contractive mapping of a complete metric space X into the family of closed bounded subsets of X has a fixed point. Ciric [23] was the first to prove the most general fixed point theorem for a generalized multivalued contraction mapping.

Naimpally et al. [1] obtained some interesting results on fixed point and coincidence point theorems for a hybrid of multivalued and single-valued maps satisfying a contraction condition.

In 1942, Menger [24] suggested associating a distribution function in place of a distance function to any two points in metric space and introduced the concept of probabilistic metric space under statistical metric space.

Sehgal [25] initiated the study of contraction mapping on probabilistic metric space in 1966. Sehgal and Bharucha-Reid [26] proved the Banach contraction principle for probabilistic metric space, stating that "A contraction mapping on a complete probabilistic metric space has a unique fixed point."

Due to various paradigmatic changes in science and mathematics, the concept of uncertainty has also changed: it is in transition from the traditional view, in which uncertainty was characterized chiefly by probability theory, to the modern view.

An important point in the evolution of the modern concept of uncertainty was the publication of a seminal paper by Zadeh [27], a computer scientist at the University of California, USA, who was the first to introduce the concept of fuzzy set theory, as a new way to represent the vagueness of everyday life. Zadeh introduced a theory whose objects, fuzzy sets, are sets with boundaries that are not precise. Membership in a fuzzy set is not a matter of affirmation or denial but a matter of degree. This concept is being used, and found to be appropriate, in solving problems across all disciplines.

The concept of fuzzy sets was initially introduced by Zadeh [27] in 1965 and has caused great interest among pure and applied mathematicians. It has also raised enthusiasm among engineers, biologists, psychologists, and economists.

Let $X$ be a set; a fuzzy set on $X$ is a map $M : X \to I = [0, 1]$. It is to be remarked that fuzzy sets can be regarded as a generalization of characteristic functions, taking any value between 0 and 1 (including 0 and 1), whereas the characteristic function of the whole set $X$ is the constant mapping 1.

In classical set theory, a subset $A$ of a set $X$ can be defined by its characteristic function: $\chi_A(x) = 0$ if $x \notin A$ and $\chi_A(x) = 1$ if $x \in A$.

The mapping may be represented as a set of ordered pairs $\{(x, \chi_A(x))\}$, with exactly one ordered pair present for each element of $X$. The first element of the ordered pair is an element of the set $X$, and the second is its value in $\{0, 1\}$. The value 0 is used to represent non-membership, and the value 1 is used to represent membership of the element in $A$. The truth or falsity of the statement "$x$ is in $A$" is determined by the ordered pair: the statement is true if the second element of the ordered pair is 1, and false if it is 0. Similarly, a fuzzy subset $A$ of a set $X$ can be defined as a set of ordered pairs $\{(x, \mu_A(x)) : x \in X\}$, each with the first element from $X$ and the second element from the interval $[0, 1]$, with exactly one ordered pair present for each element of $X$. This defines a mapping $\mu_A$ between the elements of the set $X$ and the values in the interval $[0, 1]$; that is, $\mu_A : X \to [0, 1]$.

The value 0 represents complete non-membership, the value 1 represents complete membership, and the values in between represent intermediate degrees of membership.
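To make the contrast with a crisp characteristic function concrete, the following Python sketch builds a triangular membership function, a standard illustrative choice (the particular support points are our own):

```python
def triangular(a, b, c):
    """Membership function rising linearly from a to the peak b,
    then falling linearly to c; zero outside (a, c)."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# A fuzzy set "approximately 20": full membership at 20, none outside (15, 25),
# and intermediate degrees of membership in between.
approx_20 = triangular(15.0, 20.0, 25.0)
degrees = [approx_20(v) for v in (15.0, 17.5, 20.0)]
```

Here `degrees` is `[0.0, 0.5, 1.0]`, whereas a crisp characteristic function could only return 0 or 1.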

The fuzzy set theory has an application in neural network theory, robotics, reliability, stability theory, mathematical programming, modeling theory, engineering sciences, medical sciences, image processing, control theory, communication, etc.

In the last three decades, the development and growth of fuzzy mathematics have been tremendous, and a large number of authors have studied applications of fuzzy set theory in different engineering branches.

Zimmermann and Sebastian [28, 29] defined knowledge base system design and intelligent system design support. Tsourveloudis et al. [30] defined machine flexibility and established a result.

For fuzzy mathematics, we refer to Bedard [31], Butnariu [32], Grabiec [33], and Weiss [34].

#### **2. Main results/discussion/application of fuzzy set and a fixed point in a dynamical system**

In this chapter, our main aim is to give an application of fuzzy sets and fixed points in dynamical systems.

**A dynamical system** is one in which a function describes the time dependence of a point in an ambient space, that is, one in which something evolves with time. Examples include mathematical models that represent the swinging of a clock pendulum, water flow in a conduit, the quantity of fish in a lake each spring, population growth, and so on.

A dynamic system can be described over either discrete time steps or a continuous timeline.

**Discrete-time dynamical system-**

$$\mathbf{x}\_t = F(\mathbf{x}\_{t-1}, t) \tag{1}$$

This type of model is called a difference equation, a recurrence equation, or an iterative map (if the right-hand side does not depend on *t*).
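As a concrete instance of Eq. (1), the logistic map is a classic iterative map; a short Python sketch (the parameter values are illustrative):

```python
def simulate(F, x0, steps):
    """Iterate the difference equation x_t = F(x_{t-1}) and return the trajectory."""
    traj = [x0]
    for _ in range(steps):
        traj.append(F(traj[-1]))
    return traj

# Logistic map x_t = r x_{t-1} (1 - x_{t-1}) with r = 2: trajectories settle
# at x* = 1 - 1/r = 0.5, which is exactly a fixed point of F.
r = 2.0
traj = simulate(lambda x: r * x * (1 - x), 0.2, 50)
```

Note that the long-run behaviour is again described by a fixed point, $x^* = F(x^*)$.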

**Continuous-time dynamical system-**

$$\frac{d\mathbf{x}}{dt} = F(\mathbf{x}, t) \tag{2}$$

This type of model is called a differential equation.

In both cases, $x_t$ or $x$ is the system's state variable at time $t$, which may take a scalar or vector value. $F$ is a function that determines the rule by which the system changes its state over time.

Dynamical systems are often modeled by differential equations.
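When no closed-form solution of Eq. (2) is available, it can be integrated numerically; the simplest scheme is the forward Euler method. A minimal sketch (the step count and the test equation are illustrative choices):

```python
import math

def euler(F, x0, t0, t1, n):
    """Integrate dx/dt = F(x, t) with n forward-Euler steps from t0 to t1."""
    h = (t1 - t0) / n
    x, t = x0, t0
    for _ in range(n):
        x += h * F(x, t)
        t += h
    return x

# dx/dt = -x with x(0) = 1 has the exact solution x(t) = exp(-t).
approx = euler(lambda x, t: -x, 1.0, 0.0, 1.0, 100000)
```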

So, here we discuss an application of fuzzy Laplace transforms to solve differential equations/fuzzy differential equations:

Fuzzy differential equations (FDEs) are a natural technique to model dynamic systems under uncertainty. One of the most basic FDEs, the first-order linear fuzzy differential equation, can be found in many applications. Chang and Zadeh [35] were the first to present the fuzzy derivative notion. The concept of FDEs was used in the analysis of fuzzy dynamical problems by Kandel and Byatt [4, 36]. FDEs and their fuzzy initial and boundary value problems are solved using the fuzzy Laplace transform approach. Fuzzy Laplace transforms make it easier to solve an FDE by converting it to an algebraic problem.

Operational calculus is an important area of applied mathematics that involves switching from calculus operations to algebraic operations on transforms. The fuzzy Laplace transform approach is practically the essential functional method for engineers. The fuzzy Laplace transform also benefits by directly addressing difficulties, fuzzy initial value problems without identifying a general solution, and non-homogeneous differential equations without first solving the corresponding homogeneous equation.

A number of mathematicians have studied and developed several approaches to the FIVP [37–41]. They initially used H-differentiability for fuzzy-valued functions, and using this concept they examined the existence and uniqueness of the solution of the FIVP [37, 41, 42]. This concept has a drawback: The fuzzy solution behaves quite differently from the crisp solution. Bede and Gal [43] introduced a new idea called strongly generalized differentiability, which was studied and used for solving FIVPs in [44–47]. This concept allows us to overcome the drawback mentioned above, so we use this differentiability concept to find the solution of the FIVP in this chapter.

The fuzzy Laplace transform method solves FDEs together with the corresponding fuzzy initial and boundary values. This method solves FIVPs/FDEs directly and gives the complete solution without determining the complementary and particular solutions (one can refer to [44, 45, 48–51]). This chapter uses this technique to solve FIVPs/FDEs/ODEs.

We need some definitions and theorems, given in the following, to solve the fuzzy differential equation by the fuzzy Laplace transform.

**Definition 2.1.** ([46]). Let $f(x)$ be a continuous fuzzy-valued function. Suppose that $f(x) \odot e^{-px}$ is improper fuzzy Riemann integrable on $[0, \infty)$; then $\int_0^\infty f(x) \odot e^{-px}\,dx$ is called the fuzzy Laplace transform of $f$ and is denoted by

$$L[f(x)] = \int_0^\infty f(x) \odot e^{-px}\,dx = \left( \int_0^\infty \underline{f}(x,a) e^{-px}\,dx,\; \int_0^\infty \overline{f}(x,a) e^{-px}\,dx \right) = \left( l\left[\underline{f}(x,a)\right],\; l\left[\overline{f}(x,a)\right] \right),$$

where

$$l\left[\underline{f}(x,a)\right] = \int_0^\infty \underline{f}(x,a) e^{-px}\,dx, \qquad l\left[\overline{f}(x,a)\right] = \int_0^\infty \overline{f}(x,a) e^{-px}\,dx.$$
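Since Definition 2.1 reduces the fuzzy transform to two crisp improper integrals, it can be checked numerically. A rough Python sketch (the level-set functions $\underline{f}(x,a) = a e^{-x}$, $\overline{f}(x,a) = (2-a)e^{-x}$ and the quadrature routine are our own illustrative assumptions):

```python
import math

def laplace_num(f, p, upper=60.0, n=200000):
    """Left-Riemann approximation of int_0^inf f(x) e^(-p x) dx,
    truncated at x = upper (the tail is negligible for decaying f)."""
    h = upper / n
    return h * sum(f(i * h) * math.exp(-p * i * h) for i in range(n))

# For endpoint functions a*e^(-x) and (2 - a)*e^(-x), the fuzzy Laplace
# transform is the pair ( a/(p + 1), (2 - a)/(p + 1) ).
a, p = 0.4, 1.0
lower = laplace_num(lambda x: a * math.exp(-x), p)
upper_end = laplace_num(lambda x: (2 - a) * math.exp(-x), p)
```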

**Theorem 2.2.** (Chalco-Cano et al. [46]). Let $f : R \to E$ be a fuzzy-valued function and denote $f(t) = \left( \underline{f}(t,\alpha),\; \overline{f}(t,\alpha) \right)$ for each $\alpha \in [0, 1]$. Then

1. If $f$ is (i)-differentiable, then $\underline{f}(t,\alpha)$ and $\overline{f}(t,\alpha)$ are differentiable functions and $f'(t) = \left( \underline{f}'(t,\alpha),\; \overline{f}'(t,\alpha) \right)$.

2. If $f$ is (ii)-differentiable, then $\underline{f}(t,\alpha)$ and $\overline{f}(t,\alpha)$ are differentiable functions and $f'(t) = \left( \overline{f}'(t,\alpha),\; \underline{f}'(t,\alpha) \right)$.

**Formulae 2.3.** Consider the fuzzy initial value problem

$$\begin{cases} y'(t) = f(t, y(t)) \\ y(0) = \left( \underline{y}(0,a),\; \overline{y}(0,a) \right), \quad 0 < a \le 1, \end{cases}$$

where $f : R^+ \times E \to E$ is a continuous fuzzy mapping. Applying the fuzzy Laplace transform, we have $L[y'(t)] = L[f(t, y(t))]$.

Case I: If $y'(t)$ is (i)-differentiable, then $y'(t) = \left( \underline{y}'(t,a),\; \overline{y}'(t,a) \right)$ and

$$L[y'(t)] = s\,L[y(t)] \ominus y(0),$$

or, in level sets,

$$l\left[\underline{f}(t, y(t), a)\right] = s\, l\left[\underline{y}(t,a)\right] - \underline{y}(0,a), \qquad l\left[\overline{f}(t, y(t), a)\right] = s\, l\left[\overline{y}(t,a)\right] - \overline{y}(0,a),$$

that is,

$$l\left[\underline{f}(t, y(t), a)\right] = s\, H_1(s,a) - \underline{y}(0,a), \qquad l\left[\overline{f}(t, y(t), a)\right] = s\, K_1(s,a) - \overline{y}(0,a),$$

where $l\left[\underline{y}(t,a)\right] = H_1(s,a)$ and $l\left[\overline{y}(t,a)\right] = K_1(s,a)$.

Case II: If $y'(t)$ is (ii)-differentiable, then $y'(t) = \left( \overline{y}'(t,a),\; \underline{y}'(t,a) \right)$ and

$$L[y'(t)] = \left(-y(0)\right) \ominus \left(-s\,L[y(t)]\right),$$

or, in level sets,

$$l\left[\underline{f}(t, y(t), a)\right] = s\, l\left[\overline{y}(t,a)\right] - \overline{y}(0,a), \qquad l\left[\overline{f}(t, y(t), a)\right] = s\, l\left[\underline{y}(t,a)\right] - \underline{y}(0,a),$$

that is,

$$l\left[\underline{f}(t, y(t), a)\right] = s\, K_2(s,a) - \overline{y}(0,a), \qquad l\left[\overline{f}(t, y(t), a)\right] = s\, H_2(s,a) - \underline{y}(0,a),$$

where $l\left[\underline{y}(t,a)\right] = H_2(s,a)$ and $l\left[\overline{y}(t,a)\right] = K_2(s,a)$.

Now, we solve a fuzzy differential equation by the fuzzy Laplace transform method.

**Example 2.4.** Consider the initial value problem

$$\begin{cases} y'(t) = -y(t), \quad 0 \le t \le T \\ y(0) = \left( \underline{y}(0,a),\; \overline{y}(0,a) \right). \end{cases}$$

Applying the fuzzy Laplace transform method, we have $L[y'(t)] = L[-y(t)]$, where $L[y'(t)] = \int_0^\infty y'(t) \odot e^{-pt}\,dt$. If $y(t)$ is (i)-differentiable, then by Case I we have $L[y'(t)] = s\,L[y(t)] \ominus y(0)$.

Therefore $L[-y(t)] = s\,L[y(t)] \ominus y(0)$, that is,

$$-l\left[\overline{y}(t,a)\right] = s\, l\left[\underline{y}(t,a)\right] - \underline{y}(0,a)$$

$$-l\left[\underline{y}(t,a)\right] = s\, l\left[\overline{y}(t,a)\right] - \overline{y}(0,a) \tag{3}$$

Hence, the solution of system (3) is:

$$l\left[\overline{y}(t,a)\right] = \overline{y}(0,a)\left(\frac{s}{s^2-1}\right) - \underline{y}(0,a)\left(\frac{1}{s^2-1}\right)$$

$$l\left[\underline{y}(t,a)\right] = \underline{y}(0,a)\left(\frac{s}{s^2-1}\right) - \overline{y}(0,a)\left(\frac{1}{s^2-1}\right)$$

Thus

$$\overline{y}(t,a) = \overline{y}(0,a)\, l^{-1}\left[\frac{s}{s^2-1}\right] - \underline{y}(0,a)\, l^{-1}\left[\frac{1}{s^2-1}\right]$$

$$\underline{y}(t,a) = \underline{y}(0,a)\, l^{-1}\left[\frac{s}{s^2-1}\right] - \overline{y}(0,a)\, l^{-1}\left[\frac{1}{s^2-1}\right]$$

Finally, we have:

$$\overline{y}(t,a) = e^{-t}\left(\frac{\underline{y}(0,a)+\overline{y}(0,a)}{2}\right) + e^{t}\left(\frac{\overline{y}(0,a)-\underline{y}(0,a)}{2}\right)$$

$$\underline{y}(t,a) = e^{-t}\left(\frac{\underline{y}(0,a)+\overline{y}(0,a)}{2}\right) - e^{t}\left(\frac{\overline{y}(0,a)-\underline{y}(0,a)}{2}\right)$$

If $y(t)$ is (ii)-differentiable, then by Case II we have $L[y'(t)] = \left(-y(0)\right) \ominus \left(-s\,L[y(t)]\right)$. Therefore $L[-y(t)] = \left(-y(0)\right) \ominus \left(-s\,L[y(t)]\right)$, that is,

$$-l\left[\overline{y}(t,a)\right] = s\, l\left[\overline{y}(t,a)\right] - \overline{y}(0,a)$$

$$-l\left[\underline{y}(t,a)\right] = s\, l\left[\underline{y}(t,a)\right] - \underline{y}(0,a) \tag{4}$$

Hence, the solution of system (4) is:

$$l\left[\overline{y}(t,a)\right] = \overline{y}(0,a)\left(\frac{1}{1+s}\right)$$

$$l\left[\underline{y}(t,a)\right] = \underline{y}(0,a)\left(\frac{1}{1+s}\right)$$

Thus

$$\overline{y}(t,a) = \overline{y}(0,a)\, l^{-1}\left[\frac{1}{1+s}\right]$$

$$\underline{y}(t,a) = \underline{y}(0,a)\, l^{-1}\left[\frac{1}{1+s}\right]$$

Finally, we have:

$$\overline{y}(t,a) = \overline{y}(0,a)\, e^{-t}$$

$$\underline{y}(t,a) = \underline{y}(0,a)\, e^{-t}$$

**Remark 2.5**. By following the above procedure, the solution of a fuzzy IVP with given initial conditions can be obtained. The solutions of simultaneous fuzzy linear differential equations and fuzzy BVPs can also be obtained.
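Level-set solutions of this kind are easy to sanity-check numerically. The sketch below (the endpoint values are illustrative) verifies by finite differences that the exponential pair $\underline{y}(t) = c_+ e^{-t} + c_- e^{t}$, $\overline{y}(t) = c_+ e^{-t} - c_- e^{t}$ solves the crossed level-set system $\underline{y}' = -\overline{y}$, $\overline{y}' = -\underline{y}$ arising in the (i)-differentiable case:

```python
import math

# Illustrative endpoints of the fuzzy initial value.
y0_lo, y0_hi = 1.0, 3.0
c_plus, c_minus = (y0_lo + y0_hi) / 2, (y0_lo - y0_hi) / 2

def y_lo(t):  # lower branch of the candidate solution
    return c_plus * math.exp(-t) + c_minus * math.exp(t)

def y_hi(t):  # upper branch of the candidate solution
    return c_plus * math.exp(-t) - c_minus * math.exp(t)

def deriv(f, t, h=1e-6):
    """Central finite-difference derivative."""
    return (f(t + h) - f(t - h)) / (2 * h)

# Residuals of the crossed system y_lo' = -y_hi and y_hi' = -y_lo at t = 0.7.
t = 0.7
r1 = deriv(y_lo, t) + y_hi(t)
r2 = deriv(y_hi, t) + y_lo(t)
```

Both residuals vanish to within finite-difference accuracy.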

Now, we discuss the application of fixed points to the existence and uniqueness of the solution of an ordinary differential equation.

The existence and uniqueness of a solution of an IVP can be easily established using the fixed point technique: the Banach fixed point theorem can be applied to derive the existence and uniqueness of the solution of an initial value problem, provided the function used in the IVP satisfies the Lipschitz condition.

**Definition 2.6**. Contraction Mapping:

Let $X$ be a complete normed linear space (Banach space). A mapping $F : X \to X$ is called a contraction if $\|Fx - Fy\| \le \alpha \|x - y\|$ for all $x, y \in X$, for some $\alpha < 1$.

Example: If $F(x) = \frac{x}{2}$, then $F$ is a contraction with $\alpha = \frac{1}{2}$.

**Definition 2.7**. Banach Fixed-Point Theorem (or Banach Contraction Principle):

If $F : X \to X$ is a contraction, then $F$ has a unique fixed point, say $x_1$, with $F x_1 = x_1$. Further, the sequence $\{x_n\}$ defined by $x_n = F(x_{n-1})$ for all $n = 1, 2, 3, \ldots$ converges to the unique fixed point $x_1$ of $F$.

**Definition 2.8**. Generalized Banach contraction principle:

If $F^n$ is a contraction for some $n \ge 1$, where $F : X \to X$ and $X$ is a Banach space, then $F$ has a unique fixed point.
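A simple linear example in $R^2$ shows how the generalized principle strictly extends Banach's (the particular map is our own illustrative choice):

```python
def F(v):
    """F(x, y) = (2y, 0) doubles distances in the y-direction, so it is
    not a contraction on R^2."""
    x, y = v
    return (2 * y, 0.0)

def F2(v):
    """F o F is the zero map, hence a contraction for every 0 < alpha < 1."""
    return F(F(v))

# F^2 sends any point straight to (0, 0), the unique fixed point of F.
image = F2((17.0, -4.0))
```

Although $F$ itself is not a contraction, $F^2$ is, and $F$ has the unique fixed point $(0, 0)$, exactly as the generalized principle predicts.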

**Theorem 2.9.** Consider an Initial Value Problem (IVP)

$$\frac{dy}{dx} = f(x, y), \qquad y(x_0) = y_0$$

Let $f(x, y)$ be a continuous function defined on a domain $D \subseteq R^2$, and let $f$ be Lipschitz continuous with respect to $y$ on $D$. Then there exists a unique solution to the IVP on an interval $|x - x_0| \le h$, where $h = \min\left(a, \frac{b}{M}\right)$, $M = \max |f(x, y)|$ for $(x, y) \in R$, and $R = \{(x, y) : |x - x_0| \le a,\; |y - y_0| \le b\} \subset D$.

Further, the unique solution can be computed from the successive approximation scheme $y_{n+1}(x) = y_0 + \int_{x_0}^{x} f(t, y_n(t))\,dt$, $y_0(x) = y_0$, for all $n = 0, 1, 2, \ldots$
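The successive approximation scheme can also be carried out numerically by approximating the integral on a grid. A compact Python sketch (the grid size, iteration count, and the test problem $y' = y$, $y(0) = 1$ are illustrative choices):

```python
import math

def picard_step(f, y_prev, y0, xs):
    """One Picard iteration y_next(x) = y0 + int_{x0}^{x} f(t, y_prev(t)) dt,
    evaluated on the grid xs with the trapezoidal rule."""
    y_next = [y0]
    for i in range(1, len(xs)):
        h = xs[i] - xs[i - 1]
        incr = h * (f(xs[i - 1], y_prev[i - 1]) + f(xs[i], y_prev[i])) / 2
        y_next.append(y_next[-1] + incr)
    return y_next

# y' = y, y(0) = 1 on [0, 1]: the Picard iterates converge to exp(x).
xs = [i / 1000 for i in range(1001)]
y = [1.0] * len(xs)          # y_0(x) = 1
for _ in range(10):
    y = picard_step(lambda t, u: u, y, 1.0, xs)
```

After ten iterations the endpoint value of the iterate is already very close to $e$.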

**Proof:** The IVP is solvable if and only if the integral equation $y(x) = y_0 + \int_{x_0}^{x} f(t, y(t))\,dt$ is solvable. Let $X = C[x_0, x_1]$, the set of all continuous functions defined on $[x_0, x_1]$, and define $\|x\| = \sup_{t \in [x_0, x_1]} |x(t)|$. Then $(X, \|\cdot\|)$ is a complete normed linear space.

Define

$$y(x) = y_0 + \int_{x_0}^{x} f(t, y(t))\,dt \tag{5}$$

Define an operator $F : C[x_0, x_1] \to C[x_0, x_1]$ by $(Fy)(x) = y_0 + \int_{x_0}^{x} f(t, y(t))\,dt$.

If *F* has a fixed point, that is, there exists a *y* such that *y* ¼ *Fy*, then the fixed point *y* is a solution to the integral equation (5).

Now, we will show that $F^n$ is a contraction for some large $n$.

$$(Fy)(x) = y_0 + \int_{x_0}^{x} f(t, y(t))\,dt$$

Let $y_1, y_2 \in C[x_0, x_1]$, and let $\alpha$ be the Lipschitz constant of $f$; then

$$\left| \left( F\mathbf{y}\_1 \right)(\mathbf{x}) - \left( F\mathbf{y}\_2 \right)(\mathbf{x}) \right| = \left| \int\_{\mathbf{x}\_0}^{\mathbf{x}} \left( f(t, \mathbf{y}\_1(t) - f(t, \mathbf{y}\_2(t))) dt \right) \right|$$

$$\leq a \int\_{\mathbf{x}\_0}^{\mathbf{x}} \left| \mathbf{y}\_1(t) - \mathbf{y}\_2(t) \right| dt \tag{6}$$

$$\begin{aligned} \leq a \int\_{x\_{0t}}^{x} \sup\_{t \in [x\_{0, \infty}]} & \left| y\_1(t) - y\_2(t) \right| dt \\ \leq a \int\_{x\_0}^{x} & \left|| y\_1(t) - y\_2(t) \right|| dt \\ \leq a(x - x\_0) \left|| y\_1 - y\_2 \right|| \\ \left| \left( F^2 \mathcal{Y}\_1 \right)(x) - \left( F^2 \mathcal{Y}\_2 \right)(x) \right| &= \left| F(F\mathcal{Y}\_1)(x) - F(F\mathcal{Y}\_2)(x) \right| \\ \leq a \int\_{x\_0}^{x} & \left| F\mathcal{Y}\_1(t) - F\mathcal{Y}\_2(t) \right| dt \\ \leq a^2 \int\_{x\_0}^{x} & (x - x\_0) \left|| \mathcal{Y}\_1 - \mathcal{Y}\_2 \right|| dt \\ \leq \frac{a^2}{2} (x - x\_0)^2 \left|| \mathcal{Y}\_1 - \mathcal{Y}\_2 \right|| \end{aligned} \tag{7}$$

$$\left\|\left(F^2 y_1\right) - \left(F^2 y_2\right)\right\| \leq \frac{a^2}{2!}\left(x_1 - x_0\right)^2 \left\|y_1 - y_2\right\|$$

$$\left\|\left(F^3 y_1\right) - \left(F^3 y_2\right)\right\| \leq \frac{a^3}{3!}\left(x_1 - x_0\right)^3 \left\|y_1 - y_2\right\|$$

$$\vdots$$

$$\left\|\left(F^n y_1\right) - \left(F^n y_2\right)\right\| \leq \frac{a^n}{n!}\left(x_1 - x_0\right)^n \left\|y_1 - y_2\right\| \tag{8}$$

Since $\frac{a^n}{n!}(x_1 - x_0)^n \to 0$ as $n \to \infty$, the operator $F^n$ is a contraction for $n$ large enough; hence $F$ has a unique fixed point, which is the unique solution of (5).
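The contraction factor in (8) can be tabulated directly; it tends to zero even when $a(x_1 - x_0) > 1$, so some iterate $F^n$ is a contraction. A minimal sketch (the values $a = 2$ and $x_1 - x_0 = 3$ are illustrative choices, not taken from the text):

```python
from math import factorial

def contraction_factor(a, width, n):
    """Lipschitz constant of F^n from (8): a**n * width**n / n!."""
    return a**n * width**n / factorial(n)

# Even when a*(x1 - x0) = 6 > 1, the factor eventually falls below 1,
# so some iterate F^n is a contraction.
factors = [contraction_factor(2.0, 3.0, n) for n in range(1, 20)]
n_star = next(n for n, c in enumerate(factors, start=1) if c < 1.0)
```

Here `n_star` is the first iterate for which the bound certifies a contraction; the factorial in the denominator always wins eventually.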

For example, consider the IVP $\frac{dy}{dx} = \frac{2y}{x}$, $y(1) = 1$, so that $f(x, y) = 2y/x$, $x_0 = 1$, and $y_0 = 1$. The successive approximations are

$$y_{n+1}(x) = y_0 + \int_{x_0}^{x} f(t, y_n(t))\, dt$$

$$\mathcal{Y}\_1(\mathbf{x}) = \mathbf{1} + \int\_1^{\mathbf{x}} f\left(t, \mathcal{Y}\_0(t)\right) dt$$

$$y_1(x) = 1 + \int_1^{x} \frac{2}{t}\, dt$$

$$y_1(x) = 1 + 2\log x$$

*Applications of Fuzzy Set and Fixed Point Theory in Dynamical Systems DOI: http://dx.doi.org/10.5772/intechopen.105830*

$$\begin{aligned} y_2(x) &= 1 + \int_1^{x} f(t, y_1(t))\, dt \\ &= 1 + \int_1^{x} \frac{2(1 + 2\log t)}{t}\, dt \\ &= 1 + 2\log x + 2(\log x)^2 \end{aligned}$$

$$\begin{aligned} y_3(x) &= 1 + \int_1^{x} f(t, y_2(t))\, dt \\ &= 1 + \int_1^{x} \frac{2\left(1 + 2\log t + 2(\log t)^2\right)}{t}\, dt \\ &= 1 + 2\log x + 2(\log x)^2 + \frac{4}{3}(\log x)^3 \end{aligned}$$

Continuing this process, $y_n(x) = \sum_{k=0}^{n} \frac{(2\log x)^k}{k!}$; letting $n \to \infty$, we obtain

$$y(x) = e^{2\log x} = x^2$$
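Each Picard iterate for $y' = 2y/x$, $y(1) = 1$ is a partial sum of the exponential series in $2\log x$, so the convergence to $x^2$ can be checked numerically. A small stdlib-only sketch (the sample points and iteration count are arbitrary choices):

```python
import math

def picard_iterate(x, n):
    """n-th Picard iterate for y' = 2y/x, y(1) = 1: a polynomial in log x."""
    L = math.log(x)
    return sum((2.0 * L)**k / math.factorial(k) for k in range(n + 1))

# The iterates converge to exp(2 log x) = x**2 on any bounded interval.
err = max(abs(picard_iterate(x, 30) - x * x) for x in (1.0, 1.5, 2.0, 3.0))
```

With 30 iterates the maximum error on the sample points is already far below machine-visible plotting precision.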

**Example 2.11**. Consider the IVP $\frac{dy}{dx} = y$, $y(0) = 1$. Here $f(x, y) = y$, $x_0 = 0$, $y_0 = 1$.

$$y_{n+1}(x) = y_0 + \int_{x_0}^{x} f(t, y_n(t))\, dt$$

$$y_1(x) = 1 + \int_0^{x} y_0(t)\, dt$$

$$y_1(x) = 1 + \int_0^{x} 1\, dt = 1 + x$$

$$y_2(x) = 1 + \int_0^{x} y_1(t)\, dt = 1 + x + \frac{x^2}{2}$$

$$y_3(x) = 1 + \int_0^{x} y_2(t)\, dt = 1 + x + \frac{x^2}{2} + \frac{x^3}{6}$$

$$\vdots$$

$$y_n(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots + \frac{x^n}{n!} = \sum_{k=0}^{n} \frac{x^k}{k!}$$

Taking $n \to \infty$, we have $y_n(x) \to e^x$. Thus $y(x) = e^x$ is a solution of the given IVP.
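The iterates here are exactly the Taylor partial sums of $e^x$, which makes the convergence easy to verify numerically. A minimal sketch (the sample points and truncation order are arbitrary choices):

```python
import math

def picard_iterate(x, n):
    """n-th Picard iterate for y' = y, y(0) = 1: the Taylor partial sum of e^x."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

# Partial sums converge to exp(x) uniformly on bounded intervals.
err = max(abs(picard_iterate(x, 25) - math.exp(x)) for x in (0.0, 0.5, 1.0, 2.0))
```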

Stability of the solution: In these examples, the functions $f(x, y) = \frac{2y}{x}$ and $f(x, y) = y$ satisfy a Lipschitz condition in $y$, so the solution depends continuously on the initial condition $y_0 = 1$; that is, the solution is stable with respect to the initial data.

#### **3. Conclusion**

In this chapter, we discussed applications of fuzzy sets and fixed points in dynamical systems, and touched on applications of fuzzy sets and fixed points in other directions. We gave a solution of fuzzy ordinary differential equations with initial conditions by the fuzzy Laplace transform method, and established the existence and uniqueness of solutions of first-order ODEs by the fixed point technique. We also worked through examples of existence and uniqueness problems and checked the stability of the solutions.

#### **Author details**

Praveen Kumar Sharma<sup>1</sup> \*, Shivram Sharma<sup>2</sup> , Jitendra Kaushik<sup>3</sup> and Palash Goyal<sup>4</sup>

1 Department of Mathematics, SVIS, Shri Vaishnav Vidyapeeth Vishwavidyalaya, Indore, M.P., India


\*Address all correspondence to: praveen\_jan1980@rediffmail.com

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


#### **References**

[1] Poincaré H. Analysis situs. Journal de l'École Polytechnique. 1895;**2**:1-123

[2] Browder FE. On a generalisation of Schauder's fixed point theorem. Duke Mathematical Journal. 1959;**26**: 291-303

[3] Banach S. Théorie des opérations linéaires (in French). Monografie Matematyczne. Warsaw: Z Subwencji Funduszu Kultury Narodowej; 1932

[4] Kandel A, Byatt WJ. Fuzzy differential equations, Proceedings of International Conference on Cybernetics and Society. Tokyo, Japan: IEEE; 1978, pp. 1213-1216

[5] Kannan R. Some results on fixed points II. American Mathematical Monthly. 1969;**76**:405-408

[6] Jungck G. Commuting mappings and fixed points. American Mathematical Monthly. 1976;**83**:261-263

[7] Sessa S. On weak commutativity condition of mappings in consideration of a fixed point. Publications de l'Institut Mathematique. 1982;**32**(46): 149-153

[8] Jungck G. Compatible mappings and common fixed points. International Journal of Mathematics and Mathematical Sciences. 1986;**9**:771-779

[9] Jungck G. Compatible mappings and common fixed points. International Journal of Mathematics and Mathematical Sciences. 1988;**11**:285-288

[10] Sherwood H. Complete probabilistic metric spaces. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete. 1971;**20**:117-228

[11] Jungck G, Murthy PP, Cho YJ. Compatible mappings of type (A) and common fixed points. Mathematica Japonica. 1993;**38**(2):381-390

[12] Popa V. Some fixed point theorems for compatible mappings satisfying an implicit relation. Demonstratio Mathematica. 1999;**32**(1):157-163

[13] Takahashi W. A convexity in metric spaces and nonexpansive mappings. Kodai Mathematical Seminar Reports. 1970;**22**:142-149

[14] Jachymski J. Common fixed-point theorems for some maps families. Indian Journal of Pure and Applied Mathematics. 1994;**25**:925-937

[15] Pant RP. Common fixed points of the sequence of mappings. Ganita. 1996;**47**: 43-49

[16] Assad NA, Kirk WA. Fixed point theorems for set-valued mappings of contractive type. Pacific Journal of Mathematics. 1972;**43**:553-562

[17] Naimpally SA, Singh SL, Whitfield JHM. Fixed points in convex metric spaces. Mathematica Japonica. 1984;**29**:585-597

[18] Gähler S. 2-metrische Räume und ihre topologische Struktur. Mathematische Nachrichten. 1963;**26**:115-148

[19] Gähler S. Linear 2-normierte Räume. Mathematische Nachrichten. 1964;**28**:1-43

[20] Gähler S. Über 2-Banach-Räume. Mathematische Nachrichten. 1969;**42**:335-347

[21] Pant RP. R-weak commutativity and common fixed points of non-compatible maps. Ganita. 1998;**49**:19-27

[22] Nadler SB. Multivalued contraction mappings. Pacific Journal of Mathematics. 1969;**20**(2):457-488

[23] Ciric LJB. Fixed point for generalised multivalued contractions. Matematichki Vesnik. 1972;**9**(24):265-272

[24] Menger K. Statistical metrics. Proceedings of the National Academy of Sciences of the United States of America. 1942;**28**:535-537

[25] Sehgal VM. A fixed point theorem for mapping with a contractive iterate. Proceedings of the American Mathematical Society. 1969;**23**:631-634

[26] Sehgal VM, Bharucha-Reid AT. Fixed points of contraction mappings of probabilistic metric spaces. Mathematical Systems Theory. 1972;**6**:92-102

[27] Zadeh LA. Fuzzy Sets. Information and Control. 1965;**8**:338-353

[28] Zimmermann HJ, Sebastian HJ. Fuzzy design-Integration of fuzzy theory and knowledge-based system design. In: Proceedings of 1994 IEEE 3rd International Fuzzy Systems Conference. IEEE. 26-29 June 1994. DOI: 10.1109/FUZZY.1994.343673

[29] Zimmermann HJ, Sebastian HJ. Intelligent system design support by fuzzy multi-criteria decision making and evolutionary algorithms. In: Proceedings of the Fourth IEEE International Conference on Fuzzy Systems (FUZZ-IEEE/IFES 95, 20-24 March 1995). Yokohama, Japan: IEEE; 1995

[30] Tsourveloudis NC, Yannis A, Phillis A. Fuzzy assessment of machine flexibility. IEEE Transactions of Engineering Management. 1998;**45**(1):78-87

[31] Bedard R. Fixed point theorems for fuzzy number. Fuzzy Sets and Systems. 1984;**13**:291-302

[32] Butnariu D. Fixed points for fuzzy mappings. Fuzzy Sets and Systems. 1982; **7**:191-207

[33] Grabiec M. Fixed point in fuzzy metric space. Fuzzy Sets and Systems. 1988;**27**:385-389

[34] Weiss MD. Fixed points and induced fuzzy topologies for fuzzy sets. Journal of Mathematical Analysis and Applications. 1975;**50**:142-150

[35] Chang SSL, Zadeh L. On fuzzy mapping and control. IEEE Transactions on System Cybernetics. 1972;**2**:30-34

[36] Kandel A. Fuzzy dynamical systems and the nature of their solutions. In: Wang PP, Chang SK, editors. Fuzzy Sets Theory and Application to Policy Analysis and Information Systems. New York: Plenum Press; 1980. pp. 93-122

[37] Buckley JJ, Feuring T. Fuzzy differential equations. Fuzzy Sets and Systems. 2000;**110**:43-54

[38] Chalco-Cano Y, Román-Flores H. Comparison between some approaches to solve fuzzy differential equations. Fuzzy Sets and Systems. 2009;**160**:1517-1527

[39] Nieto JJ, Rodríguez-López R. Euler polygonal method for metric dynamical systems. Information Sciences. 2007;**177**:587-600

[40] Prakash P, Sudha Priya G, Kim JH. Third-order three-point fuzzy boundary value problems. Nonlinear Analysis: Hybrid Systems. 2009;**3**:323-333

[41] Song S, Wu C. Existence and uniqueness of solutions to the Cauchy problem of fuzzy differential equations. Fuzzy Sets and Systems. 2000;**110**:55-67


[42] Kaleva O. Fuzzy differential equations. Fuzzy Sets and Systems. 1987; **24**:301-317

[43] Bede B, Gal SG. Almost periodic fuzzy-number-valued functions. Fuzzy Sets and Systems. 2004;**147**:385-403

[44] Bede B, Gal SG. Generalisations of the differentiability of fuzzy number value functions with applications to fuzzy differential equations. Fuzzy Sets and Systems. 2005;**151**:581-599

[45] Bede B, Rudas IJ, Bencsik AL. First order linear fuzzy differential equations under generalised differentiability. Information Sciences. 2007;**177**: 1648-1662

[46] Chalco-Cano Y, Román-Flores H. On new solutions of fuzzy differential equations. Chaos, Solitons and Fractals. 2008;**38**:112-119

[47] Nieto JJ, Khastan A, Ivaz K. Numerical solution of fuzzy differential equations under generalised differentiability. Nonlinear Analysis: Hybrid Systems. 2009;**3**:700-707

[48] Allahviranloo T, Ahmady E, Ahmady N. Nth order fuzzy linear differential equations. Information Sciences. 2008;**178**:1309-1324

[49] Allahviranloo T, Abbasbandy S, Salahshour S, Hakimzadeh A. A new method for solving fuzzy linear differential equations. Computing. 2011; **92**:181-197

[50] Allahviranloo T, Ahmadi MB. Fuzzy Laplace transforms. Soft Computing. 2010;**14**:235-243

[51] Salahshour S, Allahviranloo T. Applications of fuzzy Laplace transform. Soft Computing. 2013;**17**:145-158. DOI: 10.1007/s00500-012-0907-4

#### **Chapter 7**

## Study of a Dynamical Problem under Fuzzy Conformable Differential Equation

*Atimad Harir, Said Melliani and Lalla Saadia Chadli*

#### **Abstract**

The notion of inclusion by generalized conformable differentiability is used to analyze fuzzy conformable differential equations (FCDE). This idea is based on expanding the class of conformable differentiable fuzzy mappings, and we use generalized lateral conformable derivatives to do so. We will see that the two conformable derivatives are distinct and that they lead to different FCDE solutions. The approach's utility and efficiency are demonstrated with an example.

**Keywords:** fuzzy fractional differential equation, conformable fractional derivative, fuzzy number

#### **1. Introduction**

Aubin and Cellina [1] systematically established the notion of differential inclusions. They studied the existence and properties of solutions of differential inclusions of the form [2].

$$u'(t) \in \Phi(u(t)) \quad or \quad u'(t) \in \Phi(t, u(t)). \tag{1}$$

In this paper, we consider the conformable fractional differential equation

$$\begin{aligned} u^{(\gamma)}(t) &= \Phi(t, u(t)) \\ u^{\kappa}(0) &\in [u_0]^{\kappa}, \quad \kappa \in [0, 1] \end{aligned} \tag{2}$$

where $t \in (0, a)$ and $u_0$ is a fuzzy number. $u^{(\gamma)}$ is the conformable fractional derivative of $u$ of order $\gamma \in (0, 1]$ [3–5]. There are numerous options for defining a fuzzy fractional derivative and, as a result, for studying Eq. (2); see [6–9]. The generalized derivative of a set-valued function was constructed and investigated in [10–16], while [17–20] explored the generalized conformable fractional derivative.

The objective of this research is to establish the existence of fuzzy solutions via conformable differential inclusions, using the generalized conformable differentiability concept.

This idea is based on expanding the class of differentiable fuzzy mappings, and we use lateral conformable derivatives to do so. We will see that the two derivatives are different and that they lead to different solutions of an FCDE.

#### **2. Preliminaries**

We will now go through a few definitions that will come in handy later in the paper. Let us start with a definition. Denote by $\mathbb{R}_{\mathcal{F}}$ the class of fuzzy subsets of the real axis, i.e., mappings $\eta : \mathbb{R} \to [0, 1]$, satisfying the following properties:

i. $\eta$ is normal, i.e., there exists $x_0 \in \mathbb{R}$ with $\eta(x_0) = 1$,

ii. $\eta$ is fuzzy convex,

iii. $\eta$ is upper semicontinuous,

iv. $\overline{\{x \in \mathbb{R} \mid \eta(x) > 0\}}$ is compact.

Then $\mathbb{R}_{\mathcal{F}}$ is called the space of fuzzy numbers [21].

If $\eta$ is a fuzzy set, we define the $\kappa$-level sets of $\eta$ by $[\eta]^{\kappa} = \{x \in \mathbb{R} \mid \eta(x) \geq \kappa\}$, with $0 < \kappa \leq 1$. Also, if $\eta \in \mathbb{R}_{\mathcal{F}}$, then the $\kappa$-cut of $\eta$ is denoted by $[\eta]^{\kappa} = \left[\eta_1^{\kappa}, \eta_2^{\kappa}\right]$.

For $\eta, \nu \in \mathbb{R}_{\mathcal{F}}$ and $\lambda \in \mathbb{R}$, the sum $\eta + \nu$ and the product $\lambda\eta$ are defined, for all $\kappa \in [0, 1]$, by

$$[\eta + \nu]^\kappa = \left[\eta\_1^\kappa + \nu\_1^\kappa, \eta\_2^\kappa + \nu\_2^\kappa\right],\tag{3}$$

$$[\lambda\eta]^\kappa = \lambda[\eta]^\kappa = \begin{cases} \left[\lambda\eta\_1^\kappa, \lambda\eta\_2^\kappa\right], & \lambda \ge 0;\\ \left[\lambda\eta\_2^\kappa, \lambda\eta\_1^\kappa\right], & \lambda < 0, \end{cases} \tag{4}$$

Define $d : \mathbb{R}_{\mathcal{F}} \times \mathbb{R}_{\mathcal{F}} \to \mathbb{R}_{+} \cup \{0\}$ by the following equation [22]:

$$d(\eta, \nu) = \sup_{\kappa \in [0, 1]} d_H\left([\eta]^{\kappa}, [\nu]^{\kappa}\right), \text{ for all } \eta, \nu \in \mathbb{R}_{\mathcal{F}}, \tag{5}$$

$$=\sup\_{\kappa\in[0,1]}\max\left\{|\eta\_1^{\kappa}-\nu\_1^{\kappa}|,\ |\eta\_2^{\kappa}-\nu\_2^{\kappa}|\right\}\tag{6}$$

where *dH* is the Hausdorff metric.

The following properties are well known (see [22, 23]): for all $\eta, \nu, \omega, \rho \in \mathbb{R}_{\mathcal{F}}$ and $\lambda \in \mathbb{R}$,

$$\begin{aligned} d(\eta + \omega, \nu + \omega) &= d(\eta, \nu) \quad \text{and} \quad d(\eta, \nu) = d(\nu, \eta), \\ d(\lambda \eta, \lambda \nu) &= |\lambda| d(\eta, \nu), \\ d(\eta + \nu, \omega + \rho) &\le d(\eta, \omega) + d(\nu, \rho). \end{aligned} \tag{7}$$

Moreover, $(\mathbb{R}_{\mathcal{F}}, d)$ is a complete metric space.
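The $\kappa$-cut arithmetic (3)–(4) and the metric (5)–(6) are straightforward to prototype on a finite grid of $\kappa$-levels. A minimal sketch (triangular fuzzy numbers are an assumed test case; all function names are illustrative, not from the text):

```python
def tri_cut(a, b, c, k):
    """kappa-cut [eta_1^k, eta_2^k] of the triangular fuzzy number (a, b, c)."""
    return (a + k * (b - a), c - k * (c - b))

def add_cut(u, v):            # eq. (3): endpoint-wise addition
    return (u[0] + v[0], u[1] + v[1])

def scale_cut(lam, u):        # eq. (4): the sign of lambda decides endpoint order
    return (lam * u[0], lam * u[1]) if lam >= 0 else (lam * u[1], lam * u[0])

def dist(eta_cuts, nu_cuts):  # eq. (6): sup over kappa of the Hausdorff distance
    return max(max(abs(u[0] - v[0]), abs(u[1] - v[1]))
               for u, v in zip(eta_cuts, nu_cuts))

ks = [i / 100 for i in range(101)]
eta = [tri_cut(0.0, 1.0, 2.0, k) for k in ks]
nu = [tri_cut(1.0, 2.0, 3.0, k) for k in ks]
d_eta_nu = dist(eta, nu)      # nu is eta shifted by 1, so the distance is 1
```

Representing a fuzzy number by its $\kappa$-cut endpoints makes (3), (4), and (6) ordinary interval arithmetic, which is why the sup in (6) reduces to a maximum over the grid.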

**Theorem 1.** [1, 22] *Let $\eta : [0, a] \to \mathbb{R}_{\mathcal{F}}$ with $[\eta(t)]^{\kappa} = \left[\eta_1^{\kappa}(t), \eta_2^{\kappa}(t)\right]$ be Seikkala differentiable. Then $\eta_1^{\kappa}(t)$ and $\eta_2^{\kappa}(t)$ are differentiable and*

$$\left[\eta'(t)\right]^\kappa = \left[\left(\eta\_1^\kappa\right)'(t), \ \left(\eta\_2^\kappa\right)'(t)\right], \ \kappa \in [0, 1]. \tag{8}$$

*Study of a Dynamical Problem under Fuzzy Conformable Differential Equation DOI: http://dx.doi.org/10.5772/intechopen.105904*

**Definition 1.** [24] *Let $\eta : [0, a] \to \mathbb{R}_{\mathcal{F}}$. The fuzzy integral $\int_b^c \eta(t)\, dt$, $b, c \in [0, a]$, is defined by*

$$
\left[\int\_{b}^{c} \eta(t)dt\right]^{\kappa} = \left[\int\_{b}^{c} \eta\_{1}^{\kappa}(t)dt, \int\_{b}^{c} \eta\_{2}^{\kappa}(t)dt\right],\tag{9}
$$

for all $0 \leq \kappa \leq 1$. By [24], if $\eta : [0, a] \to \mathbb{R}_{\mathcal{F}}$ is continuous, then it is fuzzy integrable.

**Theorem 2.** [22, 25] *If $\eta \in \mathbb{R}_{\mathcal{F}}$, then:*

i.

$$[\eta]^{\kappa_2} \subset [\eta]^{\kappa_1}, \quad \text{if } 0 \le \kappa_1 \le \kappa_2 \le 1; \tag{10}$$

ii. *if $\{\kappa_k\} \subset [0, 1]$ is an increasing sequence converging to $\kappa$, then*

$$\left[\eta\right]^{\kappa} = \underset{k \geq 1}{\cap} \left[\eta\right]^{\kappa\_k}.\tag{11}$$

Alternatively, if $\Upsilon^{\kappa} = \left[\eta_1^{\kappa}, \eta_2^{\kappa}\right]$, $\kappa \in (0, 1]$, is a family of closed real intervals satisfying (i) and (ii), then $\{\Upsilon^{\kappa}\}$ defines a fuzzy number $\eta \in \mathbb{R}_{\mathcal{F}}$ such that $[\eta]^{\kappa} = \Upsilon^{\kappa}$.

#### **3. Fuzzy conformable differentiability and integral**

The function $\Phi : [a, b] \to \mathbb{R}_{\mathcal{F}}$ is called a fuzzy function. The $\kappa$-level representation of a fuzzy function $\Phi$ is given by $[\Phi(t)]^{\kappa} = \left[\phi_1^{\kappa}(t), \phi_2^{\kappa}(t)\right]$, for all $t \in [a, b]$ and $\kappa \in [0, 1]$.

**Definition 2.** [17] *Let $\Phi : (0, a) \to \mathbb{R}_{\mathcal{F}}$ be a fuzzy function. The $\gamma$th-order "fuzzy conformable derivative" of $\Phi$ is defined by*

$$T\_{\mathcal{I}}(\Phi)(t) = \lim\_{\varepsilon \to 0^{+}} \frac{\Phi(t + \varepsilon t^{1-\gamma}) \ominus \Phi(t)}{\varepsilon} = \lim\_{\varepsilon \to 0^{+}} \frac{\Phi(t) \ominus \Phi(t - \varepsilon t^{1-\gamma})}{\varepsilon}. \tag{12}$$

for all $t > 0$, $\gamma \in (0, 1)$. We let $\Phi^{(\gamma)}(t)$ stand for $T_{\gamma}(\Phi)(t)$. Hence

$$\Phi^{(\gamma)}(t) = \lim\_{\varepsilon \to 0^{+}} \frac{\Phi(t + \varepsilon t^{1-\gamma}) \ominus \Phi(t)}{\varepsilon} = \lim\_{\varepsilon \to 0^{+}} \frac{\Phi(t) \ominus \Phi(t - \varepsilon t^{1-\gamma})}{\varepsilon}. \tag{13}$$

If $\Phi$ is $\gamma$-differentiable in some $(0, a)$ and $\lim_{t \to 0^+} \Phi^{(\gamma)}(t)$ exists, then

$$\Phi^{(\mathcal{r})}(\mathbf{0}) = \lim\_{\mathfrak{t} \to \mathbf{0}^+} \Phi^{(\mathcal{r})}(\mathbf{t}) \tag{14}$$

where the limits are taken in the metric $d$.

**Remark 1**. [26]

— *If $\Phi$ is $\gamma$-differentiable, then the multivalued mapping $\Phi_{\kappa}$ is $\gamma$-differentiable for all $\kappa \in [0, 1]$ and*

$$T\_{\gamma} \Phi\_{\kappa} = \left[ \Phi^{(\gamma)}(t) \right]^{\kappa},\tag{15}$$

where $T_{\gamma}\Phi_{\kappa}$ denotes the conformable fractional derivative of $\Phi_{\kappa}$ of order $\gamma$.

— *The existence of the differences $[\eta]^{\kappa} \ominus [\nu]^{\kappa}$, $\kappa \in [0, 1]$, does not imply the existence of the Hukuhara difference (H-difference) $\eta \ominus \nu$.*

**Theorem 3.** [26]

Let $\Phi : (0, a) \to \mathbb{R}_{\mathcal{F}}$, $[\Phi(t)]^{\kappa} = \left[\phi_1^{\kappa}(t), \phi_2^{\kappa}(t)\right]$, $\kappa \in [0, 1]$.

i. *If $\Phi$ is $\gamma_{(1)}$-differentiable, then $\phi_1^{\kappa}(t)$ and $\phi_2^{\kappa}(t)$ are $\gamma$-differentiable and*

$$\left[\Phi^{\left(\gamma_{(1)}\right)}(t)\right]^{\kappa} = \left[\left(\phi_1^{\kappa}\right)^{(\gamma)}(t), \left(\phi_2^{\kappa}\right)^{(\gamma)}(t)\right] \tag{16}$$

ii. *If $\Phi$ is $\gamma_{(2)}$-differentiable, then $\phi_1^{\kappa}(t)$ and $\phi_2^{\kappa}(t)$ are $\gamma$-differentiable and*

$$\left[\Phi^{\left(\gamma_{(2)}\right)}(t)\right]^{\kappa} = \left[\left(\phi_2^{\kappa}\right)^{(\gamma)}(t), \left(\phi_1^{\kappa}\right)^{(\gamma)}(t)\right]. \tag{17}$$

**Theorem 4.** [17] *Let $\gamma \in (0, 1]$.*

i. *If $\Phi$ is (1)-differentiable and $\Phi$ is $\gamma_{(1)}$-differentiable, then*

$$T_{\gamma_{(1)}}\Phi(t) = t^{1-\gamma}D_1^1\Phi(t) \tag{18}$$

ii. *If $\Phi$ is (2)-differentiable and $\Phi$ is $\gamma_{(2)}$-differentiable, then*

$$T_{\gamma_{(2)}}\Phi(t) = t^{1-\gamma}D_2^1\Phi(t) \tag{19}$$
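Theorem 4 reduces the conformable derivative to $t^{1-\gamma}$ times an ordinary derivative, which can be sanity-checked against Definition 2 by finite differences on a crisp function. A minimal sketch ($\Phi(t) = t^3$ and $\gamma = 0.5$ are illustrative choices, not from the text):

```python
def conformable_fd(phi, t, gamma, eps=1e-6):
    """Forward-difference approximation of T_gamma(phi)(t) from Definition 2."""
    return (phi(t + eps * t**(1.0 - gamma)) - phi(t)) / eps

phi = lambda t: t**3
gamma, t = 0.5, 2.0
numeric = conformable_fd(phi, t, gamma)
exact = t**(1.0 - gamma) * 3.0 * t**2    # Theorem 4: t^(1-gamma) * phi'(t)
err = abs(numeric - exact)
```

The two values agree to the accuracy of the forward difference, illustrating that for crisp functions the conformable derivative is just a rescaled ordinary derivative.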

**Theorem 5.** *If $\Phi : (0, a) \to \mathbb{R}_{\mathcal{F}}$ is $\gamma$-differentiable, then it is continuous.*

*Proof.* Denote $\Phi_{\kappa}(t) = \left[\phi_1^{\kappa}(t), \phi_2^{\kappa}(t)\right]$, $\kappa \in [0, 1]$. We show that $\phi_1^{\kappa}(t)$ and $\phi_2^{\kappa}(t)$ are continuous at $t_0$, so that $\Phi$ is continuous at $t_0$.

For $\varepsilon > 0$ and $\kappa \in [0, 1]$, we have:

$$\left[\Phi\left(t_0 + \varepsilon t_0^{1-\gamma}\right) \ominus \Phi(t_0)\right]^{\kappa} = \left[\phi_1^{\kappa}\left(t_0 + \varepsilon t_0^{1-\gamma}\right) - \phi_1^{\kappa}(t_0), \phi_2^{\kappa}\left(t_0 + \varepsilon t_0^{1-\gamma}\right) - \phi_2^{\kappa}(t_0)\right]$$

Dividing and multiplying by *ε*, we have:

$$\left[\Phi\left(t_0 + \varepsilon t_0^{1-\gamma}\right) \ominus \Phi(t_0)\right]^{\kappa} = \left[\frac{\phi_1^{\kappa}\left(t_0 + \varepsilon t_0^{1-\gamma}\right) - \phi_1^{\kappa}(t_0)}{\varepsilon} \cdot \varepsilon, \frac{\phi_2^{\kappa}\left(t_0 + \varepsilon t_0^{1-\gamma}\right) - \phi_2^{\kappa}(t_0)}{\varepsilon} \cdot \varepsilon\right]$$

Similarly, we obtain:

$$\left[\Phi(t_0) \ominus \Phi\left(t_0 - \varepsilon t_0^{1-\gamma}\right)\right]^{\kappa} = \left[\frac{\phi_1^{\kappa}(t_0) - \phi_1^{\kappa}\left(t_0 - \varepsilon t_0^{1-\gamma}\right)}{\varepsilon} \cdot \varepsilon, \frac{\phi_2^{\kappa}(t_0) - \phi_2^{\kappa}\left(t_0 - \varepsilon t_0^{1-\gamma}\right)}{\varepsilon} \cdot \varepsilon\right]$$


Then

$$\begin{aligned} \lim_{\varepsilon \to 0^+} \left[\Phi\left(t_0 + \varepsilon t_0^{1-\gamma}\right) \ominus \Phi(t_0)\right]^{\kappa} = \Bigg[ &\lim_{\varepsilon \to 0^+} \frac{\phi_1^{\kappa}\left(t_0 + \varepsilon t_0^{1-\gamma}\right) - \phi_1^{\kappa}(t_0)}{\varepsilon} \cdot \lim_{\varepsilon \to 0^+} \varepsilon, \\ &\lim_{\varepsilon \to 0^+} \frac{\phi_2^{\kappa}\left(t_0 + \varepsilon t_0^{1-\gamma}\right) - \phi_2^{\kappa}(t_0)}{\varepsilon} \cdot \lim_{\varepsilon \to 0^+} \varepsilon \Bigg] \end{aligned}$$

Similarly, we obtain:

$$\begin{aligned} \lim_{\varepsilon \to 0^+} \left[\Phi(t_0) \ominus \Phi\left(t_0 - \varepsilon t_0^{1-\gamma}\right)\right]^{\kappa} = \Bigg[ &\lim_{\varepsilon \to 0^+} \frac{\phi_1^{\kappa}(t_0) - \phi_1^{\kappa}\left(t_0 - \varepsilon t_0^{1-\gamma}\right)}{\varepsilon} \cdot \lim_{\varepsilon \to 0^+} \varepsilon, \\ &\lim_{\varepsilon \to 0^+} \frac{\phi_2^{\kappa}(t_0) - \phi_2^{\kappa}\left(t_0 - \varepsilon t_0^{1-\gamma}\right)}{\varepsilon} \cdot \lim_{\varepsilon \to 0^+} \varepsilon \Bigg] \end{aligned}$$

Let $h = \varepsilon t_0^{1-\gamma}$. Then

$$\lim_{h \to 0^+} \left[\Phi(t_0 + h) \ominus \Phi(t_0)\right]^{\kappa} = \left[\left(\phi_1^{\kappa}\right)^{(\gamma)}(t_0) \cdot 0, \left(\phi_2^{\kappa}\right)^{(\gamma)}(t_0) \cdot 0\right]$$

which implies that

$$\lim\_{h \to 0^+} [\Phi(t\_0 + h)]^\kappa = [\Phi(t\_0)]^\kappa$$

Similarly, we obtain:

$$\lim\_{h \to 0^+} [\Phi(t\_0 - h)]^\kappa = [\Phi(t\_0)]^\kappa$$

Hence, $\Phi$ is continuous at $t_0$. □

**Remark 2.** *If $\Phi : (0, a) \to \mathbb{R}_{\mathcal{F}}$ is $\gamma$-differentiable and $\Phi^{(\gamma)}$, $\gamma \in (0, 1]$, is continuous, then we write $\Phi \in C^1\left((0, a), \mathbb{R}_{\mathcal{F}}\right)$.*

**Theorem 6.** *Let $\gamma \in (0, 1]$. If $\Phi, \Psi : (0, a) \to \mathbb{R}_{\mathcal{F}}$ are $\gamma$-differentiable and $\lambda \in \mathbb{R}$, then*

i.

$$T\_{\mathcal{I}}(\Phi + \Psi)(t) = T\_{\mathcal{I}}(\Phi)(t) + T\_{\mathcal{I}}(\Psi)(t) \tag{20}$$

ii.

$$T\_{\gamma}(\lambda \Phi)(t) = \lambda T\_{\gamma}(\Phi)(t). \tag{21}$$

*Proof.* We present the details only for case (i), since the other case is analogous. Since $\Phi$ is $\gamma_{(1)}$-differentiable, the difference $\Phi\left(t + \varepsilon t^{1-\gamma}\right) \ominus \Phi(t)$ exists, i.e., there exists $u_1\left(t, \varepsilon t^{1-\gamma}\right)$ such that

$$\Phi(t + \epsilon t^{1-\gamma}) = \Phi(t) + u\_1(t, \epsilon t^{1-\gamma}) \tag{22}$$

Analogously, since $\Psi$ is $\gamma_{(1)}$-differentiable, there exists $v_1\left(t, \varepsilon t^{1-\gamma}\right)$ such that

$$
\Psi(t + \epsilon t^{1-\gamma}) = \Psi(t) + \nu\_1(t, \epsilon t^{1-\gamma}),
$$

and we get

$$
\Phi(t + \epsilon t^{1-\gamma}) + \Psi(t + \epsilon t^{1-\gamma}) = \Phi(t) + \Psi(t) + u\_1(t, \epsilon t^{1-\gamma}) + v\_1(t, \epsilon t^{1-\gamma})\tag{23}
$$

that is the H-difference

$$\left(\Phi\left(t+\varepsilon t^{1-\gamma}\right)+\Psi\left(t+\varepsilon t^{1-\gamma}\right)\right)\ominus\left(\Phi(t)+\Psi(t)\right)=u\_1\left(t,\varepsilon t^{1-\gamma}\right)+v\_1\left(t,\varepsilon t^{1-\gamma}\right)\tag{24}$$

By similar reasoning, there exist $u_2\left(t, \varepsilon t^{1-\gamma}\right)$ and $v_2\left(t, \varepsilon t^{1-\gamma}\right)$ such that

$$\begin{aligned} \Phi(t) &= \Phi\left(t - \varepsilon t^{1-\gamma}\right) + u_2\left(t, \varepsilon t^{1-\gamma}\right), \\ \Psi(t) &= \Psi\left(t - \varepsilon t^{1-\gamma}\right) + v_2\left(t, \varepsilon t^{1-\gamma}\right) \end{aligned}$$

and so

$$\Phi(t) + \Psi(t) = \left(\Phi\left(t - \varepsilon t^{1-\gamma}\right) + \Psi\left(t - \varepsilon t^{1-\gamma}\right)\right) + u_2\left(t, \varepsilon t^{1-\gamma}\right) + v_2\left(t, \varepsilon t^{1-\gamma}\right)$$

that is the H-difference

$$\left(\Phi(t) + \Psi(t)\right) \ominus \left(\Phi\left(t - \varepsilon t^{1-\gamma}\right) + \Psi\left(t - \varepsilon t^{1-\gamma}\right)\right) = u_2\left(t, \varepsilon t^{1-\gamma}\right) + v_2\left(t, \varepsilon t^{1-\gamma}\right) \tag{25}$$

We observe that

$$\lim\_{\varepsilon \to 0^{+}} \frac{u\_1(t, \varepsilon t^{1-\gamma})}{\varepsilon} \quad = \lim\_{\varepsilon \to 0^{+}} \frac{u\_2(t, \varepsilon t^{1-\gamma})}{\varepsilon} = \Phi^{(\gamma)}(t) \quad \text{and}$$

$$\lim\_{\varepsilon \to 0^{+}} \frac{v\_1(t, \varepsilon t^{1-\gamma})}{\varepsilon} \quad = \lim\_{\varepsilon \to 0^{+}} \frac{v\_2(t, \varepsilon t^{1-\gamma})}{\varepsilon} = \Psi^{(\gamma)}(t).$$

Finally, multiplying (24) and (25) by $\frac{1}{\varepsilon}$ and passing to the limit as $\varepsilon \to 0^+$, we get that $\Phi + \Psi$ is $\gamma_{(1)}$-differentiable and $T_{\gamma}(\Phi + \Psi)(t) = T_{\gamma}\Phi(t) + T_{\gamma}\Psi(t)$. The case when $\Phi$ and $\Psi$ are $\gamma_{(2)}$-differentiable is similar to the previous one. □

**Definition 3.** *Let $\Phi \in C\left((0, a), \mathbb{R}_{\mathcal{F}}\right) \cap L^1\left((0, a), \mathbb{R}_{\mathcal{F}}\right)$. Define the fuzzy fractional integral for $\gamma \in (0, 1]$:*

$$I_{\gamma}(\Phi)(t) = I_1\left(t^{\gamma-1}\Phi\right)(t) = \int_0^t \frac{\Phi(s)}{s^{1-\gamma}}\, ds, \tag{26}$$

where the integral is the usual Riemann improper integral.
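Definition 3 can be sanity-checked numerically on a crisp function: integrating $\Phi(s)/s^{1-\gamma}$ and then applying $t^{1-\gamma}\frac{d}{dt}$ should recover $\Phi$ (this is Theorem 7 below). A minimal sketch for the assumed test case $\Phi(s) = s$ with $\gamma = 0.5$, where $I_{\gamma}(\Phi)(t) = \int_0^t s^{\gamma}\, ds = t^{\gamma+1}/(\gamma+1)$:

```python
def I_gamma_midpoint(gamma, t, n=20000):
    """Midpoint rule for I_gamma(Phi)(t) = int_0^t Phi(s)/s^(1-gamma) ds, Phi(s) = s."""
    h = t / n
    return sum(((i + 0.5) * h)**gamma for i in range(n)) * h

gamma, t = 0.5, 1.0
F = lambda u: u**(gamma + 1.0) / (gamma + 1.0)   # closed form of I_gamma(Phi)
err_int = abs(I_gamma_midpoint(gamma, t) - F(t))

# Inversion: t^(1-gamma) * d/dt I_gamma(Phi)(t) recovers Phi(t) = t
# (central difference on the closed form).
h = 1e-6
err_inv = abs(t**(1.0 - gamma) * (F(t + h) - F(t - h)) / (2.0 * h) - t)
```

The midpoint rule copes with the integrable singularity of the weight $s^{\gamma-1}$ at $0$ because it never evaluates the integrand at $s = 0$.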

**Theorem 7.** *$T_{\gamma}I_{\gamma}(\Phi)(t) = \Phi(t)$ for $t \geq 0$, where $\Phi$ is any continuous function in the domain of $I_{\gamma}$.*

*Proof.* Since $\Phi$ is continuous, $I_{\gamma}(\Phi)(t)$ is clearly conformable differentiable. Hence,


$$\begin{aligned} \left[T_{\gamma}I_{\gamma}(\Phi)(t)\right]^{\kappa} &= \left[t^{1-\gamma}\frac{d}{dt}I_{\gamma}(\Phi)(t)\right]^{\kappa} \\ &= \left[t^{1-\gamma}\frac{d}{dt}\int_0^t \frac{\phi_1^{\kappa}(s)}{s^{1-\gamma}}\, ds,\; t^{1-\gamma}\frac{d}{dt}\int_0^t \frac{\phi_2^{\kappa}(s)}{s^{1-\gamma}}\, ds\right] \\ &= \left[t^{1-\gamma}\frac{\phi_1^{\kappa}(t)}{t^{1-\gamma}},\; t^{1-\gamma}\frac{\phi_2^{\kappa}(t)}{t^{1-\gamma}}\right] \\ &= [\Phi(t)]^{\kappa} \end{aligned}$$

□

**Theorem 8.** *Let $\gamma \in (0, 1]$, let $\Phi$ be $\gamma$-differentiable in $(0, a)$, and assume that the conformable derivative $\Phi^{(\gamma)}$ is integrable over $(0, a)$. Then for all $s \in (0, a)$ we have*

$$\Phi(s) = \Phi(a) + I_{\gamma}\Phi^{(\gamma)}(s) \tag{27}$$

*Proof.* Let $\gamma \in (0, 1]$ and $\kappa \in [0, 1]$ be fixed. We will show that

$$\Phi\_{\kappa}(\mathfrak{s}) = \Phi\_{\kappa}(\mathfrak{a}) + I\_{\mathcal{I}} \Phi\_{\kappa}^{(\mathcal{I})} \tag{28}$$

where $\Phi_{\kappa}^{(\gamma)}$ is the conformable derivative of $\Phi_{\kappa}$; the equation is then obtained by applying Theorems 3 and 4.

$$\begin{aligned} \Phi\_{\kappa}(s) &= \Phi\_{\kappa}(a) + I\_{\mathcal{I}} \Phi\_{\kappa}^{(\mathcal{I})} \\ &= \Phi\_{\kappa}(a) + I\_{\mathcal{I}} \left(t^{1-\gamma} \Phi\_{\kappa}^{\prime}\right), \end{aligned}$$

by (26) we have

$$\begin{aligned} \Phi_{\kappa}(s) &= \Phi_{\kappa}(a) + I_{\gamma}\left(t^{1-\gamma}\Phi_{\kappa}'\right) \\ &= \Phi_{\kappa}(a) + \int_0^s t^{\gamma-1}\left(t^{1-\gamma}\Phi_{\kappa}'(t)\right) dt \end{aligned}$$

So

$$\Phi_{\kappa}(s) = \Phi_{\kappa}(a) + \int_0^s \Phi_{\kappa}'(t)\, dt \tag{29}$$

where $\Phi_{\kappa}'$ is the derivative of $\Phi_{\kappa}$. Equality (29) likewise holds for a fuzzy mapping $\Phi : (0, a) \to \mathbb{R}_{\mathcal{F}}$. By [1], equality (28) now follows, which proves Theorem 8. □

#### **4. Solutions via conformable differential inclusions**

We consider the fuzzy conformable differential equation.

$$u^{(\gamma)}(t) = \Phi(t, u(t)), \quad u(0) = u\_0, \ \gamma \in (0, 1], \tag{30}$$

where $\Phi : (0, a) \times \mathbb{R}_{\mathcal{F}} \to \mathbb{R}_{\mathcal{F}}$ is generated from a continuous function $\psi : (0, a) \times \mathbb{R} \to \mathbb{R}$ using Zadeh's extension principle.

Then $\Phi(t, u)$ can be calculated levelwise, i.e., for all $\kappa \in [0, 1]$,

$$[\Phi(t, u)]^\kappa = \psi(t, [u]^\kappa).$$

for all $t \in (0, a]$, $u \in \mathbb{R}_{\mathcal{F}}$ and $\kappa \in [0, 1]$. We interpret the fuzzy initial value problem (30) as a set of differential inclusions, following Diamond [7, 10]:

$$\left(v^{(\gamma)}\right)^{\kappa}(t) = \psi(t, v^{\kappa}(t)), \quad v^{\kappa}(0) \in [u_0]^{\kappa} \tag{31}$$

Under reasonable assumptions, the reachable sets

$$\Upsilon_{\kappa}(t) = \{v^{\kappa}(t) \mid v^{\kappa} \text{ is a solution of } (31)\},$$

are $\kappa$-cuts of a fuzzy set $u(t)$, which we call a solution of (30). If we assume that the solutions of the initial value problems $\left(v^{(\gamma)}\right)^{\kappa}(t) = \psi(t, v^{\kappa}(t))$, $v^{\kappa}(0) = v_0$, are unique, it follows that $\Upsilon_{\kappa}(t) = [w_1(t), w_2(t)]$, where

$$\begin{aligned} w_1^{(\gamma)}(t) &= \psi(t, w_1(t)), \quad w_1(0) = u_{01}^{\kappa} \quad \text{and} \\ w_2^{(\gamma)}(t) &= \psi(t, w_2(t)), \quad w_2(0) = u_{02}^{\kappa} \end{aligned}$$

**Theorem 9.** *If ψ is nondecreasing with respect to the second argument, then the fuzzy solution of* (30) *using the derivative in the first form and the solution via differential inclusions are equivalent.*

*Proof.* For each $\kappa \in [0, 1]$ and every $\gamma \in (0, 1]$, we write $[u(t)]^{\kappa} = \left[u_1^{\kappa}(t), u_2^{\kappa}(t)\right]$ and $[u(0)]^{\kappa} = \left[u_{01}^{\kappa}, u_{02}^{\kappa}\right]$. Since $\psi$ is continuous and

$$[\Phi(t, u(t))]^{\kappa} = \psi\left(t, [u(t)]^{\kappa}\right) = \psi\left(t, \left[u_1^{\kappa}(t), u_2^{\kappa}(t)\right]\right),$$

then $\psi\left(t, \left[u_1^{\kappa}(t), u_2^{\kappa}(t)\right]\right)$ is compact and connected, i.e., a closed bounded interval. As $\psi$ is nondecreasing, we have that

$$\boldsymbol{\psi}\left(t,\left[\boldsymbol{u}\_1^{\kappa}(t),\boldsymbol{u}\_2^{\kappa}(t)\right]\right) = \left[\boldsymbol{\psi}\left(t,\boldsymbol{u}\_1^{\kappa}(t)\right),\boldsymbol{\psi}\left(t,\boldsymbol{u}\_2^{\kappa}(t)\right)\right],$$

As a result, the conformable differential system for the boundary functions of the fuzzy solution uncouples into two initial value problems:

$$\begin{aligned} \left(u^{(\gamma)}\right)_1^{\kappa}(t) &= \psi\left(t, u_1^{\kappa}(t)\right), \quad u_1^{\kappa}(0) = u_{01}^{\kappa}, \\ \left(u^{(\gamma)}\right)_2^{\kappa}(t) &= \psi\left(t, u_2^{\kappa}(t)\right), \quad u_2^{\kappa}(0) = u_{02}^{\kappa}, \end{aligned}$$

This results in $u_1^{\kappa} = w_1$ and $u_2^{\kappa} = w_2$. □

Now, we offer the following result as an extension of the preceding theorem to the class of functions differentiable in the second form (17).

**Theorem 10.** *If ψ is nonincreasing with respect to the second argument then, using the derivative in the second form* (17)*, the fuzzy solution of* (30) *and the solution via differential inclusions are identical.*

*Proof.* Let $\kappa \in [0, 1]$ and $\gamma \in (0, 1]$. We consider

$$[u(t)]^{\kappa} = \left[u_1^{\kappa}(t), u_2^{\kappa}(t)\right] \quad \text{and} \quad [u(0)]^{\kappa} = \left[u_{01}^{\kappa}, u_{02}^{\kappa}\right].$$

*Study of a Dynamical Problem under Fuzzy Conformable Differential Equation DOI: http://dx.doi.org/10.5772/intechopen.105904*

So

$$[\Phi(t, u(t))]^{\kappa} = \psi\left(t, [u(t)]^{\kappa}\right) = \psi\left(t, \left[u_1^{\kappa}(t), u_2^{\kappa}(t)\right]\right),$$

and $\psi$ is continuous, so $\psi\left(t, \left[u_1^{\kappa}(t), u_2^{\kappa}(t)\right]\right)$ is a closed bounded interval. Since $\psi$ is nonincreasing, it follows that

$$\psi\left(t, \left[u_1^{\kappa}(t), u_2^{\kappa}(t)\right]\right) = \left[\psi\left(t, u_2^{\kappa}(t)\right), \psi\left(t, u_1^{\kappa}(t)\right)\right].$$

Consequently, from (31), we obtain the conformable differential system

$$\begin{aligned} \left(u^{(\gamma)}\right)_2^{\kappa}(t) &= \psi\left(t, u_2^{\kappa}(t)\right), \quad u_2^{\kappa}(0) = u_{02}^{\kappa}, \\ \left(u^{(\gamma)}\right)_1^{\kappa}(t) &= \psi\left(t, u_1^{\kappa}(t)\right), \quad u_1^{\kappa}(0) = u_{01}^{\kappa}, \end{aligned} \tag{32}$$

If $u^{(\gamma)}(t)$ is considered in the second form (17), we have that

$$\left[u^{(\gamma)}(t)\right]^{\kappa} = \left[\left(u^{(\gamma)}\right)_2^{\kappa}(t), \left(u^{(\gamma)}\right)_1^{\kappa}(t)\right] = \left[\psi\left(t, u_2^{\kappa}(t)\right), \psi\left(t, u_1^{\kappa}(t)\right)\right]$$

The proof is complete after obtaining the differential system (32). □

**Example 1.** *Consider the fuzzy initial value problem*

$$u^{(\gamma)}(t) = u^2(t), \quad u(0) = u\_0 \tag{33}$$

where $u_0$ is a triangular fuzzy number with $[u_0]^{\kappa} = [1 + \kappa, 3 - \kappa]$. Since $u^2$ is continuous and we are operating on $\mathbb{R}_{\mathcal{F}}$, we can solve the equation levelwise.

Since $u^2$ is increasing for $u > 0$, we have to solve, for $\gamma \in (0, 1]$, the conformable differential system

$$\left(u_1^{\kappa}\right)^{(\gamma)}(t) = \left(u_1^{\kappa}\right)^2(t), \quad u_1^{\kappa}(0) = 1 + \kappa, \tag{34}$$

$$\left(u_2^{\kappa}\right)^{(\gamma)}(t) = \left(u_2^{\kappa}\right)^2(t), \quad u_2^{\kappa}(0) = 3 - \kappa, \tag{35}$$

where $[u(t)]^{\kappa} = \left[u_1^{\kappa}, u_2^{\kappa}\right]$. The solutions are

$$u_1^{\kappa}(t) = \frac{-1-\kappa}{\left(\frac{t^{\gamma}}{\gamma} + \frac{t^{\gamma}}{\gamma}\kappa\right) - 1} \quad \text{and} \quad u_2^{\kappa}(t) = \frac{-3+\kappa}{\left(3\frac{t^{\gamma}}{\gamma} - \frac{t^{\gamma}}{\gamma}\kappa\right) - 1}$$
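These expressions can be obtained by separation of variables. Writing the conformable derivative as $\left(u_1^{\kappa}\right)^{(\gamma)}(t) = t^{1-\gamma}\left(u_1^{\kappa}\right)'(t)$, equation (34) becomes

$$t^{1-\gamma}\frac{du}{dt} = u^2 \;\Longrightarrow\; \int_{1+\kappa}^{u_1^{\kappa}(t)}\frac{du}{u^2} = \int_0^t x^{\gamma-1}\,dx \;\Longrightarrow\; \frac{1}{1+\kappa} - \frac{1}{u_1^{\kappa}(t)} = \frac{t^{\gamma}}{\gamma},$$

and solving for $u_1^{\kappa}(t)$ gives the first expression; (35) is treated identically with the initial value $3 - \kappa$.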

We can see that $u_2^{\kappa}(t) < \infty$ for $\frac{t^{\gamma}}{\gamma} < \frac{1}{3}$ and

$$0 \le \mathfrak{u}\_1^{\kappa}(t) \le \mathfrak{u}\_2^{\kappa}(t)$$

for these values of $t$.

As a result, the fuzzy initial value problem admits a fuzzy solution $u(t)$ for $0 \le \frac{t^{\gamma}}{\gamma} < \frac{1}{3}$.
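As a quick numerical sanity check (our own sketch, not part of the chapter; the function name and the sample values of $\gamma$ and $t$ are ours), the level-wise solution can be evaluated on a grid of $\kappa$ values and the interval ordering $u_1^{\kappa}(t) \le u_2^{\kappa}(t)$ verified:

```python
import numpy as np

def kappa_cut_solution(t, kappa, gamma):
    """kappa-cut [u1, u2] of the fuzzy solution of u^(gamma) = u^2 with
    [u(0)]^kappa = [1 + kappa, 3 - kappa], solved levelwise as in (34)-(35)."""
    s = t**gamma / gamma                       # the quantity t^gamma / gamma
    u1 = (1 + kappa) / (1 - (1 + kappa) * s)   # lower endpoint, from (34)
    u2 = (3 - kappa) / (1 - (3 - kappa) * s)   # upper endpoint, from (35)
    return u1, u2

gamma, t = 0.5, 0.01                           # then t^gamma/gamma = 0.2 < 1/3
for kappa in np.linspace(0.0, 1.0, 11):
    u1, u2 = kappa_cut_solution(t, kappa, gamma)
    assert u1 <= u2                            # valid, nested kappa-cuts
print(kappa_cut_solution(t, 0.0, gamma))       # approx (1.25, 7.5)
```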

#### **5. Conclusion**

We have introduced fuzzy conformable differential inclusions (FCDI), which various authors have used to solve FDEs for $\gamma = 1$ [2, 6]. This approach has the advantage that the solutions derived via FCDI appear more intuitive than those obtained with the conformable derivative in the first form (9) [19]. It is also worth noting that this interpretation has a drawback: we cannot discuss the fuzzy conformable derivative directly. We address this obstacle by utilizing the fuzzy conformable derivative in the second form (17), for which the fuzzy solution and the solution via conformable differential inclusions are identical.

### **Conflicts of interest**

The authors declare that they have no conflicts of interest.

### **Data availability**

The data used to support the findings of this study are available from the corresponding author upon request.

#### **Author details**

Atimad Harir\*, Said Melliani and Lalla Saadia Chadli Laboratory of Applied Mathematics and Scientific Computing, Sultan Moulay Slimane University, Beni Mellal, Morocco

\*Address all correspondence to: atimad.harir@gmail.com

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


#### **References**

[1] Kaleva O. A note on fuzzy differential equations. Nonlinear Analysis. 2006;**64**: 895-900

[2] Diamond P. Time-dependent differential inclusions, cocycle attractors and fuzzy differential equations. IEEE Transactions on Fuzzy Systems. 1999;**7**: 734-740

[3] Abdeljawad T. On conformable fractional calculus. Journal of Computational and Applied Mathematics. 2015;**279**:57-66

[4] Khalil R, Al Horani M, Yousef A, Sababheh M. A new definition of fractional derivative. Journal of Computational and Applied Mathematics. 2014;**264**:65-70

[5] Unal E, Gokdogan A. Solution of conformable fractional ordinary differential equations via differential transform method. International Journal for Light and Electron Optics. 2017;**128**: 264-273

[6] Abbasbandy S, Nieto JJ, Alavi M. Tuning of reachable set in one dimensional fuzzy differential inclusions. Chaos, Solitons & Fractals. 2005;**26**:1337-1341

[7] Arshad S, Lupulescu V. On the fractional differential equations with uncertainty. Nonlinear Analysis. 2011;**74**:3685-3693

[8] Harir A, Melliani S, Chadli LS, Minchev E. Solutions of fuzzy fractional heat-like and wave-like equations by variational iteration method. International Journal of Contemporary Mathematical Sciences. 2020;**15**(1):11-35

[9] Harir A, Melliani S, Chadli LS. Fuzzy fractional evolution equations and fuzzy solution operators. Advances in Fuzzy Systems. 2019;**2019**:10. DOI: 10.1155/2019/5734190

[10] Bede B, Gal SG. Almost periodic fuzzy number valued functions. Fuzzy Sets and Systems. 2004;**147**:385-403

[11] Bede B, Gal SG. Generalizations of the differentiability of fuzzy number value functions with applications to fuzzy differential equations. Fuzzy Sets and Systems. 2005;**151**:581-599

[12] Goo HY, Park JS. On the continuity of the Zadeh extensions. Journal of the Chungcheong Mathematical Society. 2007;**20**(4):525-533

[13] Shah K, Arfan M, Ullah A, Mdallal Q, Ansari KJ, Abdeljawad T. Computational study on the dynamics of fractional order differential equations with applications. Chaos, Solitons and Fractals. 2022;**157**:111955

[14] Shah K, Naz H, Sarwar M, Abdeljawad T. On spectral numerical method for variable-order partial differential equations. AIMS Mathematics. 2022;**7**(6):10422-10438

[15] Shah K, Ali A, Zeb S, Khan A, Alqudah MA, Abdeljawad T. Study of fractional order dynamics of nonlinear mathematical model. Alexandria Engineering Journal. 2022;**61**(12): 11211-11224

[16] Shahid A, Khan A, Shah K, Alqudah MA, Abdeljawad T, Islam Su. On computational analysis of highly nonlinear model addressing real world applications. Results in Physics. 2022;**36**:105431

[17] Harir A, Melliani S, Chadli LS. Fuzzy generalized conformable fractional derivative. Advances in Fuzzy Systems. 2020;**2020**:7. DOI: 10.1155/2020/1954975

[18] Seikkala S. On the fuzzy initial value problem. Fuzzy Sets and Systems. 1987;**24**:319-330

[19] Harir A, Melliani S, Chadli LS. The fractional differential equations with uncertainty by conformable derivative. European Journal of Pure and Applied Mathematics. 2022;**15**(2):557-571. DOI: 10.29020/nybg.ejpam.v15i2.4299

[20] Harir A, Melliani S, Chadli LS. Fuzzy conformable fractional semigroups of operators. International Journal of Differential Equations. 2020;**2020**:6. DOI: 10.1155/2020/8836011

[21] Diamond P, Kloeden PE. Metric Spaces of Fuzzy Sets: Theory and Applications. Singapore: World Scientific; 1994

[22] Harir A, Melliani S, Chadli LS. Existence, uniqueness and approximate solutions of fuzzy fractional differential equations. In: Fuzzy Systems—Theory and Applications. London, UK: IntechOpen; 2020. DOI: 10.5772/ intechopen.94000

[23] Puri ML, Ralescu DA. Differentials of fuzzy functions. Journal of Mathematical Analysis and Applications. 1983;**91**:552-558

[24] Song S, Guo L, Feng C. Global existence of solutions to fuzzy differential equations. Fuzzy Sets and Systems. 2000;**115**:371-376

[25] Negoita CV, Ralescu DA. Applications of Fuzzy Sets to System Analysis. Basel: Birkhauser; 1975

[26] Harir A, Melliani S, Chadli LS. Analytic solution method for fractional fuzzy conformable Laplace transforms. SeMA. 2021;**78**:401-414. DOI: 10.1007/ s40324-021-00240-7

#### **Chapter 8**

## Electrical Circuits as Dynamical Systems

*Alexandru G. Gheorghe and Mihai E. Marin*

#### **Abstract**

An electrical circuit containing at least one dynamic circuit element (inductor or capacitor) is an example of a dynamical system. The behavior of inductors and capacitors is described by differential equations in terms of voltages and currents. The resulting set of differential equations can be rewritten as state equations in normal form. The eigenvalues of the state matrix can be used to verify the stability of the circuit. The numerical methods best suited to integrating electrical circuit differential equations are the Forward and Backward Euler methods, the Trapezoidal Rule, and, for circuits with stiff equations, the Gear methods of orders two to six. These methods are implemented, with adjustable time-step integration, in the majority of circuit simulation software, such as SPICE. The analytical solution can also be computed, for small-size circuits, by applying the Laplace Transform. It is instructive to compare the graphical presentation of numerically and analytically obtained solutions. While the numerical methods can be used for both linear and nonlinear circuits, the Laplace Transform is mostly used for linear circuits; a method of applying it to nonlinear circuits is also presented.

**Keywords:** electrical circuits, state equations, Laplace Transform, numerical methods, linear and nonlinear circuits

#### **1. Introduction**

The existence and uniqueness of a dynamic circuit solution are strongly tied to the existence of the state equation in normal form. If the equation $\dot{x} = f(x, t)$ exists, then it can be proved, as found in the mathematical literature, that if $f$ is Lipschitzian (for any $x_1$ and $x_2$ and any $t$, $\|f(x_1, t) - f(x_2, t)\| \le k \cdot \|x_1 - x_2\|$, where $k > 0$ and $\|\cdot\|$ is the Euclidean norm) and if the function $f(0, t)$ is uniformly continuous and bounded, then the state equation has a unique solution for any initial state $x_0 = x(t_0)$. The existence of the normal form of the state equation is related to the existence and uniqueness of the resistive multiport solution for any values of the source parameters (which replace the dynamic elements) connected to the ports, and the existence of nonzero dynamic capacitances and inductances for any values of the control parameters of these elements.

It can be seen that in the case of linear circuits $f(x, t)$ is Lipschitzian: $\|f(x_1, t) - f(x_2, t)\| = \|A \cdot (x_1 - x_2)\|$, because there is always a constant $k$ such that $\|A \cdot (x_1 - x_2)\| \le k \cdot \|x_1 - x_2\|$ [1–3].

In the case of nonlinear circuits without excess state quantities (the circuit contains no loop consisting only of independent or controlled voltage sources and/or capacitors, and no cut-set consisting only of independent or controlled current sources and/or inductors), if the characteristics of the dynamic elements are strictly increasing and differentiable, then they have a nonzero dynamic parameter at any operating point. If the resistors' characteristics are strictly increasing and the resistive multiport has no loops formed only from voltage sources and no cut-sets formed only from current sources, then this resistive multiport has a unique solution. So there are state equations in the normal form $\dot{x} = f(x, t)$. For the dynamic circuit to have a unique solution for any initial state $x(t_0)$, it is enough that $f$ is Lipschitzian [1–3]. When there are excess state quantities, the problem is treated similarly.

#### **2. State equations for dynamic circuits**

The simplest *dynamic* circuit elements are the linear capacitor and the linear inductor. The operating equation of the linear capacitor is $i_C(t) = C \cdot \frac{dv_C(t)}{dt}$, where $v_C(t)$ is the voltage at the capacitor terminals, $i_C(t)$ is the current through the capacitor, and $C$ is a constant called the capacitance. The operating equation of the linear inductor is $v_L(t) = L \cdot \frac{di_L(t)}{dt}$, where $v_L(t)$ is the voltage at the inductor terminals, $i_L(t)$ is the current through the inductor, and $L$ is a constant called the inductance. The ideal dynamic elements are, unlike resistors, lossless elements, i.e., they do not dissipate energy but accumulate it. The energy accumulated at a given moment by such an element can subsequently be transferred to the circuit in which the element is connected.

A circuit that contains at least one dynamic element is called *a dynamic circuit*. The behavior of dynamic circuits, consisting of independent sources, inductors, capacitors, and resistors, is described by a system of differential equations.

#### **2.1 First-order dynamic circuits**

A first-order linear circuit contains only one dynamic element (an inductor or a capacitor), other linear circuit elements (resistors, linear controlled sources), and independent sources. The resistive two-pole has an equivalent voltage generator (Thevenin) in **Figure 1** [4] and/or an equivalent current generator (Norton) in

**Figure 1.** *A Thevenin first-order circuit.*

**Figure 2.** *A Norton first-order circuit.*

**Figure 2** [5] at the input of the dynamic element. With this consideration in mind, it is sufficient to consider only the next two first-order linear circuits.


So, a first-order linear circuit satisfies the equation:

$$
\dot{x} + \frac{x}{\tau} = \frac{s(t)}{\tau} \tag{1}
$$

where *x* is the state variable of the circuit, *τ* is the time constant of the circuit, and *s t*ð Þ is the parameter of the equivalent independent source.
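As an illustration, Eq. (1) can be integrated with the Backward Euler method mentioned in the Abstract. This is a minimal sketch (the function name and parameter values are our own), exploiting the fact that for a linear equation the implicit step can be solved in closed form:

```python
def backward_euler(x0, tau, s, h, n_steps):
    """Integrate x' + x/tau = s(t)/tau with the implicit Backward Euler rule:
    x[n+1] = x[n] + h*(s(t[n+1]) - x[n+1])/tau, solved here for x[n+1]."""
    x, t = x0, 0.0
    for _ in range(n_steps):
        t += h
        x = (x + h * s(t) / tau) / (1.0 + h / tau)  # implicit step in closed form
    return x

# DC source: the state must relax to the source value regardless of x0
S0, tau = 5.0, 1e-3
x_end = backward_euler(x0=0.0, tau=tau, s=lambda t: S0, h=tau / 10, n_steps=200)
print(x_end)   # approx 5.0 after t = 20*tau
```

Being implicit, this rule remains stable for arbitrarily large time steps on stiff equations, which is why it is among the methods implemented in SPICE-like simulators.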

#### **2.2 Dynamic circuits of second order or greater**

The aim is to write the state equations in normal form $\dot{x} = f(x, t)$. Bringing the equations to this form has two advantages: the qualitative properties of the circuit can be studied more easily, and certain numerical methods for circuit analysis, formulated for systems of differential equations written in this form, can be applied.

There are two cases. In the first case, the circuit contains no loop consisting only of independent or controlled voltage sources and/or capacitors, and no cut-set consisting only of independent or controlled current sources and/or inductors. In this case, the state variables are independent of each other, and we say that *the circuit has no excess state quantities*. In the second case, the circuit does not satisfy the above restrictions, the state variables are not independent of each other, and we say that *the circuit has excess state quantities*.

#### *2.2.1 Circuits without excess state quantities*

Any linear dynamic circuit without excess state quantities can be considered as a linear resistive multiport (containing linear resistors and independent sources) with dynamic elements connected at the ports. A dynamic circuit of order $n \ge 2$ that contains $p$ capacitors and $n - p$ inductors can be represented as in **Figure 3**.

Capacitors are replaced with independent voltage sources and inductors with independent current sources, as in **Figure 4**. If the following notations are made:

$x = \left[v_1, \cdots, v_p, i_{p+1}, \cdots, i_n\right]^t$ — the vector of state variables,

$y = \left[i_1, \cdots, i_p, v_{p+1}, \cdots, v_n\right]^t$ — the vector of output quantities, and $u = \left[e_1, \cdots, e_\alpha, i_{\alpha+1}, \cdots, i_\mu\right]^t$ — the vector of input quantities (independent sources); then, according to the superposition theorem [6], the circuit equations become $y = k_0 \cdot x + k_1 \cdot u$, where $k_0$ and $k_1$ are matrices with constant elements.

Using the notation $\Delta = \operatorname{diag}\left(C_1, \cdots, C_p, L_{p+1}, \cdots, L_n\right)$, we obtain $\dot{x} = \Delta^{-1} \cdot y$ and $y = \Delta \cdot \dot{x}$, and the state equations are $\dot{x} = \Delta^{-1} \cdot \left(k_0 \cdot x + k_1 \cdot u\right)$, or:

$$
\dot{\mathbf{x}} = \mathbf{A} \cdot \mathbf{x} + \mathbf{B} \cdot \mathbf{u} \tag{2}
$$

where *A* is called the circuit state matrix.

As an example, for the circuit without excess state quantities in the **Figure 5** [3, 7], the following state equations are obtained.

**Figure 3.**

*A dynamic circuit with p capacitors and n − p inductors.*

#### **Figure 4.** *Capacitors/inductors replaced with independent voltage/current sources.*

*Electrical Circuits as Dynamical Systems DOI: http://dx.doi.org/10.5772/intechopen.105780*

**Figure 5.** *Circuit without excess state quantities.*

#### *2.2.2 Circuits with excess state quantities*

In this case, there is at least one loop consisting only of capacitors and voltage sources or a cut-set consisting only of inductors and current sources. This means that there will be at least one capacitor that cannot be placed in the tree or an inductor that cannot be placed in the co-tree. The normal tree therefore contains all voltage sources and as many capacitors as possible, whose voltages are independent state quantities. The other capacitors are contained in the normal co-tree, and their voltages are excess state quantities. The normal co-tree contains all current sources and as many inductors as possible, whose currents are independent state quantities. The other inductors are contained in the normal tree, and their currents are excess state quantities. We consider the circuit as a resistive multiport with dynamic elements connected to the ports. According to the substitution theorem, the capacitors and inductors in the tree are replaced with voltage sources, and the capacitors and inductors in the co-tree with current sources. The result is the circuit in **Figure 6**, in which, for simplicity, only one source was represented for each category of element (tree capacitors, tree inductors, co-tree inductors, co-tree capacitors). The second subscript indicates tree (*t*) or co-tree (*c*).

Using the notation $x = \left[v_{Ct},\; i_{Lc}\right]^t$, $x^* = \left[v_{Cc},\; i_{Lt}\right]^t$, $y = \left[i_{Ct},\; v_{Lc}\right]^t$, $y^* = \left[i_{Cc},\; v_{Lt}\right]^t$.

The resistive circuit being linear, according to the superposition theorem we obtain:

$$y = k_0 \cdot x + k_1 \cdot \mu_S + k_2 \cdot y^* \tag{3}$$

where *k*0, *k*1, and *k*<sup>2</sup> are matrices with constant parameters. The operating equations of the dynamic elements can be written:

**Figure 6.**

*The capacitors and inductors in the tree replaced with voltage sources, and the capacitors and inductors in the co-tree with current sources.*

$$\begin{aligned} y &= -\Delta \cdot \dot{x} \quad \text{where } \Delta = \operatorname{diag}(C_t,\; L_c), \\ y^* &= -\Delta^* \cdot \dot{x}^* \quad \text{where } \Delta^* = \operatorname{diag}(C_c,\; L_t). \end{aligned}$$

The excess state quantities can be expressed, with the help of Kirchhoff's laws, in terms of the independent state quantities and the parameters of some independent sources: $x^* = k_3 \cdot x + k_4 \cdot \mu_S$, and we obtain:

$$
y^* = -\Delta^* \cdot k_3 \cdot \dot{x} - \Delta^* \cdot k_4 \cdot \dot{\mu}_S \tag{4}
$$

But:

$$\dot{\mathbf{x}} = -\boldsymbol{\Delta}^{-1} \cdot \mathbf{y} = -\boldsymbol{\Delta}^{-1} \cdot \left[ k\_0 \cdot \mathbf{x} + k\_1 \cdot \mu\_S - k\_2 \cdot \left( \boldsymbol{\Delta}^\* \cdot k\_3 \cdot \dot{\mathbf{x}} + \boldsymbol{\Delta}^\* \cdot k\_4 \cdot \dot{\mu}\_S \right) \right] \tag{5}$$

so, after solving for $\dot{x}$, the last relationship can be written as:

$$
\dot{\mathbf{x}} = \mathbf{A} \cdot \mathbf{x} + \mathbf{B} \cdot \boldsymbol{\mu\_S} + \mathbf{B}^\* \cdot \dot{\boldsymbol{\mu\_S}} \tag{6}
$$

where *A* is the state matrix of the circuit.

As an example, for the circuit with excess state quantities (capacitor loop and inductor cut-set circuit) and without sources for simplicity, in **Figure 7** [8, 9], the state equations are obtained:

**Figure 7.** *Circuit with excess state quantities.*

#### **2.3 Dynamic circuits with resistive nonlinearities**

The methods presented above for writing state equations can also be used in the case of dynamic circuits with resistive nonlinearities, with only a few changes. A first approach is to approximate the nonlinear characteristic of the resistor with a piecewise-linear characteristic. The state equations for the nonlinear circuit can then be written separately for each linear portion of the piecewise-linear characteristic. For example, for the rectifier with a diode and the *RC* load in **Figure 8**, we have two states: one in which the diode conducts and is equivalent to a very low-value resistor, ideally 0 Ω, as in **Figure 9** (state 1), and the other in which the diode does not conduct and is equivalent to a very high-value resistor, ideally ∞ Ω, as in **Figure 10** (state 2).

From the equations obtained separately for the two states, the following state equation can be written:

$$\frac{dv\_c(t)}{dt} = \left(-\frac{1-D}{R\_E \cdot \text{C}} - \frac{1}{R \cdot \text{C}}\right) \cdot v\_c(t) - \frac{1-D}{R\_E \cdot \text{C}} \cdot e(t) \tag{7}$$


**Figure 8.** *A diode rectifier with RC load.*

**Figure 9.** *Diode conducts (state 1).*

**Figure 10.** *Diode does not conduct (state 2).*

where $e(t) = 5 \cdot \sin\left(2\pi f \cdot (t + t_0)\right)\,[\mathrm{V}]$ and $f = 10\,\mathrm{kHz}$.

When $D = 0$, the equation for state 1 is obtained, and when $D = 1$, the equation for state 2 is obtained. $D$ acts as a closed-open switch over a well-defined time interval depending on the state of the diode: if the rectifier diode conducts, $D$ acts as an open switch, and if the diode does not conduct, $D$ acts as a closed switch [10].

Another approach to writing the state equations for dynamic circuits with resistive nonlinearities is to replace, in the state equations, the nonlinear resistance with the function that describes the nonlinearity (**Figure 11**). In the example of a rectifier with a diode and *RC* load, the diode can be modeled as a piecewise-linear function: $R_D = 0.1\,\Omega$ if the voltage at its terminals is positive ($v > 0$), and $R_D = 10\,\mathrm{M}\Omega$ if the voltage at its terminals is negative ($v < 0$), **Figure 12**.

The following state equation is obtained:

$$\frac{dv\_c(t)}{dt} = -\frac{R\_D + R\_E + R}{(R\_D + R\_E) \cdot R \cdot \mathbf{C}} \cdot v\_c(t) + \frac{1}{(R\_D + R\_E) \cdot \mathbf{C}} \cdot \mathbf{e}(t) \tag{8}$$

As the handling of functions is more difficult than that of symbols, the second approach is suitable for small circuits and the first for larger circuits.
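The second approach can be sketched numerically. In the sketch below (component values, the time step, and the diode switching test are our assumptions, not values from the chapter), equation (8) is integrated with Forward Euler, choosing $R_D$ from the piecewise-linear characteristic at each step:

```python
import math

RD_ON, RD_OFF = 0.1, 10e6       # diode piecewise-linear resistances (Figure 12)
RE, R, C = 10.0, 10e3, 100e-9   # series resistance, load, capacitance (our values)
f = 10e3                        # source frequency from the text
e = lambda t: 5.0 * math.sin(2 * math.pi * f * t)   # t0 = 0 assumed

def step(vc, t, h):
    # crude switch test: the diode is taken as forward-biased when e(t) > vc
    RD = RD_ON if e(t) > vc else RD_OFF
    dvc = (-(RD + RE + R) / ((RD + RE) * R * C) * vc
           + e(t) / ((RD + RE) * C))          # right-hand side of Eq. (8)
    return vc + h * dvc                       # one Forward Euler step

vc, t, h = 0.0, 0.0, 1e-8                     # h well below (RD_ON + RE)*C
for _ in range(50_000):                       # five source periods (500 us)
    vc = step(vc, t, h)
    t += h
print(vc)                                     # rectified output, near the 5 V peak
```

Note that the conducting state makes the equation stiff (time constant about $(R_{D,\mathrm{on}} + R_E)C \approx 1\,\mu\mathrm{s}$ versus $RC = 1\,\mathrm{ms}$), which is why the explicit step here must be so small; an implicit method would allow much larger steps.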

**Figure 11.** *A rectifier with RC load; diode replaced with a nonlinear resistance.*

**Figure 12.** *The nonlinear resistance function.*

#### **3. Qualitative behavior of dynamic circuits**

#### **3.1 First-order dynamic circuits**

In the previous section, it was shown that a first-order linear circuit satisfies the equation:

$$
\dot{x} + \frac{x}{\tau} = \frac{s(t)}{\tau} \tag{9}
$$

where *x* is the state variable of the circuit, *τ* is the time constant of the circuit, and *s t*ð Þ is the parameter of the equivalent independent source.

The solution of a first-order linear circuit with independent direct current sources consists of two terms: the solution $x_\nu$ of the homogeneous equation and a particular solution $x_p$:

$$
x = x_\nu + x_p \tag{10}
$$

The homogeneous equation is $\dot{x}_\nu + \frac{x_\nu}{\tau} = 0$. This can be solved using the separation of variables method:

$$
\dot{x}_{\nu} = -\frac{x_{\nu}}{\tau}
$$

$$
\frac{dx_{\nu}}{dt} = -\frac{x_{\nu}}{\tau}
$$

$$
\frac{dx_{\nu}}{x_{\nu}} = -\frac{dt}{\tau}
$$

$$
\int_{x_0}^{x_{\nu}} \frac{dx_{\nu}}{x_{\nu}} = -\int_{t_0}^{t} \frac{dt}{\tau}
$$


$$
\ln\left(x_{\nu}\right) - \ln\left(x_0\right) = -\frac{(t - t_0)}{\tau}
$$

$$
\ln\left(x_{\nu}\right) = C_1 - \frac{(t - t_0)}{\tau}
$$

$$
x_{\nu} = e^{C_1} \cdot e^{-\frac{(t - t_0)}{\tau}} = C_2 \cdot e^{-\frac{(t - t_0)}{\tau}}
$$

The particular solution is a constant, equal to the free term: $x_p = C_3$, so:

$$x(t) = C_2 \cdot e^{-\frac{(t-t_0)}{\tau}} + C_3$$

The constant $C_3$ is calculated by replacing $t = t_\infty$:

$$x(t_\infty) = C_2 \cdot e^{-\infty} + C_3 = 0 + C_3, \text{ so } C_3 = x(t_\infty)$$

The solution is determined only if the initial condition $x(t_0)$ is known. Replacing $t = t_0$ we obtain:

$$x(t_0) = C_2 \cdot e^0 + x(t_\infty) = C_2 + x(t_\infty), \text{ so } C_2 = x(t_0) - x(t_\infty)\text{, which yields:}$$

$$x(t) = \left[x(t_0) - x(t_\infty)\right] \cdot e^{-\frac{(t - t_0)}{\tau}} + x(t_\infty) \tag{11}$$

We distinguish two cases in which the behavior of the solution differs: $\tau > 0$ and $\tau < 0$. If $\tau > 0$, when $t \to \infty$, $x(t)$ decreases exponentially over time and the solution *tends to equilibrium* (**Figure 13**). If $\tau < 0$, when $t \to \infty$, $x(t)$ increases exponentially over time and the solution tends to an *infinite value* (**Figure 14**).
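A small numerical check of Eq. (11) and of the two cases above (the function and sample values are ours, chosen for illustration):

```python
import math

def x_of_t(t, t0, x0, x_inf, tau):
    """Closed-form first-order response, Eq. (11)."""
    return (x0 - x_inf) * math.exp(-(t - t0) / tau) + x_inf

# tau > 0: the solution relaxes exponentially toward x(t_inf)
assert abs(x_of_t(t=5.0, t0=0.0, x0=10.0, x_inf=2.0, tau=1.0) - 2.0) < 0.1
# tau < 0: the deviation from x(t_inf) grows without bound
assert x_of_t(t=5.0, t0=0.0, x0=10.0, x_inf=2.0, tau=-1.0) > 1e3
```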

#### **3.2 Dynamic circuits of second order or greater**

The qualitative behavior of the circuit is determined by the eigenvalues of the state matrix *A*. These can be calculated as roots of the equation:

$$\det(\mathbf{A} - \lambda \mathbf{\cdot} \mathbf{I}) = \mathbf{0} \tag{12}$$

where *I* is the unit matrix of the same order as the state matrix *A*.

First case: The state matrix *A* has real and distinct eigenvalues. There are three possibilities:

**Figure 13.** *The solution decreases exponentially.*

**Figure 14.** *The solution increases exponentially.*

**Figure 15.**

*Circuit response for real and distinct eigenvalues (a) stable response; (b) constant response; and (c) unstable response.*


Second case: The state matrix *A* has complex conjugated eigenvalues. In this case, there are also three possibilities:



**Figure 16.**

*Circuit response for complex conjugated eigenvalues (a) stable response; (b) constant response; and (c) unstable response.*

c. The state matrix *A* has eigenvalues with a positive real part. In this case, the solution contains harmonic components that are damped (approach the *steady state*) only as *t* → −∞, that is, the response grows over time (**Figure 16c**).

It can be observed that circuits whose state matrix *A* has negative real eigenvalues or complex conjugate eigenvalues with a negative real part are stable. If at least one real eigenvalue is positive, or a pair of complex conjugate eigenvalues has a positive real part, the circuit is unstable.

For example, for the circuit in paragraph 2.2.1 (**Figure 5**), by replacing the numerical values for the circuit elements in the state matrix, the following eigenvalues are obtained:

$$\det(A - \lambda \cdot I) = \det\begin{pmatrix} -1 - \lambda & 0 & 10^5 \\ 0 & -\lambda & 10^{10} \\ -4 \cdot 10^7 & -4 \cdot 10^7 & -4 \cdot 10^6 - \lambda \end{pmatrix} = \mathbf{0}$$

So:

$$-\boldsymbol{\lambda}^3 - 4 \cdot \mathbf{10}^6 \cdot \boldsymbol{\lambda}^2 - 4 \cdot \mathbf{10}^{17} \cdot \boldsymbol{\lambda} - 4 \cdot \mathbf{10}^{17} = \mathbf{0} \to \begin{cases} \boldsymbol{\lambda}\_1 = -1 \\ \boldsymbol{\lambda}\_2 = -2 \cdot \mathbf{10}^6 + 6.3246 \cdot \mathbf{10}^8 \cdot \boldsymbol{i} \\ \boldsymbol{\lambda}\_3 = -2 \cdot \mathbf{10}^6 - 6.3246 \cdot \mathbf{10}^8 \cdot \boldsymbol{i} \end{cases}$$

It is observed that all eigenvalues have a negative real part, so the circuit is stable.
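This stability check is easy to reproduce numerically; a sketch using NumPy on the state matrix given above:

```python
import numpy as np

# State matrix from the example above (values as given in the text).
A = np.array([
    [-1.0,   0.0,   1e5 ],
    [ 0.0,   0.0,   1e10],
    [-4e7,  -4e7,  -4e6 ],
])

eigenvalues = np.linalg.eigvals(A)
print(eigenvalues)

# Stability test: every eigenvalue must have a negative real part.
assert np.all(eigenvalues.real < 0), "circuit would be unstable"
```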

#### **4. Analytical solving of dynamic circuits equations**

With the help of the Laplace transform, it is possible to construct a system of algebraic equations in the complex variable *s* domain, which corresponds to a system of linear differential equations in the time domain. The use of algebraic equations instead of differential equations shows evident benefits in the study of electrical circuits.

Although Section 2 shows how to write the state equations in normal form *ẋ* = *f*(*x*, *t*), this method is cumbersome for a human operator, even for circuits with a small number of dynamic elements. For this reason, in the analysis of electrical circuits, it is preferred to apply the Laplace transform first to the equations that describe the operation of the circuit elements and then to write the circuit equations.

#### **4.1 Circuit analysis with Laplace transform**

The study of the time-varying regime can be performed if the initial conditions at *t* = 0− are known (the inductor currents *i<sub>k</sub>*(0−) and the capacitor voltages *v<sub>k</sub>*(0−)). It is considered that the independent sources are connected in the circuit at *t* = 0−. In this case, all voltages and all currents are original functions. For example, a direct current source (voltage or current) with *E* = const. or *I<sub>S</sub>* = const., connected at *t* = 0−, has *e*(*t*) = *E*·*u*(*t*) or *i<sub>S</sub>*(*t*) = *I<sub>S</sub>*·*u*(*t*). Thus, the quantities *e*(*t*) and *i<sub>S</sub>*(*t*) become original functions due to the presence of the factor *u*(*t*). For such a circuit there are Laplace images of the voltages and currents: *I<sub>k</sub>*(*s*) = *L*{*i<sub>k</sub>*(*t*)} and *V<sub>k</sub>*(*s*) = *L*{*v<sub>k</sub>*(*t*)}. Computation with Laplace images is called operational computation. Applying the linearity property of the Laplace transform, Kirchhoff's laws Σ<sub>*N*</sub> *i<sub>k</sub>*(*t*) = 0 and Σ<sub>*B*</sub> *v<sub>k</sub>*(*t*) = 0 can be written in the complex variable *s* domain: Σ<sub>*N*</sub> *I<sub>k</sub>*(*s*) = 0 and Σ<sub>*B*</sub> *V<sub>k</sub>*(*s*) = 0.

According to the linearity and derivation properties of the Laplace transform, the differential equations that express the connections between *v<sub>k</sub>*(*t*) and *i<sub>k</sub>*(*t*) for the circuit elements correspond to algebraic equations that express the connections between *V<sub>k</sub>*(*s*) = *L*{*v<sub>k</sub>*(*t*)} and *I<sub>k</sub>*(*s*) = *L*{*i<sub>k</sub>*(*t*)} [11].

#### **4.2 Equivalent operational circuits**

A circuit in which currents and voltages are Laplace images is called the equivalent operational circuit. Below are the most common circuit elements and their Laplace images [11].



b. The ideal alternating current voltage (or current) source


#### c. The resistor

The ideal resistor has the operating equation *v*(*t*) = *R*·*i*(*t*), and if *V*(*s*) = *L*{*v*(*t*)} and *I*(*s*) = *L*{*i*(*t*)}, then *V*(*s*) = *R*·*I*(*s*). The factor that multiplies *I*(*s*) to give *V*(*s*) is called the operational impedance, *Z*(*s*) = *V*(*s*)/*I*(*s*). For the ideal resistor, *Z<sub>R</sub>*(*s*) = *R*. The circuit corresponding to this relationship is shown below:


#### d. The inductor

For the ideal inductor, according to the derivation property of the original function, the equation *v*(*t*) = *L*·d*i*(*t*)/d*t* transforms to:

$$V(s) = L \cdot [s \cdot I(s) - i(0_-)] = s \cdot L \cdot I(s) - L \cdot i(0_-) \tag{13}$$

where *i*(0−) is the initial condition at the time *t* = 0− for the current through the inductor. The equivalent operational circuit is:


#### e. The capacitor

Similar to the inductor, according to the derivation property of the original function, for the ideal capacitor the relationship *i*(*t*) = *C*·d*v*(*t*)/d*t* transforms to:

$$I(s) = C \cdot [s \cdot V(s) - v(0_-)] = \frac{V(s)}{\frac{1}{s \cdot C}} - C \cdot v(0_-) \tag{14}$$

where *v*(0−) is the initial condition at *t* = 0− for the voltage drop across the capacitor. The equivalent operational circuit is:


#### **4.3 Response computation of linear dynamic circuits**

All theorems and analysis methods valid for direct or alternating current circuits are also valid for the equivalent operational circuits: equivalent generator theorems, superposition theorem, the two-port representation theorems, Kirchhoff's laws analysis, nodal analysis, mesh analysis, etc.

The Laplace transform analysis algorithm of a linear circuit in time-varying regime consists of:


If the circuit has more than four dynamic elements, determining the circuit response using analytical methods requires substantial effort for a human operator. In this case, a software product such as Mathematica [12] or Maple [13] capable of performing symbolic computations can be used.

To illustrate the above algorithm, consider the following example, the circuit in **Figure 17** [14]. Required are the current through the inductor *i<sub>L</sub>*(*t*) and the voltage at the terminals of the capacitor *v<sub>C</sub>*(*t*) in the transient regime that occurs after the switch *K is closed* at time *t* = 0 in the circuit in the figure below. We know *R* = 1 Ω, *C* = 1/3 F, *L* = 3/2 H, *E* = 6 V, *i<sub>L</sub>*(0−) = 3 A, and *v<sub>C</sub>*(0−) = 3 V.

Using the equivalent operational circuits (**Figure 18**), we obtain:

$$I_L(s) = \frac{3 \cdot s^2 + 11 \cdot s + 12}{s \cdot (s+1) \cdot (s+2)},$$

and the capacitor voltage:

$$V_C(s) = \frac{3 \cdot s^2 + 9 \cdot s + 12}{s \cdot (s+1) \cdot (s+2)}.$$

**Figure 18.** *The equivalent operational circuit.*

To make it easier to calculate the transient solution, we decompose the expressions into simple fractions:

$$I_L(s) = \frac{6}{s} - \frac{4}{s+1} + \frac{1}{s+2} \text{ and } V_C(s) = \frac{6}{s} - \frac{6}{s+1} + \frac{3}{s+2}$$

So:

$$i_L(t) = L^{-1}\{I_L(s)\} = 6 - 4 \cdot e^{-t} + e^{-2t} \text{ [A]}$$

and

$$v_C(t) = L^{-1}\{V_C(s)\} = 6 - 6 \cdot e^{-t} + 3 \cdot e^{-2t} \text{ [V]}$$
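As a quick check, the partial-fraction decomposition can be verified by recombining the simple fractions over the common denominator *s*·(*s*+1)·(*s*+2); a NumPy sketch:

```python
import numpy as np

# Recombine the simple fractions of I_L(s) and V_C(s) over the common
# denominator s*(s+1)*(s+2) and inspect the resulting numerators.
s, s1, s2 = np.poly1d([1, 0]), np.poly1d([1, 1]), np.poly1d([1, 2])

num_IL = 6 * (s1 * s2) - 4 * (s * s2) + 1 * (s * s1)
num_VC = 6 * (s1 * s2) - 6 * (s * s2) + 3 * (s * s1)

print(num_IL.coeffs)   # numerator coefficients of I_L(s)
print(num_VC.coeffs)   # numerator coefficients of V_C(s)

# Cross-check the time-domain solutions at t = 0 against the
# initial conditions i_L(0-) = 3 A and v_C(0-) = 3 V.
iL0 = 6 - 4 * np.exp(0) + np.exp(0)        # 6 - 4 + 1 = 3
vC0 = 6 - 6 * np.exp(0) + 3 * np.exp(0)    # 6 - 6 + 3 = 3
assert iL0 == 3 and vC0 == 3
```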

#### **4.4 Response computation of the dynamic circuits with resistive nonlinearities**

Solving systems of differential equations by a human operator is difficult even in the case of linear circuits and even more so in the case of nonlinear ones. To solve them, it is preferred to use programs capable of performing symbolic computations, such as Mathematica, Maple, etc. To solve the circuit state equation obtained by the second approach in paragraph 2.2:

$$\frac{dv\_c(t)}{dt} = -\frac{R\_D + R\_E + R}{(R\_D + R\_E) \cdot R \cdot C} \cdot v\_c(t) + \frac{1}{(R\_D + R\_E) \cdot C} \cdot \varepsilon(t) \tag{15}$$

where

$$R_D = \begin{cases} 0.1\ \Omega & \text{for } v > 0 \\ 10\ \mathrm{M}\Omega & \text{for } v < 0 \end{cases}, \qquad e(t) = 5 \cdot \sin\big(2\pi f \cdot (t + t_0)\big)\ [\mathrm{V}], \qquad f = 10\ \mathrm{kHz},$$

it is possible to proceed in the following way. The above equation is solved separately on each linear region of the nonlinear characteristic of *R<sub>D</sub>*. When the value of *R<sub>D</sub>* changes, the initial condition *v<sub>c</sub>*(*t*<sub>0</sub>) = *v<sub>c0</sub>* is taken as the value of the voltage *v<sub>c</sub>*(*t*) obtained at the end of the previous interval. These calculations were made in Maple for two periods of the source *e*(*t*), with the following solution obtained by concatenating the results (**Figure 19**):

**Figure 19.** *The analytical solution (Maple) and the numerical solution (SPICE).*

As a verification of this method, we can compare the results with those obtained using the SPICE program [15] over the same time frame. It is observed that the analytical solution obtained with Maple is almost identical to the numerical solution obtained with SPICE.

#### **4.5 Transfer functions**

Consider a circuit with constant parameters and a unique solution. We assume there is only one independent source and that the initial conditions are zero. The transfer function from port *i* to port *j* is defined as the ratio between the output quantity at port *j* and the input quantity at port *i* (**Figure 20**). The output quantity can be a voltage or a current, and the input quantity can be the electromotive voltage of an independent voltage source or the current of an independent current source.

Generally, a circuit function also called a transfer function is:

$$H(\mathfrak{s}) = \frac{P(\mathfrak{s})}{Q(\mathfrak{s})} \tag{16}$$

The roots of *Q*(*s*) are called *the poles of H*(*s*), and the roots of *P*(*s*) are called *the zeros of H*(*s*). The dynamic behavior of the network depends upon the location of the poles and zeros of the network function [16]. In general, one can graphically deduce the magnitude and phase curves of any network function from the location of its poles and zeros. The poles of *H*(*s*) are the natural frequencies of the circuit. Not all natural frequencies are poles of every function of that circuit, because common factors that appear in both *Q*(*s*) and *P*(*s*) can disappear by simplification.

The poles of the transfer function determine the stability of the circuit, similarly to the eigenvalues of the state matrix presented in paragraph 3. If all the poles have a negative real part, the circuit is stable. If at least one pole has a positive real part, the circuit is unstable.

HSPICE uses the Muller method to calculate the roots of the polynomials *P*(*s*) and *Q*(*s*). This method approximates the polynomial with a quadratic equation that fits through

**Figure 20.** *Circuit with one input (*i*) and one output (*j*).*


**Table 1.**

*The poles obtained with HSPICE for the circuit in Figure 5.*


three points in the vicinity of a root. Successive iterations toward a particular root are obtained by finding the nearest root of a quadratic equation whose curve passes through the last three points [16]. Pole/zero analysis results are based on the circuit's DC operating point, so the operating point solution must be accurate [16].

For the circuit in paragraph 2.2.1, **Figure 5**, the poles in **Table 1** are obtained using the HSPICE program.

It is observed that they are identical with the eigenvalues of the state matrix, calculated in paragraph 3.2.

#### **5. Numerical approach of solving dynamic circuits equations**

We aim to solve a system of state equations written in normal form *ẋ* = *f*(*x*, *t*). Even for linear circuits whose equations have an analytical solution, the use of numerical methods is preferred, because even for a second-order circuit the analytical calculations are too complicated to be performed efficiently by a human operator.

Automatic derivation of analytical solutions requires considerable computational effort, as computers are designed to operate with numbers rather than symbols. For this purpose, specialized software such as Mathematica or Maple, which performs analytical calculations, can be used.

#### **5.1 Numerical integration methods used in circuit simulation**

Any numerical method starts from the initial condition *x*(*t*<sub>0</sub>) and successively determines:

$$
x(t_0 + h), x(t_0 + 2h), \dots \tag{17}
$$

where *h* is the time step.

The simplest numerical integration methods used in circuit simulation programs are [17]:

#### *5.1.1 The forward Euler method (FE)*

Expanding *x*(*t*<sub>*k*+1</sub>) in a Taylor series in the vicinity of the point *t<sub>k</sub>*, we obtain:

$$x(t_{k+1}) = x(t_k) + \left.\frac{dx}{dt}\right|_{t_k} \cdot \frac{t_{k+1} - t_k}{1!} + \left.\frac{d^2x}{dt^2}\right|_{t_k} \cdot \frac{(t_{k+1} - t_k)^2}{2!} + \dots \tag{18}$$

Using the notation *x*(*t<sub>k</sub>*) = *x<sub>k</sub>*, neglecting the higher-order terms, and taking into account that d*x*/d*t* = *ẋ* = *f*(*x*, *t*), we obtain:

$$x_{k+1} = x_k + h \cdot \dot{x}_k \text{ or } x_{k+1} = x_k + h \cdot f(x_k) \tag{19}$$

where *h* = *t*<sub>*k*+1</sub> − *t<sub>k</sub>* is the time step.
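Eq. (19) translates directly into code; a minimal forward Euler sketch, tried on the test equation d*x*/d*t* = −*x*/*τ* (an illustrative example, not a circuit from the text):

```python
import math

def forward_euler(f, x0, t0, h, n_steps):
    """Explicit integration: x_{k+1} = x_k + h * f(x_k, t_k)  (Eq. 19)."""
    x, t, history = x0, t0, [x0]
    for _ in range(n_steps):
        x = x + h * f(x, t)
        t += h
        history.append(x)
    return history

# Test equation dx/dt = -x/tau with tau > 0 (exact solution x0*exp(-t/tau)).
tau = 1.0
xs = forward_euler(lambda x, t: -x / tau, 1.0, 0.0, h=0.01, n_steps=100)
exact = math.exp(-1.0)
print(xs[-1], exact)   # close for a small step h
```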

The graphical interpretation of this method of numerical integration is presented below in **Figure 21**.

**Figure 21.** *The forward Euler method (FE) graphical interpretation.*

#### *5.1.2 The backward Euler method (BE)*

Similarly, expanding *x*(*t<sub>k</sub>*) in a Taylor series in the vicinity of the point *t*<sub>*k*+1</sub> results in:

$$x(t_k) = x(t_{k+1}) + \left.\frac{dx}{dt}\right|_{t_{k+1}} \cdot \frac{t_k - t_{k+1}}{1!} + \left.\frac{d^2x}{dt^2}\right|_{t_{k+1}} \cdot \frac{(t_k - t_{k+1})^2}{2!} + \dots \tag{20}$$

With the above notations we obtain:

$$x_{k+1} = x_k + h \cdot \dot{x}_{k+1} \text{ or } x_{k+1} = x_k + h \cdot f(x_{k+1}) \tag{21}$$

**Figure 22** presents the graphical interpretation of this numerical integration method.

**Figure 22.** *The backward Euler method (BE) graphical interpretation.*

#### *5.1.3 The trapezoidal rule (TR)*

From the graphical interpretation of the two numerical integration methods presented above, it is observed that, for the same function, one method approximates the integral with a smaller value and the other with a larger value. This drawback can be overcome by using the trapezoidal rule.

**Figure 23.** *The trapezoidal rule (TR) graphical interpretation.*

Adding the two formulas obtained for *x*<sub>*k*+1</sub>, the one given by the forward Euler method and the one given by the backward Euler method, and dividing by two, the formula for the trapezoidal rule is obtained:

$$\left.\begin{aligned} x_{k+1} &= x_k + h \cdot \dot{x}_k \\ x_{k+1} &= x_k + h \cdot \dot{x}_{k+1} \end{aligned}\right\} \Rightarrow 2 \cdot x_{k+1} = 2 \cdot x_k + h \cdot (\dot{x}_k + \dot{x}_{k+1})$$

or

$$x_{k+1} = x_k + \frac{h}{2} \cdot (\dot{x}_k + \dot{x}_{k+1}) \tag{22}$$

The graphical interpretation of the trapezoidal rule is presented below, in **Figure 23**.

It is easy to see that the forward Euler method, in which *x*(*t*<sub>*k*+1</sub>) is determined from *x*(*t<sub>k</sub>*), is an explicit method, while the backward Euler method and the trapezoidal rule are implicit methods.
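For the linear test equation d*x*/d*t* = *λ*·*x*, the implicit BE and TR updates can be solved for *x*<sub>*k*+1</sub> in closed form, which makes the three methods easy to compare; a sketch under that assumption:

```python
import math

# One step of each method for the linear test equation dx/dt = lam * x.
# For a linear right-hand side, the implicit BE and TR updates can be
# solved for x_{k+1} in closed form.
def fe_step(x, lam, h):  # x_{k+1} = x_k + h*lam*x_k
    return (1 + h * lam) * x

def be_step(x, lam, h):  # x_{k+1} = x_k + h*lam*x_{k+1}
    return x / (1 - h * lam)

def tr_step(x, lam, h):  # x_{k+1} = x_k + (h/2)*lam*(x_k + x_{k+1})
    return x * (1 + h * lam / 2) / (1 - h * lam / 2)

lam, h, n = -1.0, 0.1, 10
exact = math.exp(lam * n * h)          # x(1) for x(0) = 1
for name, step in [("FE", fe_step), ("BE", be_step), ("TR", tr_step)]:
    x = 1.0
    for _ in range(n):
        x = step(x, lam, h)
    print(f"{name}: {x:.6f}  (exact {exact:.6f})")
```

For a decaying solution, FE undershoots and BE overshoots the exact value, while TR lands between them, which mirrors the graphical observation above.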

Since the solutions obtained by numerical integration are approximate, the errors introduced at each step are evaluated (the *local error* and the *global error* of the method). The local error at the time *t* = *t*<sub>*n*+1</sub> is *ε<sub>n</sub>* = *x*<sub>exact</sub>(*t*<sub>*n*+1</sub>) − *x*<sub>approx</sub>(*t*<sub>*n*+1</sub>), where *x*<sub>exact</sub>(*t*<sub>*n*+1</sub>) is computed starting from the approximate value *x*(*t<sub>n</sub>*). The total (global) error at *t* = *t*<sub>*n*+1</sub> is *ε<sub>n</sub>* = *x*<sub>exact</sub>(*t*<sub>*n*+1</sub>) − *x*<sub>approx</sub>(*t*<sub>*n*+1</sub>), where *x*<sub>exact</sub>(*t*<sub>*n*+1</sub>) is computed starting from the initial value *x*(0).

The numerical integration method for which the total error decreases as time passes is a *stable method*. A method that does not have this property is numerically *unstable*, even if the local error is small and decreases over time.

In a stable method, the size of the time step is limited only by the imposed local error, which depends on the studied problem. Working with very small values of *h* leads to a large number of time steps needed to determine the solution, which takes an unjustifiably long computation time.

#### *5.1.4 The Gear methods (G)*

For circuits with eigenvalues that differ by several orders of magnitude ("stiff" circuits), the numerical methods presented above do not give correct results. In this case, the special Gear methods are used. The relationships that define the second- to sixth-order Gear methods are:

Second-order Gear: $$x_{k+1} = \frac{4}{3} x_k - \frac{1}{3} x_{k-1} + \frac{2h}{3} \dot{x}_{k+1}$$

Third-order Gear: $$x_{k+1} = \frac{18}{11} x_k - \frac{9}{11} x_{k-1} + \frac{2}{11} x_{k-2} + \frac{6h}{11} \dot{x}_{k+1}$$

Fourth-order Gear: $$x_{k+1} = \frac{48}{25} x_k - \frac{36}{25} x_{k-1} + \frac{16}{25} x_{k-2} - \frac{3}{25} x_{k-3} + \frac{12h}{25} \dot{x}_{k+1}$$

Fifth-order Gear: $$x_{k+1} = \frac{300}{137} x_k - \frac{300}{137} x_{k-1} + \frac{200}{137} x_{k-2} - \frac{75}{137} x_{k-3} + \frac{12}{137} x_{k-4} + \frac{60h}{137} \dot{x}_{k+1}$$

Sixth-order Gear: $$x_{k+1} = \frac{360}{147} x_k - \frac{450}{147} x_{k-1} + \frac{400}{147} x_{k-2} - \frac{225}{147} x_{k-3} + \frac{72}{147} x_{k-4} - \frac{10}{147} x_{k-5} + \frac{60h}{147} \dot{x}_{k+1}$$

In the case of an explicit method, after choosing the time step, we start from the initial condition *x*(*t*<sub>0</sub>) and successively compute *x*(*t*<sub>0</sub> + *h*), *x*(*t*<sub>0</sub> + 2*h*), ⋯, covering the entire time interval of interest. In the case of an implicit method, several iterations are made at each step. At the first iteration, an explicit method is used and a predictor is obtained: *x*<sub>*k*+1</sub><sup>(0)</sup> = *x<sub>k</sub>* + *h*·*f*(*x<sub>k</sub>*). The predictor value is substituted into the right-hand side of the backward (implicit) formula, giving a new value *x*<sub>*k*+1</sub><sup>(1)</sup> on the left-hand side; at the next iteration this value is substituted again into the right-hand side to obtain *x*<sub>*k*+1</sub><sup>(2)</sup>, and so on until |*x*<sub>*k*+1</sub><sup>(*N*−1)</sup> − *x*<sub>*k*+1</sub><sup>(*N*)</sup>| < *ε*<sub>imposed</sub>. The values *x*<sub>*k*+1</sub><sup>(1)</sup>, *x*<sub>*k*+1</sub><sup>(2)</sup>, ⋯ are called correctors.
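The predictor-corrector iteration described above can be sketched for one backward Euler step (the test equation and tolerance are illustrative):

```python
# Predictor-corrector iteration for one backward Euler step of
# dx/dt = f(x): predictor from a forward Euler step, then fixed-point
# corrector iterations until two successive values agree.
def implicit_step(f, xk, h, eps=1e-12, max_iter=100):
    x_new = xk + h * f(xk)            # predictor (explicit FE step)
    for _ in range(max_iter):
        x_next = xk + h * f(x_new)    # corrector: BE right-hand side
        if abs(x_next - x_new) < eps:
            return x_next
        x_new = x_next
    return x_new

# Example: dx/dt = -2x, one step from x = 1 with h = 0.1.
x1 = implicit_step(lambda x: -2.0 * x, 1.0, 0.1)
print(x1)   # converges to the exact BE update 1/(1 + 0.2)
```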

In current SPICE circuit simulation programs, the forward Euler, backward Euler, trapezoidal rule, and second-order Gear methods are used. In circuit design, a short simulation time is very important, and integration methods of order greater than 2 are not used because they involve many more computations while the improvement of the solution is not considerable.

#### **5.2 Resistive (companion) models for dynamic circuit elements**

The numerical methods described in the previous paragraph require writing the state equations in normal form *ẋ* = *f*(*x*, *t*). Writing the circuit equations in this form requires eliminating some variables and expressing others starting from the circuit equations. This process involves the numerical solution of systems of linear or nonlinear algebraic equations, an operation that may be affected by significant errors and involves a certain computational effort. The integration of the circuit equations is simplified by replacing the dynamic elements with so-called resistive (or companion) models. By doing so, the circuit response is determined by solving a linear or nonlinear resistive circuit at each time step.

Companion models for a linear capacitor or an inductor derive from the numerical integration method used in the previous paragraph.

Starting from the relations given by the most commonly used numerical integration methods, and taking into account that the state variable for the capacitor is the voltage (*i* = *C*·d*v*/d*t*, or *i* = *C*·*v̇*, so *v̇* = (1/*C*)·*i*), we obtain the models in **Table 2** for the capacitor.

These equations describe the following equivalent circuit, the capacitor companion model.
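Assuming the standard Norton form of the companion model (an equivalent conductance in parallel with a history current source), the BE and TR parameter values for a linear capacitor follow directly from the update formulas; a sketch with hypothetical numeric values:

```python
# Norton-equivalent (companion) parameters for a linear capacitor,
# derived from i = C*dv/dt and the BE / TR update formulas:
#   BE: i_{k+1} = (C/h)*v_{k+1} - (C/h)*v_k
#   TR: i_{k+1} = (2C/h)*v_{k+1} - (2C/h)*v_k - i_k
def capacitor_companion_be(C, h, v_k):
    g_eq = C / h                      # equivalent conductance
    i_eq = -(C / h) * v_k             # history current source
    return g_eq, i_eq

def capacitor_companion_tr(C, h, v_k, i_k):
    g_eq = 2 * C / h
    i_eq = -(2 * C / h) * v_k - i_k
    return g_eq, i_eq

# Hypothetical values: C = 1 uF, h = 1 us, previous state v_k = 2 V, i_k = 0.
print(capacitor_companion_be(1e-6, 1e-6, 2.0))
print(capacitor_companion_tr(1e-6, 1e-6, 2.0, 0.0))
```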



**Table 2.**

*The* BE*,* TR *and* second G *companion models for the capacitor.*

In a similar manner, but considering that the state variable for the inductor is the current (*v* = *L*·d*i*/d*t*, or *v* = *L*·*i̇*, so *i̇* = (1/*L*)·*v*), we obtain the models in **Table 3** for the inductor.


**Table 3.**

*The* BE*,* TR, *and* second G *companion models for the inductor.*

These equations describe the following equivalent circuit, the inductor companion model.

For example, using the companion models derived from the trapezoidal rule, the dynamic circuit in paragraph 2.2.1, **Figure 5**, becomes a resistive circuit, but with different values at each time step for the resistors and sources in the dynamic element models (**Figure 24**).

**Figure 24.** *Dynamic elements replaced with companion models for the circuit in Figure 5.*

The advantage of this approach is that solving a resistive circuit is much easier than a dynamic circuit, for example, using the modified nodal analysis (MNA) method [18]:

$$
\begin{bmatrix}
\frac{1}{R_1} + \frac{1}{R_{L_1}} & -\frac{1}{R_{L_1}} & 0 \\
-\frac{1}{R_{L_1}} & \frac{1}{R_{L_1}} + \frac{1}{R_{C_1}} & -\frac{1}{R_{C_1}} \\
0 & -\frac{1}{R_{C_1}} & \frac{1}{R} + \frac{1}{R_{C_1}} + \frac{1}{R_C}
\end{bmatrix} \cdot \begin{bmatrix} V_2 \\ V_3 \\ V_4 \end{bmatrix} = \begin{bmatrix} \frac{V_1}{R_1} - I_{L_1} \\ I_{L_1} + I_{C_1} \\ \frac{V_1}{R} + I_C - I_{C_1} \end{bmatrix}
$$

This method, using a variable integration time step, is implemented in all SPICE circuit simulators.
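A one-time-step MNA solve of such a companion resistive circuit can be sketched as follows; the conductance pattern and all element values below are hypothetical, chosen only to illustrate solving *G*·*V* = *I*:

```python
import numpy as np

# Hypothetical one-time-step MNA solve: G * V = I, where G collects the
# conductances of the resistors and companion models, and I collects the
# independent and companion (history) current sources.
R1, RL1, RC1, R, RC = 1.0, 2.0, 0.5, 1.0, 4.0    # hypothetical ohms
V1, IL1, IC1, IC = 10.0, 0.3, -0.1, 0.2          # hypothetical sources

G = np.array([
    [1/R1 + 1/RL1, -1/RL1,          0.0               ],
    [-1/RL1,        1/RL1 + 1/RC1, -1/RC1             ],
    [0.0,          -1/RC1,          1/R + 1/RC1 + 1/RC],
])
I = np.array([V1/R1 - IL1, IL1 + IC1, V1/R + IC - IC1])

V = np.linalg.solve(G, I)    # node voltages V2, V3, V4 at this time step
print(V)
```

At the next time step only `I` (and, if *h* changed, the companion conductances inside `G`) would be updated before solving again.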

Similar to linear dynamic element models, companion models can be built for nonlinear dynamic elements. Next, we will determine the parameters for these models for the nonlinear capacitor and the nonlinear inductor for the trapezoidal rule. Models corresponding to other numerical integration methods can be determined in a similar way.

For a nonlinear capacitor in which the electric charge is a polynomial function of the voltage, of the form *q* = *C*·(*v* + *c*<sub>1</sub>·*v*<sup>2</sup> + *c*<sub>2</sub>·*v*<sup>3</sup>), with *i* = d*q*/d*t* = *q̇*, the companion model can be obtained from the following. In this case, the state variable for the capacitor is the electric charge:

$$q_{k+1} = q_k + \frac{h}{2} \cdot \left(\dot{q}_k + \dot{q}_{k+1}\right) = q_k + \frac{h}{2} \cdot \left(i_k + i_{k+1}\right)$$

$$i_{k+1} = \frac{2}{h} \cdot q_{k+1} - \frac{2}{h} \cdot q_k - i_k$$

$$i_{k+1} = \frac{2}{h} \cdot C \cdot \left(v_{k+1} + c_1 \cdot v_{k+1}^2 + c_2 \cdot v_{k+1}^3\right) - \left[\frac{2}{h} \cdot C \cdot \left(v_k + c_1 \cdot v_k^2 + c_2 \cdot v_k^3\right) + i_k\right] \tag{23}$$

This equation describes the following equivalent circuit, the nonlinear capacitor companion model.

For a nonlinear inductor in which the magnetic flux is a polynomial function of the current, of the form *φ* = *L*·(*i* + *c*<sub>1</sub>·*i*<sup>2</sup> + *c*<sub>2</sub>·*i*<sup>3</sup>), with *v* = d*φ*/d*t* = *φ̇*, the companion model can be obtained from the following.

In this case, the inductor state variable is the magnetic flux:

$$\varphi_{k+1} = \varphi_k + \frac{h}{2} \cdot \left(\dot{\varphi}_k + \dot{\varphi}_{k+1}\right) = \varphi_k + \frac{h}{2} \cdot \left(v_k + v_{k+1}\right)$$

$$v_{k+1} = \frac{2}{h} \cdot \varphi_{k+1} - \frac{2}{h} \cdot \varphi_k - v_k$$

$$v_{k+1} = \frac{2}{h} \cdot L \cdot \left(i_{k+1} + c_1 \cdot i_{k+1}^2 + c_2 \cdot i_{k+1}^3\right) - \left[\frac{2}{h} \cdot L \cdot \left(i_k + c_1 \cdot i_k^2 + c_2 \cdot i_k^3\right) + v_k\right] \tag{24}$$

This equation describes the following equivalent circuit, the nonlinear inductor companion model.

The resistive circuit that is solved at each time step is nonlinear in this case, even if the resistors in the circuit are linear. This circuit is solved by an iterative method, usually Newton's method.
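One such Newton solve can be sketched for a hypothetical series connection of a source, a linear resistor, and the nonlinear capacitor of Eq. (23); all numeric values are illustrative:

```python
# Newton's method for one trapezoidal-rule time step of a (hypothetical)
# series circuit: source E, resistor R, and the nonlinear capacitor with
# q = C*(v + c1*v^2 + c2*v^3) from Eq. (23).
E, R = 5.0, 1e3
C, c1, c2 = 1e-6, 0.05, 0.01
h = 1e-6
v_k, i_k = 0.0, 0.0                      # state at the previous step

hist = (2/h) * C * (v_k + c1*v_k**2 + c2*v_k**3) + i_k

def residual(v):      # KCL at the capacitor node: (E - v)/R - i_cap(v)
    return (E - v)/R - ((2/h) * C * (v + c1*v**2 + c2*v**3) - hist)

def d_residual(v):    # analytic derivative for the Newton update
    return -1/R - (2/h) * C * (1 + 2*c1*v + 3*c2*v**2)

v = v_k               # start Newton from the previous solution
for _ in range(20):
    step = residual(v) / d_residual(v)
    v -= step
    if abs(step) < 1e-12:
        break
print(v)              # v_{k+1} for this time step
```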

#### **6. Conclusions**

All companion models contain a linear resistor with the resistance depending on the parameter of the dynamic element (*L* or *C*) and the time step *h*, and an independent source whose parameter depends on the value of the state variable at the previous time.

Companion models of linear dynamic elements can also be built with voltage sources (the actual current source can be transformed into a voltage source). Current sources were preferred because they are suitable for the modified nodal analysis (MNA) method.

For a given time step *h*, starting from the given initial state of the dynamic elements, the circuit response is calculated at *t*<sub>0</sub> + *h* using a *first*-order numerical integration method. In this way, the analysis of a linear dynamic circuit can be done by solving a linear resistive circuit at each time step. Starting from *t*<sub>0</sub> + 2·*h*, a *second*-order method can be used (such as the trapezoidal rule or the *second*-*order Gear method*). If the time step does not change, only the independent sources change in the resistive circuit; the values of the resistors remain the same. If the time step changes, the model resistance values must be recalculated.

The integration of the circuit equations is usually done with a variable time step *h*. As *h* decreases, the resistance in the capacitor companion model decreases and the resistance in the inductor companion model increases. In the case of an *LC* branch, the circuit then contains two parallel resistors whose resistances differ by several orders of magnitude. Such a circuit cannot be solved correctly using single-precision arithmetic. In some cases, even double-precision computations can lead to incorrect results.
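The precision issue can be reproduced directly: adding the conductances of two companion branches that differ by many orders of magnitude loses the small term in single precision (the values below are illustrative):

```python
import numpy as np

# When h is small, the capacitor companion resistance ~ h/C becomes tiny
# and the inductor companion resistance ~ L/h becomes huge. Adding the
# corresponding conductances of two such parallel branches in single
# precision can lose the small term entirely.
g_cap = np.float32(1e8)    # conductance of the capacitor model (1/ohm)
g_ind = np.float32(1e-2)   # conductance of the inductor model (1/ohm)

total32 = g_cap + g_ind                       # float32 sum
total64 = np.float64(1e8) + np.float64(1e-2)  # float64 sum

print(total32)   # the 1e-2 contribution is absorbed (still 1e8)
print(total64)   # the small term survives in double precision
```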

#### **Author details**

Alexandru G. Gheorghe\* and Mihai E. Marin Polytechnic University of Bucharest, Bucharest, Romania

\*Address all correspondence to: alexandru.gheorghe@upb.ro

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


#### **References**

[1] Chua LO, Desoer CA, Kuh ES. Linear and Nonlinear Circuits. New York: McGrawHill; 1987

[2] Chua LO, Green DN. A qualitative analysis of the behavior of dynamic nonlinear networks: Steady-state solutions of non-autonomous networks. IEEE Transactions on Circuits and Systems. 1976;**23**(9):531-550

[3] Gheorghe AG, Constantinescu F. New Topics in Simulation and Modeling of RF Circuits. Denmark: River Publishers; 2017. ISBN: 9788793379466, e-ISBN: 9788793379459

[4] Johnson DH. Origins of the equivalent circuit concept: The voltagesource equivalent. Proceedings of the IEEE. 2003;**91**(4):636-640

[5] Johnson DH. Origins of the equivalent circuit concept: The current-source equivalent. Proceedings of the IEEE. 2003;**91**(5):817-821

[6] Urbano M. Superposition theorem. In: Introductory Electrical Engineering with Math Explained in Accessible Language. US: John Wiley & Sons, Inc.; 2019. Print ISBN: 9781119580188, Online ISBN: 9781119580164. DOI: 10.1002/97811 19580164

[7] Gheorghe AG, Constantinescu F, Nitescu M. A new algorithm for envelope following analysis. Revue Roumaine des Sciences Techniques - Serie Electrotechnique et Energetique. 2011;**2**:229-236

[8] Cristea PD, Tuduce R. State equations of circuits with excess elements - revisited. Science and Technology. 2011;**56**:219-228

[9] Marin M-E, Staicu C-S, Gheorghe AG, Constantinescu F. Generation of state equations for circuits with excess elements. In: 2020 International Symposium on Fundamentals of Electrical Engineering (ISFEE). 2020. pp. 1-4

[10] Benny Yeung. Chapter 7 Dynamic Modeling and Control of DC / DC Converters

[11] Gardner MF, Barnes JL. Transients in Linear Systems studied by the Laplace Transform. New York: Wiley; 1942

[12] Wolfram Research, Inc., Mathematica, Version 13.0.0, Champaign, IL. 2021

[13] Maple User Manual. Maplesoft, a division of Waterloo Maple Inc., 1996– 2021.

[14] Gheorghe AG. Collection of Theory Problems Circuits. Politehnica Press Publishing; 2014

[15] Nagel LW, Pederson DO. SPICE (Simulation Program with Integrated Circuit Emphasis). Berkeley: University of California; 1973

[16] Star-Hspice Manual. Chapter 24: Performing Pole / Zero Analysis. Fremont, CA: Avant!, Release 1998.2.

[17] Nagel LW. SPICE2: A Computer Program to Simulate Semiconductor Circuits. Berkeley: University of California; 1975

[18] Chung-Wen H, Ruehli A, Brennan P. The modified nodal approach to network analysis. IEEE Transactions on Circuits and Systems. 1975;**22**(6):504-509. DOI: 10.1109/TCS.1975.1084079

#### **Chapter 9**

## Computation of Numerical Solution via Non-Standard Finite Difference Scheme

*Eiman Ijaz, Johar Ali, Abbas Khan, Muhammad Shafiq and Taj Munir*

#### **Abstract**

The recent COVID-19 pandemic has drawn attention to quarantine and other governmental measures, such as lockdowns, media coverage promoting social isolation, the strengthening of public safety, etc. All these strategies were adopted to manage the disease in the absence of a vaccine and an appropriate medicine for treatment. Mathematical models can help determine whether these intervention options are the most effective ones for illness control and how they might impact the dynamics of the disease. Motivated by this, in this manuscript a classical-order nonlinear mathematical model is proposed to analyze the COVID-19 pandemic, and the model is analyzed numerically. The suggested mathematical model is classified into susceptible, exposed, recovered, and infected classes. The non-standard finite difference scheme (NSFDS) is used to obtain approximate results for each compartment. Graphical presentations for the various compartments of the system, corresponding to some real facts, are given via MATLAB.

**Keywords:** nonlinear dynamical system, COVID-19, approximate solution, NSFDS

#### **1. Introduction**

Many diseases have afflicted the human population throughout history, the most dangerous of which are viral diseases. Measles, tuberculosis, malaria, HBV, HCV, dengue fever, malignancies, the Spanish flu, and other diseases have resulted in millions of deaths. People have learned a memorable lesson from history, so, to control and reduce the rate of infection in their communities, they have established different strategies. Among the aforesaid infectious diseases is COVID-19.

COVID-19 is a threatening outbreak that arose in China [1, 2] and spread throughout the globe very rapidly. It is an infectious disease caused by the virus severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The disease started at a seafood market in Wuhan, a big city in China, in December 2019, and spread through the entire city during February and March 2020. At that time, nearly 0.84 million people were infected and more than 5000 had died, while a considerable number of infected people recovered from the disease. COVID-19 has become a pandemic for several reasons, among them (i) the high transmission rate of the disease, (ii) the lack of a suitable vaccine and exact medicine, and (iii) the fact that the exact nature of the SARS-CoV-2 virus is still unknown. The incubation period can range from 2 to 14 days [3–12]. The majority of COVID-19 symptoms are mild, although this may change as variants arise.

Although there were 5.94 million COVID-19 deaths officially reported between January 1, 2020, and December 31, 2021, the excess mortality caused by the COVID-19 pandemic amounted to 18.2 million deaths globally during that time. The COVID-19 pandemic caused an excess mortality rate of 120.3 fatalities per 100,000 people worldwide. The regions of south Asia, the Middle East, north Africa, and eastern Europe had the highest numbers of additional deaths brought on by COVID-19. At the national level, Mexico (798,000), Brazil (792,000), Indonesia (736,000), and Pakistan (664,000) were expected to have the largest total excess mortality from COVID-19, followed by the United States (1.13 million), Russia (1.07 million), and India (4.07 million). The excess mortality rate among these nations was highest in Mexico (325.1 per 100,000) and Russia (374.6 per 100,000), and it was comparable in Brazil (186.9 per 100,000) and the USA (179.3 per 100,000).

COVID-19 symptoms differ from one person to the next; in fact, some infected people show no signs or symptoms at all (asymptomatic cases). Cough, shortness of breath or difficulty breathing, fever or chills, headache, weariness, muscular or body aches, sore throat, loss of taste or smell, congestion or runny nose, diarrhea, and nausea or vomiting are some of the symptoms of people with COVID-19 infection [9]; some may also have additional symptoms. Many researchers, doctors, and policymakers are trying to prevent the disease from spreading. One important factor in the spread of the disease is the migration of affected persons from one locality to another, which infects more people and hence plays a major role in transmission. Therefore, the primary step taken by most countries was to announce city-wide lockdowns, so that protective measures could be taken to minimize the loss of human lives [13]. On an international level, air traffic was banned for an indefinite period of time, keeping in mind that past outbreaks not only led to great loss of human life but also badly damaged the economy throughout the world. Therefore, scientists and researchers are trying their best to contribute to the investigation of a cure for the COVID-19 outbreak. From a medical engineering point of view, it is clear that infectious diseases can be better understood by using mathematical models, and over the last many decades mathematical modeling has been one of the important areas of research [14–55]. To understand the dynamics of COVID-19, it is essential to formulate mathematical models that can assist in the estimation of the transmissibility and dynamics of the virus. Moreover, the majority of real-world problems, such as infectious diseases, are nonlinear in nature; as a result, nonlinear mathematical models that describe a variety of real-world issues have attracted interest for decades.
In this regard, various models have been formulated or updated, and several recent studies focus on the mathematical modeling of COVID-19; some models recently considered in this regard are [56–59]. Motivated by the above work, we investigate the COVID-19 mathematical model (4) numerically under the NSFDS.

#### **2. Preliminaries**

In numerical analysis, the NSFDS is a general family of methods that give numerical solutions to differential equations by discretizing them. Many real-life problems are modeled by differential equations for which analytical solutions are difficult to find efficiently. Researchers have tried different approaches (e.g., finite element methods, standard finite difference methods, spline approximation methods, etc.). Nowadays, the NSFDS plays an important role in solving real-life problems governed by ODEs and/or PDEs; for many differential models in science and engineering where the existing methodologies do not give reliable results, NSFD schemes are competitive.

Here we derive the suggested scheme for a simple problem. Let

$$\frac{dy}{dt} = f(t, y); \tag{1}$$

then the NSFD equation is

$$\frac{y_{k+1} - y_k}{h} = f(t_k, y_k),$$

$$y_{k+1} = y_k + h f(t_k, y_k).$$

**Definition 1.** *A successful example of an NSFD equation is the one set up for the combustion model*

$$\frac{dw}{dt} = w^2(1 - w). \tag{2}$$

The NSFD equation would be

$$\frac{w\_{k+1} - w\_k}{h} = w\_k^2 - w\_k^3. \tag{3}$$
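As a brief illustrative sketch (ours, not from the chapter), the explicit update in (3) can be iterated directly; the step size `h` and initial value `w0` below are arbitrary illustrative choices:

```python
# Iterate scheme (3): (w_{k+1} - w_k)/h = w_k^2 - w_k^3 for the combustion
# model (2). The step size h and initial value w0 are illustrative choices,
# not values taken from the chapter.
def nsfd_combustion(w0, h, steps):
    w = w0
    trajectory = [w]
    for _ in range(steps):
        w = w + h * (w**2 - w**3)  # explicit update of Eq. (3)
        trajectory.append(w)
    return trajectory

traj = nsfd_combustion(0.1, 0.5, 200)
# the iterates rise monotonically from w0 toward the stable equilibrium w = 1
```

For initial data in (0, 1) the iterates increase toward the stable equilibrium w = 1, mirroring the qualitative behavior of the continuous model.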

#### **3. Formulation of proposed model**

A model is formulated that divides the entire population into different classes as follows: individuals who have a high chance of getting the infection are placed in the susceptible class *S*; individuals who are in close contact with a COVID-19 environment are placed in the exposed class *E*; individuals having the symptoms of COVID-19 are placed in the infected class *I*; and the recovered class *R* includes recovered individuals. A mathematical model of COVID-19 is described by the following system of differential equations [30]:

$$\begin{cases} \frac{d}{dt}S(t) = \gamma - k(1 + aI(t))S(t)I(t) - \varepsilon S(t), \\\\ \frac{d}{dt}E(t) = k(1 + aI(t))S(t)I(t) - (\varepsilon + \delta)E(t), \\\\ \frac{d}{dt}I(t) = \eta + \delta E(t) - (\nu + \varepsilon + \beta)I(t), \\\\ \frac{d}{dt}R(t) = \beta I(t) - \varepsilon R(t). \end{cases} \tag{4}$$

**Figure 1.** *A flow chart of the proposed model.*

With initial conditions given by

$$S(0) = S_0, \ E(0) = E_0, \ I(0) = I_0, \ R(0) = R_0.$$

A description of the above model is given in **Figure 1**.

#### **4. Algorithm for approximate solution of the considered model**

To compute the required approximate solution, we apply the general form of the NSFD scheme to (4), obtaining

$$\begin{cases} \frac{S\_{n+1}(t) - S\_n(t)}{h} = \gamma - k(1 + aI\_n(t))S\_n(t)I\_n(t) - \varepsilon S\_n(t), \\\\ \frac{E\_{n+1}(t) - E\_n(t)}{h} = k(1 + aI\_n(t))S\_n(t)I\_n(t) - (\varepsilon + \delta)E\_n(t), \\\\ \frac{I\_{n+1}(t) - I\_n(t)}{h} = \eta + \delta E\_n(t) - (\nu + \varepsilon + \beta)I\_n(t), \\\\ \frac{R\_{n+1}(t) - R\_n(t)}{h} = \beta I\_n(t) - \varepsilon R\_n(t). \end{cases} \tag{5}$$
 
$$\begin{cases} S\_{n+1}(t) = S\_n(t) + h(\gamma - k(1 + aI\_n(t))S\_n(t)I\_n(t) - \varepsilon S\_n(t)), \\\\ E\_{n+1}(t) = E\_n(t) + h(k(1 + aI\_n(t))S\_n(t)I\_n(t) - (\varepsilon + \delta)E\_n(t)), \\\\ I\_{n+1}(t) = I\_n(t) + h(\eta + \delta E\_n(t) - (\nu + \varepsilon + \beta)I\_n(t)), \\ R\_{n+1}(t) = R\_n(t) + h(\beta I\_n(t) - \varepsilon R\_n(t)). \end{cases} \tag{6}$$
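The explicit update (6) is straightforward to program. The following sketch is our own (the chapter's MATLAB code is not reproduced); the names in `params` mirror the symbols of model (4), and any numerical values passed in are placeholders, since Table 1 is not reproduced here:

```python
# Explicit NSFD iteration (6) for model (4).
# params = (gamma, k, a, eps, delta, eta, nu, beta), mirroring the model
# symbols; all numerical values supplied by a caller are illustrative.
def nsfd_model(S, E, I, R, params, h, steps):
    gamma, k, a, eps, delta, eta, nu, beta = params
    history = [(S, E, I, R)]
    for _ in range(steps):
        # all four updates use the state at step n, as in (6)
        S, E, I, R = (
            S + h * (gamma - k * (1 + a * I) * S * I - eps * S),
            E + h * (k * (1 + a * I) * S * I - (eps + delta) * E),
            I + h * (eta + delta * E - (nu + eps + beta) * I),
            R + h * (beta * I - eps * R),
        )
        history.append((S, E, I, R))
    return history
```

A quick sanity check: with all rates set to zero the state is a fixed point of the iteration, and with only the recruitment rate γ nonzero the susceptible class grows linearly by hγ per step.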

Now putting n = 0, 1, 2, … in (6), we obtain the first few terms of the approximate solution as

$$\begin{cases} \mathcal{S}\_{1}(t) = \mathcal{S}\_{0}(t) + h(\gamma - k(1 + aI\_{0}(t))\mathcal{S}\_{0}(t)I\_{0}(t) - \epsilon \mathcal{S}\_{0}(t)), \\\\ E\_{1}(t) = E\_{0}(t) + h(k(1 + aI\_{0}(t))\mathcal{S}\_{0}(t)I\_{0}(t) - (\epsilon + \delta)E\_{0}(t)), \\\\ I\_{1}(t) = I\_{0}(t) + h(\eta + \delta E\_{0}(t) - (\nu + \epsilon + \beta)I\_{0}(t)), \\\\ R\_{1}(t) = R\_{0}(t) + h(\beta I\_{0}(t) - \epsilon R\_{0}(t)). \end{cases} \tag{7}$$

*Computation of Numerical Solution via Non-Standard Finite Difference Scheme DOI: http://dx.doi.org/10.5772/intechopen.108450*

$$\begin{cases} \mathcal{S}\_{2}(t) = \mathcal{S}\_{1}(t) + h(\gamma - k(\mathbf{1} + a\mathbf{I}\_{1}(t))\mathcal{S}\_{1}(t)I\_{1}(t) - \epsilon \mathcal{S}\_{1}(t)), \\\\ E\_{2}(t) = E\_{1}(t) + h(k(\mathbf{1} + a\mathbf{I}\_{1}(t))\mathcal{S}\_{1}(t)I\_{1}(t) - (\epsilon + \delta)E\_{1}(t)), \\\\ I\_{2}(t) = I\_{1}(t) + h(\eta + \delta E\_{1}(t) - (\nu + \epsilon + \beta)I\_{1}(t)), \\\\ R\_{2}(t) = R\_{1}(t) + h(\beta I\_{1}(t) - \epsilon R\_{1}(t)). \end{cases} \tag{8}$$

$$\begin{cases} \mathcal{S}\_{3}(t) = \mathcal{S}\_{2}(t) + h(\gamma - k(\mathbf{1} + aI\_{2}(t))S\_{2}(t)I\_{2}(t) - \epsilon \mathcal{S}\_{2}(t)), \\\\ E\_{3}(t) = E\_{2}(t) + h(k(\mathbf{1} + aI\_{2}(t))S\_{2}(t)I\_{2}(t) - (\epsilon + \delta)E\_{2}(t)), \\ I\_{3}(t) = I\_{2}(t) + h(\eta + \delta E\_{2}(t) - (\nu + \epsilon + \beta)I\_{2}(t)), \\ R\_{3}(t) = R\_{2}(t) + h(\beta I\_{2}(t) - \epsilon R\_{2}(t)). \end{cases} \tag{9}$$

and so on; the remaining terms may be computed in the same way.

#### **5. Numerical interpretation**

To present the approximate solutions of the model under consideration computed above, we use the numerical values of the parameters given in **Table 1**. Based on reported data, the initial conditions are set as [45]

$$\left(S(0), E(0), I(0), R(0)\right) = \left(32.37 \ \text{million}, \ 12 \ \text{million}, \ 0.001523 \ \text{million}, \ 0.005025 \ \text{million}\right).$$

After putting the numerical values into Eq. (6), we obtain the following results.

**Case (1) n = 0**

$$\begin{cases} S_1(t) = 3.2018 \times 10^7, \\ E_1(t) = 1.3738 \times 10^6, \\ I_1(t) = 4.5718 \times 10^3, \\ R_1(t) = 5.0163 \times 10^3. \end{cases} \tag{10}$$


**Table 1.**
*Numerical values of parameters.*

And similarly, from Eqs. (8) and (9) we get

**Case (2) n = 1**

$$\begin{cases} S\_2(t) = 2.9958 \times 10^7, \\ E\_2(t) = 3.3015 \times 10^6, \\ I\_2(t) = 4.9285 \times 10^3, \\ R\_2(t) = 5.0305 \times 10^3. \end{cases} \tag{11}$$

**Case (3) n = 2**

$$\begin{cases} S\_3(t) = 2.7741 \times 10^7, \\ E\_3(t) = 5.3871 \times 10^6, \\ I\_3(t) = 5.7629 \times 10^3, \\ R\_3(t) = 5.0473 \times 10^3. \end{cases} \tag{12}$$

**Case (4) n = 3**

$$\begin{cases} \mathbf{S}\_4(t) = 2.4980 \times 10^7, \\ E\_4(t) = 8.0161 \times 10^6, \\ I\_4(t) = 7.1091 \times 10^3, \\ R\_4(t) = 5.0703 \times 10^3. \end{cases} \tag{13}$$

**Case (5) n = 4**

$$\begin{cases} S\_5(t) = 2.1258 \times 10^7, \\ E\_5(t) = 1.1606 \times 10^7, \\ I\_5(t) = 9.0967 \times 10^3, \\ R\_5(t) = 5.1033 \times 10^3. \end{cases} \tag{14}$$

In **Figures 2**–**5**, we have provided a graphical representation of the different classes of the proposed model. We conclude that by taking a few terms of the series solution we can efficiently describe the proposed model. We see in the figures that the

**Figure 2.** *Dynamics of susceptible class.*


**Figure 3.** *Dynamics of exposed class.*

**Figure 4.** *Dynamics of infected class.*

**Figure 5.** *Dynamics of recovered class.*

susceptible class is decreasing; as a result, the infection increases, but due to vaccination and other precautions the recovered class grows. Further, in **Figures 6**–**9** we compare our results with those of the usual RK4 method for the data given in **Table 1**. We see that the solutions obtained by the NSFDS and the RK4 method agree very well.

**Figure 6.** *Comparison of the approximate solution for the susceptible class via NSFDS and RK4.*

**Figure 7.** *Comparison of the approximate solution for the exposed class via NSFDS and RK4.*

**Figure 8.** *Comparison of the approximate solution for the infected class via NSFDS and RK4.*


**Figure 9.** *Comparison of the approximate solution for the recovered class via NSFDS and RK4.*
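For reference, a classical fourth-order Runge–Kutta step for system (4) can be sketched as follows. This is our own sketch with the same right-hand side as the NSFD update; the parameter tuple `p` is an assumption mirroring the model symbols, and any numerical values are placeholders:

```python
# Right-hand side of system (4); p = (gamma, k, a, eps, delta, eta, nu, beta).
def rhs(y, p):
    S, E, I, R = y
    gamma, k, a, eps, delta, eta, nu, beta = p
    new_inf = k * (1 + a * I) * S * I  # incidence term of model (4)
    return (gamma - new_inf - eps * S,
            new_inf - (eps + delta) * E,
            eta + delta * E - (nu + eps + beta) * I,
            beta * I - eps * R)

# One classical RK4 step of size h for the state tuple y.
def rk4_step(y, p, h):
    def add(u, v, c):
        return tuple(ui + c * vi for ui, vi in zip(u, v))
    k1 = rhs(y, p)
    k2 = rhs(add(y, k1, h / 2), p)
    k3 = rhs(add(y, k2, h / 2), p)
    k4 = rhs(add(y, k3, h), p)
    return tuple(yi + h / 6 * (a1 + 2 * a2 + 2 * a3 + a4)
                 for yi, a1, a2, a3, a4 in zip(y, k1, k2, k3, k4))
```

Stepping this scheme and the NSFD update from the same initial data with the same step size allows the kind of side-by-side comparison shown in the figures.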

#### **6. Some explanation and concluding remarks**

In this work, we have studied a four-compartment mathematical model, based on a system of ordinary differential equations, describing the dynamics of COVID-19 through the NSFDS. With the said technique, we developed an algorithm that discretizes the system to find an approximate solution of the proposed problem. Using some real values for the parameters and initial data, we computed a few terms of the approximate solution corresponding to each compartment, and we plotted these approximate solutions graphically using MATLAB. We conclude that by taking a few terms of the solution, we can efficiently describe the proposed model. Compared to the RK4 and Euler methods, the NSFD method is easy to implement, its computational cost is low, and it saves time. In the future, one can extend the current study to mathematical models under nonsingular-type derivatives. Finally, we have given a comparison between the approximate solutions obtained by the NSFD and RK4 methods and see that both solutions agree very well.

#### **Author details**

Eiman Ijaz<sup>1</sup> , Johar Ali<sup>1</sup> , Abbas Khan<sup>1</sup> , Muhammad Shafiq<sup>1</sup> and Taj Munir<sup>2</sup> \*

1 Department of Mathematics, University of Malakand, Pakistan

2 Abdus Salam School of Mathematical Sciences, G.C. University, Lahore, Punjab, Pakistan

\*Address all correspondence to: ehuzaifa@gmail.com; taj\_math@hotmail.com

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Chan JF-W et al. Genomic characterization of the 2019 novel human-pathogenic coronavirus isolated from a patient with atypical pneumonia after visiting Wuhan. Emerging Microbes & Infections. 2020;**9**(1): 221-236

[2] World Health Organization. Coronavirus disease 2019 (COVID-19) Situation Report-62. 2019

[3] Riou J, Althaus CL. Pattern of early human-to-human transmission of Wuhan 2019 novel coronavirus (2019 nCoV), December 2019 to January 2020. Eurosurveillance. 2020;**25**(4):2000058

[4] Hurwitz JL. Viruses and the sars-cov-2/covid-19 pandemic of 2020. Viral Immunology. 2020;**33**(4):251-252

[5] Ge XY et al. Isolation and characterization of a bat SARS–like coronavirus that uses the ACE2 receptor. Nature. 2013;**503**:535-538

[6] Zhou P, Yang X-L, Wang X-G, Ben H, Zhang L, Zhang W, et al. A pneumonia outbreak associated with a new coronavirus of probable bat origin. Nature. 2020;**579**(7798):270-273

[7] Sha H, Sanyi T, Libin R. A discrete stochastic model of the COVID-19 outbreak: Forecast and control. Mathematical Biosciences and Engineering. 2020;**17**(4):2792-2804

[8] Fisher D, Heymann D. The novel coronavirus outbreak causing covid-19. BMC Medicine. 2020;**18**(1):1-3

[9] Forida P et al. The symptoms, contagious process, prevention and post treatment of Covid-19. European Journal of Physiotherapy and Rehabilitation Studies. 2020;**2020**:11

[10] World Health Organization. Advice on the use of masks in the context of COVID-19: Interim guidance. 2020

[11] McAloon C et al. Incubation period of COVID-19, a rapid systematic review and meta-analysis of observational research. BMJ Open. 2020;**10**(8):e039652

[12] Quesada JA et al. Incubation period of COVID-19, a systematic review and meta-analysis. Revista Clinica Espanola (English Edition). 2021;**221**(2):109-117

[13] Lin Q et al. A conceptual model for the coronavirus disease 2019 (COVID-19) outbreak in Wuhan, China with individual reaction and governmental action. International Journal of Infectious Diseases. 2020;**93**:211-216

[14] Li Q et al. Early transmission dynamics in Wuhan, China, of novel coronavirus-infected pneumonia. New England Journal of Medicine. 2020;**382**: 1199-1207

[15] Alqudah M, Abdeljawad T, Eiman Q, Madlal K, Shah FJ. Existence theory and approximate solution to prey-predator coupled system involving non singular kernel type derivative. Advances in Difference Equations. 2020;**1**:1-10

[16] Moaddy K, Momani S, Hashim I. The non-standard finite difference scheme for linear fractional PDEs in fluid mechanics. Computers & Mathematics with Applications. 2011;**61**(4):1209-1216

[17] Mickens RE. Applications of nonstandard finite difference schemes. Singapore: World Scientific; 2000

[18] Adekanye O, Washington T. Nonstandard finite difference scheme for a Tacoma Narrows Bridge model. Applied Mathematical Modelling. 2018;**62**:223-236


[19] Korpusik A. A nonstandard finite difference scheme for a basic model of cellular immune response to viral infection. Communications in Nonlinear Science and Numerical Simulation. 2017; **43**:369-384

[20] Mickens RE. A nonstandard finite difference scheme for a Fisher PDE having nonlinear diffusion. Computers and Mathematics with Applications. 2003;**45**:429-436

[21] Hajipour M, Jajarmi A, Baleanu D. An efficient nonstandard finite difference scheme for a class of fractional chaotic systems. Journal of Computational and Nonlinear Dynamics. 2018;**13**(2)

[22] Xu J, Geng Y, Hou J. A non-standard finite difference scheme for a delayed and diffusive viral infection model with general nonlinear incidence rate. Computers and Mathematics with Applications. 2017;**74**(8):1782-1798

[23] Qin W, Wang L, Ding X. A nonstandard finite difference method for a hepatitis B virus infection model with spatial diffusion. Journal of Difference Equations and Applications. 2014; **20**(12):1641-1651

[24] Manna K. A nonstandard finite difference scheme for a diffusive HBV infection model with capsids and time delay. Journal of Difference Equations and Applications. 2017;**23**(11):1901-1911

[25] Manna K, Chakrabarty SP. Global stability and a nonstandard finite difference scheme for a diffusion driven HBV model with capsids. Journal of Difference Equations and Applications. 2015;**21**(10):918-933

[26] Elsheikh S, Ouifki R, Patidar KC. A nonstandard finite difference method to solve a model of HIV–Malaria co–infection. Journal of Difference Equations and Applications. 2014;**20**(3):354-378

[27] Tadmon C, Foko S. Nonstandard finite difference method applied to an initial boundary value problem describing hepatitis B virus infection. Journal of Difference Equations and Applications. 2020;**26**(1):122-139

[28] Bisheh-Niasar M, Arab Ameri M. Moving mesh nonstandard finite difference method for non-linear heat transfer in a thin finite rod. Journal of Applied and Computational Mechanics. 2018;**4**(3):161-166

[29] Zafar ZU, Abadin NA, Younas S, Abdelwahab SF, Nisar KS. Numerical investigations of stochastic HIV/AIDS infection model. Alexandria Engineering Journal. 2021;**60**(6):5341-5363

[30] Yang Y, Zhou J, Ma X, Zhang T. Nonstandard finite difference scheme for a diffusive within–host virus dynamics model with both virus–to–cell and cell–to–cell transmissions. Computers & Mathematics with Applications. 2016;**72**(4):1013-1020

[31] Singh H. Analysis for fractional dynamics of Ebola virus model. Chaos Solitons & Fractals. 2020;**138**: 109992

[32] Singh H, Singh CS. A reliable method based on second kind Chebyshev polynomial for the fractional model of Bloch equation. Alexandria Engineering Journal. 2018;**57**(3):1425-1432

[33] Singh H. Operational matrix approach for approximate solution of fractional model of Bloch equation. Journal of King Saud University–Science. 2017;**29**(2):23-240

[34] Singh H, Pandey R, Srivastava H. Solving non-linear fractional variational problems using jacobi polynomials. Mathematics. 2019;**7**(3):224

[35] Singh H, Srivastava HM. Numerical investigation of the fractional order liénard and duffing equations arising in oscillating circuit theory. Frontier in Physics. 2020;**8**:120

[36] Singh H, Sahoo MR, Singh OP. Numerical method based on Galerkin approximation for the fractional advection–dispersion equation. International Journal of Applied and Computational Mathematics. 2017;**3**(3): 2171-2187

[37] Zhang Y. Initial boundary value problem for fractal heat equation in the semi-infinite region by Yang–Laplace transform. Thermal Science. 2014;**18**(2): 677-681

[38] Miller KS, Ross B. An Introduction to the Fractional Calculus and Fractional Differential Equations. New York: Wiley; 1993

[39] Eltayeb H, Kiliçman A. A note on solutions of wave, Laplace's and heat equations with convolution terms by using a double Laplace transform. Applied Mathematics Letters. 2008; **21**(12):1324-1329

[40] Spiga G, Spiga M. Two-dimensional transient solutions for crossflow heat exchangers with neither gas mixed. Journal of Heat Transfer-transactions of the ASME. 1987;**109**(2):281-286

[41] Khan T, Shah K, Khan RA, Khan A. Solution of fractional order heat equation via triple Laplace transform in 2 dimensions. Mathematical Methods in the Applied Sciences. 2018;**4**(2):818-825

[42] Shah K, Khalil H, Khan RA. Analytical solutions of fractional order diffusion equations by natural transform method. Iranian Journal of Science and Technology, Transactions A: Science. 2018;**42**(3):1479-1490

[43] Singh H, Ghassabzadeh FA, Tohidi E, Cattani C. Legendre spectral method for the fractional Bratu problem. Mathematical Methods in the Applied Sciences. 2020;**43**(9):5941-5952

[44] Singh H, Srivastava HM. Jacobi collocation method for the approximate solution of some fractional order Riccati differential equations with variable coefficients. Physica A. 2019;**523**: 1130-1149

[45] Singh H, Srivastava HM, Kumar D. A reliable algorithm for the approximate solution of the nonlinear Lane–Emden type equations arising in astrophysics. Numerical Methods for Partial Differential Equations. 2018;**34**(5): 1524-1555

[46] Singh J, Jassim HK, Kumar D. An efficient computational technique for local fractional Fokker Planck equation. Physica A. 2020;**555**(1):124525

[47] Ahmad B, Sivasundaram S. On four– point nonlocal boundary value problems of nonlinear integro–differential equations of fractional order. Applied Mathematics and Computation. 2010; **217**:480-487

[48] Bai Z. On positive solutions of a nonlocal fractional boundary value problem. Nonlinear Analysis. 2010;**72**: 916-924

[49] Khan RA, Shah K. Existence and uniqueness of solutions to fractional order multi-point boundary value problems. Communications in Applied Analysis. 2015;**19**:515-526

[50] Shah K, Ali N, Khan RA. Existence of positive solution to a class of fractional differential equations with three point boundary conditions. Mathematical Sciences Letters. 2016;**5**(3):291-296

[51] Wang J, Zhou Y, Wei W. Study in fractional differential equations by means of topological degree methods. Numerical Functional Analysis Optimum. 2012;**33**:216-238

[52] Brauer F, Castillo-Chavez C. Mathematical Models in Population Biology and Epidemiology. New York: Springer; 2001

[53] Hethcote HW. The mathematics of infectious diseases. SIAM Review. 2000; **42**:599

[54] Hethcote HW, Van Ark JW. Modeling HIV Transmission and AIDS in the United States. Berlin, Heidelberg, New York: Springer; 1992

[55] Lu H, Stratton CW, Tang YW. Outbreak of pneumonia of unknown etiology in Wuhan, China: The mystery and the miracle. Journal of Medical Virology. 2020;**2020**:1234-1260

[56] Lin Q et al. A conceptual model for the coronavirus disease 2019 (COVID-19) outbreak in Wuhan, China with individual reaction and governmental action. International Journal of Infectious Diseases. 2020;**93**:211-216

[57] Yousaf M et al. Statistical analysis of forecasting COVID-19 for upcoming month in Pakistan. Chaos, Solitons & Fractals. 2020;**2020**:109926

[58] Shah K et al. Qualitative analysis of a mathematical model in the time of COVID-19. BioMed Research International. 2020;**2020**:11

[59] Abdo MS et al. On a comprehensive model of the novel coronavirus (COVID-19) under Mittag-Leffler derivative. Chaos, Solitons & Fractals. 2020;**2020**:109867

### *Edited by Kamal Shah, Bruno Carpentieri and Arshad Ali*

It is well known that the theory of dynamical systems is an essential mathematical tool in the analysis of various real-world processes and phenomena that evolve over time. Different aspects of this rich field of research are evolving all the time, including new theoretical results for qualitative investigation as well as fast numerical techniques for approximate solution. The book provides an overview of the current state of the art in this fascinating and critically important field of pure and applied mathematics, presenting recent developments in theory, modeling, algorithms, and applications.

Published in London, UK © 2023 IntechOpen © AlienCat / CanStockPhoto

Qualitative and Computational Aspects of Dynamical Systems