**4. Hybrid optimization-based method**

Hybrid optimization-based algorithms have become the modern choice for solving challenging problems [41–43]. In this work, a compromise is reached by combining a traditional numerical optimization-based method with a metaheuristic swarm-based method.

The estimation/identification process consists of three major steps: an initial prediction using the least squares mean (LSM) method, refinement of the PV parameter values using the Levenberg-Marquardt (LM) algorithm, and optimization of the damping factor using GWO, as detailed below.

### **4.1 Least squares mean (initial phase of prediction)**

The initial PV parameter values are predicted using LSM [44, 45] on the two parts (linear and nonlinear) of the measured experimental I-V curve, as described below.

• For the linear part:

The prediction in the linear part [46, 47] of the model can be obtained simply through the following expressions.

$$I_{Model}(i) = a \cdot V_{Model}(i) + b \tag{3}$$

$$Error(i) = I_{Real}(i) - I_{Model}(i) \tag{4}$$

$$J(i) = J(i-1) + Error(i)^2 \tag{5}$$

where *a* and *b* are constants that depend on a determinant and on other constants introduced by the user.
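As an illustrative sketch (not the chapter's own code), the linear-part prediction of Eqs. (3)–(5) can be reproduced with an ordinary least-squares line fit; the I-V samples below are hypothetical placeholders, not measured data:

```python
# Minimal sketch of the linear-part prediction, Eqs. (3)-(5).
# The I-V samples are hypothetical placeholders, not measured data.

def fit_linear_part(V, I_real):
    """Fit I = a*V + b by ordinary least squares (normal equations)."""
    n = len(V)
    sum_v = sum(V)
    sum_i = sum(I_real)
    sum_vv = sum(v * v for v in V)
    sum_vi = sum(v * i for v, i in zip(V, I_real))
    det = n * sum_vv - sum_v ** 2          # determinant of the normal equations
    a = (n * sum_vi - sum_v * sum_i) / det
    b = (sum_vv * sum_i - sum_v * sum_vi) / det
    return a, b

def cumulative_error(V, I_real, a, b):
    """Accumulate the squared error J(i) = J(i-1) + Error(i)^2, Eqs. (4)-(5)."""
    J = 0.0
    for v, i_meas in zip(V, I_real):
        i_model = a * v + b                # Eq. (3)
        J += (i_meas - i_model) ** 2       # Eqs. (4)-(5)
    return J

V = [0.0, 0.1, 0.2, 0.3]
I = [5.00, 4.99, 4.98, 4.97]               # near-flat linear region of an I-V curve
a, b = fit_linear_part(V, I)
print(a, b, cumulative_error(V, I, a, b))  # a = -0.1, b = 5.0, J near 0
```

Since the sample points are exactly collinear, the accumulated error J is essentially zero, which is a quick sanity check on the fit.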

• For the nonlinear part:

The prediction in the nonlinear part [19, 48] of the model can be obtained through the following logarithmic expression.

$$I_{Model}(i) = C_0 + C_1 \cdot I_{Model}(i) + C_2 \cdot \log\left(1 - \frac{I_{Real}(i)}{b}\right) \tag{6}$$

$$Error(i) = I_{Real}(i) - I_{Model}(i) \tag{7}$$

$$J(i+1) = J(i) + Error(i)^2 \tag{8}$$

where *C*0, *C*1, *C*2, and *b* are constants that depend on a determinant, on the Hessian, and on other constants introduced by the user.
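As another hedged sketch (again not the chapter's code), the logarithmic model of Eq. (6) is linear in the basis functions [1, *I*, log(1 − *I*/*b*)], so its constants can be recovered by a least-squares fit; the constant *b*, the sample currents, and the "true" constants below are all hypothetical, chosen only to verify that the fit recovers them:

```python
# Sketch of the nonlinear (logarithmic) prediction, Eq. (6):
# a least-squares fit in the basis [1, I(i), log(1 - I(i)/b)].
# The constant b and all sample data are hypothetical placeholders.
import math

def fit_log_part(I_real, y, b):
    """Solve the normal equations (X'X) C = X'y by Gaussian elimination."""
    X = [[1.0, i, math.log(1.0 - i / b)] for i in I_real]
    n = 3
    A = [[sum(X[k][r] * X[k][c] for k in range(len(X))) for c in range(n)]
         for r in range(n)]
    rhs = [sum(X[k][r] * y[k] for k in range(len(X))) for r in range(n)]
    # Forward elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    # Back substitution
    C = [0.0] * n
    for r in range(n - 1, -1, -1):
        C[r] = (rhs[r] - sum(A[r][c] * C[c] for c in range(r + 1, n))) / A[r][r]
    return C

# Synthetic data generated from known constants, to check the recovery.
b = 5.2
true_C = [0.6, -0.01, 0.025]
I_samples = [4.0, 4.3, 4.6, 4.9, 5.1]
y = [true_C[0] + true_C[1] * i + true_C[2] * math.log(1 - i / b) for i in I_samples]
C = fit_log_part(I_samples, y, b)
print(C)  # recovers [0.6, -0.01, 0.025]
```

Because the synthetic data are noise-free, the fitted constants match the generating ones to floating-point precision.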

Once the initial PV parameter values are obtained, they are introduced into the LM algorithm in order to optimize them, as explained in the following subsection.

#### **4.2 Levenberg-Marquardt (obtaining optimal PV parameter values)**

The traditional Levenberg-Marquardt approach is a gradient method that behaves like Steepest Descent (SD) in its first phase and like Gauss-Newton (GN) in its second phase [48–50]. It is mainly based on minimizing the error between the real data and the model output through the following expression.

$$Error\text{-}Quad = \sum_{i=1}^{N} Error(i)^2 \tag{9}$$

where *N* is the number of measured I-V data.

$$Error(i) = I_{Real}(i) - I_{Model}(i) \tag{10}$$

The real and simulated data are denoted by *I*Real and *I*Model, respectively, where *I*Model is the objective function given in Eq. (2):

$$I_{Model}(i) = f(I, V, \theta) \tag{11}$$

The objective function *f(ϴ)* is evaluated at *ϴ* = *ϴ*k, where *ϴ* is the PV parameter vector:

$$\theta = \left( I_L, I_{ds}, n, R_s, R_{sh} \right) \tag{12}$$

The Jacobian of *f(I,V,ϴ)* is then calculated at *ϴ*k, as the derivative of I (Eq. (2)) with respect to the parameters:

$$J = -\left[\frac{\partial f(\theta)}{\partial \theta}\right]_{\theta = \theta_k} \tag{13}$$

Finally, with the damping factor optimized, *ϴ*k is updated: the PV parameters to be found are updated at each iteration through the expression below.

$$\theta_{k+1} = \theta_k - \left[ \left( J' J + \lambda_k I \right)^{-1} J' \varepsilon \right]_{\theta = \theta_k} \tag{14}$$

The damping factor λ is the parameter responsible for switching between SD and GN in the LM process [19].
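Putting Eqs. (13) and (14) together, one LM iteration can be sketched as follows. This is an illustrative assumption, not the chapter's implementation: the two-parameter linear test model stands in for the PV model of Eq. (2), and the Jacobian is approximated by finite differences:

```python
# Sketch of one Levenberg-Marquardt update, Eqs. (13)-(14):
#   theta_{k+1} = theta_k - (J'J + lambda_k I)^(-1) J' * error
# Illustrated on a simple model y = t0*x + t1 (a stand-in for Eq. (2)).
import numpy as np

def lm_step(theta, x, y_real, model, lam, eps=1e-6):
    """One LM iteration with a finite-difference Jacobian."""
    err = y_real - model(x, theta)                 # Eq. (10)
    # J = -[d model / d theta] = d err / d theta, Eq. (13)
    J = np.empty((len(x), len(theta)))
    for p in range(len(theta)):
        t = theta.copy()
        t[p] += eps
        J[:, p] = ((y_real - model(x, t)) - err) / eps
    # Damped normal equations, Eq. (14)
    step = np.linalg.solve(J.T @ J + lam * np.eye(len(theta)), J.T @ err)
    return theta - step

model = lambda x, t: t[0] * x + t[1]
x = np.array([0.0, 1.0, 2.0, 3.0])
y = model(x, np.array([2.0, -1.0]))                # data from known parameters
theta = np.array([0.0, 0.0])
for _ in range(50):
    theta = lm_step(theta, x, y, model, lam=1e-3)
print(theta)                                        # approaches [2, -1]
```

With a small fixed λ the step is close to a pure Gauss-Newton step, so the iteration recovers the generating parameters quickly; the role of GWO in this chapter is precisely to tune λ at each iteration instead of fixing it.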

For this reason, it is important to obtain an optimal value of this damping factor through another optimization-based method; our choice is the recent swarm-based method called GWO, following the idea below:

$$Error\text{-}Quad\left(I, V, \theta, \lambda\right) \rightarrow Error\text{-}Quad\left(\lambda\right)\Big|_{\theta = \theta_k} \tag{15}$$

In addition, it is mentioned that at each iteration of the LM process the damping factor must be found, as it is a crucial factor for the convergence of the algorithm. Therefore, its value must be optimized by another approach, such as the GWO approach.

In this subsection, our focus is on the evolution of the function *f(I,V,ϴ*,λ*)*, denoted *f(*λ*)* for *ϴ* fixed at *ϴ*k, with respect to various values of the damping factor at each iteration of the LM. It is observed that at each iteration, several local minima of *f(*λ*)* exist. Therefore, to obtain the global minimum of *f(*λ*)*, which corresponds to the best minimal value of the objective function *f(I,V,ϴ)*, we suggest using the swarm-based meta-heuristic GWO method.

#### **4.3 Grey Wolf optimizer (optimization of the damping factor's value)**

Meta-heuristic methods are known for their simplicity, flexibility, derivation-free process, and ability to find the global optimal solution. They are also appropriate for a diversity of problems without changes to their main structure. These methods can be based on a single solution or on a population of solutions. Their basic concepts are exploration (exploring all of the search space, thus avoiding local optima) and exploitation (investigating in detail the promising search-space area).

Swarm-based intelligence (SI) methods, which derive from meta-heuristics, are based on the smart collective behavior of decentralized and self-organized swarms to ensure some biological needs such as food or security. A detailed discussion of the recent smart swarm-based algorithm known as GWO is presented as follows.

The Grey Wolf optimizer (GWO) algorithm, developed by Mirjalili in 2014, is a recent smart swarm-based meta-heuristic approach [50–52]. This algorithm mimics the leadership hierarchy and hunting process of grey wolves in the wild. The following points represent the hierarchy in a wolf pack, which has about 5 to 12 members:

1. The alpha wolves (α): the leading wolves, responsible for managing and making decisions. They form the first level of the wolves' social hierarchical structure, which is presented in **Figure 4**.

2. The beta wolves (β): represent the second level. Their main job is to help and support the alphas' decisions.

3. The delta wolves (δ): represent the third level in the pack and are called subordinates. They follow the alpha and beta wolves. The delta wolves divide their tasks into five categories as follows:

**Figure 4.**
*The social hierarchical structure of grey wolves (dominance decreases from the top down) [51].*

*Study of a New Hybrid Optimization-Based Method for Obtaining Parameter Values of Solar Cells. DOI: http://dx.doi.org/10.5772/intechopen.93324*
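As a hedged sketch of how GWO can search for the damping factor, the following minimal implementation follows the alpha/beta/delta-guided update rules of Mirjalili's algorithm; it is an assumed illustration, not the chapter's code, and the quadratic objective is a hypothetical stand-in for Error-Quad(λ):

```python
# Minimal Grey Wolf optimizer sketch (alpha/beta/delta guided search),
# applied here to a 1-D search for the damping factor lambda.
# The quadratic objective is a hypothetical stand-in for Error-Quad(lambda).
import random

def gwo_minimize(f, lo, hi, n_wolves=10, n_iters=100, seed=0):
    rng = random.Random(seed)
    wolves = [rng.uniform(lo, hi) for _ in range(n_wolves)]
    best_seen = min(wolves, key=f)
    for it in range(n_iters):
        # Alpha, beta, delta are the three best wolves of the current pack.
        alpha, beta, delta = sorted(wolves, key=f)[:3]
        a = 2.0 * (1.0 - it / n_iters)      # exploration factor decreases 2 -> 0
        new_wolves = []
        for w in wolves:
            x = 0.0
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(), rng.random()
                A = 2.0 * a * r1 - a        # step coefficient in [-a, a]
                C = 2.0 * r2
                D = abs(C * leader - w)     # distance to this leader
                x += leader - A * D         # position suggested by this leader
            new_wolves.append(min(max(x / 3.0, lo), hi))  # average, clamped
        wolves = new_wolves
        best_seen = min(wolves + [best_seen], key=f)
    return best_seen

# Hypothetical stand-in for Error-Quad(lambda), with its minimum at 0.37.
objective = lambda lam: (lam - 0.37) ** 2
best = gwo_minimize(objective, 0.0, 10.0)
print(best)  # close to 0.37
```

In the hybrid method described above, such a search would be run at each LM iteration with Error-Quad(λ)|ϴ=ϴk of Eq. (15) as the objective, and the best wolf's position taken as the damping factor λk for the update of Eq. (14).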
