**2. Description of the problem**

The mathematical formulation of a one-dimensional heat conduction problem is given as follows:

$$\frac{\partial u}{\partial t} = \frac{\partial}{\partial x}\left(q(x)\frac{\partial u}{\partial x}\right) + f(x,t), \quad (x,t) \in (0,L)\times(0,T], \tag{1}$$

with the initial condition

$$u(x,0) = u_0(x), \quad 0 \le x \le L, \tag{2}$$

and Dirichlet boundary conditions

$$u(0,t) = g_1(t), \quad 0 \le t \le T, \tag{3}$$

$$u(L,t) = g_2(t), \quad 0 \le t \le T, \tag{4}$$

where $f(x,t)$, $u_0(x)$, $g_1(t)$, $g_2(t)$ and $q(x)$ are known continuous functions. We consider the problem (1)–(4) as a direct problem. It is well known that if $u_0(x)$, $g_1(t)$, $g_2(t)$ are continuous functions and $q(x)$ is known, the problem (1)–(4) has a unique solution.

For the inverse problem, the diffusion coefficient $q(x)$ is regarded as unknown. In addition, an overspecified condition is considered available. To estimate the unknown coefficient $q(x)$, additional information on the boundary $x = x_0$, $0 < x_0 < L$, is required. Let the values of $u(x,t)$ taken at $x = x_0$ over the time period $[0,T]$ be denoted by

$$u(x_0,t) = g(t), \quad 0 \le t \le T. \tag{5}$$

It is evident that for an unknown function $q(x)$, the problem (1)–(4) is underdetermined, and we are forced to impose the additional information (5) to provide a unique solution pair $(u(x,t), q(x))$ to the inverse problem (1)–(5).

We note that the measured overspecified condition $u(x_0,t) = g(t)$ should contain measurement errors. Therefore the inverse problem can be stated as follows: by utilizing the above-mentioned measured data, estimate the unknown function $q(x)$. In this work a polynomial form is proposed for the unknown function $q(x)$ before performing the inverse calculation. Therefore $q(x)$ is approximated as

$$q(x) \approx \hat{q}(x) = p_1 + p_2 x + p_3 x^2 + \dots + p_{m+1}x^m, \tag{6}$$

where $p_1, p_2, \dots, p_{m+1}$ are constants which remain to be determined simultaneously. The unknown coefficients $p_1, p_2, \dots, p_{m+1}$ can be determined by using the least squares method. The error in the estimate

$$F(p_1, p_2, \dots, p_{m+1}) = \sum_{i=1}^{n}\left[u(x_0,t_i,p_1,p_2,\dots,p_{m+1}) - g(t_i)\right]^2 \tag{7}$$

is to be minimized. Here, $u(x_0,t_i,p_1,p_2,\dots,p_{m+1})$ are the calculated results. These quantities are determined from the solution of the direct problem given previously, using the approximation $\hat{q}(x)$ for the exact $q(x)$. The estimated values of $p_j$, $j = 1, 2, \dots, m+1$, are determined until the value of $F(p_1, p_2, \dots, p_{m+1})$ is minimum. Such a norm can be written as

$$F(\mathbf{P}) = \left[\mathbf{U}(\mathbf{P}) - \mathbf{G}\right]^T\left[\mathbf{U}(\mathbf{P}) - \mathbf{G}\right], \tag{8}$$

where $\mathbf{P} = [p_1, p_2, \dots, p_{m+1}]^T$ is the vector of unknown parameters and

$$\mathbf{U}(\mathbf{P}) = [u(x_0,t_1,\mathbf{P}), u(x_0,t_2,\mathbf{P}), \dots, u(x_0,t_n,\mathbf{P})]^T, \qquad \mathbf{G} = [g(t_1), g(t_2), \dots, g(t_n)]^T \tag{9}$$

are the vectors of calculated and measured values at $x = x_0$, respectively.

**3. Overview of the Levenberg-Marquardt method**

The Levenberg-Marquardt method, originally devised for nonlinear parameter estimation problems, has also been successfully applied to the solution of linear ill-conditioned problems. The method was first derived by Levenberg (1944) by modifying the ordinary least-squares norm. Later, Marquardt (1963) derived essentially the same technique by a different approach. Marquardt's intention was to obtain a method that tends to the Gauss method in the neighborhood of the minimum of the ordinary least-squares norm, and tends to the steepest descent method in the neighborhood of the initial guess used for the iterative procedure.

To minimize the least squares norm (8), we need to equate to zero the derivatives of $F(\mathbf{P})$ with respect to each of the unknown parameters $p_1, p_2, \dots, p_{m+1}$, that is,

$$\frac{\partial F(\mathbf{P})}{\partial p_1} = \frac{\partial F(\mathbf{P})}{\partial p_2} = \dots = \frac{\partial F(\mathbf{P})}{\partial p_{m+1}} = 0. \tag{10}$$

Let us introduce the sensitivity or Jacobian matrix, as follows:

$$\mathbf{J}(\mathbf{P}) = \left[\frac{\partial \mathbf{U}^T(\mathbf{P})}{\partial \mathbf{P}}\right]^T = \begin{bmatrix} u_{p_1}(x_0,t_1,\mathbf{P}) & u_{p_2}(x_0,t_1,\mathbf{P}) & \cdots & u_{p_{m+1}}(x_0,t_1,\mathbf{P}) \\ u_{p_1}(x_0,t_2,\mathbf{P}) & u_{p_2}(x_0,t_2,\mathbf{P}) & \cdots & u_{p_{m+1}}(x_0,t_2,\mathbf{P}) \\ \vdots & \vdots & \ddots & \vdots \\ u_{p_1}(x_0,t_n,\mathbf{P}) & u_{p_2}(x_0,t_n,\mathbf{P}) & \cdots & u_{p_{m+1}}(x_0,t_n,\mathbf{P}) \end{bmatrix}, \tag{11}$$

or

$$J_{ij} = u_{p_j}(x_0,t_i,\mathbf{P}) = \frac{\partial u(x_0,t_i,\mathbf{P})}{\partial p_j}, \quad i = 1,2,\dots,n, \quad j = 1,2,\dots,m+1. \tag{12}$$

The elements of the sensitivity matrix are called the sensitivity coefficients. In terms of $\mathbf{J}(\mathbf{P})$, the result of the differentiation (10) can be written as

$$-2\mathbf{J}^T(\mathbf{P})\left[\mathbf{U}(\mathbf{P}) - \mathbf{G}\right] = \mathbf{0}. \tag{13}$$

For a linear inverse problem, the sensitivity matrix is not a function of the unknown parameters. Eq. (13) can then be solved in explicit form:

$$\mathbf{P} = \left(\mathbf{J}^T\mathbf{J}\right)^{-1}\mathbf{J}^T\mathbf{G}. \tag{14}$$
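Eq. (14) is simply the normal-equations solution of a linear least-squares problem. As a minimal numerical sketch (not part of the original text; the design matrix `J` and data `G` below are synthetic stand-ins for a concrete linear model), the parameters can be recovered either by solving the normal equations directly or, more stably, with a least-squares routine:

```python
import numpy as np

# Synthetic stand-in: a polynomial design matrix plays the role of the
# (parameter-independent) sensitivity matrix J of a linear problem.
t = np.linspace(0.0, 1.0, 50)           # measurement times t_1..t_n
J = np.vander(t, 4, increasing=True)    # n x (m+1), columns 1, t, t^2, t^3
P_true = np.array([1.0, -0.5, 0.25, 0.1])
G = J @ P_true + 1e-3 * np.random.default_rng(0).normal(size=t.size)  # noisy data

# Eq. (14): P = (J^T J)^{-1} J^T G. lstsq avoids forming J^T J explicitly,
# which is preferable when J is ill-conditioned.
P_normal = np.linalg.solve(J.T @ J, J.T @ G)
P_lstsq, *_ = np.linalg.lstsq(J, G, rcond=None)
print(P_normal, P_lstsq)
```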

In the case of a nonlinear inverse problem, the matrix $\mathbf{J}$ has some functional dependence on the vector $\mathbf{P}$. The solution of Eq. (13) then requires an iterative procedure, which is obtained by linearizing the vector $\mathbf{U}(\mathbf{P})$ with a Taylor series expansion around the current solution at iteration $k$. Such a linearization is given by

$$\mathbf{U}(\mathbf{P}) = \mathbf{U}\left(\mathbf{P}^k\right) + \mathbf{J}^k\left(\mathbf{P} - \mathbf{P}^k\right), \tag{15}$$


where $\mathbf{U}(\mathbf{P}^k)$ and $\mathbf{J}^k$ are the estimated temperatures and the sensitivity matrix evaluated at iteration $k$, respectively. Eq. (15) is substituted into (13) and the resulting expression is rearranged to yield the following iterative procedure to obtain the vector of unknown parameters $\mathbf{P}$:

$$\mathbf{P}^{k+1} = \mathbf{P}^k + \left[\left(\mathbf{J}^k\right)^T\mathbf{J}^k\right]^{-1}\left(\mathbf{J}^k\right)^T\left[\mathbf{G} - \mathbf{U}\left(\mathbf{P}^k\right)\right]. \tag{16}$$

The iterative procedure given by Eq. (16) is called the Gauss method. Such a method is actually an approximation of the Newton (or Newton-Raphson) method. We note that Eq. (14), as well as the implementation of the iterative procedure given by Eq. (16), requires the matrix $\mathbf{J}^T\mathbf{J}$ to be nonsingular, or

$$\left|\mathbf{J}^T\mathbf{J}\right| \neq 0, \tag{17}$$

where $|\cdot|$ denotes the determinant.

Formula (17) gives the so-called identifiability condition: if the determinant of $\mathbf{J}^T\mathbf{J}$ is zero, or even very small, the parameters $p_j$, $j = 1, 2, \dots, m+1$, cannot be determined by using the iterative procedure of Eq. (16).
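For illustration, one Gauss iteration (16) together with a numerical check related to the identifiability condition (17) might be sketched as follows. This is not from the original text: `gauss_step` and `cond_limit` are hypothetical names, and a condition-number test is used in place of the determinant, since it is a more reliable indicator of near-singularity:

```python
import numpy as np

def gauss_step(P, J, U_of_P, G, cond_limit=1e12):
    """One Gauss iteration, Eq. (16); refuses to proceed when J^T J is
    numerically singular, i.e. the identifiability condition (17) fails."""
    JTJ = J.T @ J
    if np.linalg.cond(JTJ) > cond_limit:   # |J^T J| ~ 0: ill-conditioned
        raise np.linalg.LinAlgError("identifiability condition violated")
    return P + np.linalg.solve(JTJ, J.T @ (G - U_of_P))
```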

Problems satisfying $\left|\mathbf{J}^T\mathbf{J}\right| \approx 0$ are called ill-conditioned. Inverse heat transfer problems are generally very ill-conditioned, especially near the initial guess used for the unknown parameters, creating difficulties in the application of Eqs. (14) or (16). The Levenberg-Marquardt method alleviates such difficulties by utilizing an iterative procedure of the form:

$$\mathbf{P}^{k+1} = \mathbf{P}^k + \left[\left(\mathbf{J}^k\right)^T\mathbf{J}^k + \mu^k\mathbf{\Omega}^k\right]^{-1}\left(\mathbf{J}^k\right)^T\left[\mathbf{G} - \mathbf{U}\left(\mathbf{P}^k\right)\right], \tag{18}$$

where $\mu^k$ is a positive scalar called the damping parameter and $\mathbf{\Omega}^k$ is a diagonal matrix.

The purpose of the term $\mu^k\mathbf{\Omega}^k$ is to damp oscillations and instabilities due to the ill-conditioned character of the problem, by making its components large compared to those of $\mathbf{J}^T\mathbf{J}$ if necessary. The parameter $\mu^k$ is made large in the beginning of the iterations, since the problem is generally ill-conditioned in the region around the initial guess, which can be quite far from the exact parameters. With such an approach, the matrix $\mathbf{J}^T\mathbf{J}$ is not required to be nonsingular at the beginning of the iterations, and the Levenberg-Marquardt method tends to the steepest descent method, that is, a very small step is taken in the negative gradient direction. The parameter $\mu^k$ is then gradually reduced as the iterative procedure advances to the solution of the parameter estimation problem, and the Levenberg-Marquardt method tends to the Gauss method given by Eq. (16). The following criteria were suggested in the literature [13] to stop the iterative procedure of the Levenberg-Marquardt method given by Eq. (18):

$$F\left(\mathbf{P}^{k+1}\right) < \varepsilon_1, \tag{19}$$


$$\left\|\left(\mathbf{J}^k\right)^T\left[\mathbf{G} - \mathbf{U}\left(\mathbf{P}^k\right)\right]\right\| < \varepsilon_2, \tag{20}$$

$$\left\|\mathbf{P}^{k+1} - \mathbf{P}^k\right\| < \varepsilon_3, \tag{21}$$

where $\varepsilon_1$, $\varepsilon_2$ and $\varepsilon_3$ are user-prescribed tolerances and $\|\cdot\|$ denotes the Euclidean norm. The criterion given by Eq. (19) tests whether the least squares norm is sufficiently small, which is expected in the neighborhood of the solution of the problem. Similarly, Eq. (20) checks whether the norm of the gradient of $F(\mathbf{P})$ is sufficiently small, since it is expected to vanish at the point where $F(\mathbf{P})$ is minimum. The last criterion, given by Eq. (21), results from the fact that changes in the vector of parameters are very small when the method has converged. Generally, these three stopping criteria need to be tested, and the iterative procedure of the Levenberg-Marquardt method is stopped if any of them is satisfied.
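A direct transcription of the three tests reads as follows (a sketch; the tolerance values shown are illustrative defaults, not values prescribed in the text):

```python
import numpy as np

def converged(F_new, J, G, U_new, P_new, P_old,
              eps1=1e-8, eps2=1e-6, eps3=1e-6):
    """Stopping criteria (19)-(21); satisfying any single test suffices."""
    return (F_new < eps1                                      # Eq. (19)
            or np.linalg.norm(J.T @ (G - U_new)) < eps2       # Eq. (20)
            or np.linalg.norm(P_new - P_old) < eps3)          # Eq. (21)
```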

Different versions of the Levenberg-Marquardt method can be found in the literature, depending on the choice of the diagonal matrix $\mathbf{\Omega}^k$ and on the form chosen for the variation of the damping parameter $\mu^k$. In this paper, we choose $\mathbf{\Omega}^k$ as

$$\mathbf{\Omega}^k = \mathrm{diag}\left[\left(\mathbf{J}^k\right)^T\mathbf{J}^k\right]. \tag{22}$$
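Combining Eqs. (18) and (22), a single damped update can be sketched as follows (illustrative code, not taken from the text):

```python
import numpy as np

def lm_step(P, J, U, G, mu):
    """One Levenberg-Marquardt update, Eq. (18), with Omega from Eq. (22)."""
    JTJ = J.T @ J
    Omega = np.diag(np.diag(JTJ))        # Eq. (22): diagonal part of J^T J
    dP = np.linalg.solve(JTJ + mu * Omega, J.T @ (G - U))
    return P + dP
```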

Suppose that the vector of temperature measurements $\mathbf{G} = [g(t_1), g(t_2), \dots, g(t_n)]^T$ is given at times $t_i$, $i = 1, 2, \dots, n$, and that an initial guess $\mathbf{P}^0$ is available for the vector of unknown parameters $\mathbf{P}$. Choose a value for $\mu^0$, say $\mu^0 = 0.001$, and set $k = 0$. Then,

Step 1. Solve the direct problem (1)–(4) with the available estimate $\mathbf{P}^k$ in order to obtain the vector $\mathbf{U}(\mathbf{P}^k) = [u(x_0,t_1,\mathbf{P}^k), u(x_0,t_2,\mathbf{P}^k), \dots, u(x_0,t_n,\mathbf{P}^k)]^T$.

Step 2. Compute $F(\mathbf{P}^k)$ from Eq. (8).

Step 3. Compute the sensitivity matrix $\mathbf{J}^k$ from (12) and then the matrix $\mathbf{\Omega}^k$ from (22), using the current value of $\mathbf{P}^k$.

Step 4. Solve the following linear system of algebraic equations, obtained from Eq. (18),

$$\left[\left(\mathbf{J}^k\right)^T\mathbf{J}^k + \mu^k\mathbf{\Omega}^k\right]\Delta\mathbf{P}^k = \left(\mathbf{J}^k\right)^T\left[\mathbf{G} - \mathbf{U}\left(\mathbf{P}^k\right)\right],$$

in order to compute $\Delta\mathbf{P}^k = \mathbf{P}^{k+1} - \mathbf{P}^k$.

Step 5. Compute the new estimate $\mathbf{P}^{k+1}$ as $\mathbf{P}^{k+1} = \mathbf{P}^k + \Delta\mathbf{P}^k$.

Step 6. Solve the direct problem (1)–(4) with the new estimate $\mathbf{P}^{k+1}$ in order to find $\mathbf{U}(\mathbf{P}^{k+1})$. Then compute $F(\mathbf{P}^{k+1})$.

Step 7. If $F(\mathbf{P}^{k+1}) \ge F(\mathbf{P}^k)$, replace $\mu^k$ by $10\mu^k$ and return to Step 4.

Step 8. If $F(\mathbf{P}^{k+1}) < F(\mathbf{P}^k)$, accept the new estimate $\mathbf{P}^{k+1}$ and replace $\mu^k$ by $0.1\mu^k$.

Step 9. Check the stopping criteria given by Eqs. (19)–(21). Stop the iterative procedure if any of them is satisfied; otherwise, replace $k$ by $k+1$ and return to Step 3.
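Steps 1–9 translate almost directly into code. The sketch below assumes user-supplied routines `solve_direct(P)`, returning $\mathbf{U}(\mathbf{P})$ from a solver of the direct problem (1)–(4), and `jacobian(P)`, returning $\mathbf{J}^k$; both names are placeholders rather than routines defined in this chapter, and a cap on $\mu^k$ is added here to guard against a stalled inner loop:

```python
import numpy as np

def levenberg_marquardt(solve_direct, jacobian, G, P0,
                        mu=1e-3, eps=(1e-8, 1e-6, 1e-6), max_iter=100):
    """Sketch of Steps 1-9; solve_direct and jacobian are user-supplied."""
    P = np.asarray(P0, dtype=float)
    U = solve_direct(P)                              # Step 1
    F = float(np.sum((U - G) ** 2))                  # Step 2, Eq. (8)
    for _ in range(max_iter):
        J = jacobian(P)                              # Step 3, Eq. (12)
        JTJ = J.T @ J
        Omega = np.diag(np.diag(JTJ))                # Eq. (22)
        while True:
            dP = np.linalg.solve(JTJ + mu * Omega,
                                 J.T @ (G - U))      # Step 4
            P_new = P + dP                           # Step 5
            U_new = solve_direct(P_new)              # Step 6
            F_new = float(np.sum((U_new - G) ** 2))
            if F_new >= F and mu < 1e12:
                mu *= 10.0                           # Step 7: damp harder
            else:
                mu *= 0.1                            # Step 8 (or mu capped)
                break
        # Step 9: stopping criteria, Eqs. (19)-(21)
        done = (F_new < eps[0]
                or np.linalg.norm(J.T @ (G - U_new)) < eps[1]
                or np.linalg.norm(P_new - P) < eps[2])
        P, U, F = P_new, U_new, F_new
        if done:
            break
    return P
```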
**4. Calculation of sensitivity coefficients**

Generally, there are two approaches for determining the gradient: the first is a discretize-then-differentiate approach, and the second is a differentiate-then-discretize approach. The first approach is to approximate the gradient of the functional by a finite difference quotient approximation; but since, in general, we cannot determine the sensitivities exactly, this method may lead to larger errors. Here we intend to use the differentiate-then-discretize approach, which we refer to as the sensitivity equation method. With this method, the gradient can be determined more efficiently with the help of the sensitivities.
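To make this concrete, formally differentiating the direct problem (1)–(4) with respect to $p_j$ under the parameterization (6), where $\partial\hat{q}/\partial p_j = x^{j-1}$, yields a linear initial-boundary value problem for the sensitivity $s_j = \partial u/\partial p_j$ (supplied here as a sketch of the sensitivity equation, not quoted from the original):

$$\frac{\partial s_j}{\partial t} = \frac{\partial}{\partial x}\left(\hat{q}(x)\frac{\partial s_j}{\partial x}\right) + \frac{\partial}{\partial x}\left(x^{j-1}\frac{\partial u}{\partial x}\right), \qquad s_j(x,0) = 0, \qquad s_j(0,t) = s_j(L,t) = 0,$$

since the data $f$, $u_0$, $g_1$ and $g_2$ do not depend on $p_j$. Solving the direct problem once and these $m+1$ sensitivity problems at each iteration then yields the entries $J_{ij} = s_j(x_0,t_i)$ of Eq. (12).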
