**2. Description of the problem**

The mathematical formulation of a one-dimensional heat conduction problem is given as follows:

$$\frac{\partial u}{\partial t} = \frac{\partial}{\partial x} \left[ q(x) \frac{\partial u}{\partial x} \right] + f(x, t), \ (x, t) \in (0, L) \times (0, T], \tag{1}$$

with the initial condition

$$
u(x, 0) = u_0(x), \ 0 \le x \le L,\tag{2}
$$

and Dirichlet boundary conditions

$$u(0, t) = g_1(t), \ 0 \le t \le T,\tag{3}$$

$$u(L, t) = g_2(t), \ 0 \le t \le T,\tag{4}$$

where $f(x, t)$, $u_0(x)$, $g_1(t)$, $g_2(t)$ and $q(x)$ are known continuous functions. We consider the problem (1)–(4) as a direct problem. It is well known that if $u_0(x)$, $g_1(t)$, $g_2(t)$ are continuous functions and $q(x)$ is known, the problem (1)–(4) has a unique solution.

For the inverse problem, the diffusion coefficient $q(x)$ is regarded as unknown. In addition, an overspecified condition is also considered available. To estimate the unknown coefficient $q(x)$, additional information at $x = x_0$, $0 < x_0 < L$, is required. Let the values of $u(x, t)$ taken at $x = x_0$ over the time period $[0, T]$ be denoted by

$$u(x_0, t) = g(t), \ 0 \le t \le T. \tag{5}$$

It is evident that for an unknown function $q(x)$, the problem (1)–(4) is underdetermined, and we are forced to impose the additional information (5) to provide a unique solution pair $(u(x, t), q(x))$ to the inverse problem (1)–(5).

We note that the measured overspecified condition $u(x_0, t) = g(t)$ may contain measurement errors. Therefore the inverse problem can be stated as follows: by utilizing the above-mentioned measured data, estimate the unknown function $q(x)$.

In this work a polynomial form is proposed for the unknown function $q(x)$ before performing the inverse calculation. Therefore $q(x)$ is approximated as

$$q(x) \approx \hat{q}(x) = p_1 + p_2 x + p_3 x^2 + \dots + p_{m+1} x^m,\tag{6}$$

where $p_1, p_2, \dots, p_{m+1}$ are constants which remain to be determined simultaneously. The unknown coefficients $p_1, p_2, \dots, p_{m+1}$ can be determined by the least squares method. The error in the estimate,

$$F(p_1, p_2, \dots, p_{m+1}) = \sum_{i=1}^{n} \left[ u(x_0, t_i, p_1, p_2, \dots, p_{m+1}) - g(t_i) \right]^2,\tag{7}$$

is to be minimized. Here, $u(x_0, t_i, p_1, p_2, \dots, p_{m+1})$ are the calculated results. These quantities are determined from the solution of the direct problem given previously, using the approximation $\hat{q}(x)$ in place of the exact $q(x)$. The estimated values of $p_j$, $j = 1, 2, \dots, m+1$, are determined until the value of $F(p_1, p_2, \dots, p_{m+1})$ is a minimum. Such a norm can be written as

$$F(\mathbf{P}) = \left[\mathbf{U}(\mathbf{P}) - \mathbf{G}\right]^T \left[\mathbf{U}(\mathbf{P}) - \mathbf{G}\right],\tag{8}$$

where $\mathbf{P}^T = (p_1, p_2, \dots, p_{m+1})$ denotes the vector of unknown parameters and the superscript $T$ denotes transpose. The vector $\left[\mathbf{U}(\mathbf{P}) - \mathbf{G}\right]^T$ is given by

$$\left[\mathbf{U}(\mathbf{P}) - \mathbf{G}\right]^T = \left[u(x_0, t_1, \mathbf{p}) - g(t_1), u(x_0, t_2, \mathbf{p}) - g(t_2), \dots, u(x_0, t_n, \mathbf{p}) - g(t_n)\right]. \tag{9}$$

$F(\mathbf{P})$ is a real-valued bounded function defined on a closed bounded domain $D \subset R^{m+1}$. The function $F(\mathbf{P})$ may have many local minima in $D$, but it has only one global minimum. When $F(\mathbf{P})$ and $D$ have some attractive properties, for instance, when $F(\mathbf{P})$ is a differentiable convex function and $D$ is a convex region, a local minimum is also the global minimum and the problem can be solved explicitly by mathematical programming methods.

*A Numerical Approach to Solving an Inverse Heat Conduction Problem Using the Levenberg… DOI: http://dx.doi.org/10.5772/intechopen.89096*

**3. Overview of the Levenberg-Marquardt method**

The Levenberg-Marquardt method, originally devised for application to nonlinear parameter estimation problems, has also been successfully applied to the solution of linear ill-conditioned problems. Such a method was first derived by Levenberg (1944) by modifying the ordinary least-squares norm. Later, Marquardt (1963) derived basically the same technique by using a different approach. Marquardt's intention was to obtain a method that would tend to the Gauss method in the neighborhood of the minimum of the ordinary least-squares norm and would tend to the steepest descent method in the neighborhood of the initial guess used for the iterative procedure.

To minimize the least squares norm (8), we need to equate to zero the derivatives of $F(\mathbf{P})$ with respect to each of the unknown parameters $p_1, p_2, \dots, p_{m+1}$, that is,

$$\frac{\partial F(\mathbf{P})}{\partial p_1} = \frac{\partial F(\mathbf{P})}{\partial p_2} = \dots = \frac{\partial F(\mathbf{P})}{\partial p_{m+1}} = 0.\tag{10}$$

Let us introduce the sensitivity or Jacobian matrix, as follows:

$$\mathbf{J}(\mathbf{P}) = \left[\frac{\partial \mathbf{U}^T(\mathbf{P})}{\partial \mathbf{P}}\right]^T = \begin{bmatrix} u_{p_1}(x_0, t_1, \mathbf{p}) & u_{p_2}(x_0, t_1, \mathbf{p}) & \cdots & u_{p_{m+1}}(x_0, t_1, \mathbf{p}) \\ u_{p_1}(x_0, t_2, \mathbf{p}) & u_{p_2}(x_0, t_2, \mathbf{p}) & \cdots & u_{p_{m+1}}(x_0, t_2, \mathbf{p}) \\ \vdots & \vdots & & \vdots \\ u_{p_1}(x_0, t_n, \mathbf{p}) & u_{p_2}(x_0, t_n, \mathbf{p}) & \cdots & u_{p_{m+1}}(x_0, t_n, \mathbf{p}) \end{bmatrix},\tag{11}$$

or

$$J_{ij} = u_{p_j}(x_0, t_i, \mathbf{p}) = \frac{\partial u(x_0, t_i, \mathbf{p})}{\partial p_j}, \quad i = 1, 2, \dots, n, \ j = 1, 2, \dots, m+1.\tag{12}$$

The elements of the sensitivity matrix are called the sensitivity coefficients. In terms of $\mathbf{J}$, the result of the differentiation (10) can be written down as follows:

$$-2\,\mathbf{J}^T(\mathbf{P})\left[\mathbf{U}(\mathbf{P}) - \mathbf{G}\right] = 0.\tag{13}$$

For linear inverse problems the sensitivity matrix is not a function of the unknown parameters. Eq. (13) can then be solved in explicit form:

$$\mathbf{P} = \left(\mathbf{J}^T \mathbf{J}\right)^{-1} \mathbf{J}^T \mathbf{G}.\tag{14}$$
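As a concrete illustration of the explicit solution (14), the following pure-Python sketch solves a small linear inverse problem. The observation times, the two-parameter model $u(t_i, \mathbf{p}) = p_1 + p_2 t_i$, and the "true" parameter values are hypothetical, chosen only to exercise the formula; for a constant sensitivity matrix $\mathbf{J}$, one explicit step recovers the parameters.

```python
# Illustration of Eq. (14): P = (J^T J)^{-1} J^T G for a linear inverse problem.
# Because the model is linear in the parameters, J does not depend on P and
# the least-squares estimate is obtained without iteration.
# All numbers below are hypothetical and serve only to exercise the formula.

def transpose(A):
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def solve_2x2(M, b):
    # Cramer's rule for a 2x2 system M p = b.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - b[1] * M[0][1]) / det,
            (M[0][0] * b[1] - M[1][0] * b[0]) / det]

# Hypothetical linear model u(t_i, p) = p1 + p2 * t_i observed at n = 5 times.
times = [0.0, 0.25, 0.5, 0.75, 1.0]
p_true = [2.0, -0.5]

J = [[1.0, t] for t in times]                    # J_ij = du/dp_j, cf. Eq. (12)
G = [p_true[0] + p_true[1] * t for t in times]   # noise-free measurements g(t_i)

JtJ = matmul(transpose(J), J)                                          # J^T J
JtG = [sum(J[i][j] * G[i] for i in range(len(G))) for j in range(2)]   # J^T G
P = solve_2x2(JtJ, JtG)                                                # Eq. (14)

print(P)  # recovers p_true up to rounding
```

With noisy measurements the same formula gives the least-squares fit rather than the exact parameters; for the nonlinear problem (7), where $u$ depends on $\mathbf{p}$ through the direct solver, Eq. (14) becomes one step inside the Levenberg-Marquardt iteration instead of the final answer.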
