Assumption 1.3.1 (Assumption A0).

We assume that the approximations $B_k$ of the Hessian of $f$ are uniformly bounded in norm, that the level set $L = \{x \mid f(x) \leq f(x_0)\}$ is bounded, and that $f : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable on $L$. We allow the length of the approximate solution $s_k$ of the subproblem (17)–(18) to exceed the bound of the trust region, but we also assume that

$$\|s_k\| \leq \bar{\eta}\, \Delta_k,$$

where $\bar{\eta}$ is a positive constant.

In trust region methods of this kind, we generally do not seek an exact solution of the subproblem (17)–(18); a nearly optimal solution of the subproblem is sufficient.

Strong theoretical as well as numerical results can be obtained if the step $s_k$ produced by Algorithm 1.3.1 satisfies

$$q_k(0) - q_k(s_k) \ge \beta_1 \|g_k\|_2 \min\left\{\Delta_k, \frac{\|g_k\|_2}{\|B_k\|_2}\right\}, \quad \beta_1 \in (0, 1).$$
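As an illustration (not code from the cited works), the following Python sketch computes the Cauchy step, the minimizer of the quadratic model along $-g_k$ within the trust region, which is known to satisfy the decrease condition above with $\beta_1 = 1/2$:

```python
import numpy as np

def cauchy_step(g, B, delta):
    """Minimizer of the quadratic model along -g within the trust region;
    it satisfies the decrease condition with beta_1 = 1/2."""
    gnorm = np.linalg.norm(g)
    gBg = g @ B @ g
    tau = 1.0
    if gBg > 0:  # guard against zero or negative curvature along -g
        tau = min(gnorm**3 / (delta * gBg), 1.0)
    return -(tau * delta / gnorm) * g

def sufficient_decrease(g, B, delta, beta1=0.5):
    """Check q_k(0) - q_k(s_k) >= beta_1 ||g_k|| min(Delta_k, ||g_k|| / ||B_k||)."""
    s = cauchy_step(g, B, delta)
    decrease = -(g @ s + 0.5 * s @ B @ s)   # q_k(0) = 0 for this model
    gnorm = np.linalg.norm(g)
    return decrease >= beta1 * gnorm * min(delta, gnorm / np.linalg.norm(B, 2))
```

Any step achieving at least a fixed fraction of this Cauchy decrease also satisfies the condition, which is why near-optimal subproblem solutions suffice.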

Theorem 1.3.1 [47] Under Assumption A0, if Algorithm 1.3.1 has finitely many successful iterations, then it converges to a first-order stationary point.

Theorem 1.3.2 [47] Under Assumption A0, if Algorithm 1.3.1 has infinitely many successful iterations, then

$$\liminf_{k \to \infty} \|g_k\| = 0.$$

In [44], it is emphasized that trust region methods are very effective for unconstrained optimization problems, and a new adaptive trust region method is presented. This method combines a modified secant equation with the BFGS update formula and an adaptive trust region radius, where the new trust region radius makes use not only of function information but also of gradient information. Let $B_k$ be a positive definite matrix based on a modified Cholesky factorization [43]. Under suitable conditions, the global convergence of the method is proven in [44], and its local superlinear convergence is also demonstrated. Motivated by the adaptive technique, the proposed method possesses the following nice properties:


A modified secant equation is introduced:

$$B\_{k+1}d\_k = q\_k,\tag{19}$$

where $q_k = y_k + h_k d_k$ with $y_k = g_{k+1} - g_k$, $f_k = f(x_k)$, and

$$h_k = \frac{(g_{k+1} + g_k)^T d_k + 2(f_k - f_{k+1})}{\|d_k\|^2}.$$

When $f$ is twice continuously differentiable and $B_{k+1}$ is generated by the BFGS formula with $B_0 = I$, the modified secant Eq. (19) possesses the following nice property:

$$f\_k = f\_{k+1} - g\_{k+1}^T d\_k + \frac{1}{2} d\_k^T B\_{k+1} d\_k,$$

and this property holds for all k.
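This property can be checked numerically. The sketch below (an illustration, not code from [44]) builds $q_k$ from the modified secant formula on a hypothetical quadratic problem and verifies that $f_k = f_{k+1} - g_{k+1}^T d_k + \frac{1}{2} d_k^T q_k$, which coincides with the stated identity whenever $B_{k+1} d_k = q_k$ holds; on a quadratic, $h_k$ vanishes and $q_k$ reduces to $y_k$:

```python
import numpy as np

def modified_secant_rhs(f_k, f_k1, g_k, g_k1, d_k):
    """Right-hand side q_k = y_k + h_k d_k of the modified secant
    equation (19), with y_k = g_{k+1} - g_k."""
    y_k = g_k1 - g_k
    h_k = ((g_k1 + g_k) @ d_k + 2.0 * (f_k - f_k1)) / (d_k @ d_k)
    return y_k + h_k * d_k

# Hypothetical quadratic test problem; any B_{k+1} with B_{k+1} d_k = q_k
# then reproduces f_k exactly through the stated identity.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, 0.5])
f = lambda x: 0.5 * x @ A @ x + b @ x
grad = lambda x: A @ x + b

x_k = np.array([1.0, -1.0])
d_k = np.array([0.3, 0.7])
x_k1 = x_k + d_k
q_k = modified_secant_rhs(f(x_k), f(x_k1), grad(x_k), grad(x_k1), d_k)
# f_k = f_{k+1} - g_{k+1}^T d_k + (1/2) d_k^T B_{k+1} d_k, using B_{k+1} d_k = q_k
recovered_f_k = f(x_k1) - grad(x_k1) @ d_k + 0.5 * (d_k @ q_k)
```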

Under classical assumptions, the global convergence of the method presented in [44] is also proven in this paper.

In [28], monotone and non-monotone approaches are hybridized, and a modified trust region ratio is used which provides more information about the agreement between the exact and the approximate models. An adaptive trust region radius is employed, together with two accelerated Armijo-type line search strategies that avoid resolving the trust region subproblem whenever a trial step is rejected. The proposed algorithm is shown to be globally and locally superlinearly convergent. In that paper, trust region methods are denoted TR for short. In a TR method, the iterative scheme is

$$x_0 \in \mathbb{R}^n, \quad x_{k+1} = x_k + s_k, \quad k = 0, 1, \dots,$$

where $s_k$ is often an approximate solution of the following quadratic subproblem:

$$\min_{s \in \mathbb{R}^n,\ \|s\| \leq \Delta_k} m_k(s) = g_k^T s + \frac{1}{2} s^T B_k s.\tag{20}$$
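When $B_k$ is positive definite, a common way to solve (20) approximately is the dogleg path. The sketch below is a generic illustration under that assumption, not the specific solver used in [28]:

```python
import numpy as np

def dogleg_step(g, B, delta):
    """Approximate solution of subproblem (20) along the dogleg path.
    Assumes B is positive definite; illustrative sketch only."""
    pB = np.linalg.solve(B, -g)              # full quasi-Newton step
    if np.linalg.norm(pB) <= delta:
        return pB                            # interior: take it unchanged
    pU = -(g @ g) / (g @ B @ g) * g          # unconstrained steepest-descent step
    if np.linalg.norm(pU) >= delta:
        return -(delta / np.linalg.norm(g)) * g   # scaled steepest descent
    # otherwise find tau in (0, 1) with ||pU + tau (pB - pU)|| = delta
    d = pB - pU
    a, b, c = d @ d, 2.0 * (pU @ d), pU @ pU - delta**2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return pU + tau * d
```

The returned step always lies within the trust region and achieves at least the Cauchy-step decrease, which is what the convergence theory requires.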

Unconstrained Optimization Methods: Conjugate Gradient Methods and Trust-Region Methods DOI: http://dx.doi.org/10.5772/intechopen.84374

Performance of TR methods is strongly influenced by the strategy of choosing the TR radius at each iteration. To determine the radius $\Delta_k$ in the standard TR method, the agreement between $f(x_k + s)$ and $m_k(s)$ is evaluated by the so-called TR ratio $\rho_k$:

$$\rho\_k = \frac{f(\mathbf{x}\_k) - f(\mathbf{x}\_k + s\_k)}{m\_k(\mathbf{0}) - m\_k(s\_k)}.$$

When $\rho_k$ is negative or a small positive number close to zero, the quadratic model is a poor approximation of the objective function. In such a situation, $\Delta_k$ should be decreased and, consequently, the subproblem (20) solved again. However, when $\rho_k$ is close to 1, the quadratic model is a good approximation of the objective function, so the step $s_k$ should be accepted and $\Delta_k$ can be increased. Here, the authors use the following modified version of $\rho_k$:

$$
\overline{\rho}\_k = \frac{R\_k - f(\mathbf{x}\_k + s\_k)}{P\_k - m\_k(s\_k)},
$$

where $R_k = \eta_k f_{l(k)} + (1 - \eta_k) f_k$, $\eta_k \in [\eta_{\min}, \eta_{\max}]$, $\eta_{\min} \in [0, 1)$, and $\eta_{\max} \in [\eta_{\min}, 1]$. Also,

$$f_{l(k)} = \max_{0 \le j \le q(k)} \left\{ f_{k-j} \right\}, \quad f_i = f(x_i), \quad q(0) = 0, \quad 0 \le q(k) \le \min\{q(k-1) + 1, N\},$$

where $N \in \mathbb{N}$; this non-monotone technique was originally used by Toint [48].
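Under the definitions above, taking the largest window $q(k)$ allows, the reference value $f_{l(k)}$ and the term $R_k$ can be sketched as follows; this is an illustration only, and the quantity $P_k$ in $\overline{\rho}_k$, whose definition is not reproduced here, is left aside:

```python
def f_ref(f_history, N):
    """Non-monotone reference f_{l(k)}: maximum over the most recent
    min(k, N) + 1 function values f_{k-j}, i.e., the largest window
    permitted by 0 <= q(k) <= min(q(k-1) + 1, N)."""
    return max(f_history[-(N + 1):])

def R_value(f_history, eta_k, N):
    """R_k = eta_k * f_{l(k)} + (1 - eta_k) * f_k, where f_k is the
    most recent function value."""
    return eta_k * f_ref(f_history, N) + (1.0 - eta_k) * f_history[-1]
```

With $\eta_k = 0$ this reduces to the monotone ratio ($R_k = f_k$), while $\eta_k$ close to 1 weights the non-monotone reference $f_{l(k)}$ more heavily.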

Something more about trust region methods can be found in [9, 18, 21, 22, 54].
