1. Introduction

Recall the unconstrained optimization problem, which can be stated as

$$\min_{\mathbf{x}\in\mathbb{R}^n} f(\mathbf{x}),\tag{1}$$

where $f : \mathbb{R}^n \to \mathbb{R}$ is a smooth function.

Here, we consider two classes of unconstrained optimization methods, both designed to solve problem (1): conjugate gradient methods and trust region methods.
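As a concrete illustration of the first class, the following is a minimal sketch (not taken from this chapter) of a nonlinear conjugate gradient method, the Fletcher–Reeves variant, applied to a smooth $f$. The function names and the test problem are our own choices; a simple backtracking (Armijo) line search is used for brevity, whereas the convergence theory typically assumes strong Wolfe conditions.

```python
import numpy as np

def fletcher_reeves_cg(f, grad, x0, tol=1e-8, max_iter=1000):
    """Minimize a smooth f: R^n -> R with the Fletcher-Reeves
    nonlinear conjugate gradient method (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                               # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # backtracking (Armijo) line search along d
        alpha, c, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = g_new.dot(g_new) / g.dot(g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        if g_new.dot(d) >= 0:
            d = -g_new                   # restart if not a descent direction
        x, g = x_new, g_new
    return x

# Hypothetical test problem: f(x) = (x1 - 1)^2 + 10*(x2 + 2)^2,
# whose unique minimizer is (1, -2).
f = lambda x: (x[0] - 1)**2 + 10 * (x[1] + 2)**2
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
x_star = fletcher_reeves_cg(f, grad, np.zeros(2))
```

On this quadratic the iterates approach the minimizer $(1, -2)$; the restart safeguard keeps the search direction a descent direction when the inexact line search makes the Fletcher–Reeves update unreliable.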

In this chapter, we first consider conjugate gradient methods and then study trust region methods, presenting some of the most recent results in both areas.
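For the second class, a basic trust region iteration can be sketched as follows. This is our own illustrative implementation, not the chapter's method: it uses the classical dogleg step to approximately minimize the quadratic model inside the trust region, and assumes the Hessian is positive definite at each iterate.

```python
import numpy as np

def dogleg_trust_region(f, grad, hess, x0, delta0=1.0, delta_max=10.0,
                        eta=0.15, tol=1e-8, max_iter=200):
    """Minimize a smooth f with a trust-region method using the dogleg
    step (illustrative sketch; assumes positive definite Hessians)."""
    x = np.asarray(x0, dtype=float)
    delta = delta0
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        p_newton = -np.linalg.solve(B, g)              # full Newton step
        p_cauchy = -(g.dot(g) / g.dot(B @ g)) * g      # Cauchy point
        if np.linalg.norm(p_newton) <= delta:
            p = p_newton
        elif np.linalg.norm(p_cauchy) >= delta:
            p = -(delta / np.linalg.norm(g)) * g       # truncated gradient step
        else:
            # dogleg: from the Cauchy point toward the Newton point,
            # stopping on the trust-region boundary ||p|| = delta
            d = p_newton - p_cauchy
            a, b = d.dot(d), 2 * p_cauchy.dot(d)
            c = p_cauchy.dot(p_cauchy) - delta**2
            tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
            p = p_cauchy + tau * d
        # ratio of actual to predicted reduction drives the radius update
        pred = -(g.dot(p) + 0.5 * p.dot(B @ p))
        ratio = (f(x) - f(x + p)) / pred
        if ratio < 0.25:
            delta *= 0.25
        elif ratio > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta = min(2 * delta, delta_max)
        if ratio > eta:
            x = x + p
    return x

# Same hypothetical test problem as before: minimizer at (1, -2).
f = lambda x: (x[0] - 1)**2 + 10 * (x[1] + 2)**2
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
hess = lambda x: np.diag([2.0, 20.0])
x_star = dogleg_trust_region(f, grad, hess, np.zeros(2))
```

The design point of a trust region method, visible above, is that the step length is controlled by the region radius `delta` rather than by a line search: the radius grows when the quadratic model predicts the reduction well and shrinks when it does not.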
