Scalar differential equations cannot have periodic solutions. Hopf bifurcation occurs in systems of differential equations consisting of two or more equations; this type is also referred to as a "Poincaré–Andronov–Hopf bifurcation".

For a given system of differential equations we first consider the stability and the local Hopf bifurcation. By using the Hopf bifurcation theorem we prove the occurrence of the Hopf bifurcation. Then, based on the normal form method and the center manifold reduction introduced by Hassard et al. [10], we derive the formulae determining the direction, stability and period of the bifurcating periodic solution at the critical value of the bifurcation parameter. To verify the theoretical analysis, numerical simulations for bifurcation analysis are given in this chapter. For references see [1]-[22].

We also introduce the Hopf bifurcation for continuous dynamical systems and state the Hopf bifurcation theorem for these models. As is well known, Hopf bifurcations occur when a complex-conjugate pair of eigenvalues crosses the boundary of stability. In the time-continuous case, a limit cycle bifurcates; its angular frequency is given by the imaginary part of the crossing pair. In the discrete case, the bifurcating orbit is generally quasi-periodic, except when the argument of the crossing pair times an integer gives exactly 2π. Consider an ordinary differential equation (ODE) that depends on one or more parameters,

$$x' = f(x, \alpha), \tag{1}$$

where, for simplicity, we assume $\alpha$ to be the only parameter. There is the possibility that under variation of $\alpha$ nothing interesting happens to Equation (1); there is only a quantitatively different behavior. Let us define Equation (1) to be structurally stable in the case that no qualitative changes occur. However, the ODE might change qualitatively. At that point, bifurcations will have occurred.

Many of the basic principles for one-dimensional systems also apply to two-dimensional systems. Let us define a two-dimensional system

$$\begin{aligned} x' &= f(x, y, \alpha), \\ y' &= g(x, y, \alpha), \end{aligned} \tag{2}$$

where biologically we mostly interpret $x$ as prey or resource and $y$ as predator or consumer. Equilibria can be found by setting the equations equal to zero, i.e.,

$$f(x, y, \alpha) = 0, \quad g(x, y, \alpha) = 0.$$

We have three possibilities for the stability of an equilibrium. Next to the stable and the unstable equilibrium, there is the saddle equilibrium. A two-dimensional stable equilibrium is attracting in two directions, while a two-dimensional unstable equilibrium is repelling in two directions. A saddle point is attracting in one direction and repelling in the other direction. In the less formal literature saddles are often considered just unstable equilibria. A second remark is that the dynamics of the system around the equilibria can also differ.

#### **2.1 Equilibrium points**

In dynamical systems, only the solutions of linear systems can be found explicitly. The problem is that real-life problems can in general only be modeled by nonlinear systems. The main idea is to approximate a nonlinear system by a linear one (around the equilibrium point). Of course, we hope that the behavior of the solutions of the linear system will be the same as that of the nonlinear one, but this is not always true. Before the linear stability analysis, we give some basic definitions below.

**Definition (Equilibrium Point):** Consider a nonlinear differential equation

$$x'(t) = f(x(t), u(t)),$$


(*x*¯) *>* 0.

where $f$ is a function mapping $\mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$. A point $\bar{x}$ is called an equilibrium point if there is a specific $\bar{u} \in \mathbb{R}^m$ such that

$$f(\bar{x}, \bar{u}) = 0_n.$$

Suppose $\bar{x}$ is an equilibrium point (with the input $\bar{u}$). If we take the initial condition $x(0) = \bar{x}$ and apply the input $u(t) = \bar{u}$ for all $t \ge t_0$, then the resulting solution $x(t)$ satisfies

$$x(t) = \bar{x}$$

for all $t \ge t_0$. That is why it is called an equilibrium point or solution.

**Example:** Consider the logistic growth equation for a population density,

$$x' = rx\left(1 - \frac{x}{K}\right),$$

where $x(t)$ denotes the population density at time $t$, and $r$ and $K$ are positive constants; $K$ is the carrying capacity. Then, setting the right-hand-side function equal to zero,

$$f(x) = rx\left(1 - \frac{x}{K}\right) = 0,$$

we obtain two equilibrium points *x* = 0 and *x* = *K*.
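As a quick numerical illustration (a sketch, not from the text: the values $r = 1$, $K = 10$ and the forward-Euler scheme are our own choices), both equilibria stay fixed under integration, while a small positive density is driven toward the carrying capacity:

```python
# Forward-Euler integration of x' = r*x*(1 - x/K); r = 1, K = 10 are
# arbitrary illustrative values.
def euler_logistic(x0, r=1.0, K=10.0, dt=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x += dt * r * x * (1 - x / K)  # one Euler step of the logistic ODE
    return x

print(euler_logistic(0.0))   # equilibrium: stays exactly at 0
print(euler_logistic(10.0))  # equilibrium: stays exactly at K
print(euler_logistic(0.1))   # small perturbation grows toward K
```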

#### **2.2 Linear stability analysis**

Linear stability of dynamical equations can be analyzed in two parts: one for scalar equations and the other for two-dimensional systems.

#### *2.2.1 Linear stability analysis for scalar equations*

To analyze the ODE

$$x' = f(x)$$

locally about the equilibrium point *x* = *x*¯, we expand the function *f*(*x*) in a Taylor series about the equilibrium point *x*¯. To emphasize that we are doing a local analysis, it is customary to make a change of variables from the dependent variable *x* to a local variable. Now let

$$x(t) = \bar{x} + \varepsilon(t),$$

where it is assumed that $\varepsilon(t) \ll 1$, so that we can justify dropping all terms of order two and higher in the expansion. Substituting $x(t) = \bar{x} + \varepsilon(t)$ into the RHS of the ODE yields

$$\begin{aligned} f(x(t)) &= f(\bar{x} + \varepsilon(t)) \\ &= f(\bar{x}) + f'(\bar{x})\varepsilon(t) + f''(\bar{x})\frac{\varepsilon^2(t)}{2} + \dots \\ &= 0 + f'(\bar{x})\varepsilon(t) + O(\varepsilon^2), \end{aligned}$$

and dropping higher order terms, we obtain


$$f(x) \approx f'(\bar{x})\varepsilon(t).$$

Note that dropping these higher order terms is valid since $\varepsilon(t) \ll 1$. Now substituting $x(t) = \bar{x} + \varepsilon(t)$ into the LHS of the ODE,

$$\varepsilon'(t) = f'(\bar{x})\varepsilon(t).$$

The goal is to determine whether we have growing or decaying solutions. If the solution grows, then the equilibrium point is unstable. If the solution decays, then the fixed point is stable. To determine whether the solution is stable or unstable, we simply solve the ODE and get the solution as

$$\varepsilon(t) = \varepsilon_0 \exp(f'(\bar{x})t),$$

where $\varepsilon_0$ is a constant. Hence, the solution is growing if $f'(\bar{x}) > 0$ and decaying if $f'(\bar{x}) < 0$. As a result, the equilibrium point is stable if $f'(\bar{x}) < 0$ and unstable if $f'(\bar{x}) > 0$, as stated in the following theorem.
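This linearized solution can be checked numerically. The sketch below uses a hypothetical scalar field $f(x) = -2x + x^2$ (our own choice, with equilibrium $\bar{x} = 0$ and $f'(0) = -2 < 0$) and compares a forward-Euler integration of the full nonlinear ODE against $\varepsilon(t) = \varepsilon_0 \exp(f'(\bar{x})t)$:

```python
import math

# Hypothetical scalar field (our own choice, not from the text): it has an
# equilibrium at xbar = 0 with f'(0) = -2 < 0, so perturbations should decay.
def f(x):
    return -2 * x + x ** 2

eps0, dt, steps = 1e-3, 1e-4, 10000    # integrate the full nonlinear ODE to t = 1
x = 0.0 + eps0                         # initial condition: equilibrium + perturbation
for _ in range(steps):
    x += dt * f(x)                     # forward Euler on x' = f(x)

predicted = eps0 * math.exp(-2 * 1.0)  # linear theory: eps(t) = eps0*exp(f'(0)*t)
print(x, predicted)                    # close agreement for small eps0
```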

**Theorem:** Suppose that for the scalar differential equation

$$x' = f(x),$$

the derivative function $f'$ is continuous on an open interval $I$ containing the equilibrium point, $\bar{x} \in I$. Then the equilibrium point $\bar{x}$ is locally stable if $f'(\bar{x}) < 0$, and it is unstable if $f'(\bar{x}) > 0$.

If the equilibrium point is stable and in addition

$$\lim_{t \to \infty} x(t) = \bar{x},$$

then it is called an asymptotically stable equilibrium point.

**Example:** Determine the stability of the fixed points of the logistic growth equation

$$N' = f(N) = rN(1 - \frac{N}{K}), \text{ where } r > 0.$$

We first find the equilibrium points by setting

$$f(\bar{N}) = 0, \text{ which yields the two points } \bar{N} = 0 \text{ and } \bar{N} = K.$$

Next we compute $f'(N)$ and evaluate it at the equilibrium points:

$$\begin{aligned} f'(N) &= r\left(1 - \frac{2N}{K}\right),\\ \text{at } \bar{N} &= 0, \; f'(0) = r > 0, \text{ so it is unstable,} \\ \text{at } \bar{N} &= K, \; f'(K) = -r < 0, \text{ so it is locally stable.} \end{aligned}$$
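The same sign test can be mechanized with a centered finite difference; the parameter values $r = 2$, $K = 5$ below are arbitrary illustrative choices:

```python
# Numerical confirmation of the sign conditions f'(0) = r and f'(K) = -r.
r, K = 2.0, 5.0  # arbitrary positive test values

def f(N):
    return r * N * (1 - N / K)  # logistic right-hand side

def fprime(N, h=1e-6):
    # centered-difference approximation of f'(N)
    return (f(N + h) - f(N - h)) / (2 * h)

print(fprime(0.0))  # ~  r > 0: N = 0 is unstable
print(fprime(K))    # ~ -r < 0: N = K is locally stable
```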

#### *2.2.2 Linear stability analysis for systems*

Consider the two dimensional nonlinear system


$$\begin{aligned} x' &= f(x, y), \\ y' &= g(x, y), \end{aligned}$$


and suppose that (*x*¯, *y*¯) is a steady state (equilibrium point), i.e.,

$$f(\bar{x}, \bar{y}) = 0 \text{ and } g(\bar{x}, \bar{y}) = 0.$$

Now let's consider a small perturbation from the steady state (*x*¯, *y*¯)

$$\begin{aligned} x &= \bar{x} + u, \\ y &= \bar{y} + v, \end{aligned}$$

where $u$ and $v$ are understood to be small, $u \ll 1$ and $v \ll 1$. It is natural to ask whether $u$ and $v$ are growing or decaying, so that $x$ and $y$ will move away from the steady state or move towards it. If the solution moves away, the point is called an unstable equilibrium point; if it moves towards the equilibrium point, it is called a stable equilibrium point. As in the scalar case, we expand the Taylor series for $f(x, y)$ and $g(x, y)$:

$$\begin{aligned} u' &= x' = f(x, y) \\ &= f(\bar{x} + u, \bar{y} + v) \\ &= f(\bar{x}, \bar{y}) + f_x(\bar{x}, \bar{y})u + f_y(\bar{x}, \bar{y})v + \text{higher order terms} \\ &= f_x(\bar{x}, \bar{y})u + f_y(\bar{x}, \bar{y})v + \text{higher order terms.} \end{aligned}$$

Similarly,

$$\begin{aligned} v' &= y' = g(x, y) \\ &= g(\bar{x} + u, \bar{y} + v) \\ &= g(\bar{x}, \bar{y}) + g_x(\bar{x}, \bar{y})u + g_y(\bar{x}, \bar{y})v + \text{higher order terms} \\ &= g_x(\bar{x}, \bar{y})u + g_y(\bar{x}, \bar{y})v + \text{higher order terms.} \end{aligned}$$

Since $u$ and $v$ are assumed to be small, the higher order terms are extremely small; we can neglect them and obtain the following linear system of equations governing the evolution of the perturbations $u$ and $v$,

$$
\begin{bmatrix} u' \\ v' \end{bmatrix} = \begin{bmatrix} f_x(\bar{x}, \bar{y}) & f_y(\bar{x}, \bar{y}) \\ g_x(\bar{x}, \bar{y}) & g_y(\bar{x}, \bar{y}) \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix},
$$

where the matrix
$$J = \begin{bmatrix} f_x & f_y \\ g_x & g_y \end{bmatrix}$$
is called the Jacobian matrix of the nonlinear system. The above linear system for $u$ and $v$ has the trivial steady state $(u, v) = (0, 0)$, and the stability of this trivial steady state is determined by the eigenvalues of the Jacobian matrix, as follows.

**Theorem:** An equilibrium point $(\bar{x}, \bar{y})$ of the differential equation is stable if all the eigenvalues of $J$, the Jacobian evaluated at $(\bar{x}, \bar{y})$, have negative real parts. The equilibrium point is unstable if at least one of the eigenvalues has a positive real part.

As a summary,


**Asymptotically stable:** A critical point is asymptotically stable if all eigenvalues of the Jacobian matrix $J$ are negative or have negative real parts.

**Unstable:** A critical point is unstable if at least one eigenvalue of the Jacobian matrix $J$ is positive or has a positive real part.

**Stable (or neutrally stable):** Each trajectory moves about the critical point within a finite distance.

**Definition (Hyperbolic point):** The equilibrium is said to be hyperbolic if all eigenvalues of the Jacobian matrix have non-zero real parts.
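The classification above can be sketched in code: approximate the Jacobian entries by centered differences, compute the 2×2 eigenvalues from the characteristic polynomial, and inspect the signs of the real parts. The test system below (a damped linear oscillator) is a hypothetical example of our own, not one from the chapter:

```python
import cmath

def jacobian(f, g, x, y, h=1e-6):
    # centered-difference approximations of the four partial derivatives
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    gx = (g(x + h, y) - g(x - h, y)) / (2 * h)
    gy = (g(x, y + h) - g(x, y - h)) / (2 * h)
    return fx, fy, gx, gy

def classify(f, g, x, y, tol=1e-9):
    a, b, c, d = jacobian(f, g, x, y)
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)      # eigenvalues of [[a,b],[c,d]]
    lams = [(tr + disc) / 2, (tr - disc) / 2]
    if any(abs(lam.real) <= tol for lam in lams):
        return "non-hyperbolic"               # linearization is inconclusive
    if all(lam.real < 0 for lam in lams):
        return "asymptotically stable"
    return "unstable"

# Hypothetical example: x' = y, y' = -x - y (stable spiral at the origin)
print(classify(lambda x, y: y, lambda x, y: -x - y, 0.0, 0.0))
```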

Hyperbolic equilibria are robust (i.e., the system is structurally stable): small perturbations do not change the phase portrait near the equilibria qualitatively. Moreover, the local phase portrait of a hyperbolic equilibrium of a nonlinear system is equivalent to that of its linearization. This statement has a mathematically precise form known as the Hartman–Grobman theorem. This theorem guarantees that the stability of the steady state $(\bar{x}, \bar{y})$ of the nonlinear system is the same as the stability of the trivial steady state $(0, 0)$ of the linearized system.

**Definition (Non-hyperbolic point):** If at least one eigenvalue of the Jacobian matrix is zero or has a zero real part, then the equilibrium is said to be non-hyperbolic.

Non-hyperbolic equilibria are not robust (i.e., the system is not structurally stable). Small perturbations can result in a local bifurcation of a non-hyperbolic equilibrium, i.e., it can change stability, disappear, or split into many equilibria. Some refer to such an equilibrium by the name of the bifurcation (See the section below).

**Example:** Consider the following nonlinear autonomous system

$$
\dot{x}(t) = y(t)[x(t) - 1],\tag{4}
$$

$$
\dot{y}(t) = x(t)[2 - y(t)].
$$

The equilibria are the points (*x*¯, *y*¯)=(0, 0) and (*x*¯, *y*¯)=(1, 2) and the Jacobian matrix is

$$J = \begin{bmatrix} y & x - 1 \\ 2 - y & -x \end{bmatrix}.$$

We compute the Jacobian at the equilibrium point $(0, 0)$:

$$J(0, 0) = \begin{bmatrix} 0 & -1 \\ 2 & 0 \end{bmatrix},$$

which implies that the eigenvalues are purely imaginary,

$$
\lambda_{1,2} = \pm\sqrt{2}\,i,
$$


by solving the characteristic equation

$$\det(J - \lambda I) = 0.$$

Since the point is non-hyperbolic, the linearized system cannot determine the stability. Later on we will show that this is a center.

For the equilibrium point $(1, 2)$, the Jacobian is

$$J(1, 2) = \begin{bmatrix} 2 & 0 \\ 0 & -1 \end{bmatrix},$$

thus the point is locally unstable, since

$$
\lambda_1 = 2 \quad \text{and} \quad \lambda_2 = -1,
$$

where one of the eigenvalues is strictly positive. Since it is a hyperbolic equilibrium point, the stability of the fixed point is the same as in the linearized system, so it is also unstable.
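As a sanity check (a sketch: the eigenvalues of a 2×2 matrix follow from its characteristic polynomial $\lambda^2 - \operatorname{tr}(J)\lambda + \det(J) = 0$), the two Jacobians above can be fed to a few lines of code:

```python
import cmath

def eig2(a, b, c, d):
    # eigenvalues of [[a, b], [c, d]] via the characteristic polynomial
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

print(eig2(0, -1, 2, 0))   # J(0,0): purely imaginary pair, +/- sqrt(2) i
print(eig2(2, 0, 0, -1))   # J(1,2): real eigenvalues 2 and -1 (saddle)
```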
