#### 2.1. Direct theory

120 Optimization Algorithms - Examples

minimum point (0, 0). However, the Euclidean Hessian of f is not positive semi-definite overall. We repeat the previous reasoning for σij = 2e^{−(x²+y²)}δij, a1 = ∂f/∂x, a2 = ∂f/∂y. Hence we obtain Γ^h_ij = T^h_ij, that is,

$$
\Gamma^1\_{11} = -\frac{2x^3}{x^2+y^2}, \quad \Gamma^1\_{12} = \Gamma^1\_{21} = -\frac{2x^2y}{x^2+y^2}, \quad \Gamma^1\_{22} = -\frac{2xy^2}{x^2+y^2},
$$

$$
\Gamma^2\_{11} = -\frac{2x^2y}{x^2+y^2}, \quad \Gamma^2\_{12} = \Gamma^2\_{21} = -\frac{2xy^2}{x^2+y^2}, \quad \Gamma^2\_{22} = -\frac{2y^3}{x^2+y^2}.
$$

Observe that lim_{(x,y)→(0,0)} T^h_ij(x, y) = 0. Hence take Γ^h_ij(0, 0) = 0.

The next example shows what happens if we come out of the conditions of the previous theorem.

Example 1.3 Let us take the function f : R → R, f(x) = x³, where the critical point x = 0 is an inflection point. We take Γ(x) = −1 − 2/x², which is not defined at the critical point x = 0, but the relation of convexity is realized by prolongation,

$$\sigma(x) = f''(x) - \Gamma(x)\,f'(x) = 3\left(x^2 + 2x + 2\right) > 0, \quad \forall x \in \mathbb{R}.$$

Let us consider the ODE of auto-parallels

$$\ddot{x}(t) - \left(1 + \frac{2}{x^2(t)}\right)\dot{x}^2(t) = 0.$$

The solutions

$$x(t) = -\frac{1}{2}\ln\left|-2 + t^2 - ct\right| + \frac{c}{2\sqrt{8+c^2}}\,\operatorname{arctanh}\frac{2t-c}{\sqrt{8+c^2}} + c\_1$$

are auto-parallels on (R∖{0, t1, t2}, Γ), where t1, t2 are the real solutions of −2 + t² − ct = 0. These curves are extended at t = 0 by continuity. The manifold (R, Γ) is not auto-parallely complete. Since the image x(R) is not a "segment", the function f : R → R, f(x) = x³ is not globally convex.

Remark 1.3 For n ≥ 2, there exist C¹ functions φ : R^n → R which have two minimum points without having another extremum point. As example,

$$\varphi\left(x^1, x^2\right) = \left({x^1}^2 x^2 - x^1 - 1\right)^2 + \left({x^1}^2 - 1\right)^2$$

has two (global) minimum points p = (−1, 0), q = (1, 2). The restriction

$$\varphi\left(x^1, x^2\right) = {x^1}^4 {x^2}^2 - 2{x^1}^3 x^2 - 2{x^1}^2 x^2 + {x^1}^4 - {x^1}^2 + 2x^1 + 2, \quad x^1 > 0,\ x^2 > 0,$$

is a difference of two affine convex functions (see Section 2).
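As a sanity check on Remark 1.3, the following sketch evaluates the stated example numerically. The compact form of φ used below is an assumption pieced together from the text; it vanishes exactly at the two stated minimum points and is strictly positive elsewhere.

```python
# Numerical check of Remark 1.3 (sketch; the compact form of phi below is an
# assumption reconstructed from the text, consistent with the stated minima).
def phi(x1, x2):
    # phi(x1, x2) = (x1^2 * x2 - x1 - 1)^2 + (x1^2 - 1)^2 >= 0
    return (x1 * x1 * x2 - x1 - 1) ** 2 + (x1 * x1 - 1) ** 2

# Both global minimum points give the minimal value 0.
assert phi(-1.0, 0.0) == 0.0
assert phi(1.0, 2.0) == 0.0

# On a sample grid, phi stays strictly positive away from p and q,
# so no other candidate minimum shows up.
other = [
    phi(0.1 * i, 0.1 * j)
    for i in range(-30, 31)
    for j in range(-30, 31)
    if (i, j) not in ((-10, 0), (10, 20))   # exclude p = (-1, 0), q = (1, 2)
]
assert min(other) > 0.0
```

Both squares vanish simultaneously only when x1² = 1 and x1²x2 = x1 + 1, which forces exactly the two points p and q.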

The auto-parallel curves x(t) on the affine manifold (M, Γ) are solutions of the second-order ODE system

$$
\ddot{x}^h(t) + \Gamma^h\_{ij}(x(t))\,\dot{x}^i(t)\,\dot{x}^j(t) = 0, \\
x(t\_0) = x\_0, \\
\dot{x}(t\_0) = \xi\_0.
$$

Obviously, the complete notation is x(t; x0, ξ0), with

$$
x(t\_0; x\_0, \xi\_0) = x\_0, \quad \dot{x}(t\_0; x\_0, \xi\_0) = \xi\_0.
$$
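For experiments, this initial value problem can be integrated with any standard ODE scheme. The sketch below uses a hypothetical one-dimensional connection Γ(x) = −1/x on x > 0 (an assumption for illustration, not taken from the text), whose auto-parallels are exactly x(t) = a^{1−t}b^t; a classical RK4 integrator recovers this closed form.

```python
import math

# Sketch: integrate the auto-parallel ODE  x'' + Gamma(x) x'^2 = 0  with RK4.
# Hypothetical 1-D connection (an assumption, not from the text): Gamma(x) = -1/x
# on x > 0, whose auto-parallels are x(t) = a^(1-t) * b^t.

def rk4(x0, v0, t1, n=1000):
    h = t1 / n
    def acc(x, v):
        return (1.0 / x) * v * v   # x'' = -Gamma(x) x'^2 = v^2 / x
    x, v = x0, v0
    for _ in range(n):
        k1x, k1v = v, acc(x, v)
        k2x, k2v = v + h / 2 * k1v, acc(x + h / 2 * k1x, v + h / 2 * k1v)
        k3x, k3v = v + h / 2 * k2v, acc(x + h / 2 * k2x, v + h / 2 * k2v)
        k4x, k4v = v + h * k3v, acc(x + h * k3x, v + h * k3v)
        x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return x, v

# a = 1, b = e gives x(t) = e^t; initial data: x(0) = 1, x'(0) = ln(b/a) = 1.
xT, vT = rk4(1.0, 1.0, 1.0)
assert abs(xT - math.e) < 1e-6 and abs(vT - math.e) < 1e-6
```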

Definition 2.1 Let D ⊂ M be open and connected and f : D → R a C² function. The point x0 ∈ D is called minimum (maximum) point of f conditioned by the auto-parallel system, together with initial conditions, if for the maximal solution x(t; x0, ξ0) : I → D, there exists a neighborhood It0 of t0 such that

$$f(x(t; x\_0, \xi\_0)) \geq \ (\leq) \ f(x\_0), \quad \forall t \in I\_{t\_0} \subset I.$$

Theorem 2.1 If x0 ∈ D is an extremum point of f conditioned by the previous second-order system, then df(x0)(ξ0) = 0.

Definition 2.2 The points x ∈ D which are solutions of the equation df(x)(ξ) = 0 are called critical points of f conditioned by the previous spray.

Theorem 2.2 If x0 ∈ D is a conditioned critical point of the function f : D → R of class C² constrained by the previous auto-parallel system and if the number

$$(\operatorname{Hess} f)\_{ij}\, \xi\_0^i \xi\_0^j = \left( \frac{\partial^2 f}{\partial x^i \partial x^j} - \frac{\partial f}{\partial x^h}\, \Gamma^h\_{ij} \right)(x\_0)\, \xi\_0^i \xi\_0^j$$

is strictly positive (negative), then x0 is a minimum (maximum) point of f constrained by the auto-parallel system.
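To see how the correction term −(∂f/∂x^h)Γ^h_{ij} can create a conditioned extremum where the Euclidean Hessian is degenerate, here is a minimal sketch with assumed data (the f and Γ below are illustrative choices, not from the text): on R², take f(x, y) = x and the only nonzero symbol Γ¹₂₂ = c. For ξ0 = (0, 1) we have df(ξ0) = 0, so this is a conditioned critical direction, and the auto-parallel is explicitly computable.

```python
# Toy datum for Theorem 2.2 (assumed, not from the text):
# on R^2 take f(x, y) = x, Gamma^1_22 = c, all other Christoffel symbols 0.
c = -1.0                      # with c < 0 the conditioned Hessian is positive

# Along xi0 = (0, 1) the Euclidean Hessian of f vanishes, and
# (Hess f)_ij xi0^i xi0^j = -(df/dx^1) * Gamma^1_22 = -c.
cond_hessian = -c
assert cond_hessian > 0

# The auto-parallel through (0, 0) with velocity (0, 1) solves
#   x'' + c (y')^2 = 0,  y'' = 0   =>   x(t) = -c t^2 / 2,  y(t) = t,
# so f(x(t), y(t)) = -c t^2 / 2 has a conditioned minimum at t = 0.
samples = [-c * (0.01 * k) ** 2 / 2.0 for k in range(-50, 51)]
assert min(samples) == 0.0    # the minimum along the curve sits at t = 0
```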

Example 2.1 We compute the Christoffel symbols on the unit sphere S², using spherical coordinates (θ, φ) and the Riemannian metric


Bilevel Disjunctive Optimization on Affine Manifolds http://dx.doi.org/10.5772/intechopen.75643 123


$$g\_{\theta\theta} = 1, \quad g\_{\theta\varphi} = g\_{\varphi\theta} = 0, \quad g\_{\varphi\varphi} = \sin^2\theta.$$

When θ ≠ 0, π, we find

$$\Gamma^{\theta}\_{\varphi\varphi} = -\frac{1}{2}\sin 2\theta , \Gamma^{\varphi}\_{\varphi\theta} = \Gamma^{\varphi}\_{\theta\varphi} = \cot \theta .$$

and all the other Γs are equal to zero. We can show that the apparent singularity at θ ¼ 0, π can be removed by a better choice of coordinates at the poles of the sphere. Thus, the above affine connection extends to the whole sphere.

The second-order system defining the auto-parallel curves (geodesics) on S² is

$$
\ddot{\theta}(t) - \frac{1}{2} \sin 2\theta(t)\,\dot{\varphi}(t)\dot{\varphi}(t) = 0,\\
\ddot{\varphi}(t) + 2\cot\theta(t)\,\dot{\varphi}(t)\dot{\theta}(t) = 0.
$$

The solutions are great circles on the sphere. For example, θ = αt + β and φ = const.
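A numerical sketch of one such family: on the equator θ = π/2 both Christoffel terms vanish (sin 2θ = 0 and cot θ = 0), so the great circle θ(t) = π/2, φ(t) = t should be preserved, up to rounding, by an RK4 integration of the system above.

```python
import math

# RK4 check: the equator theta = pi/2, phi = t solves the geodesic system.
def step(state, h):
    def rhs(s):
        th, ph, u, v = s  # u = theta', v = phi'
        return (u, v,
                0.5 * math.sin(2 * th) * v * v,       # theta'' equation
                -2.0 / math.tan(th) * u * v)          # phi''  equation
    k1 = rhs(state)
    k2 = rhs([s + h / 2 * k for s, k in zip(state, k1)])
    k3 = rhs([s + h / 2 * k for s, k in zip(state, k2)])
    k4 = rhs([s + h * k for s, k in zip(state, k3)])
    return [s + h / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

state = [math.pi / 2, 0.0, 0.0, 1.0]   # start on the equator, unit speed in phi
for _ in range(1000):
    state = step(state, 0.001)

# After time t = 1: theta stays pi/2 and phi has advanced by 1.
assert abs(state[0] - math.pi / 2) < 1e-9
assert abs(state[1] - 1.0) < 1e-9
```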

We compute the curvature tensor R of the unit sphere S². Since there are only two independent coordinates, all the non-zero components of the curvature tensor R are given by R^i_j = R^i_{jθφ} = −R^i_{jφθ}, where i, j = θ, φ. We get R^θ_φ = sin²θ, R^φ_θ = −1 and the other components are 0.

Let (θ(t; θ0, φ0; ξ), φ(t; θ0, φ0; ξ)), t ∈ R, be the maximal auto-parallel which satisfies θ(t0; θ0, φ0; ξ) = θ0, θ̇(t0; θ0, φ0; ξ) = ξ¹; φ(t0; θ0, φ0; ξ) = φ0, φ̇(t0; θ0, φ0; ξ) = ξ². We wish to compute min f(θ, φ) = R^θ_φ = sin²θ with the restriction (θ(t; θ0, φ0; ξ), φ(t; θ0, φ0; ξ)), t ∈ R.

Since df <sup>¼</sup> ð Þ 2 sin <sup>θ</sup> cos <sup>θ</sup>; <sup>0</sup> , the critical point condition dfð Þ <sup>θ</sup>;<sup>φ</sup> ð Þ¼ <sup>ξ</sup> <sup>0</sup> becomes sin <sup>θ</sup> cos θ ξ<sup>1</sup> <sup>¼</sup> <sup>0</sup>. Consequently, the critical points are either ð Þ <sup>θ</sup><sup>0</sup> <sup>¼</sup> <sup>k</sup>π; <sup>k</sup> <sup>∈</sup> <sup>ℤ</sup>;<sup>φ</sup> , <sup>ξ</sup><sup>1</sup> ; <sup>ξ</sup><sup>2</sup> 6¼ ð Þ <sup>0</sup>; <sup>0</sup> , or <sup>θ</sup><sup>1</sup> <sup>¼</sup> ð Þ <sup>2</sup><sup>k</sup> <sup>þ</sup> <sup>1</sup> <sup>π</sup> <sup>2</sup> ; <sup>k</sup> <sup>∈</sup> <sup>ℤ</sup>;φÞ, <sup>ξ</sup><sup>1</sup> ; <sup>ξ</sup><sup>2</sup> 6¼ ð Þ <sup>0</sup>; <sup>0</sup> , or ð Þ <sup>θ</sup>;<sup>φ</sup> , <sup>ξ</sup><sup>1</sup> <sup>¼</sup> <sup>0</sup>; <sup>ξ</sup><sup>2</sup> 6¼ <sup>0</sup> .

The components of the Hessian of f are

$$(\operatorname{Hess} f)\_{\theta\theta} = \frac{\partial^2 f}{\partial\theta\,\partial\theta} = 2\cos 2\theta, \quad (\operatorname{Hess} f)\_{\theta\varphi} = 0, \quad (\operatorname{Hess} f)\_{\varphi\varphi} = \frac{1}{2}\sin^2 2\theta.$$

At the critical points (θ0, φ) or (θ1, φ), the Hessian of f is positive or negative semi-definite. On the other hand, along ξ¹ = 0, ξ² ≠ 0, we find (Hess f)_ij ξ^i ξ^j = (1/2) sin² 2θ (ξ²)² > 0, ξ² ≠ 0. Consequently, each point (θ ≠ kπ/2, φ) is a minimum point of f along each auto-parallel starting from the given point and tangent to ξ¹ = 0, ξ² ≠ 0.
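This conclusion can be tested numerically: integrating the auto-parallel with θ̇(t0) = 0, φ̇(t0) ≠ 0 and sampling f = sin²θ along it, t0 should be a local minimizer. A sketch, with the arbitrary initial point θ0 = 1:

```python
import math

# Sample f = sin^2(theta) along the auto-parallel with xi^1 = 0, xi^2 = 1.
def rhs(s):
    th, ph, u, v = s                       # u = theta', v = phi'
    return (u, v,
            0.5 * math.sin(2 * th) * v * v,
            -2.0 / math.tan(th) * u * v)

def rk4(state, h, n):
    out = [state]
    for _ in range(n):
        k1 = rhs(state)
        k2 = rhs([x + h / 2 * k for x, k in zip(state, k1)])
        k3 = rhs([x + h / 2 * k for x, k in zip(state, k2)])
        k4 = rhs([x + h * k for x, k in zip(state, k3)])
        state = [x + h / 6 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(state, k1, k2, k3, k4)]
        out.append(state)
    return out

f0 = math.sin(1.0) ** 2
for h in (0.001, -0.001):                  # integrate forward and backward
    path = rk4([1.0, 0.0, 0.0, 1.0], h, 300)
    values = [math.sin(s[0]) ** 2 for s in path]
    assert min(values) >= f0 - 1e-9        # t0 = 0 is a local minimum of f
```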

#### 2.2. Theory via the associated spray


This point of view regarding extrema comes from paper [22].

The second-order system of auto-parallels induces a spray (special vector field) Y(x, y) = (y^h, −Γ^h_ij(x) y^i y^j) on the tangent bundle TM, that is,

$$\dot{x}^h(t) = y^h(t), \\ \dot{y}^h(t) + \Gamma^h\_{ij}(x(t))\,y^i(t)\,y^j(t) = 0.$$

The solutions γ(t) = (x(t), y(t)) : I → D of class C² are called field lines of Y. They depend on the initial condition γ(t)|_{t=t0} = (x0, y0), and therefore the notation γ(t; x0, y0) is more suggestive.

Definition 2.3 Let D ⊂ TM be open and connected and f : D → R a C² function. The point (x0, y0) ∈ D is called minimum (maximum) point of f conditioned by the previous spray, if for the maximal field line γ(t; x0, y0), t ∈ I, there exists a neighborhood It0 of t0 such that

$$f(\gamma(t; x\_0, y\_0)) \geq \ (\leq) \ f(x\_0, y\_0), \quad \forall t \in I\_{t\_0} \subset I.$$

Theorem 2.3 If (x0, y0) ∈ D is an extremum point of f conditioned by the previous spray, then (x0, y0) is a point where Y is in Ker df.

Definition 2.4 The points (x, y) ∈ D which are solutions of the equation

$$D\_Y f(x, y) = df(Y)(x, y) = 0$$

are called critical points of f conditioned by the previous spray.

Theorem 2.4 If (x0, y0) ∈ D is a conditioned critical point of the function f : D → R of class C² constrained by the previous spray and if the number

$$\left(d^2 f(Y, Y) + df(D\_Y Y)\right)(x\_0, y\_0)$$

is strictly positive (negative), then (x0, y0) is a minimum (maximum) point of f constrained by the spray.

Example 2.2 We consider the Volterra-Hamilton ODE system [2].

$$\frac{d\mathfrak{x}^1}{dt}(t) = y^1(t), \\ \frac{d\mathfrak{x}^2}{dt}(t) = y^2(t).$$

$$\begin{aligned} \frac{dy^1}{dt}(t) &= \lambda y^1(t) - \alpha\_1 y^{1^2}(t) - 2\alpha\_2 y^1(t)y^2(t), \\\\ \frac{dy^2}{dt}(t) &= \lambda y^1(t) - \beta\_1 y^{2^2}(t) - 2\beta\_2 y^1(t)y^2(t). \end{aligned}$$


which models production in a Gause-Witt 2-species system evolving in R⁴: (1) competition if α1 > 0, α2 > 0, β1 > 0, β2 > 0 and (2) parasitism if α1 > 0, α2 < 0, β1 > 0, β2 > 0.
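A short numerical experiment with assumed sample coefficients (λ = 1, α1 = α2 = β1 = β2 = 0.1, the competition case, and assumed initial data) integrates the system and confirms that production grows while the rates stay positive:

```python
# RK4 integration of the Volterra-Hamilton system of Example 2.2.
# Parameters and initial data below are assumed sample values (competition).
lam, a1, a2, b1, b2 = 1.0, 0.1, 0.1, 0.1, 0.1

def rhs(s):
    x1, x2, y1, y2 = s
    return [y1,
            y2,
            lam * y1 - a1 * y1 ** 2 - 2 * a2 * y1 * y2,
            lam * y1 - b1 * y2 ** 2 - 2 * b2 * y1 * y2]  # lambda*y^1, as in the text

def rk4_step(s, h):
    k1 = rhs(s)
    k2 = rhs([x + h / 2 * k for x, k in zip(s, k1)])
    k3 = rhs([x + h / 2 * k for x, k in zip(s, k2)])
    k4 = rhs([x + h * k for x, k in zip(s, k3)])
    return [x + h / 6 * (p + 2 * q + 2 * r + w)
            for x, p, q, r, w in zip(s, k1, k2, k3, k4)]

s = [0.0, 0.0, 0.5, 0.5]        # initial production x and rates y
for _ in range(1000):
    s = rk4_step(s, 0.001)

x1, x2, y1, y2 = s
# Production grows and the rates remain positive on [0, 1].
assert x1 > 0.0 and x2 > 0.0 and y1 > 0.0 and y2 > 0.0
```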

Changing the real parameter t into an affine parameter s, we find the connection with constant coefficients

$$
\Gamma\_{11}^1 = \frac{1}{3} \left( \alpha\_1 - 2\beta\_2 \right) , \Gamma\_{22}^2 = \frac{1}{3} \left( \beta\_1 - 2\alpha\_2 \right) ,
$$

$$
\Gamma\_{12}^1 = \frac{1}{3} \left( 2\alpha\_2 - \beta\_1 \right) , \Gamma\_{12}^2 = \frac{1}{3} \left( 2\beta\_2 - \alpha\_1 \right) .
$$

Let x(t; x0, y0), t ∈ I, be the maximal field line which satisfies x(t0; x0, y0) = (x0, y0). We wish to compute max f(x¹, x², y¹, y²) = y² with the restriction x = x(t; x0, y0).

We apply the previous theory. Introduce the vector field

$$Y = \left(y^1, y^2, \lambda y^1 - \alpha\_1 y^{1^2} - 2\alpha\_2 y^1 y^2, \lambda y^1 - \beta\_1 y^{2^2} - 2\beta\_2 y^1 y^2\right).$$

We set the critical point condition df(Y) = 0. Since df = (0, 0, 0, 1), it follows the relation λy¹ − β1(y²)² − 2β2 y¹y² = 0, that is, the critical point set is a conic in y¹Oy².

Since d²f = 0, the sufficiency condition is reduced to df(D_Y Y)(x0, y0) < 0, that is,

$$
\left(\lambda - \frac{\alpha\_1 \beta\_1 (y^2)^2}{\lambda - 2\beta\_2 y^2} - 2\alpha\_2 y^2\right)(y\_0) < 0.
$$

This last relation is equivalent either to

$$\left(\lambda - 2\alpha\_2 y\_0^2\right) \left(\lambda - 2\beta\_2 y\_0^2\right) - \alpha\_1 \beta\_1 \left(y\_0^2\right)^2 < 0, \quad \lambda - 2\beta\_2 y\_0^2 > 0$$

or to

$$\left(\lambda - 2\alpha\_2 y\_0^2\right) \left(\lambda - 2\beta\_2 y\_0^2\right) - \alpha\_1 \beta\_1 \left(y\_0^2\right)^2 > 0, \quad \lambda - 2\beta\_2 y\_0^2 < 0.$$

Each critical point satisfying one of the last two conditions is a maximum point.
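The equivalence between the fractional sufficiency condition and the two sign cases can be spot-checked numerically; the λ, αi, βi below are arbitrary assumed values.

```python
# Spot check: g(y2) < 0  iff  one of the two displayed sign cases holds, where
#   g(y2) = lam - a1*b1*(y2)^2 / (lam - 2*b2*y2) - 2*a2*y2 .
lam, a1, a2, b1, b2 = 1.0, 0.7, 0.3, 0.5, 0.4   # assumed sample parameters

def cases(y2):
    num = (lam - 2 * a2 * y2) * (lam - 2 * b2 * y2) - a1 * b1 * y2 ** 2
    den = lam - 2 * b2 * y2
    g = lam - a1 * b1 * y2 ** 2 / den - 2 * a2 * y2   # fractional condition
    two_cases = (num < 0 and den > 0) or (num > 0 and den < 0)
    return g < 0, two_cases

for k in range(-100, 101):
    y2 = 0.05 * k
    if abs(lam - 2 * b2 * y2) < 1e-6:
        continue                                      # skip the pole of g
    left, right = cases(y2)
    assert left == right
```

The check works because g(y2) is exactly the quadratic numerator divided by λ − 2β2 y2, so g < 0 precisely when numerator and denominator have opposite signs.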

### 3. Affine convexity of posynomial functions

For the general theory regarding geometric programming (based on posynomial, signomial functions, etc.), see [11].


Theorem 3.1 Each posynomial function is affine convex, with respect to some affine connection.

Proof. A posynomial function has the form

$$f: \mathbb{R}\_{++}^{n} \to \mathbb{R}, \quad f(x) = \sum\_{k=1}^{K} c\_{k} \prod\_{i=1}^{n} \left(x^{i}\right)^{a\_{ik}},$$

where all the coefficients ck are positive real numbers, and the exponents aik are real numbers. Let us consider the auto-parallel curves of the form

$$\gamma(t) = \left( \left( a^1 \right)^{1-t} \left( b^1 \right)^t, \left( a^2 \right)^{1-t} \left( b^2 \right)^t, \dots, \left( a^n \right)^{1-t} \left( b^n \right)^t \right), \quad t \in [0, 1],$$

joining the points a = (a¹, …, aⁿ) and b = (b¹, …, bⁿ), which fix, for example, the affine connection

$$
\Gamma^h\_{hj} = \Gamma^h\_{jh} = -\frac{1}{2} \frac{\mu^h}{\mu^j x^j}, \quad \text{and otherwise } \Gamma^h\_{ij} = 0.
$$

It follows


$$f(\gamma(t)) = \sum\_{k=1}^{K} c\_k \prod\_{i=1}^{n} \left( \left(a^i\right)^{a\_{ik}} \right)^{1-t} \left(\left(b^i\right)^{a\_{ik}}\right)^t$$

$$= \sum\_{k=1}^{K} c\_k \left(\prod\_{i=1}^{n} \left(a^i\right)^{a\_{ik}}\right)^{1-t} \left(\prod\_{i=1}^{n} \left(b^i\right)^{a\_{ik}}\right)^t.$$

One term in this sum is of the form ψk(t) = Ak^{1−t} Bk^t, and hence

$$\ddot{\psi}\_k(t) = A\_k^{1-t} B\_k^{t} \left( \ln A\_k - \ln B\_k \right)^2 > 0.$$
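The positivity of ψ̈k is easy to confirm numerically with assumed sample values A, B; a second central difference agrees with the closed form:

```python
import math

# psi_k(t) = A^(1-t) * B^t has psi_k''(t) = psi_k(t) * (ln A - ln B)^2 > 0.
A, B = 2.0, 5.0   # assumed sample values with A != B

def psi(t):
    return A ** (1 - t) * B ** t

t, h = 0.3, 1e-4
second_diff = (psi(t + h) - 2 * psi(t) + psi(t - h)) / (h * h)
exact = psi(t) * (math.log(A) - math.log(B)) ** 2

assert second_diff > 0                          # convexity along gamma(t)
assert abs(second_diff - exact) < 1e-5 * exact  # matches the closed form
```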

Remark 3.1 Posynomial functions belong to the class of functions satisfying the statement "product of two convex functions is convex".

Corollary 3.1 Each signomial function is difference of two affine convex posynomials, with respect to some affine connection.

Proof. A signomial function has the form

$$f : \mathbb{R}\_{++}^n \to \mathbb{R}, \quad f(x) = \sum\_{k=1}^K c\_k \prod\_{i=1}^n \left(x^i\right)^{a\_{ik}},$$

where all the exponents aik are real numbers and the coefficients ck are either positive or negative. Without loss of generality, suppose that for k = 1, …, k0 we have ck > 0 and for k = k0 + 1, …, K we have ck < 0. We use the decomposition

$$f(x) = \sum\_{k=1}^{k\_0} c\_k \prod\_{i=1}^n \left(x^i\right)^{a\_{ik}} - \sum\_{k=k\_0+1}^K |c\_k| \prod\_{i=1}^n \left(x^i\right)^{a\_{ik}}.$$

We apply the Theorem and the implication u′′(t) ≥ v′′(t) ⇒ u − v convex. □
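A concrete instance of the corollary, with an assumed sample signomial (not from the text), splitting the positive-coefficient and negative-coefficient terms into two posynomials:

```python
# Assumed sample signomial on x1, x2 > 0:
#   f(x1, x2) = 2*x1^0.5*x2 - 3*x1*x2^(-1)
def f(x1, x2):
    return 2 * x1 ** 0.5 * x2 - 3 * x1 / x2

# Decomposition into two posynomials (all coefficients positive):
def f_plus(x1, x2):
    return 2 * x1 ** 0.5 * x2

def f_minus(x1, x2):
    return 3 * x1 / x2

# f = f_plus - f_minus pointwise on the positive orthant.
for x1 in (0.5, 1.0, 2.0):
    for x2 in (0.5, 1.0, 4.0):
        assert abs(f(x1, x2) - (f_plus(x1, x2) - f_minus(x1, x2))) < 1e-12
```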

Corollary 3.2 (1) The polynomial functions with positive coefficients, restricted to R^n_{++}, are affine convex functions.

(2) The polynomial functions with positive and negative terms, restricted to R^n_{++}, are differences of two affine convex functions.

Proudnikov [18] gives the necessary and sufficient conditions for representing Lipschitz multivariable function as a difference of two convex functions. An algorithm and a geometric interpretation of this representation are also given. The outcome of this algorithm is a sequence of pairs of convex functions that converge uniformly to a pair of convex functions if the conditions of the formulated theorems are satisfied.

## 4. Bilevel disjunctive problem

Let (M1, ¹Γ), the leader decision affine manifold, and (M2, ²Γ), the follower decision affine manifold, be two connected affine manifolds of dimension n1 and n2, respectively. Moreover, (M2, ²Γ) is supposed to be complete. Let also f : M1 × M2 → R be the leader objective function, and let F = (F1, …, Fr) : M1 × M2 → R^r be the follower multiobjective function.

The components $F\_i : M\_1 \times M\_2 \to \mathbb{R}$ are (possibly) conflicting objective functions.

A bilevel optimization problem means a decision of the leader with regard to a multiobjective optimum of the follower (in fact, a constrained optimization problem whose constraints are obtained from other optimization problems). For details, see [5, 10, 12].

Let $x \in M\_1$, $y \in M\_2$ be the generic points. In this chapter, the disjunctive solution set of a follower multiobjective optimization problem is defined by

(1) the set-valued function

$$\psi: M\_1 \rightrightarrows M\_2, \qquad \psi(x) = \operatorname{Argmin}\_{y \in M\_2} F(x, y),$$

where

$$\operatorname{Argmin}\_{y \in M\_2} F(x, y) := \cup\_{i=1}^r \operatorname{Argmin}\_{y \in M\_2} F\_i(x, y),$$

or

(2) the set-valued function


Bilevel Disjunctive Optimization on Affine Manifolds http://dx.doi.org/10.5772/intechopen.75643 127


$$
\psi: M\_1 \rightrightarrows M\_2,\\
\psi(x) = \operatorname{Argmax}\_{y \in M\_2} F(x, y),
$$

where


126 Optimization Algorithms - Examples


$$\operatorname{Argmax}\_{y \in M\_2} F(x, y) := \cup\_{i=1}^r \operatorname{Argmax}\_{y \in M\_2} F\_i(x, y).$$

We deal with two bilevel problems:

(1) The optimistic bilevel disjunctive problem

$$(OBDP)\min\_{\mathbf{x}\in M\_1} \min\_{y\in\psi(\mathbf{x})} f(\mathbf{x}, y).$$

In this case, the follower cooperates with the leader; that is, for each x∈ M1, the follower chooses among all its disjunctive solutions (his best responses) one which is the best for the leader (assuming that such a solution exists).

(2) The pessimistic bilevel disjunctive problem

$$(PBDP) \min\_{\boldsymbol{x} \in M\_1} \max\_{\boldsymbol{y} \in \psi(\boldsymbol{x})} f(\boldsymbol{x}, \boldsymbol{y}).$$

In this case, there is no cooperation between the leader and the follower, and the leader expects the worst scenario; that is, for each x ∈ M1, the follower may choose among all its disjunctive solutions (his best responses) one which is unfavorable for the leader.

So, a general optimization problem becomes a pessimistic bilevel problem.
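On a finite toy instance the optimistic and pessimistic values can be compared directly. The grids $M\_1$, $M\_2$ and the objectives $F$, $f$ below are hypothetical stand-ins (a single follower objective, so $\psi(x)$ is just its Argmin set):

```python
# Hypothetical finite instance: M1, M2 are small grids, F is a single follower
# objective (so psi(x) is its Argmin set), and f is a toy leader objective.
M1 = [0, 1, 2]
M2 = [-1, 0, 1]

def F(x, y):            # follower objective (illustrative assumption)
    return (y * y - x) ** 2

def f(x, y):            # leader objective (illustrative assumption)
    return y - x

def psi(x):
    """Follower best-response set: all minimizers of F(x, .) over M2."""
    best = min(F(x, y) for y in M2)
    return [y for y in M2 if F(x, y) == best]

# (OBDP): the follower picks the response best for the leader.
obdp = min(min(f(x, y) for y in psi(x)) for x in M1)
# (PBDP): the leader guards against the worst best-response.
pbdp = min(max(f(x, y) for y in psi(x)) for x in M1)
print(obdp, pbdp)  # the two values differ when some psi(x) is not a singleton
```

Here $\psi(1) = \psi(2) = \{-1, 1\}$ is set-valued, so the optimistic and pessimistic values disagree, which is exactly the gap between (OBDP) and (PBDP).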

Theorem 4.1 The value

$$\min\_{\mathbf{x}} \left[ f(\mathbf{x}, y) : y \in \psi(\mathbf{x}) \right]$$

exists if and only if, for an index $i$, the minimum $\min\_x \left[ f(x, y) : y \in \psi\_i(x) \right]$ exists and, for each $j \ne i$, either $\min\_x \left[ f(x, y) : y \in \psi\_j(x) \right]$ exists or $\psi\_j = \emptyset$. In this case,

$$\min\_{\mathbf{x}} \left[ f(\mathbf{x}, y) : y \in \psi(\mathbf{x}) \right]$$

coincides with the minimum of the minima that exist.

Proof. Let us consider the multi-functions $\phi\_i(x) = f(x, \psi\_i(x))$ and $\phi(x) = f(x, \psi(x))$. Then $\phi(x) = \cup\_{i=1}^r \phi\_i(x)$. It follows that $\min\_x \phi(x)$ exists if and only if, for each $i$, either $\min\_x \phi\_i(x)$ exists or $\psi\_i = \emptyset$, and at least one minimum exists.

Taking the minimum of the minima that exist, we find

$$\min\_{\mathbf{x}} \left[ f(\mathbf{x}, y) : y \in \psi(\mathbf{x}) \right]. \tag{7}$$

□
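Theorem 4.1 in miniature: the sketch below (with hypothetical data) takes the minimum over the union $\psi(x) = \cup\_i \psi\_i(x)$ as the minimum of the component minima that exist, skipping empty $\psi\_i$.

```python
# Theorem 4.1 in miniature: the minimum of f(x, y) over the union
# psi(x) = psi_1(x) U ... U psi_r(x) is the minimum of the component minima
# that exist; components with psi_i(x) empty are simply skipped.
def min_over_union(f, x, psi_sets):
    minima = [min(f(x, y) for y in s) for s in psi_sets if s]
    return min(minima) if minima else None   # None: no minimum exists at all

# hypothetical component solution sets psi_i(x) at a fixed x, one of them empty
psi_sets = [{1.0, 2.0}, set(), {-0.5}]
value = min_over_union(lambda x, y: (x - y) ** 2, 3.0, psi_sets)
print(value)  # -> 1.0 (minimum of the component minima 1.0 and 12.25)
```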

Theorem 4.2 Suppose $M\_1$ is a compact manifold. If, for each $x \in M\_1$, at least one partial function $y \to F\_i(x, y)$ is affine convex and has a critical point, then the problem (OBDP) has a solution.

Proof. In our hypothesis, the set $\psi(x)$ is nonvoid for any $x$, and the compactness assures the existence of $\min\_x f(x, \psi(x))$. □

In the next Theorem, we shall use the Value Function Method or Utility Function Method.

Theorem 4.3 If a $C^1$ increasing scalarization partial function

$$y \to L(x, y) = u(F\_1(x, y), \dots, F\_k(x, y))$$

has a minimum, then there exists an index $i$ such that $\psi\_i(x) \ne \emptyset$. Moreover, if $f(x, y)$ is bounded, then the bilevel problem

$$\min\_{\mathbf{x}} \left[ f(\mathbf{x}, y) : y \in \psi(\mathbf{x}) \right]$$

has a solution.

Proof. Let $\min\_y L(x, y) = L(x, y^*)$. Suppose that for each $i = 1, \dots, k$ we had $\min\_y F\_i(x, y) < F\_i(x, y^*)$. Then $y^*$ would not be a minimum point for the partial function $y \to L(x, y)$. Hence, there exists an index $i$ such that $y^* \in \psi\_i(x)$. □

Boundedness of $f$ implies that the bilevel problem has a solution once it is well-posed, and the fact that the problem is well-posed is shown in the first part of the proof.

#### 4.1. Bilevel disjunctive programming algorithm

An important concept for making wise tradeoffs among competing objectives is bilevel disjunctive programming optimality on affine manifolds, introduced in this chapter.

We present an exact algorithm for obtaining the bilevel disjunctive solutions of the multiobjective optimization problem.

Step 1: Solve

$$\psi\_i(\mathbf{x}) = \operatorname{Argmin}\_{\mathbf{y} \in M\_2} F\_i(\mathbf{x}, \mathbf{y}), \quad i = 1, \dots, r.$$

Let $\psi(x) = \cup\_{i=1}^r \psi\_i(x)$ be a subset in $M\_2$ representing the mapping of optimal solutions for the follower multi-objective function.

Step 2: Build the mapping $f(x, \psi(x))$.

Step 3: Solve the leader's program

$$\min\_{\mathbf{x}} \ [f(\mathbf{x}, y), y \in \psi(\mathbf{x})].$$

From a numerical point of view, we can use the Newton algorithm for optimization on affine manifolds, which is given in [19].

## 5. Models of bilevel disjunctive programming problems

The manifold $M$ is understood from the context. The connection $\Gamma^h\_{ij}$ can be realized in each case, imposing convexity conditions.

Example 5.1 Let us solve the problem (cite [7], p. 7; [9]):

$$\min\_{(x\_1, x\_2)} F(x\_1, x\_2, y) = (x\_1 - y, \; x\_2)$$

subject to

$$(x\_1, x\_2) \in \operatorname{Argmin}\_{(x\_1, x\_2)} \left\{ (x\_1, x\_2) \mid y^2 - x\_1^2 - x\_2^2 \ge 0 \right\},$$

$$1 + x\_1 + x\_2 \ge 0, \quad -1 \le x\_1, x\_2 \le 1, \quad 0 \le y \le 1.$$

Both the lower and the upper level optimization tasks have two objectives each. For a fixed $y$ value, the feasible region of the lower-level problem is the area inside a circle with center at the origin $(x\_1 = x\_2 = 0)$ and radius equal to $y$. The Pareto-optimal set for the lower-level optimization task, preserving a fixed $y$, is the bottom-left quarter of the circle,

$$\left\{ (x\_1, x\_2) \in \mathbb{R}^2 \mid x\_1^2 + x\_2^2 = y^2, \; x\_1 \le 0, \; x\_2 \le 0 \right\}.$$

The linear constraint in the upper-level optimization task does not allow the entire quarter circle to be feasible for some $y$. Thus, at most a couple of points from the quarter circle belongs to the Pareto-optimal set of the overall problem. Eichfelder [8] reported the following Pareto-optimal set of solutions

$$A = \left\{ (x\_1, x\_2, y) \in \mathbb{R}^3 \;\middle|\; x\_1 = -1 - x\_2, \; x\_2 = -\frac{1}{2}\left(1 + \sqrt{2y^2 - 1}\right), \; y \in \left[\tfrac{1}{\sqrt{2}}, 1\right] \right\}.$$

The Pareto-optimal front in the $F\_1 \times F\_2$ space can be written in parametric form

$$\left\{ (F\_1, F\_2) \in \mathbb{R}^2 \;\middle|\; F\_1 = -1 - F\_2 - t, \; F\_2 = -\frac{1}{2}\left(1 + \sqrt{2t^2 - 1}\right), \; t \in \left[\tfrac{1}{\sqrt{2}}, 1\right] \right\}.$$

Example 5.2 Consider the bilevel programming problem

$$\min\_x \left[ (x - y)^2 + x^2 : -20 \le x \le 20, \; y \in \psi(x) \right],$$

where the set-valued function is

$$\psi(x) = \operatorname{Argmin}\_y \left[ xy : -x - 1 \le y \le -x + 1 \right].$$

Explicitly,

$$\psi(x) = \begin{cases} [-1, 1] & \text{if } x = 0, \\ -x - 1 & \text{if } x > 0, \\ -x + 1 & \text{if } x < 0. \end{cases}$$
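The three-step algorithm of Section 4.1 can be sketched by grid search on Example 5.2; the leader objective $f(x, y) = (x - y)^2 + x^2$ and the discretization below are assumptions made for illustration, not the chapter's Newton method.

```python
import numpy as np

# Step 1: follower best responses psi(x) = Argmin_y [x*y : -x-1 <= y <= -x+1],
# using the explicit cases given in Example 5.2.
def psi(x):
    if x > 0:
        return np.array([-x - 1.0])
    if x < 0:
        return np.array([-x + 1.0])
    return np.linspace(-1.0, 1.0, 201)   # at x = 0 every feasible y is optimal

def f(x, y):                             # leader objective (assumed form)
    return (x - y) ** 2 + x ** 2

# Steps 2-3: build f(x, psi(x)) and minimize over a leader grid on [-20, 20];
# x = 0 is appended so the set-valued branch of psi is actually sampled.
xs = np.append(np.linspace(-20.0, 20.0, 4001), 0.0)
obdp = min(min(f(x, y) for y in psi(x)) for x in xs)   # optimistic value
pbdp = min(max(f(x, y) for y in psi(x)) for x in xs)   # pessimistic value
print(obdp, pbdp)
```

On this grid the optimistic value is attained at $x = 0$ with $y = 0 \in \psi(0)$, while the pessimistic value is strictly larger, since at $x = 0$ the follower may answer with $y = \pm 1$.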
