
#### **Chapter 1**

## Pareto Optimality and Equilibria in Noncooperative Games

*Vladislav Zhukovskiy and Konstantin Kudryavtsev*

#### **Abstract**

This chapter considers the Nash equilibrium strategy profiles that are Pareto optimal with respect to the rest of the Nash equilibrium strategy profiles. The sufficient conditions for the existence of such pure strategy profiles are established. These conditions employ the Germeier convolutions of the payoff functions. For the noncooperative games with compact strategy sets and continuous payoff functions, the existence of the Pareto-optimal Nash equilibria (PoNE) in mixed strategies is proven.

**Keywords:** Pareto optimality, Nash equilibrium, Pareto-optimal Nash equilibrium, noncooperative game, Germeier convolution

#### **1. Introduction**

In 1949, J. Nash, then a Princeton University graduate and now known as a famous American mathematician and economist, suggested the notion of an equilibrium solution for a noncooperative game [1], later called "the Nash equilibrium strategy profile." Since then, this equilibrium has been widely used in economics, sociology, military sciences, and other spheres of human activity. Moreover, 45 years later J. Nash, J. Harsanyi, and R. Selten were awarded the Nobel Prize "for the pioneering analysis of equilibria in the theory of noncooperative games."

However, as shown by Example 1, the set of Nash equilibrium strategy profiles has a negative property: there may exist two Nash equilibrium strategy profiles such that the payoffs of each player in the first strategy profile are strictly greater than the corresponding payoffs in the second one. In 2013, the authors emphasized this fact in a series of papers [2, 3] while exploring the existence of a guaranteed equilibrium solution for a noncooperative game under uncertainty. In particular, these papers focused on the Nash equilibrium strategy profile that is Pareto optimal with respect to the rest of the Nash equilibrium strategy profiles, thereby eliminating the above shortcoming. The following question arises immediately: how can such an equilibrium (the so-called Pareto equilibrium strategy profile) be found? Our idea is to use sufficient conditions (Theorem 1) that reduce the design of such a Nash equilibrium strategy profile to the calculation of a saddle point of a special Germeier convolution of the payoff functions. As an application, this chapter establishes the existence of the Pareto-optimal Nash equilibrium (PoNE) strategy profile in the class of mixed strategies (see Assertion 1). Similar results were obtained by the authors for the Pareto-optimal Berge equilibrium in [4].

Note that two approaches can be adopted to formalize the Pareto-unimprovable Nash equilibrium. According to the first approach, Pareto optimality is required on the set of all strategy profiles of the game. The second approach dictates finding the Pareto-optimal equilibrium on the set of all Nash equilibria. Generally, the first approach implies constructing all Nash equilibrium strategy profiles and subsequently checking whether they belong to the Pareto boundary of the strategy profile set of the game (see [5]). Numerical algorithms realizing this approach were suggested for bimatrix games in [5], for some two-player normal-form games in [6] and the monograph ([7], pp. 92–93), as well as for linear two-player positional games with cylindrical terminal payoff functions in [8]. In the case of nonlinear differential games with convex terminal payoff functions, the publication [9] obtained sufficient conditions under which the equilibrium strategy profile that is unimprovable on the set of Nash equilibria (the second approach) is Pareto optimal on the whole strategy profile set of the game.


This chapter adheres to the second approach, suggesting an algorithm that yields the Pareto-optimal strategy profile among all Nash equilibria.

#### **2. Internally instable set of Nash equilibrium strategy profiles**

As is well known, game theory is used to model interactions in economics, sociology, political science, and many other areas. Game theory is the mathematical study of conflict, in which a decision-maker's success depends on the choices of others. In contrast to decision-making theory, in game theory several decision-makers act simultaneously. These decision-makers are called players, and their actions are called pure strategies. Each player seeks to achieve his own goals, which do not coincide with the goals of the other players. The degree to which a player approaches his goal is measured by his payoff function, and the realized value of the player's payoff function is called his payoff. At the same time, the player's payoff function depends not only on his own choice but also on the choices of all the other players. Therefore, when making a decision, the player is forced to take into account not only his own interests but also the possible actions of the other players. If the players cannot coordinate their actions, the game is called a noncooperative game. The basic solution concept in noncooperative game theory is the Nash equilibrium.

Consider a noncooperative game (NG) of N players in the class of pure strategies (a non-antagonistic game)

$$
\Gamma = \left\langle \mathbb{N}, \{X_i\}_{i \in \mathbb{N}}, \left\{ f_i(x) \right\}_{i \in \mathbb{N}} \right\rangle,\tag{1}
$$

where $\mathbb{N} = \{1, 2, \ldots, N\}$ is the set of players' serial numbers; each player $i$ chooses and applies his own pure strategy $x_i \in X_i \subseteq \mathbb{R}^{n_i}$, forming no coalition with the others, which induces a strategy profile $x = (x_1, \ldots, x_N) \in X = \prod_{i \in \mathbb{N}} X_i \subseteq \mathbb{R}^n$ $(n = n_1 + \ldots + n_N)$; for each $i \in \mathbb{N}$, a payoff function $f_i(x)$ is defined on the strategy profile set $X$, which gives the payoff of player $i$. In addition, denote $f = (f_1, \ldots, f_N)$ and $(x \| z_i) = (x_1, \ldots, x_{i-1}, z_i, x_{i+1}, \ldots, x_N)$.

**Definition 1**. A strategy profile $x^e = \left(x^e_1, \ldots, x^e_N\right) \in X$ is called a Nash equilibrium in game (1) if

$$\max_{x_i \in X_i} f_i\left(x^e \| x_i\right) = f_i(x^e) \quad (i \in \mathbb{N}).\tag{2}$$

The set of all such profiles $\{x^e\}$ in game (1) will be designated by $X^e$.

Now, consider the internal instability of $X^e$. A subset $X^* \subseteq \mathbb{R}^n$ is *internally instable* if there exist at least two strategy profiles $x^{(j)} \in X^*$ $(j = 1, 2)$ such that

$$f\left(x^{(1)}\right) < f\left(x^{(2)}\right) \Leftrightarrow \left[f_i\left(x^{(1)}\right) < f_i\left(x^{(2)}\right) \;\forall i \in \mathbb{N}\right],\tag{3}$$

and *internally stable* otherwise.


*Example 1.* Consider a two-player NG of the form

$$\left\langle \{1, 2\}, \left\{X_i = [-1, 1]\right\}_{i=1,2}, \left\{ f_i(x) = -x_i^2 + 2x_1 x_2 \right\}_{i=1,2} \right\rangle. \tag{4}$$

A strategy profile $x^e = \left(x^e_1, x^e_2\right) \in [-1, 1]^2$ is a Nash equilibrium in game (4) if

$$-x_1^2 + 2x_1 x_2^e \le -\left(x_1^e\right)^2 + 2x_1^e x_2^e, \quad -x_2^2 + 2x_1^e x_2 \le -\left(x_2^e\right)^2 + 2x_1^e x_2^e \quad \forall x_1, x_2 \in [-1, 1],$$

which is equivalent to

$$-\left(x_1 - x_2^e\right)^2 \le -\left(x_1^e - x_2^e\right)^2, \quad -\left(x_1^e - x_2\right)^2 \le -\left(x_1^e - x_2^e\right)^2.$$
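The equivalence is obtained by completing the square, a step worth making explicit: subtracting $\left(x_2^e\right)^2$ from both sides of the first inequality (and $\left(x_1^e\right)^2$ from both sides of the second) gives, for the first one,

$$-x_1^2 + 2x_1 x_2^e - \left(x_2^e\right)^2 = -\left(x_1 - x_2^e\right)^2 \le -\left(x_1^e\right)^2 + 2x_1^e x_2^e - \left(x_2^e\right)^2 = -\left(x_1^e - x_2^e\right)^2.$$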

Therefore, we have $X^e = \{(\alpha, \alpha) \mid \alpha \in [-1, 1]\}$ and $f_i(X^e) = \bigcup_{x^e \in X^e} f_i(x^e) = \bigcup_{\alpha \in [-1,1]} \{\alpha^2\}$ $(i = 1, 2)$ in game (4). Consequently, the set $X^e$ is internally instable in game (4): for $x^{(1)} = (0, 0)$ and $x^{(2)} = (1, 1)$, it follows that $f_i\left(x^{(1)}\right) = 0 < f_i\left(x^{(2)}\right) = 1$ $(i = 1, 2)$ (see Eq. (3)).
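The internal instability in Example 1 is easy to verify numerically. The following sketch (an illustration added here, not part of the original exposition; the grid step and tolerance are arbitrary choices) scans a grid of profiles in $[-1, 1]^2$, keeps those satisfying the equilibrium condition of Definition 1 against all grid deviations, and compares the payoffs at $x^{(1)} = (0, 0)$ and $x^{(2)} = (1, 1)$:

```python
# Numerical check of Example 1 (game (4)): f_i(x) = -x_i^2 + 2*x_1*x_2.
def f(i, x1, x2):
    return -(x1 if i == 1 else x2) ** 2 + 2 * x1 * x2

grid = [round(-1 + 0.05 * k, 2) for k in range(41)]   # discretization of [-1, 1]
tol = 1e-9

def is_nash(x1, x2):
    # No unilateral deviation on the grid may improve a player's payoff.
    best1 = max(f(1, d, x2) for d in grid)
    best2 = max(f(2, x1, d) for d in grid)
    return f(1, x1, x2) >= best1 - tol and f(2, x1, x2) >= best2 - tol

equilibria = [(x1, x2) for x1 in grid for x2 in grid if is_nash(x1, x2)]
print(all(x1 == x2 for x1, x2 in equilibria))   # True: equilibria have the form (alpha, alpha)

# Internal instability: both players strictly prefer x(2) = (1, 1) to x(1) = (0, 0).
print(f(1, 0, 0), f(2, 0, 0))   # 0 0
print(f(1, 1, 1), f(2, 1, 1))   # 1 1
```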

*Note 1.* In the antagonistic setting of game (1) ($\mathbb{N} = \{1, 2\}$ and $f_1(x) = -f_2(x)$), the equality $f_1\left(x^{(1)}\right) = f_1\left(x^{(2)}\right)$ holds for any two saddle points $x^{(j)} \in X$ $(j = 1, 2)$ by the saddle point equivalence. Hence, the saddle point set is always internally stable in the antagonistic game. Note that a saddle point is a Nash equilibrium strategy profile in the antagonistic setting of game (1).

*Note 2.* In the non-antagonistic setting of game (1), the internal instability effect vanishes if there exists a unique Nash equilibrium strategy profile in (1).

Associate the following auxiliary *N*-criterion problem with game (1):

$$
\Gamma_\nu = \left\langle X^e, \left\{ f_i(x) \right\}_{i \in \mathbb{N}} \right\rangle,\tag{5}
$$

where the set $X^e$ of *alternatives* $x$ coincides with the set of Nash equilibrium strategy profiles $x^e$ in game (1) and the $i$th criterion $f_i(x)$ is the payoff function of player $i$.

**Definition 2**. An alternative $x^P \in X^e$ is Pareto optimal (efficient) in problem (5) if for all $x \in X^e$ the system of inequalities

$$f_i(x) \ge f_i(x^P) \quad (i \in \mathbb{N})$$

is infeasible, with at least one inequality being strict. Designate by $X^P$ the set of all such $x^P$.

According to Definition 2, the set $X^P$ satisfies the inclusion $X^P \subseteq X^e$ and is *internally stable*.

The following *statement* is obvious: if for all $x \in X^e$ we have

$$\sum_{i \in \mathbb{N}} f_i(x) \le \sum_{i \in \mathbb{N}} f_i(x^P),\tag{6}$$

then $x^P$ is a Pareto-optimal alternative in problem (5).
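For Example 1 this sufficient condition is easy to apply: on $X^e = \{(\alpha, \alpha)\}$ the sum $f_1 + f_2 = 2\alpha^2$ is maximized at $\alpha = \pm 1$, which singles out the profiles $(1, 1)$ and $(-1, -1)$ (cf. Note 4 below). A minimal sketch of this check (the grid is an arbitrary discretization introduced only for illustration):

```python
# Pareto-optimal equilibria of game (4) selected by the sum criterion (6).
f1 = lambda x1, x2: -x1**2 + 2 * x1 * x2
f2 = lambda x1, x2: -x2**2 + 2 * x1 * x2

X_e = [(a / 100, a / 100) for a in range(-100, 101)]   # grid over X^e = {(alpha, alpha)}
best = max(f1(*x) + f2(*x) for x in X_e)
x_P = [x for x in X_e if f1(*x) + f2(*x) == best]
print(x_P)   # [(-1.0, -1.0), (1.0, 1.0)]
```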

#### **3. Sufficient conditions of Pareto-optimal equilibrium**

Let us return to game (1), associating it with the *N*-criterion problem (5).

**Definition 3**. A strategy profile *x*<sup>∗</sup> ∈ *X* is called a Pareto-optimal Nash equilibrium for game (1) if *x*<sup>∗</sup> is a Nash equilibrium in (1) (Definition 1) and a Pareto optimum in (5) (Definition 2).

*Note 3.* Two classes of games where the Pareto equilibrium strategy profiles exist in pure strategies were presented in ([7], pp. 91–92) and, in the case of differential games, in [9–12].

*Note 4.* Within Example 1, we have two Pareto equilibrium strategy profiles, namely, $x^* = (1, 1)$ and $x^{**} = (-1, -1)$.

Based on (2) and (5), introduce $N + 1$ scalar functions defined by

$$\begin{aligned} \varphi_i(x, z) &= f_i(z \| x_i) - f_i(z) \quad (i \in \mathbb{N}), \\ \varphi_{N+1}(x, z) &= \sum_{r \in \mathbb{N}} f_r(x) - \sum_{r \in \mathbb{N}} f_r(z), \end{aligned} \tag{7}$$


where $z = (z_1, \ldots, z_N)$, $z_i \in X_i$ $(i \in \mathbb{N})$, $z \in X$, $x \in X$. The Germeier convolution ([13], p. 43) of the scalar functions (7) has the form

$$\varphi(x, z) = \max_{j=1,\ldots,N+1} \varphi_j(x, z).\tag{8}$$
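To make the construction (7)-(8) concrete, here is a minimal sketch (an illustration, not taken from the chapter; the helper names are ours) that assembles $\varphi(x, z)$ for payoff functions supplied as Python callables. For simplicity it assumes every player's strategy is one-dimensional, so a strategy profile is a tuple of scalars:

```python
from typing import Callable, Sequence, Tuple

Profile = Tuple[float, ...]

def germeier_convolution(payoffs: Sequence[Callable[[Profile], float]]) -> Callable[[Profile, Profile], float]:
    """Return phi(x, z) = max_j phi_j(x, z) built from formulas (7) and (8)."""
    N = len(payoffs)

    def substitute(z: Profile, i: int, xi: float) -> Profile:
        # (z || x_i): the profile z with its i-th coordinate replaced by x_i.
        return z[:i] + (xi,) + z[i + 1:]

    def phi(x: Profile, z: Profile) -> float:
        terms = [payoffs[i](substitute(z, i, x[i])) - payoffs[i](z) for i in range(N)]   # phi_i, i in N
        terms.append(sum(f(x) for f in payoffs) - sum(f(z) for f in payoffs))            # phi_{N+1}
        return max(terms)

    return phi
```

For game (4), for instance, `germeier_convolution([lambda x: -x[0]**2 + 2*x[0]*x[1], lambda x: -x[1]**2 + 2*x[0]*x[1]])` yields the function $\varphi$ that appears in the antagonistic game (9) introduced next.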

In addition, associate the following *antagonistic* game with game (1) and the *N*-criterion problem (5):

$$
\left\langle X, Z = X, \varphi(x, z) \right\rangle. \tag{9}
$$

In this game, player 1 and his opponent choose their strategies $x \in X$ and $z \in X$ to maximize and minimize, respectively, the payoff function $\varphi(x, z)$ described by (7) and (8).

A saddle point $\left(x^0, z^*\right) \in X^2$ of game (9) is defined by the chain of inequalities

$$
\varphi(x, z^*) \le \varphi\left(x^0, z^*\right) \le \varphi\left(x^0, z\right) \quad \forall x, z \in X. \tag{10}
$$

In game (9), the saddle points are given by the minimax strategy *z* <sup>∗</sup>

$$\left(\min_{z \in X} \max_{x \in X} \varphi(x, z) = \max_{x \in X} \varphi(x, z^*)\right)$$

and the maximin strategy *x*<sup>0</sup>

$$\left(\max_{x \in X} \min_{z \in X} \varphi(x, z) = \min_{z \in X} \varphi\left(x^0, z\right)\right).$$

The following statement defines *a sufficient condition* for the existence of a PoNE strategy profile in game (1).

**Theorem 1**. If a saddle point $\left(x^0, z^*\right)$ exists in the antagonistic game (9) (i.e., condition (10) holds), then the minimax strategy $z^*$ is a PoNE strategy profile for game (1) [14].

*Proof.* Let $z = x^0$ in the right-hand inequality of (10). Using (7) and (8), we have

$$\varphi\left(x^0, x^0\right) = \max_{j=1,\dots,N+1} \varphi_j\left(x^0, x^0\right) = 0.$$


By (10), for all *x*∈ *X* it follows that


$$0 \ge \varphi(x, z^*) = \max_{j=1,\dots,N+1} \varphi_j(x, z^*).$$

Therefore, for all *x*∈ *X,* the following chain of implications is true:

$$\left[0 \ge \max_{j=1,\ldots,N+1} \varphi_j(x, z^*) \ge \varphi_j(x, z^*)\right] \Rightarrow$$

$$\Rightarrow \left[\varphi_j(x, z^*) \le 0 \;\; (j = 1, \ldots, N, N+1)\right] \overset{(7)}{\Rightarrow}$$

$$\overset{(7)}{\Rightarrow} \left\{\left[f_i(z^* \| x_i) - f_i(z^*) \le 0 \;\; \forall x_i \in X_i \;\; (i \in \mathbb{N})\right] \wedge \right.$$

$$\left. \wedge \left[\sum_{r \in \mathbb{N}} f_r(x) - \sum_{r \in \mathbb{N}} f_r(z^*) \le 0 \;\; \forall x \in X^e\right]\right\} \Rightarrow$$

$$\Rightarrow \left\{\left[\max_{x_i \in X_i} f_i(z^* \| x_i) = f_i(z^*) \;\; (i \in \mathbb{N})\right] \wedge \right.$$

$$\left. \wedge \left[\max_{x \in X^e} \sum_{i \in \mathbb{N}} f_i(x) = \sum_{i \in \mathbb{N}} f_i(z^*)\right]\right\} \overset{(2),\,(6)}{\Rightarrow} \left\{\left[z^* \in X^e\right] \wedge \left[z^* \in X^P\right]\right\}.$$

This chain involves the inclusion $X^e \subseteq X$. □

*Remark 1.* Theorem 1 substantiates the following design method of the PoNE strategy profile *x*<sup>∗</sup> in game (1).

*Step 1.* Using the payoff functions $f_i(x)$ $(i \in \mathbb{N})$ from (1) and the vectors $z = (z_1, \ldots, z_N)$, $z_i \in X_i$, and $x = (x_1, \ldots, x_N)$, $x_i \in X_i$ $(i \in \mathbb{N})$, construct the function $\varphi(x, z)$ by formulas (7) and (8).

*Step 2.* Find a saddle point $\left(x^0, z^*\right)$ of the antagonistic game (9). Then $z^*$ is the Pareto equilibrium solution of game (1).

As far as the authors know, numerical methods for calculating the saddle point $\left(x^0, z^*\right)$ of the Germeier convolution

$$\varphi(x, z) = \max_{j=1,\dots,N+1} \varphi_j(x, z)$$

have not been developed yet. However, they are vital for constructing Nash equilibrium strategy profiles that are Pareto optimal (see Theorem 1). This is a new trend in equilibrium programming; in the authors' opinion, it can be developed using the mathematical apparatus for optimizing Germeier convolutions $\max_j \varphi_j(x)$ proposed by Dem'yanov [15].
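For the two-player game of Example 1, however, a saddle point can at least be approximated by brute force. The sketch below (an arbitrary grid discretization used only for illustration; it is not the general numerical method whose absence is noted above) tabulates $\varphi$ on a grid of $X \times X$, compares the minimax and maximin values, and reports the minimizing $z$ as the candidate PoNE profile of Remark 1:

```python
import itertools

# Game (4): X_i = [-1, 1], f_i(x) = -x_i^2 + 2*x_1*x_2.
def f(i, x):
    return -x[i] ** 2 + 2 * x[0] * x[1]

def phi(x, z):
    # Germeier convolution (7)-(8) for N = 2.
    terms = [f(0, (x[0], z[1])) - f(0, z), f(1, (z[0], x[1])) - f(1, z)]
    terms.append(f(0, x) + f(1, x) - f(0, z) - f(1, z))
    return max(terms)

grid = [round(-1 + 0.1 * k, 2) for k in range(21)]
profiles = list(itertools.product(grid, grid))

max_over_x = {z: max(phi(x, z) for x in profiles) for z in profiles}   # z -> max_x phi(x, z)
min_over_z = {x: min(phi(x, z) for z in profiles) for x in profiles}   # x -> min_z phi(x, z)

minimax = min(max_over_x.values())          # min_z max_x phi
maximin = max(min_over_z.values())          # max_x min_z phi
z_star = min(max_over_x, key=max_over_x.get)

print(round(minimax, 6), round(maximin, 6))   # 0.0 0.0 -> a saddle point exists on the grid
print(z_star)                                 # (-1.0, -1.0): a candidate PoNE profile of game (4)
```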

*Remark 2.* The results of operations research ([16], p. 54) yield the following statement, which is crucial for proving the existence of a PoNE strategy profile in the class of mixed strategies in game (1) (see the forthcoming section). If $X_i \in \mathrm{comp}\,\mathbb{R}^{n_i}$ and $f_i(\cdot) \in C(X)$ $(i \in \mathbb{N})$ in game (1), then the Germeier convolution $\varphi(x, z) = \max_{j=1,\ldots,N+1} \varphi_j(x, z)$ from (7) and (8) is continuous on $X \times X$.

#### **4. Existence of PoNE strategy profile in mixed strategies**

That game (1) admits a PoNE strategy profile in the class of pure strategies (see Definition 3) is rather a miracle. This equilibrium may exist only for special payoff

functions, strategy sets, and numbers of players. Therefore, adhering to the approach associated with E. Borel [17], J. von Neumann [18], Nash [1], and their followers, we establish the existence of the PoNE strategy profile of game (1) in the class of mixed strategies under standard game theory restrictions (i.e., compact strategy sets and continuous payoff functions).

And so, suppose that in game (1) the sets $X_i$ of the pure strategies $x_i$ are compact sets in $\mathbb{R}^{n_i}$ (i.e., closed and bounded), whereas the payoff function $f_i(x)$ of each player $i$ $(i \in \mathbb{N})$ is continuous on the set of pure strategy profiles $X$.

Consider the *mixed strategy extension of game* (1). To this end, construct the Borel σ-algebra $\mathcal{B}(X_i)$ on each compact set $X_i$ $(i \in \mathbb{N})$ and probability measures $\nu_i(\cdot)$ on $\mathcal{B}(X_i)$ (i.e., nonnegative scalar functions defined on the elements of $\mathcal{B}(X_i)$ that are countably additive and normalized to unity on $X_i$). Denote by $\{\nu_i\}$ the whole set of such measures; the measure $\nu_i(\cdot)$ proper is called the *mixed strategy of player* $i$ $(i \in \mathbb{N})$ in game (1). Next, for game (1), construct the *mixed strategy profiles*, that is, the multiplicative measures

$$
\nu(dx) = \nu_1(dx_1) \cdots \nu_N(dx_N),
$$

and designate by $\{\nu\}$ the set of such strategy profiles. Finally, find the mathematical expectations

$$f_i(\nu) = \int_X f_i(x)\, \nu(dx) \quad (i \in \mathbb{N}).\tag{11}$$
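As a small illustration of (11) (a sketch with an arbitrarily chosen mixed strategy, not an example from the chapter), the expectation $f_1(\nu)$ for game (4) under independent uniform mixed strategies on $[-1, 1]$ can be estimated by sampling from the product measure $\nu(dx) = \nu_1(dx_1)\,\nu_2(dx_2)$:

```python
import random

random.seed(0)
f1 = lambda x1, x2: -x1**2 + 2 * x1 * x2

# nu_i = uniform distribution on [-1, 1] for both players (an arbitrary choice).
samples = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200_000)]
estimate = sum(f1(x1, x2) for x1, x2 in samples) / len(samples)

# Exact value: E[-x1^2] + 2 * E[x1] * E[x2] = -1/3 + 0 = -1/3.
print(round(estimate, 3))   # approximately -0.333
```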


As a result, the game Γ from (1) is associated with its *mixed strategy extension*

$$
\tilde{\Gamma} = \left\langle \mathbb{N}, \{\nu_i\}_{i \in \mathbb{N}}, \left\{ f_i(\nu) \right\}_{i \in \mathbb{N}} \right\rangle.
$$

In the noncooperative game $\tilde{\Gamma}$, we have the following elements:

$\nu_i(\cdot) \in \{\nu_i\}$ as the mixed strategy of player $i$.

$\nu(\cdot) \in \{\nu\}$ as the mixed strategy profile.

$f_i(\nu)$ as the payoff function of player $i$, defined by (11).

Further exposition involves the vector $z = (z_1, \ldots, z_N) \in X$ with $z_i \in X_i$ $(i \in \mathbb{N})$ and, of course, the vector $x = (x_1, \ldots, x_N) \in X$, as well as the mixed strategy profiles $\nu(\cdot), \mu(\cdot) \in \{\nu\}$ and the mathematical expectations

$$f_i(\nu) = \int_X f_i(x)\,\nu(dx), \quad f_i(\mu) = \int_X f_i(z)\,\mu(dz),$$

$$f_i(\mu \| \nu_i) = \int_{X_1} \cdots \int_{X_{i-1}} \int_{X_i} \int_{X_{i+1}} \cdots \int_{X_N} f_i(z_1, \ldots, z_{i-1}, x_i, z_{i+1}, \ldots, z_N)\, \mu_N(dz_N) \ldots \mu_{i+1}(dz_{i+1})\, \nu_i(dx_i)\, \mu_{i-1}(dz_{i-1}) \ldots \mu_1(dz_1). \tag{12}$$

Once again, we underline that $x_i, z_i \in X_i$ $(i \in \mathbb{N})$ and $x, z \in X$.

The following notion of the Nash equilibrium strategy profile $\nu^e(\cdot) \in \{\nu\}$ in mixed strategies in the original game (1) corresponds to Definition 1 of the Nash equilibrium strategy profile $x^e \in X$ in pure strategies in the same game (1).

**Definition 4**. A strategy profile $\nu^e(\cdot) \in \{\nu\}$ is called a Nash equilibrium for the game $\tilde{\Gamma}$ if

$$f_i\left(\nu^e \| \nu_i\right) \le f_i(\nu^e) \;\;\forall \nu_i(\cdot) \in \{\nu_i\} \;\;(i \in \mathbb{N});\tag{13}$$


throughout the paper, $\nu^e(\cdot) \in \{\nu\}$ will also be called the Nash equilibrium strategy profile in mixed strategies for game (1).

By the Glicksberg theorem [19], there exists a Nash equilibrium strategy profile in mixed strategies in game (1) under $X_i \in \mathrm{comp}\,\mathbb{R}^{n_i}$ and $f_i(\cdot) \in C(X)$ $(i \in \mathbb{N})$. Denote by $\mathfrak{N}$ the set of such profiles $\{\nu^e\}$.

Associate the following *N*-criterion problem with the game $\tilde{\Gamma}$:

$$
\tilde{\Gamma}_\nu = \left\langle \mathfrak{N}, \left\{ f_i(\nu) \right\}_{i \in \mathbb{N}} \right\rangle. \tag{14}
$$

In (14), a decision-maker chooses a strategy profile $\nu(\cdot) \in \mathfrak{N}$ to simultaneously maximize all components of the vector criterion $f(\nu) = \left(f_1(\nu), \ldots, f_N(\nu)\right)$. The notion of the Pareto-optimal strategy profile is conventional (see below).

**Definition 5**. A strategy profile $\nu^P(\cdot) \in \mathfrak{N}$ is called Pareto optimal for the $N$-criterion problem $\tilde{\Gamma}_\nu$ from (14) if for any $\nu(\cdot) \in \mathfrak{N}$ the system of inequalities

$$f_i(\nu) \ge f_i(\nu^P) \quad (i \in \mathbb{N})$$

is infeasible, with at least one inequality being strict.

The following **statement** represents an analog of (6): if for all $\nu(\cdot) \in \mathfrak{N}$ we have

$$\sum_{i \in \mathbb{N}} f_i(\nu) \le \sum_{i \in \mathbb{N}} f_i(\nu^P), \tag{15}$$

then the mixed strategy profile $\nu^P(\cdot) \in \mathfrak{N}$ is Pareto optimal in the problem $\tilde{\Gamma}_\nu$ from (14).

Combining Definition 4 with Definition 5 leads to the following.

**Definition 6**. A strategy profile $\nu^*(\cdot) \in \{\nu\}$ is called a Pareto-optimal Nash equilibrium strategy profile in mixed strategies for game (1) if $\nu^*(\cdot)$ is a Nash equilibrium in $\tilde{\Gamma}$ (according to Definition 4) and $\nu^*(\cdot)$ is Pareto optimal in the multicriterion problem $\tilde{\Gamma}_\nu$ (according to Definition 5).

Now, we prove the existence of a Nash equilibrium strategy profile in mixed strategies that is Pareto optimal with respect to the rest of the Nash equilibrium strategy profiles.

**Assertion 1.** Consider the noncooperative game (1) where:

1. the pure strategy set $X_i$ of each player $i$ is a nonempty compact set in $\mathbb{R}^{n_i}$ $(i \in \mathbb{N})$;
2. the payoff function $f_i(x)$ of each player $i$ $(i \in \mathbb{N})$ is continuous on the strategy profile set $X$.

Then there exists a PoNE strategy profile in mixed strategies in game (1).

*Proof.* Using formulas (7) and (8), construct the scalar function

$$\varphi(x, z) = \max_{j=1,\dots,N+1} \varphi_j(x, z),$$

where

$$\begin{aligned} \varphi_i(x, z) &= f_i(z \| x_i) - f_i(z) \quad (i \in \mathbb{N}), \\ \varphi_{N+1}(x, z) &= \sum_{r \in \mathbb{N}} f_r(x) - \sum_{r \in \mathbb{N}} f_r(z). \end{aligned}$$

According to the construction procedure and Remark 2, the function $\varphi(x, z)$ is defined and continuous on the product of compact sets $X \times X$.

Define the auxiliary antagonistic game

$$
\Gamma_a = \left\langle \{\mathrm{I}, \mathrm{II}\}, X, Z = X, \varphi(x, z) \right\rangle,
$$

where players I and II seek to maximize and minimize, respectively, the function $\varphi(x, z)$ continuous on $X \times Z$ $(Z = X)$ by choosing their strategies $x \in X$ and $z \in X$.

Now, apply a special case of the Glicksberg theorem [19] to the game $\Gamma_a$, as the saddle point in this game coincides with the Nash equilibrium strategy profile in the two-player noncooperative game

$$\Gamma_2 = \left\langle \{\mathrm{I}, \mathrm{II}\}, \{X, Z = X\}, \left\{ f_{\mathrm{I}}(x, z) = \varphi(x, z),\; f_{\mathrm{II}}(x, z) = -\varphi(x, z) \right\} \right\rangle.$$

In this game, player I seeks to maximize $f_{\mathrm{I}}(x, z) = \varphi(x, z)$ by choosing his strategy $x \in X$, whereas player II tries to maximize $f_{\mathrm{II}}(x, z) = -\varphi(x, z)$. The sets $X$ and $Z = X$ in game $\Gamma_2$ are compact, while the payoff functions $f_{\mathrm{I}}(x, z)$ and $f_{\mathrm{II}}(x, z)$ are continuous on $X \times Z$; hence, by the Glicksberg theorem, there exists a Nash equilibrium strategy profile $(\nu^e, \mu^*)$ in the mixed extension $\tilde{\Gamma}_2$:

$$\tilde{\Gamma}_2 = \left\langle \{\mathrm{I}, \mathrm{II}\}, \{\nu\}, \{\mu\}, \left\{ f_i(\nu, \mu) = \int_X \int_X f_i(x, z)\, \nu(dx)\, \mu(dz) \right\}_{i=\mathrm{I}, \mathrm{II}} \right\rangle.$$

In addition, $(\nu^e, \mu^*)$ is simultaneously a saddle point of the mixed extension of the game $\Gamma_a$:

$$\tilde{\Gamma}_a = \left\langle \{\mathrm{I}, \mathrm{II}\}, \{\nu\}, \{\mu\}, \varphi(\nu, \mu) = \int_X \int_X \varphi(x, z)\, \nu(dx)\, \mu(dz) \right\rangle.$$

Thus, according to the Glicksberg theorem, there exists a pair $(\nu^e, \mu^*)$ representing a saddle point of $\varphi(\nu, \mu)$, that is,

$$
\varphi(\nu, \mu^*) \le \varphi(\nu^e, \mu^*) \le \varphi(\nu^e, \mu) \quad \forall \nu(\cdot), \mu(\cdot) \in \{\nu\}.\tag{16}
$$

Letting $\mu = \nu^e$ in the right inequality of (16) gives $\varphi(\nu^e, \nu^e) = 0$, and so, for all $\nu(\cdot) \in \{\nu\}$, formula (16) implies

$$0 \ge \varphi(\nu, \mu^*) = \int_X \int_X \max_{j=1,\dots,N+1} \varphi_j(x, z)\, \nu(dx)\, \mu^*(dz). \tag{17}$$

It was established in [3] that

$$\max_{j=1,\ldots,N+1} \int_X \int_X \varphi_j(x, z)\, \nu(dx)\, \mu(dz) \le \int_X \int_X \max_{j=1,\ldots,N+1} \varphi_j(x, z)\, \nu(dx)\, \mu(dz). \tag{18}$$

Note that this property has an analog: the maximum of the sum of functions does not exceed the sum of their maxima. It follows from (17) and (18) that

$$\max_{j=1,\dots,N+1} \int_X \int_X \varphi_j(x, z)\, \nu(dx)\, \mu^*(dz) \le 0 \quad \forall \nu(\cdot) \in \{\nu\},$$

and then surely for each $j = 1, \ldots, N, N+1$, we have

$$\int_X \int_X \varphi_j(x, z)\, \nu(dx)\, \mu^*(dz) \le 0 \quad \forall \nu(\cdot) \in \{\nu\}. \tag{19}$$

Next, taking into account the normalized mixed strategies and the normalized mixed strategy profiles, that is, the conditions

$$\int_{X_i} \nu_i(dx_i) = 1, \quad \int_{X_i} \mu_i(dz_i) = 1 \quad (i \in \mathbb{N}), \quad \int_X \nu(dx) = 1, \quad \int_X \mu(dz) = 1 \tag{20}$$

that hold $\forall \nu_i(\cdot) \in \{\nu_i\}$, $\mu_i(\cdot) \in \{\mu_i\}$, $\nu(\cdot) \in \{\nu\}$, $\mu(\cdot) \in \{\mu\}$, we distinguish between two cases, namely, $j \in \mathbb{N}$ and $j = N + 1$. For each of these cases, it is necessary to refine inequalities (19).

**Case 1:** $j \in \mathbb{N}$. Using (7) and (20) for each $i \in \mathbb{N}$, inequality (19) is reduced to the form

$$\int_X \int_X \left[ f_i(z \| x_i) - f_i(z) \right] \nu(dx)\, \mu^*(dz) = \int_X \int_X f_i(z \| x_i)\, \nu(dx)\, \mu^*(dz) - \int_X f_i(z)\, \mu^*(dz) \int_X \nu(dx) \overset{(12),\,(20)}{=} f_i(\mu^* \| \nu_i) - f_i(\mu^*) \le 0 \quad \forall \nu_i(\cdot) \in \{\nu_i\}.$$

In combination with (13), this result gives the inclusion $\mu^*(\cdot) \in \mathfrak{N}$, that is, the mixed strategy profile $\mu^*(\cdot)$ is a Nash equilibrium for game (1) by Definition 4.

**Case 2:** $j = N + 1$. Here inequality (19) acquires the form

$$\int_X \int_X \left[ \sum_{i \in \mathbb{N}} f_i(x) - \sum_{i \in \mathbb{N}} f_i(z) \right] \nu(dx)\, \mu^*(dz) = \sum_{i \in \mathbb{N}} \int_X f_i(x)\, \nu(dx) \int_X \mu^*(dz) - \sum_{i \in \mathbb{N}} \int_X f_i(z)\, \mu^*(dz) \int_X \nu(dx) \overset{(12),\,(20)}{=} \sum_{i \in \mathbb{N}} f_i(\nu) - \sum_{i \in \mathbb{N}} f_i(\mu^*) \le 0 \quad \forall \nu(\cdot) \in \mathfrak{N},$$

inasmuch as $\mathfrak{N} \subseteq \{\nu\}$. This immediately yields (15) for $\nu^P = \mu^*$, that is, the strategy profile $\mu^*(\cdot)$ is Pareto optimal for the $N$-criterion problem $\tilde{\Gamma}_\nu$ from (14) by Definition 5.

This outcome and the inclusion $\mu^*(\cdot) \in \mathfrak{N}$ conclude the proof. □

*Note 5.* Another proof of Assertion 1 can be found in ([3], pp. 13–15).

*Pareto Optimality and Equilibria in Noncooperative Games DOI: http://dx.doi.org/10.5772/intechopen.88184*

According to the construction procedure and Remark 2, the function *φ*ð Þ *x; z* is

Γ*<sup>a</sup>* ¼ h i f g I*;*II *;X; Z* ¼ *X; φ*ð Þ *x;* z *,*

where players I and II seek to maximize and minimize, respectively, the function *φ*ð Þ *x; z* continuous on *X* � *Z Z*ð Þ ¼ *X* by choosing their strategies *x*∈ *X* and *z*∈ *X*. Now, apply a special case of the Glicksberg theorem [19] to the game Γ*a*, as the saddle point in this game coincides with the Nash equilibrium strategy profile in the

ð Þ¼ *<sup>x</sup>;* <sup>z</sup> *<sup>φ</sup>*ð Þ *<sup>x</sup>;* <sup>z</sup> *; <sup>f</sup>* IIð Þ¼� *<sup>x</sup>;* <sup>z</sup> *<sup>φ</sup>*ð Þ *<sup>x</sup>;* <sup>z</sup> � � � � *:*

egy *x*∈ *X,* whereas player II tries to maximize *f* IIð Þ¼� *x;* z *φ*ð Þ *x;* z The sets *X* and

continuous on *X* � *Z;* hence, by the Glicksberg theorem, there exists a Nash equi-

*; <sup>μ</sup>*<sup>∗</sup> ð Þ in the mixed extension <sup>Γ</sup>2:

ð

\* +

ð

ð Þ *x; z ν*ð Þ *dx μ*ð Þ *dz*

*φ*ð Þ *x; z ν*ð Þ *dx μ*ð Þ *dz*

*; <sup>μ</sup>*<sup>∗</sup> ð Þ≤*φ ν <sup>e</sup>* ð Þ *; <sup>μ</sup> ,* <sup>∀</sup>*ν*ð Þ� *, <sup>μ</sup>*ð Þ� <sup>∈</sup> f g*<sup>ν</sup> :* (16)

*X fi*

ð

\* +

ð

*X*

*X*

*; <sup>μ</sup>*<sup>∗</sup> ð Þ is simultaneously a saddle point of the mixed extension of

*X*

ð Þ¼ *ν; μ*

ð Þ¼ *x;* z *φ*ð Þ *x;* z by choosing his strat-

9 = ; *i*¼I*,*II

*:*

*; <sup>μ</sup>*<sup>∗</sup> ð Þ

*; <sup>ν</sup> <sup>e</sup>* ð Þ¼ 0 and so,

ð Þ *<sup>x</sup>; <sup>z</sup> <sup>ν</sup>*ð Þ *dx <sup>μ</sup>*<sup>∗</sup> ð Þ *dz :* (17)

ð Þ *x; z ν*ð Þ *dx μ*ð Þ *dz :* (18)

ð Þ *x;* z and *f* IIð Þ *x;* z are

*:*

defined and continuous on the product of compact sets *X* � *X*.

*Multicriteria Optimization - Pareto-Optimality and Threshold-Optimality*

Define the auxiliary antagonistic game

Γ<sup>2</sup> ¼ f g I*;*II *;* f g *X; Z* ¼ *X ; f* <sup>I</sup>

<sup>Γ</sup>~<sup>2</sup> <sup>¼</sup> f g <sup>I</sup>*;*II *;* f g*<sup>ν</sup> ;* f g*<sup>μ</sup> ; fi*

representing a saddle point of *φ ν*ð Þ *; μ* , that is,

∀*ν*ð Þ� ∈ f g*ν* formula (16) implies

It was established in [3] that

ð

*X φj*

> max *<sup>j</sup>*¼<sup>1</sup>*,*…*, <sup>N</sup>*þ<sup>1</sup>

ð

ð

*X*

*X*

ð

*X*

max *<sup>j</sup>*¼<sup>1</sup>*,*…*, <sup>N</sup>*þ<sup>1</sup>

**10**

*φ ν; <sup>μ</sup>*<sup>∗</sup> ð Þ≤*φ ν <sup>e</sup>*

<sup>0</sup><sup>≥</sup> *φ ν; <sup>μ</sup>*<sup>∗</sup> ð Þ¼

In this game, player I seeks to maximize *f* <sup>I</sup>

*X* ¼ *Z* in game Γ<sup>2</sup> are compact, while the payoff functions *f* <sup>I</sup>

<sup>Γ</sup>~*<sup>a</sup>* <sup>¼</sup> f g <sup>I</sup>*;*II *;* f g*<sup>ν</sup> ;* f g*<sup>μ</sup> ; φ ν*ð Þ¼ *; <sup>μ</sup>*

Letting *<sup>μ</sup>* <sup>¼</sup> *<sup>ν</sup> <sup>e</sup>* in the right inequality of (16) gives *φ ν <sup>e</sup>*

ð Þ *x; z ν*ð Þ *dx μ*ð Þ *dz* ≤

ð

ð

max *<sup>j</sup>*¼<sup>1</sup>*,*…*, <sup>N</sup>*þ<sup>1</sup>

ð

ð

*X*

Note that this property has an analog: the maximum of the sum of functions does

*X*

*φj*

max *<sup>j</sup>*¼<sup>1</sup>*,*…*, <sup>N</sup>*þ<sup>1</sup>

*<sup>φ</sup>j*ð Þ *<sup>x</sup>; <sup>z</sup> <sup>ν</sup>*ð Þ *dx <sup>μ</sup>*<sup>∗</sup> ð Þ *dz* <sup>≤</sup><sup>0</sup> <sup>∀</sup>*ν*ð Þ� <sup>∈</sup> f g*<sup>ν</sup> ,*

*φj*

*X*

not exceed the sum of their maxima. It follows from (17) and (18) that

*X*

Thus, according to the Glicksberg theorem, there exists a pair *ν<sup>e</sup>*

8 < :

two-player noncooperative game

librium strategy profile *ν <sup>e</sup>*

In addition, *ν <sup>e</sup>*

the game Γ*<sup>a</sup>* :

and then surely for each j = 1, …, N, N + 1, we have

$$\int_X \int_X \varphi_j(x, z)\, \nu(dx)\, \mu^*(dz) \le 0 \quad \forall \nu(\cdot) \in \{\nu\}. \tag{19}$$

Next, taking into account the normalized mixed strategies and the normalized mixed strategy profiles, that is, the conditions

$$\int_{X_i} \nu_i(dx_i) = 1, \quad \int_{X_i} \mu_i(dz_i) = 1 \ (i \in \mathbb{N}), \quad \int_X \nu(dx) = 1, \quad \int_X \mu(dz) = 1 \tag{20}$$

that hold ∀ν_i(·) ∈ {ν_i}, μ_i(·) ∈ {μ_i}, ν(·) ∈ {ν}, μ(·) ∈ {μ}, we distinguish between two cases, namely, j ∈ N and j = N + 1. For each of these cases, it is necessary to refine inequalities (19).

**Case 1:** *j*∈ N. Using (7) and (20) for each *i* ∈ N*,* inequality (19) is reduced to the form

$$\begin{split}
\int_X \int_X \left[ f_i(z \| x_i) - f_i(z) \right] \nu(dx)\, \mu^*(dz)
&= \int_X \int_{X_i} \left[ f_i(z \| x_i) - f_i(z) \right] \nu_i(dx_i)\, \mu^*(dz) \\
&= \int_X \int_{X_i} f_i(z \| x_i)\, \nu_i(dx_i)\, \mu^*(dz) - \int_X f_i(z)\, \mu^*(dz) \int_{X_i} \nu_i(dx_i) \\
&\overset{(12),(20)}{=} \int_{X_1} \cdots \int_{X_{i-1}} \int_{X_i} \int_{X_{i+1}} \cdots \int_{X_N} f_i(z_1, \ldots, z_{i-1}, x_i, z_{i+1}, \ldots, z_N)\, \mu_N^*(dz_N) \cdots \mu_{i+1}^*(dz_{i+1})\, \nu_i(dx_i)\, \mu_{i-1}^*(dz_{i-1}) \cdots \mu_1^*(dz_1) - f_i(\mu^*) \\
&= f_i(\mu^* \| \nu_i) - f_i(\mu^*) \le 0 \quad \forall \nu_i(\cdot) \in \{\nu_i\}.
\end{split}$$

In combination with (13), this result gives the inclusion μ*(·) ∈ 𝔑, that is, the mixed strategy profile μ*(·) is a Nash equilibrium for the game (1) by Definition 4.

**Case 2:** j = N + 1. Here inequality (19) acquires the form

$$\begin{split}
\int_X \int_X \varphi_{N+1}(x, z)\, \nu(dx)\, \mu^*(dz)
&\overset{(7)}{=} \int_X \int_X \sum_{i \in \mathbb{N}} f_i(x)\, \nu(dx)\, \mu^*(dz) - \int_X \int_X \sum_{i \in \mathbb{N}} f_i(z)\, \nu(dx)\, \mu^*(dz) \\
&= \int_X \sum_{i \in \mathbb{N}} f_i(x)\, \nu(dx) \int_X \mu^*(dz) - \int_X \sum_{i \in \mathbb{N}} f_i(z)\, \mu^*(dz) \int_X \nu(dx) \\
&\overset{(20)}{=} \sum_{i \in \mathbb{N}} \int_X f_i(x)\, \nu(dx) - \sum_{i \in \mathbb{N}} \int_X f_i(z)\, \mu^*(dz)
\overset{(12)}{=} \sum_{i \in \mathbb{N}} f_i(\nu) - \sum_{i \in \mathbb{N}} f_i(\mu^*) \le 0 \quad \forall \nu(\cdot) \in \mathfrak{N},
\end{split}$$

inasmuch as 𝔑 ⊆ {ν}. This immediately yields (15) for ν^P = μ*, that is, the strategy profile μ*(·) is Pareto optimal for the N-criterion problem Γ̃_υ from (14) by Definition 5.

This outcome and the inclusion μ*(·) ∈ 𝔑 conclude the proof. □

*Note 5.* Another proof of Assertion 1 can be found in ([3], pp. 13–15).

### **5. Conclusions**

Vorob'ev, the founder of game theory in Russia, believed that its subject [20] is answering the following three questions:

1. What is an optimal solution of a given game?

2. Does an optimal solution exist?

3. How can it be found?

For the many-player noncooperative games, the answer to the first question is the PoNE strategy profile.

The answer to the second question is given by Assertion 1: if the strategy sets are compact and the payoff functions are continuous, then a Pareto equilibrium strategy profile exists in the class of mixed strategies.

As it turned out, the answer to the third question is not so simple. At first glance, one should just construct the Germeier convolution of the payoff functions using formulas (7) and (8) and find the saddle point (10); then the minimax strategy entering the saddle point is the PoNE strategy profile. This equilibrium design method is dictated by Theorem 1, which is in fact the basic result of the present chapter. However, the issues of saddle point construction for the Germeier convolutions have not been developed so far. The usage of specific numerical algorithms and their complexity still remain insufficiently investigated. Further research by the authors and, hopefully, by the readers will endeavor to improve the situation.
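To make the notion concrete, the sketch below (in Python, with made-up 2×2 payoff matrices `A` and `B`) enumerates the pure Nash equilibria of a bimatrix game by brute force and then keeps those that are Pareto optimal within the equilibrium set. It only illustrates the PoNE definition on a toy example; it is not the Germeier-convolution saddle-point method discussed above.

```python
# Illustrative only: brute-force PoNE search in a small bimatrix game (made-up payoffs).
import itertools

A = [[4, 0],   # payoffs of player 1
     [0, 1]]
B = [[4, 0],   # payoffs of player 2
     [0, 1]]

rows, cols = len(A), len(A[0])

def is_nash(i, j):
    """Profile (i, j) is a pure Nash equilibrium if no unilateral deviation pays."""
    best_row = all(A[i][j] >= A[k][j] for k in range(rows))
    best_col = all(B[i][j] >= B[i][k] for k in range(cols))
    return best_row and best_col

nash = [(i, j) for i, j in itertools.product(range(rows), range(cols)) if is_nash(i, j)]

def dominates(p, q):
    """p Pareto dominates q if both players do at least as well and one strictly better."""
    (pa, pb), (qa, qb) = p, q
    return pa >= qa and pb >= qb and (pa > qa or pb > qb)

payoff = {(i, j): (A[i][j], B[i][j]) for (i, j) in nash}
pone = [s for s in nash
        if not any(dominates(payoff[t], payoff[s]) for t in nash if t != s)]

print("Nash equilibria:", nash)                 # [(0, 0), (1, 1)]
print("Pareto-optimal Nash equilibria:", pone)  # [(0, 0)]
```

In this toy instance the equilibrium (0, 0) with payoffs (4, 4) Pareto dominates the equilibrium (1, 1) with payoffs (1, 1), so only the former is a Pareto-optimal Nash equilibrium.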

#### **Author details**

Vladislav Zhukovskiy<sup>1</sup> and Konstantin Kudryavtsev<sup>2</sup>\*

1 M.V. Lomonosov Moscow State University, Moscow, Russia

2 South Ural State University, Chelyabinsk, Russia

\*Address all correspondence to: kudrkn@gmail.com

<sup>© 2020</sup> The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


#### **References**


[1] Nash JF. Equilibrium points in N-Person games. Proceedings of the National Academy of Sciences of the United States of America. 1950;**36**:48-49

[2] Zhukovskiy VI, Kudryavtsev KN. Equilibrating conflicts under uncertainty. I. Analogue of a saddlepoint. Matematicheskaya Teoriya Igr i Ee Prilozheniya. 2013;**5**(1):27-44

[3] Zhukovskiy VI, Kudryavtsev KN. Equilibrating conflicts under uncertainty. II. Analogue of a maximin. Matematicheskaya Teoriya Igr i Ee Prilozheniya. 2013;**5**(2):3-45

[4] Zhukovskiy VI, Kudryavtsev KN. Mathematical foundations of the Golden Rule. I. Static case. Automation and Remote Control. 2017;**78**(10): 1920-1940. DOI: 10.1134/ S0005117917100149

[5] Gatti N, Rocco M, Sandholm T. On the verification and computation of strong Nash equilibrium. In: Proceedings of the ACM International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS); 6–10 May 2013; Saint Paul, USA. 2013. pp. 723-730

[6] Gasko N, Suciu M, Lung RI, Dumitrescu D. Pareto-optimal Nash equilibrium detection using an evolutionary approach. Acta Universitatis Sapientiae. 2012;**4**(2): 237-246

[7] Podinovskii VV, Nogin BD. Pareto Optimal Solutions of Multicriteria Problems. Moscow: Fizmatlit; 2007

[8] Kleimenov AF, Kuvshinov DR, Osipov SI. Nash and Stackelberg solutions numerical construction in a two-person nonantagonistic linear positional differential game. Trudy Instituta Matematiki i Mekhaniki UrO RAN. 2009;**15**(4):120-133

[9] Mamedov MB. Investigation of unimprovable equilibrium situations in nonlinear conflict-controlled dynamical systems. Zhurnal Vychislitel'noy Matematiki i Matematicheskoy Fiziki. 2004;**44**(2):308-317

[10] Kononenko AF, Konurbaev EM. Existence of equilibrium profiles in the class of positional strategies that are pareto optimal for certain differential games. In: Game Theory and Its Applications. Kemerovo: Kemerovo State University; 1983. pp. 105-115

[11] Mamedov MB. On a Nash Equilibrium Profile Being Pareto Optimal. Izvestiya Akademii Nauk Azerbaĭdzhana: Seriya Fiziko-Tekhnicheskikh. 1983;**4**(2):11-17

[12] Starr AW, Ho YC. Further properties of nonzero-sum differential games. The Journal of Optimization Theory and Applications. 1969;**3**(4): 207-219

[13] Germeier YB. Introduction to the Theory of Operations Research. Moscow: Nauka; 1971

[14] Kudryavtsev K, Zhukovskiy V, Stabulit I. One method for computing the Pareto-optimal Nash equilibrium in bimatrix game. In: 2017 Constructive Nonsmooth Analysis and Related Topics (dedicated to the memory of VF Demyanov) (CNSA). IEEE; 2017. pp. 1-3. DOI: 10.1109/CNSA.2017.7973978

[15] Dem'yanov VF, Malozemov VN. Introduction to Minimax. Moscow: Nauka; 1972

[16] Morozov VV, Sukharev AG, Fedorov VV. Operations Research in Problems and Exercises. Moscow: Nauka; 1986

[17] Borel E. La théorie du jeu et les équations intégrales à noyau symétrique. Comptes rendus de l'Académie des Sciences. 1921;**173**:1304-1308



[18] von Neumann J. Zur Theorie der Gesellschaftsspiele. Mathematische Annalen. 1928;**100**(1):295-320

[19] Glicksberg IL. A further generalization of the Kakutani fixed point theorem, with application to Nash equilibrium points. Proceedings of American Mathematical Society. 1952; **3**(1):170-174

[20] Vorob'ev NN. The present state of the theory of games. Russian Mathematical Surveys. 1970;**25**(2): 77-136

#### **Chapter 2**


## Polynomial Algorithm for Constructing Pareto-Optimal Schedules for Problem 1∣*rj*∣*L*max,*C*max

*Alexander A. Lazarev and Nikolay Pravdivets*

### **Abstract**

In this chapter, we consider the single machine scheduling problem with given release dates, processing times, and due dates with two objective functions. The first one is to minimize the maximum lateness, that is, the maximum difference between each job's completion time and its due date. The second one is to minimize the maximum completion time, that is, to complete all the jobs as soon as possible. The problem is NP-hard in the strong sense. We provide a polynomial time algorithm for constructing a Pareto-optimal set of schedules on the criteria of maximum lateness and maximum completion time, that is, problem 1∣*rj*∣*L*max, *C*max, for the subcase of the problem: d1 ≤ d2 ≤ … ≤ dn; d1 − r1 − p1 ≥ d2 − r2 − p2 ≥ … ≥ dn − rn − pn.

**Keywords:** single machine scheduling, two-criteria scheduling, Pareto set, Pareto-optimality, minimization of maximum lateness, minimization of maximum completion time, polynomial time algorithm

#### **1. Introduction**

We consider a classical scheduling problem on a single machine. A release time of each job is predefined and represents the minimum possible start time of the job. When constructing schedules, we consider two objective functions. The first one is to minimize the maximum lateness, that is, the maximum difference between each job's completion time and its due date. The second one is to minimize the maximum completion time, that is, to complete all the jobs as soon as possible. The problem is NP-hard in the strong sense [1]. We provide a polynomial time algorithm for constructing a Pareto-optimal set of schedules on the criteria of maximum lateness and maximum completion time, that is, problem 1∣*rj*∣*L*max, *C*max, for the subcase of the problem when the due dates satisfy d1 ≤ d2 ≤ … ≤ dn; d1 − r1 − p1 ≥ d2 − r2 − p2 ≥ … ≥ dn − rn − pn. An example of a problem instance that meets these constraints is the case when all jobs have the same amount of time for processing before their due dates.

#### **2. Statement of the problem 1∣*rj*∣*L*max, *C*max**

We consider a single machine scheduling problem, where a set of n jobs N = {1, 2, …, n} has to be processed on a single machine. Each job is numbered, that is, the entry "job j" is equivalent to the entry "job numbered j." Simultaneous execution of jobs and preemption of the processing of a job are prohibited. For each job j ∈ N, the value rj is the minimum possible start time, pj ≥ 0 is the processing time of job j, and dj is the due date of job j.

A schedule is represented by a set π = {sj | j ∈ N} of start times of the jobs. By τ = (j1, …, jn) we denote a permutation of the elements of the set N. The set of all different schedules of jobs from the set N is denoted by Π(N). A schedule π is called *feasible* if sj(π) ≥ rj, ∀j ∈ N. We denote the completion time of job j ∈ N in schedule π by Cj(π). The difference Lj(π) = Cj(π) − dj, j ∈ N, denotes the lateness of job j in the schedule π. The maximum lateness of the jobs of the set N under the schedule π is

$$L\_{\max}(\boldsymbol{\pi}) = \max\_{j \in N} \left\{ \mathbf{C}\_{j}(\boldsymbol{\pi}) - d\_{j} \right\}. \tag{1}$$


We denote the completion time of all jobs of the set *N* in schedule *π* as

$$C\_{\max}(\pi) = \max\_{j \in N} C\_j(\pi).$$

The problem is to find the optimal schedule *π* <sup>∗</sup> with the smallest value of the maximum lateness:

$$L\_{\max}^\* = \min\_{\pi \in \Pi(N)} L\_{\max}(\pi) = L\_{\max}(\pi^\*). \tag{2}$$

For any arbitrary set of jobs *M* ⊆ *N* we also denote:

$$r\_M = \min\_{j \in M} r\_j, \quad d\_M = \max\_{j \in M} d\_j, \quad p\_M = \sum\_{j \in M} p\_j. \tag{3}$$

In the standard notation of Graham et al. [2], this problem is denoted as 1∣*rj*∣*L*max. Intensive work on the solution of this problem has continued since the early 50s of the 20th century. Lenstra et al. [1] showed that the general case of the problem 1∣*rj*∣*L*max is *NP*-hard in the strong sense.

Potts [3] introduced an iterative version of the extended Jackson rule (IJ) [4] and proved that Lmax(πIJ)/L*max ≤ 3/2. Hall and Shmoys [5] modified the iterative version and created an algorithm (MIJ) that guarantees the bound Lmax(πMIJ)/L*max ≤ 4/3. They also presented two approximation schemes that guarantee finding an ε-approximate solution in O(n log n + n(1/ε)<sup>O(1/ε²)</sup>) and O((n/ε)<sup>O(1/ε)</sup>) operations. Mastrolilli [6] introduced an improved approximation scheme with complexity of O(n + (1/ε)<sup>O(1/ε)</sup>) operations.

A number of polynomially solvable cases of the problem were found, starting with Jackson's early result [4] for the case rj = 0, j ∈ N, when the solution is a schedule in which the jobs are ordered by nondecreasing due dates (the *EDD* rule). Such a schedule is also optimal for the case when the release times and due dates are associated (ri ≤ rj ⇔ di ≤ dj, ∀i, j ∈ N).

The schedule is constructed according to the extended Jackson rule (the Schrage schedule): for the next position in the schedule, we select, among the released and not yet scheduled jobs, a job with the minimum due date; if there are no such jobs, we select the job with the minimum release time among the unscheduled jobs.
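The rule just described can be sketched as follows (an illustrative Python fragment; the instance data `r`, `p`, `d` are made up, and the function returns only the resulting job order):

```python
# Sketch of the extended Jackson (Schrage) rule described above.
def schrage_order(r, p, d):
    n = len(r)
    unscheduled = set(range(n))
    order, t = [], min(r)                 # current machine-ready time
    while unscheduled:
        released = [j for j in unscheduled if r[j] <= t]
        if released:
            # among the released jobs, pick one with the minimum due date
            j = min(released, key=lambda j: d[j])
        else:
            # otherwise take the job with the minimum release time and wait for it
            j = min(unscheduled, key=lambda j: r[j])
            t = r[j]
        order.append(j)
        t += p[j]
        unscheduled.remove(j)
    return order

r = [0, 2, 3]   # release times (made up)
p = [3, 2, 1]   # processing times
d = [6, 5, 7]   # due dates
print(schrage_order(r, p, d))   # [0, 1, 2]
```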


If the processing times of all jobs are equal, the complexity can be reduced to O(n log n) [7]. Vakhania generalized this result [8], considering the case when the processing times of some jobs are restricted to either p or 2p. An algorithm with complexity of O(n² log n log p) was suggested.

A case when the job processing times are mutually divisible is considered in [9]. The author suggests a polynomial-time algorithm with a complexity of O(n³ log n log² pmax) operations for solving this case.

Special cases 1∣prec; rj∣Cmax, 1∣prec; pj = p; rj∣Lmax, and 1∣prec; rj; pmtn∣Lmax with precedence constraints for jobs have been addressed in the works of Lawler [10], Simons [11], and Baker et al. [12]. Hoogeveen [13] proposed a polynomial algorithm for the special case when the job parameters satisfy the constraints dj − pj − A ≤ rj ≤ dj − A, ∀j ∈ N, for some constant A. A pseudo-polynomial algorithm for the NP-hard case when release times and due dates are in the reversed order (d1 ≤ … ≤ dn and r1 ≥ … ≥ rn) was developed in [14].

We denote by L^A_j(π) and C^A_j(π) the lateness and the completion time of job j ∈ N in schedule π for instance A with job parameters {r^A_j, p^A_j, d^A_j}, j ∈ N. Respectively, L^A_max(π) = max_{j ∈ N} L^A_j(π) is the maximum lateness of the schedule π for instance A.

This chapter deals with a problem with two objective functions Lmax and Cmax, which in the general case can be referred to as 1∣rj∣Lmax, Cmax. This problem was considered in [15], where the authors consider some dominance properties and conditions under which the Pareto-optimal set can be formed in polynomial time.

**Definition 1.1** For any instance A of the problem, each permutation τ of the jobs of the set N uniquely defines the *early schedule* π^A_τ. In the early schedule, each job j ∈ N starts immediately after the end of the previous job in the corresponding permutation. If the completion time of the previous job is less than the release time of the current job, then the start of the current job is equal to its release time. That is, if τ = (j1, j2, …, jn), then π^A_τ = (s_{j1}, s_{j2}, …, s_{jn}), where

$$s_{j_1} = r_{j_1}^A, \qquad s_{j_k} = \max\left\{ s_{j_{k-1}} + p_{j_{k-1}}^A,\ r_{j_k}^A \right\}, \quad k = 2, \ldots, n. \tag{4}$$
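A direct implementation of formula (4), together with the two criteria, might look as follows (an illustrative sketch; the permutation `tau` and the instance data `r`, `p`, `d` are hypothetical):

```python
def early_schedule(tau, r, p):
    """Start times of the early schedule defined by permutation tau, formula (4)."""
    s = {tau[0]: r[tau[0]]}
    for k in range(1, len(tau)):
        prev, cur = tau[k - 1], tau[k]
        s[cur] = max(s[prev] + p[prev], r[cur])
    return s

def criteria(s, p, d):
    """Maximum lateness (1) and maximum completion time of a schedule."""
    completion = {j: s[j] + p[j] for j in s}
    return max(completion.values()), max(completion[j] - d[j] for j in s)

r, p, d = [0, 2, 3], [3, 2, 1], [6, 5, 7]     # made-up instance
s = early_schedule([0, 1, 2], r, p)
print(s)                    # {0: 0, 1: 3, 2: 5}
print(criteria(s, p, d))    # (6, 0): C_max = 6, L_max = 0
```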

Early schedules play an important role in our construction, since it is sufficient to check all early schedules to find the optimal schedule of any problem instance.

By τ^A we denote the optimal permutation and by π^A the optimal schedule for instance A. Only early optimal schedules are considered, that is, π^A = π^A_{τ^A}.

We denote by Πð Þ *N* the set of all permutations of jobs of the set *N*, and by Π*<sup>A</sup>* the set of feasible schedules for instance *A*.

#### **3. Problem 1∣di ≤ dj, di − ri − pi ≥ dj − rj − pj∣Lmax, Cmax**

This section deals with the problem of constructing a Pareto-optimal set by the criteria Cmax and Lmax, that is, problem 1∣rj∣Lmax, Cmax. We suggest an algorithm for constructing a set of schedules Φ(N, t) = (π′1, π′2, …, π′m) for which

$$\mathcal{C}\_{\max} \left( \pi\_1' \right) < \mathcal{C}\_{\max} \left( \pi\_2' \right) < \dots < \mathcal{C}\_{\max} \left( \pi\_m' \right), \tag{5}$$

$$L\_{\max} \left( \pi\_1' \right) > L\_{\max} \left( \pi\_2' \right) > \dots > L\_{\max} \left( \pi\_m' \right). \tag{6}$$


There is no schedule π such that Cmax(π) ≤ Cmax(π′i) and Lmax(π) ≤ Lmax(π′i) with at least one of the inequalities strict, for any i = 1, …, m. It is shown that m ≤ n.
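In other words, Φ(N, t) is a Pareto set with respect to the two criteria. As a sketch, non-dominated (Cmax, Lmax) pairs of candidate schedules can be filtered as follows (illustrative code with made-up values; it only restates conditions (5) and (6) and is not the polynomial algorithm of this section):

```python
def pareto_front(points):
    """points: list of (c_max, l_max) pairs; smaller is better in both criteria."""
    front = []
    for i, (c, l) in enumerate(points):
        dominated = any(
            (c2 <= c and l2 <= l) and (c2 < c or l2 < l)
            for j, (c2, l2) in enumerate(points) if j != i
        )
        if not dominated:
            front.append((c, l))
    # sorting by C_max yields increasing C_max and decreasing L_max, as in (5) and (6)
    return sorted(set(front))

print(pareto_front([(10, 5), (12, 3), (11, 6), (12, 5)]))  # [(10, 5), (12, 3)]
```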

#### **3.1 Problem properties**

We denote the precedence of the jobs i and j in schedule π as (i → j)_π. We also introduce

$$r\_j(t) = \max\left\{r\_j, t\right\};\tag{7}$$


$$r(N, t) = \min\_{j \in N} \{r\_j(t)\}. \tag{8}$$

In cases when it is obvious which set of jobs is meant, we write r(t) instead of r(N, t).

We assume that the job parameters satisfy the following constraints:

$$d\_1 \le \dots \le d\_n, \quad d\_1 - r\_1 - p\_1 \ge \dots \ge d\_n - r\_n - p\_n. \tag{9}$$

For example, these constraints correspond to the case when dj = rj + pj + z, j = 1, …, n, where z is a constant, that is, when all jobs have the same amount of time for processing before their due dates. A problem with similar constraints but with a single objective function (Lmax) is considered in [16].
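As a quick illustrative check (with made-up numbers), the family dj = rj + pj + z indeed satisfies both chains in (9) once the jobs are numbered by nondecreasing due dates:

```python
# Illustrative check that d_j = r_j + p_j + z satisfies constraints (9); data are made up.
z = 4
r = [0, 2, 3, 7]
p = [3, 2, 5, 1]
order = sorted(range(4), key=lambda j: r[j] + p[j])    # renumber jobs by d_j = r_j + p_j + z
rr = [r[j] for j in order]
pp = [p[j] for j in order]
d = [rr[k] + pp[k] + z for k in range(4)]

nondecreasing_d = all(d[k] <= d[k + 1] for k in range(3))
nonincreasing_slack = all(d[k] - rr[k] - pp[k] >= d[k + 1] - rr[k + 1] - pp[k + 1]
                          for k in range(3))
print(nondecreasing_d and nonincreasing_slack)         # True: both chains in (9) hold
```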

We assume that ∣N∣ > 1 and t is the time when the machine is ready. From the set N, we find two jobs f = f(N, t) and s = s(N, t) in the following way:

$$f(N, t) = \arg\min\_{j \in N} \left\{ d\_j | r\_j(t) = r(N, t) \right\},\tag{10}$$

$$s(N, t) = \arg\min_{j \in N \setminus \{f\}} \left\{ d_j \mid r_j(t) = r(N \setminus f, t) \right\}, \tag{11}$$

where f = f(N, t). If N = {i}, then we set f(N, t) = i, s(N, t) = 0, ∀t. We also define d0 = +∞, f(∅, t) = 0, s(∅, t) = 0, ∀t. For the jobs f and s the following properties are true.
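Before turning to these properties, the selection rules (7), (8), (10), and (11) can be sketched as follows (an illustrative Python fragment with hypothetical data; the degenerate cases N = {i} and N = ∅ are handled only schematically):

```python
def r_t(j, t, r):
    return max(r[j], t)                      # formula (7)

def f_and_s(N, t, r, d):
    if not N:
        return None, None                    # schematic stand-in for f(∅, t) = s(∅, t) = 0
    r_min = min(r_t(j, t, r) for j in N)     # formula (8): r(N, t)
    f = min((j for j in N if r_t(j, t, r) == r_min), key=lambda j: d[j])      # formula (10)
    rest = N - {f}
    if not rest:
        return f, None                       # schematic stand-in for s(N, t) = 0 when N = {i}
    r_min2 = min(r_t(j, t, r) for j in rest)                                  # r(N \ f, t)
    s = min((j for j in rest if r_t(j, t, r) == r_min2), key=lambda j: d[j])  # formula (11)
    return f, s

r = [0, 2, 3]       # release times (made up)
d = [6, 5, 7]       # due dates (made up)
print(f_and_s({0, 1, 2}, t=2, r=r, d=d))     # (1, 0)
```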

Lemma 1.1 If the jobs of the set N satisfy (4), then for any schedule π ∈ Π(N), for all j ∈ N∖{f} for which (j → f)_π,

$$L\_j(\pi) < L\_f(\pi) \tag{12}$$

is true, and for all j ∈ N∖{f, s} satisfying the condition (j → s)_π,

$$L\_j(\pi) < L\_s(\pi),\tag{13}$$

where f = f(N, t) and s = s(N, t), is also true.

**Proof:** For each job j such that (j → f)_π, the completion time Cj(π) < Cf(π). If dj ≥ df, then obviously

$$L\_j(\pi) = \mathcal{C}\_j(\pi) - d\_j < \mathcal{C}\_f(\pi) - d\_f = L\_f(\pi),\tag{14}$$

therefore (12) is valid.

Now let (j → f)_π and dj < df for a job j ∈ N; we claim that rj > rf. Indeed, if rj ≤ rf, then rj(t) ≤ rf(t) and rf(t) = r(t), as follows from (7) and (10). Then rj(t) = rf(t) = r(t) and dj < df,

but this contradicts the definition (10) of job f. Therefore, rj > rf. It is obvious that Cj(π) − pj < Cf(π) − pf and, since rj > rf,

$$C\_j(\pi) - p\_j - r\_j < C\_f(\pi) - p\_f - r\_f,\tag{15}$$

$$\mathbf{C}\_{j}(\boldsymbol{\pi}) - \mathbf{C}\_{f}(\boldsymbol{\pi}) < p\_{j} + r\_{j} - p\_{f} - r\_{f}.\tag{16}$$

Since dj < df, (9) gives dj − rj − pj ≥ df − rf − pf, or dj − df ≥ rj + pj − rf − pf, so Cj(π) − Cf(π) < pj + rj − pf − rf ≤ dj − df. Then Lj(π, t) < Lf(π, t) for each job j with (j → f)_π.

The inequality (13) can be proved in a similar way.

For each job j satisfying the condition (j → s)_π, we have Cj(π) < Cs(π). If dj ≥ ds, then Lj(π, t) = Cj(π) − dj < Cs(π) − ds = Ls(π, t); therefore (13) is true.

Now let, for a job j ∈ N∖{f} with (j → s)_π, dj < ds; then rj > rs. Indeed, assume that rj ≤ rs; then rj(t) ≤ rs(t) (it follows from (7)). In addition, rs(t) ≥ r(t) for the job s according to definitions (8) and (11). If rs(t) = r(t), then for the jobs j and s we can write rj(t) = rs(t) = r(t) and dj < ds, which contradicts the definition (11) of job s(N, t). If rs(t) > r(t), that is, rs > r(t), then there is no job i ∈ N∖{f, s} for which rs > ri > r(t). Therefore, for the jobs j and s we get rj(t) = rs(t) and dj < ds, which contradicts the definition (11) of job s(N, t). Therefore, rj > rs.

Since Cj(π) ≤ Cs(π) − ps and pj > 0, we have Cj(π) − pj < Cs(π) − ps, and since rj > rs, it follows that Cj(π) − pj − rj < Cs(π) − ps − rs and

$$C_j(\pi) - C_s(\pi) < p_j + r_j - p_s - r_s. \tag{17}$$

Since dj < ds, from (9) we have dj − rj − pj ≥ ds − rs − ps, or

$$C_j(\pi) - C_s(\pi) < p_j + r_j - p_s - r_s \le d_j - d_s. \tag{18}$$

Hence, Lj(π) < Ls(π) for each job j ∈ N∖{f} with (j → s)_π.

Theorem 1.1 If conditions (9) are true for the jobs in a subset N′ ⊆ N, then for any time t′ ≥ t and any early schedule π ∈ Π(N′) there exists π′ ∈ Π(N′) such that

$$L\_{\max}(\pi', t') \le L\_{\max}(\pi, t'), \quad \text{and} \quad C\_{\max}(\pi', t') \le C\_{\max}(\pi, t') \tag{19}$$

and one of the jobs f = f(N′, t′) or s = s(N′, t′) is at the first position in the schedule π′. If df ≤ ds, then job f is at the first position in the schedule π′.

**Proof:** Let π = (π1, f, π2, s, π3), where π1, π2, and π3 are partial schedules of π. Then we construct a schedule π′ = (f, π1, π2, s, π3). From the definitions (7), (8), and (10) we get rf(t′) ≤ rj(t′), j ∈ N′, hence Cmax((f, π1), t′) ≤ Cmax((π1, f), t′) and

$$\mathbf{C}\_{\max}(\pi', t') \le \mathbf{C}\_{\max}(\pi, t'), \quad \text{and} \tag{20}$$

$$L\_j(\pi', t') \le L\_j(\pi, t'), \quad \forall j \in \{ (\pi\_2, s, \pi\_3) \}. \tag{21}$$

From Lemma 1.1 we have

$$L\_j(\pi', t') < L\_s(\pi', t'), \quad \forall j \in \{\pi\_1\} \cup \{\pi\_2\}. \tag{22}$$

Obviously, the following inequality is true for job *f*


$$L\_f(\pi', t') \le L\_f(\pi, t'). \tag{23}$$


From (20)–(23) we get $C_{\max}(\pi', t') \le C_{\max}(\pi, t')$ and $L_{\max}(\pi', t') \le L_{\max}(\pi, t')$. Let $\pi = (\pi_1, s, \pi_2, f, \pi_3)$, that is, job $s$ is before job $f$. Construct a schedule $\pi' = (s, \pi_1, \pi_2, f, \pi_3)$. The further proof may be repeated as for job $f$. The first part of the theorem is proved.

Let us assume $d_f \le d_s$ and the schedule $\pi = (\pi_1, s, \pi_2, f, \pi_3)$. Then we construct a schedule $\pi' = (f, \pi_{11}, \pi_{12}, \pi_3)$, where $\pi_{11}$, $\pi_{12}$ are schedules of the jobs of the sets $\{j \in N' : j \in \{\pi_1, s, \pi_2\},\ d_j < d_f\}$ and $\{j \in N' : j \in \{\pi_1, s, \pi_2\},\ d_j \ge d_f\}$. The jobs in $\pi_{11}$ and $\pi_{12}$ are ordered according to nondecreasing release times $r_j$. From $d_s \ge d_f$ we can conclude that $s \in \{\pi_{12}\}$.

For each job $j \in \{\pi_{11}\}$ we have $d_j < d_f$. From (9) we get $d_j - r_j - p_j \ge d_f - r_f - p_f$, hence $r_j + p_j < r_f + p_f$ for all $j \in \{\pi_{11}\}$, and $C_{\max}((f, \pi_{11}), t') = r_f(t') + p_f + \sum_{j \in \{\pi_{11}\}} p_j$. Since the jobs in the schedule $\{\pi_{12}\}$ are sorted by nondecreasing release times, $C_{\max}((f, \pi_{11}, \pi_{12}), t') \le C_{\max}((\pi_1, s, \pi_2, f), t')$. As a result

$$\mathbf{C}\_{\max}(\pi', t') \le \mathbf{C}\_{\max}(\pi, t'), \quad \text{and} \tag{24}$$

$$L\_j(\pi', t') \le L\_j(\pi, t'), \quad \forall j \in \{\pi\_3\}. \tag{25}$$

Each job $j \in \{\pi_{12}\}$ satisfies $d_j \ge d_f$ and $C_j(\pi', t') \le C_f(\pi, t')$, which means

$$L\_j(\pi', t') \le L\_f(\pi, t'), \quad \forall j \in \{\pi\_{12}\}.\tag{26}$$

Since $s \in \{\pi_{12}\}$, then

$$L\_s(\pi', t') \le L\_f(\pi, t'). \tag{27}$$

From Lemma 1.1,

$$L\_j(\pi', t') \le L\_s(\pi', t'), \quad \forall j \in \{\pi\_{11}\}.\tag{28}$$

Moreover, it is obvious that

$$L\_f(\pi', t') \le L\_f(\pi, t'). \tag{29}$$

From inequalities (24)–(29) it follows that $C_{\max}(\pi', t') \le C_{\max}(\pi, t')$ and $L_{\max}(\pi', t') \le L_{\max}(\pi, t')$; the theorem is proved.

We call a schedule $\pi' \in \Pi(N)$ **effective** if there is no schedule $\pi \in \Pi(N)$ such that $L_{\max}(\pi) \le L_{\max}(\pi')$ and $C_{\max}(\pi) \le C_{\max}(\pi')$, with at least one of the inequalities strict.
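
To make the definition concrete, here is a minimal Python sketch that filters a collection of schedules down to the effective ones. It assumes the usual single-machine semantics (each job starts at the maximum of its release time and the completion time of its predecessor, consistent with the update $t \mathrel{:=} r_f(t) + p_f$ used below); the helper names are ours, not the chapter's.

```python
from typing import Dict, List, Sequence, Tuple

Job = int

def evaluate(pi: Sequence[Job], t: float,
             r: Dict[Job, float], p: Dict[Job, float],
             d: Dict[Job, float]) -> Tuple[float, float]:
    """Return (C_max, L_max) of the schedule pi started at time t.

    Assumes each job starts at max(release time, completion of predecessor).
    """
    completion, l_max = t, float("-inf")
    for j in pi:
        completion = max(r[j], completion) + p[j]
        l_max = max(l_max, completion - d[j])
    return completion, l_max

def effective_schedules(schedules: List[Sequence[Job]], t: float,
                        r: Dict[Job, float], p: Dict[Job, float],
                        d: Dict[Job, float]) -> List[Sequence[Job]]:
    """Keep only the effective (Pareto-optimal) schedules w.r.t. (L_max, C_max)."""
    values = [evaluate(pi, t, r, p, d) for pi in schedules]
    kept = []
    for i, (c_i, l_i) in enumerate(values):
        dominated = any(
            l_k <= l_i and c_k <= c_i and (l_k < l_i or c_k < c_i)
            for k, (c_k, l_k) in enumerate(values) if k != i
        )
        if not dominated:
            kept.append(schedules[i])
    return kept
```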

Thus, when constraints (9) are satisfied for the jobs of the set $N$, there is an effective schedule $\pi'$ in which either the job $f = f(N, t)$ or the job $s = s(N, t)$ is scheduled first. Moreover, if $d_f \le d_s$, then there is an effective schedule $\pi'$ in which job $f$ comes first.

We define the set of schedules $\Omega(N, t)$ as a subset of $\Pi(N)$, the set of all $n!$ schedules. A schedule $\pi = (i_1, i_2, \dots, i_n)$ belongs to $\Omega(N, t)$ if we choose job $i_k$, $k = 1, 2, \dots, n$, as $f^k = f(N_{k-1}, C_{i_{k-1}})$ or $s^k = s(N_{k-1}, C_{i_{k-1}})$, where $N_{k-1} = N \setminus \{i_1, i_2, \dots, i_{k-1}\}$, $C_{i_{k-1}} = C_{i_{k-1}}(\pi)$ and $N_0 = N$, $C_{i_0} = t$. For $d_{f^k} \le d_{s^k}$ it is true that $i_k = f^k$; if $d_{f^k} > d_{s^k}$, then $i_k = f^k$ or $i_k = s^k$. It is obvious that the set of schedules $\Omega(N, t)$ contains at most $2^n$ schedules.
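
The branching rule defining $\Omega(N, t)$ can be sketched as a small recursive enumeration. The selectors $f(N, t)$ and $s(N, t)$ are passed in as functions because their exact definitions are given earlier in the chapter and are not repeated in this excerpt; the start-time rule $\max(r_j, t) + p_j$ is an assumption consistent with the update $t \mathrel{:=} r_f(t) + p_f$ used in Algorithm 1.1 below.

```python
from typing import Callable, Dict, FrozenSet, List, Tuple

Job = int
Selector = Callable[[FrozenSet[Job], float], Job]   # plays the role of f(N, t) or s(N, t)

def build_omega_set(N: FrozenSet[Job], t: float,
                    f: Selector, s: Selector,
                    r: Dict[Job, float], p: Dict[Job, float],
                    d: Dict[Job, float]) -> List[Tuple[Job, ...]]:
    """Enumerate the schedules of Omega(N, t) by branching on f and s.

    When d_f <= d_s the next job is forced to be f(N, t); otherwise the
    construction branches between f(N, t) and s(N, t), so |Omega| <= 2^n.
    """
    if not N:
        return [()]
    jf, js = f(N, t), s(N, t)
    candidates = [jf] if d[jf] <= d[js] else [jf, js]
    result: List[Tuple[Job, ...]] = []
    for j in candidates:
        tails = build_omega_set(N - {j}, max(r[j], t) + p[j], f, s, r, p, d)
        result.extend((j,) + tail for tail in tails)
    return result
```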


Example 1.1

$$\begin{cases} n = 2m, t \le r\_1 < r\_2 < \dots < r\_n, \\ r\_{2i-1} < r\_{2i} + p\_{2i} < r\_{2i-1} + p\_{2i-1}, 1 \le i \le m, \\ r\_{2i-1} + p\_{2i-1} + p\_{2i} < r\_{2i+1} < r\_{2i} + p\_{2i} + p\_{2i-1} < r\_{2i+2}, 1 \le i \le m-1, \\ r\_{2i-1} + p\_{2i-1} + p\_{2i} - d\_{2i-1} > y, 1 \le i \le m-1, \\ r\_{2i} + p\_{2i} + p\_{2i-1} - d\_{2i} \le y. \end{cases}$$

That is, $p_{2i} > y \ge p_{2i-1}$. The set $\Omega(N, t)$ contains $2^m$ schedules. The value of $y$ is used below in the text. The optimal solution of the problem $1 \mid r_j,\ d_j = r_j + p_j,\ L_{\max} \le y \mid C_{\max}$ is $\pi^* = (2, 1, 4, 3, \dots, n, n-1)$.

Theorem 1.2 If (9) is true for the jobs of the subset $N' \subseteq N$, $|N'| = n'$, then at any time $t' \ge t$ and for any schedule $\pi \in \Pi(N')$ there exists a schedule $\pi' \in \Omega(N', t')$ such that

$$L\_{\max}(\pi', t') \le L\_{\max}(\pi, t') \quad \text{and} \quad C\_{\max}(\pi', t') \le C\_{\max}(\pi, t'). \tag{30}$$

**Proof:** Let $\pi = (j_1, j_2, \dots, j_{n'})$ be an arbitrary schedule. We denote the first $l$ jobs of the schedule $\pi$ by $\pi_l$, $l = 0, 1, 2, \dots, n'$, where $\pi_0$ is an empty schedule, and $\pi^l = (j_{l+1}, \dots, j_{n'})$; then $\pi = (\pi_l, \pi^l)$. We introduce $N_l = N' \setminus \{\pi_l\}$ and $C_l = C_{\max}(\pi_l, t')$. Suppose that for some $l$, $0 \le l < n'$, $\pi_l$ is the largest initial partial schedule of some schedule from $\Omega(N', t')$. If $j_1 \ne f(N', t')$ and $j_1 \ne s(N', t')$, then $\pi_l = \pi_0$, $l = 0$, that is, the largest such partial schedule is empty. Let $f = f(N_l, C_l)$ and $s = s(N_l, C_l)$. If $d_f > d_s$, then $j_{l+1} \ne f$ and $j_{l+1} \ne s$; moreover, when $d_f \le d_s$, then $j_{l+1} \ne f$, since $\pi_{l+1}$ is not an initial partial schedule of any schedule from $\Omega(N', t')$.

According to Theorem 1.1, for the jobs of the set $\{\pi^l\}$, $\pi^l \in \Pi(N_l)$, there is a schedule $\pi'_l$ starting at time $C_l$ for which $L_{\max}(\pi'_l, C_l) \le L_{\max}(\pi^l, C_l)$, $C_{\max}(\pi'_l, C_l) \le C_{\max}(\pi^l, C_l)$, and $[\pi'_l]_1 = f$ or $s$; moreover, with $d_f \le d_s$, $[\pi'_l]_1 = f$ is true, where $[\sigma]_k$ is the job in the $k$-th place of schedule $\sigma$. Hence, $L_{\max}((\pi_l, \pi'_l), t') \le L_{\max}((\pi_l, \pi^l), t')$ and $C_{\max}((\pi_l, \pi'_l), t') \le C_{\max}((\pi_l, \pi^l), t')$.

Let us denote $\pi' = (\pi_l, \pi'_l)$. A feature of the schedule $\pi'$ is that its first $l+1$ jobs are the same as the first $l+1$ jobs of some schedule from the set $\Omega(N', t')$, and $L_{\max}(\pi', t') \le L_{\max}(\pi, t')$, $C_{\max}(\pi', t') \le C_{\max}(\pi, t')$.

After no more than $n'$ sequential conversions (since the schedule length $n' \le n$) of the original, arbitrarily selected schedule $\pi$, we come to a schedule $\pi' \in \Omega(N', t')$ for which $L_{\max}(\pi', t') \le L_{\max}(\pi, t')$ and $C_{\max}(\pi', t') \le C_{\max}(\pi, t')$. The theorem is proved.

We form the following partial schedule $\omega(N, t) = (i_1, i_2, \dots, i_l)$. For each job $i_k$, $k = 1, 2, \dots, l$, we have $i_k = f^k$ and $d_{f^k} \le d_{s^k}$, where $f^k = f(N_{k-1}, C_{k-1})$ and $s^k = s(N_{k-1}, C_{k-1})$. For $f = f(N_l, C_l)$ and $s = s(N_l, C_l)$ the inequality $d_f > d_s$ is true. In the case when $d_f > d_s$ already for $f = f(N, t)$ and $s = s(N, t)$, we set $\omega(N, t) = \varnothing$. So $\omega(N, t)$ is the "maximum" partial schedule during whose construction the job (namely $f$) for the next place of the schedule can be uniquely selected. We can construct the schedule $\omega(N, t)$ for the set of jobs $N$ starting at time $t$ using Algorithm 1.1.

**Algorithm 1.1** for constructing the schedule $\omega(N, t)$.

```
1: Initial step. Let ω ≔ ∅.
2: Main step. Find the jobs f ≔ f(N, t) and s ≔ s(N, t);
3: if d_f ≤ d_s then
4:     ω ≔ (ω, f)
5: else
6:     the algorithm stops;
7: end if
8: Let N ≔ N \ {f}, t ≔ r_f(t) + p_f and go to the next main step.
```
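
A direct, if naive, Python transcription of Algorithm 1.1 might look as follows. As in the earlier sketch, f and s are assumed to implement the selector definitions from the preceding sections, and $r_f(t)$ is taken to mean $\max(r_f, t)$; the $O(n \log n)$ bound of Lemma 1.2 additionally requires keeping the jobs in a sorted structure instead of rescanning the set at every step.

```python
from typing import Callable, Dict, FrozenSet, List, Tuple

Job = int
Selector = Callable[[FrozenSet[Job], float], Job]

def build_omega_schedule(N: FrozenSet[Job], t: float,
                         f: Selector, s: Selector,
                         r: Dict[Job, float], p: Dict[Job, float],
                         d: Dict[Job, float]) -> Tuple[Job, ...]:
    """Construct the partial schedule omega(N, t) of Algorithm 1.1."""
    omega: List[Job] = []
    remaining = set(N)
    while remaining:
        current = frozenset(remaining)
        jf, js = f(current, t), s(current, t)
        if d[jf] > d[js]:            # steps 5-6: the next job is no longer forced
            break
        omega.append(jf)             # step 4: omega := (omega, f)
        remaining.remove(jf)         # step 8: N := N \ {f}
        t = max(r[jf], t) + p[jf]    # step 8: t := r_f(t) + p_f (assumed = max(r_f, t) + p_f)
    return tuple(omega)
```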

Lemma 1.2 The complexity of Algorithm 1.1 for finding the schedule $\omega(N, t)$ is at most $O(n \log n)$ operations for any $N$ and any $t$.

**Proof:** At each iteration of Algorithm 1.1 we find two jobs: $f = f(N, t)$ and $s = s(N, t)$. If the jobs are kept ordered by release times $r_j$ (so that, accordingly, $r(t)$ is obtained in $O(1)$ operations), then finding the two jobs $f$ and $s$ requires $O(\log n)$ operations. The total number of iterations is not more than $n$. Thus, constructing a schedule $\omega(N, t)$ requires $O(n \log n)$ operations.

The main step of Algorithm 1.1 is finding the jobs $f$ and $s$, and it requires at least $O(\log n)$ operations. Obviously, the number of iterations of the algorithm is $O(n)$; therefore, the complexity $O(n \log n)$ of Algorithm 1.1 is the minimum possible for constructing the schedule $\omega(N, t)$.

Lemma 1.3 If conditions (9) are true for the jobs of the set $N$, then any schedule $\pi \in \Omega(N, t)$ starts with the schedule $\omega(N, t)$.

**Proof:** If $\omega(N, t) = \varnothing$, that is, $d_f > d_s$, where $f = f(N, t)$, $s = s(N, t)$, the statement of the lemma is true, since any schedule starts with the empty one.

Let $\omega(N, t) = (i_1, i_2, \dots, i_l)$, $l > 0$, so that for each $i_k$, $k = 1, 2, \dots, l$, we have $i_k = f^k$ and $d_{f^k} \le d_{s^k}$, where $f^k = f(N_{k-1}, C_{k-1})$ and $s^k = s(N_{k-1}, C_{k-1})$. For $f = f(N_l, C_l)$ and $s = s(N_l, C_l)$ it is true that $d_f > d_s$. As seen from the definition of the set of schedules $\Omega(N, t)$, all schedules in this subset start with the partial schedule $\omega(N, t)$.

Let us use the following notation: $\omega_1(N, t) = (f, \omega(N', t'))$ and $\omega_2(N, t) = (s, \omega(N'', t''))$, where $f = f(N, t)$, $s = s(N, t)$, $N' = N \setminus \{f\}$, $N'' = N \setminus \{s\}$, $t' = r_f(t) + p_f$, $t'' = r_s(t) + p_s$. Obviously, the algorithm for finding $\omega_1$ (as well as $\omega_2$) requires $O(n \log n)$ operations, as much as the algorithm for constructing $\omega(N, t)$.
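
In terms of the Algorithm 1.1 sketch above, the two continuations $\omega_1(N, t)$ and $\omega_2(N, t)$ could be expressed roughly as follows; the same assumptions about the selectors f, s and the start-time rule apply.

```python
def omega1(N, t, f, s, r, p, d):
    """omega1(N, t) = (f, omega(N \\ {f}, r_f(t) + p_f)), cf. the notation above."""
    jf = f(N, t)
    tail = build_omega_schedule(N - {jf}, max(r[jf], t) + p[jf], f, s, r, p, d)
    return (jf,) + tail

def omega2(N, t, f, s, r, p, d):
    """omega2(N, t) = (s, omega(N \\ {s}, r_s(t) + p_s)), cf. the notation above."""
    js = s(N, t)
    tail = build_omega_schedule(N - {js}, max(r[js], t) + p[js], f, s, r, p, d)
    return (js,) + tail
```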

Consequence 1.1 **from Lemma 1.3.** If the jobs of the set $N$ satisfy conditions (9), then each schedule $\pi \in \Omega(N, t)$ starts either with $\omega_1(N, t)$ or with $\omega_2(N, t)$.

Theorem 1.3 If the jobs of the set $N$ satisfy conditions (9), then for any schedule $\pi \in \Omega(N, t)$ it is true that $(i \to j)_\pi$ for any $i \in \{\omega_1(N, t)\}$ and $j \in N \setminus \{\omega_1(N, t)\}$.

**Proof:** In the case $\omega_1(N, t) = N$ the statement of the theorem is obviously true. Let $\omega_1(N, t) \ne N$. Further in the proof we use the notation $\omega_1 = \omega_1(N, t)$.

If $f = f(N, t)$ and $s = s(N, t)$ are such that $d_f \le d_s$, then all schedules from the set $\Omega(N, t)$ begin with the partial schedule $\omega(N, t) = \omega_1$, and therefore the statement of the theorem is also true.

Consider the case $d_f > d_s$. All schedules from the set $\Omega(N, t)$ starting with job $f$ have the partial schedule $\omega(N, t) = \omega_1$.

Let us choose an arbitrary schedule $\pi \in \Omega(N, t)$ in which job $s$ comes first, $\pi_1 = s$, and let $|\omega_1| = l$, $l < n$, that is, $\omega_1$ contains $l$ jobs. Let $\pi_l = (j_1, j_2, \dots, j_l)$ be the partial schedule of $\pi$ containing its first $l$ jobs, where $j_1 = s$. We need to prove that $\{\pi_l\} = \{\omega_1\}$. Assume the contrary: there is a job $j \in \{\pi_l\}$ such that $j \notin \{\omega_1\}$.

For the case $(j \to f)_\pi$ we need to check two subcases. If $d_j < d_f$, then from (9) we have $d_j - r_j - p_j \ge d_f - r_f - p_f$, therefore $r_j + p_j < r_f + p_f$. Then job $j$ is included in the schedule $\omega_1$ according to the definition of $\omega(N, t)$ and $\omega_1$, which contradicts our assumption $j \notin \{\omega_1\}$. If $d_j \ge d_f$, then from the fact that $\pi \in \Omega(N, t)$ it follows that $(f \to j)_\pi$, but this contradicts $(j \to f)_\pi$. Therefore, $j \in \{\omega_1\}$.

The other case is $(f \to j)_\pi$. Then for each job $i \in \{\omega_1\}$ for which $i \notin \{\pi_l\}$, the conditions $r_i < r_i + p_i \le C_{\max}(\omega_1) < r_{s_{l+1}} \le r_j$ are true, because $j \notin \{\omega_1\}$, where $s_{l+1} = s(N \setminus \{\omega_1\}, C_{\max}(\omega_1))$. The jobs $s_{l+1}$ and $j$ were not ordered in the schedule $\omega_1$, therefore $C_{\max}(\omega_1) < r_{s_{l+1}} \le r_j$. Besides, $d_i \le d_j$: indeed, if $d_i > d_j$, then $r_i + p_i \ge r_j + p_j$, but $r_i + p_i < r_j$ holds. Hence $(i \to j)_\pi$, since $\pi = (\pi_l, \pi^l) \in \Omega(N, t)$, but this contradicts our guess that $i \notin \{\pi_l\}$ and $j \in \{\pi_l\}$.

Therefore, our assumption is not true and $\{\omega_1\} = \{\pi_l\}$. The theorem is proved.

Therefore, the jobs of the set $\{\omega_1(N, t)\}$ precede the jobs of the set $N \setminus \{\omega_1(N, t)\}$ in any schedule from the set $\Omega(N, t)$, including the optimal schedule.

#### **3.2 Performance problem with constraint on maximum lateness**

The problem $1 \mid d_i \le d_j,\ d_i - r_i - p_i \ge d_j - r_j - p_j;\ L_{\max} \le y \mid C_{\max}$ consists of the following. We need to find, for any $y$, a schedule $\theta$ with $C_{\max}(\theta) = \min\{C_{\max}(\pi) : L_{\max}(\pi) \le y\}$. If $L_{\max}(\pi) > y$ for any $\pi \in \Pi(N)$, then $\theta = \varnothing$.

Lemma 1.4 The complexity of Algorithm 1.2 does not exceed $O(n^2 \log n)$ operations.

**Proof:** At each iteration of the main step of Algorithm 1.2 we find the schedule $\omega_1$ and, if necessary, $\omega_2$ in $O(n \log n)$ operations. Since $\omega_1$ and $\omega_2$ consist of at least one job, at each iteration of the algorithm we either add one or more jobs to the schedule $\theta$, or set $\theta = \varnothing$ and stop. Therefore, the total number of steps of the algorithm is at most $n$. Thus, Algorithm 1.2 requires $O(n^2 \log n)$ operations.

**Algorithm 1.2** for solving the problem $1 \mid d_i \le d_j,\ d_i - r_i - p_i \ge d_j - r_j - p_j;\ L_{\max} \le y \mid C_{\max}$.

```
 1: Initial step. Let θ ≔ ω(N, t), t′ ≔ t;
 2: Main step.
 3: if Lmax(θ, t′) > y then
 4: θ ≔ ∅ and the algorithm stops.
 5: end if
 6: Find N′ ≔ N \ {θ}, t′ ≔ Cmax(θ) and ω1(N′, t′), ω2(N′, t′).
 7: if N′ = ∅ then
 8: the algorithm stops.
 9: else
10: if Lmax(ω1, t′) ≤ y then
11: θ ≔ (θ, ω1) and go to next step;
12: end if
13: if Lmax(ω1, t′) > y and Lmax(ω2, t′) ≤ y then
14: θ ≔ (θ, ω2) and go to next step;
15: end if
16: if Lmax(ω1, t′) > y and Lmax(ω2, t′) > y then
17: θ ≔ ∅ and the algorithm stops.
18: end if
19: end if
```
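
The listing above can be transcribed into Python roughly as follows, reusing evaluate, build_omega_schedule, omega1 and omega2 from the earlier sketches (and hence the same assumptions about the selectors f and s). It is a sketch of the greedy extension idea, not a verified implementation.

```python
def algorithm_1_2(N, t, y, f, s, r, p, d):
    """Rough sketch of Algorithm 1.2 for 1 | ... ; L_max <= y | C_max.

    Returns a schedule theta with L_max <= y, built by greedy extension with
    omega1 / omega2, or None if the lateness bound cannot be met.
    """
    theta = build_omega_schedule(frozenset(N), t, f, s, r, p, d)
    if evaluate(theta, t, r, p, d)[1] > y:                 # steps 3-4
        return None
    remaining = frozenset(N) - set(theta)
    while remaining:                                       # steps 6-19, repeated
        c_theta = evaluate(theta, t, r, p, d)[0]
        w1 = omega1(remaining, c_theta, f, s, r, p, d)
        w2 = omega2(remaining, c_theta, f, s, r, p, d)
        if evaluate(w1, c_theta, r, p, d)[1] <= y:         # steps 10-11
            extension = w1
        elif evaluate(w2, c_theta, r, p, d)[1] <= y:       # steps 13-14
            extension = w2
        else:                                              # steps 16-17
            return None
        theta = theta + extension
        remaining = remaining - set(extension)
    return theta
```
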
The problem $1 \mid d_i \le d_j,\ d_i - r_i - p_i \ge d_j - r_j - p_j;\ L_{\max} \le y \mid C_{\max}$ cannot be solved in less than $O(n^2 \log n)$ operations, because of instances such as *Example 1.1*. The optimal schedule for this example is $\pi^* = (2, 1, 4, 3, \dots, n, n-1)$, and to find this schedule we need $O(n^2 \log n)$ operations.

We denote by $\theta(N, t, y)$ the schedule constructed by Algorithm 1.2 starting at time $t$ from the jobs of the set $N$ with the maximum lateness not more than $y$. If $N = \varnothing$, then $\theta(\varnothing, t, y) = \varnothing$ for any $t$ and $y$.

Theorem 1.4 Let the jobs of the set $N$ satisfy conditions (9). If Algorithm 1.2 constructs the schedule $\theta(N, t, y) \ne \varnothing$, then $C_{\max}(\theta) = \min\{C_{\max}(\pi) : L_{\max}(\pi) \le y,\ \pi \in \Pi(N)\}$. If, as a result of Algorithm 1.2, no schedule is generated, that is, $\theta(N, t, y) = \varnothing$, then $L_{\max}(\pi) > y$ for each $\pi \in \Pi(N)$.


**Proof:** If for a schedule $\pi \in \Pi(N)$ the condition $L_{\max}(\pi) \le y$ is true, then according to Theorem 1.2 there is a schedule $\pi' \in \Omega(N, t)$ such that $L_{\max}(\pi') \le L_{\max}(\pi) \le y$ and $C_{\max}(\pi') \le C_{\max}(\pi)$. Therefore, the required schedule $\theta$ is contained in the set $\Omega(N, t)$.

According to Lemma 1.3, all schedules of the set $\Omega(N, t)$ start with $\omega(N, t)$. Let us take $\theta_0 = \omega(N, t)$.

After $k$, $k \ge 0$, main steps of Algorithm 1.2 we have got the schedule $\theta_k$ and $N' = N \setminus \{\theta_k\}$, $t' = C_{\max}(\theta_k)$. Let us assume that there is a schedule $\theta$ that is optimal by the criterion of maximum completion time ($C_{\max}$) and starts with $\theta_k$. According to Theorem 1.2, there is an optimal extension of the schedule $\theta_k$ among the schedules from the set $\Omega(N', t')$.

Let $\theta_{k+1} = (\theta_k, \omega_1(N', t'))$, that is, $L_{\max}(\theta_{k+1}) \le y$. According to Theorem 1.3, for the schedule $\omega_1 = \omega_1(N', t')$ there are no artificial idle times of the machine, and all schedules from the set $\Omega(N', t')$ start with the jobs of the set $\{\omega_1(N', t')\}$. Therefore, $\omega_1(N', t')$ is the best, by the criterion $C_{\max}$, among all extensions of the partial schedule $\theta_k$ that are feasible by the maximum lateness ($L_{\max}$).

If $\theta_{k+1} = (\theta_k, \omega_2(N', t'))$, that is, $L_{\max}(\omega_1, t') > y$ and $L_{\max}(\omega_2, t') \le y$: all schedules of the set $\Omega(N', t')$ start with either the schedule $\omega_1(N', t')$ or $\omega_2(N', t')$. As $L_{\max}(\omega_1, t') > y$, the only suitable extension is $\omega_2(N', t')$.

Thus, at each main step of the algorithm, we choose the fastest continuation of the partial schedule $\theta_k$ among all those allowed by the maximum lateness. After no more than $n$ main steps of the algorithm, the required schedule is constructed.

Let us assume that after $k+1$ steps of the algorithm $L_{\max}(\omega_1, t') > y$ and $L_{\max}(\omega_2, t') > y$. If a schedule $\theta$ existed, that is, $\theta \ne \varnothing$, then $\theta$ would start with $\theta_k$. Then for any schedule $\pi \in \Pi(N')$ there would exist a schedule $\pi' \in \Omega(N', t')$ such that $L_{\max}(\pi, t') \ge L_{\max}(\pi', t') \ge L_{\max}(\omega_1, t') > y$ or $L_{\max}(\pi, t') \ge L_{\max}(\pi', t') \ge L_{\max}(\omega_2, t') > y$. Therefore $\theta = \varnothing$.

Repeating this argument as many times as the main step of Algorithm 1.2 is executed (no more than $n$ times), we arrive at the statement of the theorem.

#### **3.3 Algorithm for constructing a set of Pareto schedules by criteria *C*max and *L*max**

Let us develop an algorithm for constructing a set of Pareto schedules $\Phi(N, t) = \{\pi'_1, \pi'_2, \dots, \pi'_m\}$, $m \le n$, by the criteria $C_{\max}$ and $L_{\max}$ according to conditions (5)–(6). The schedule $\pi'_m$ is a solution to problem $1 \mid r_j \mid L_{\max}$ if (9) is true.

**Algorithm 1.3** for constructing a set of Pareto schedules by criteria *C*max and *L*max.

```
 1: Initial step. Y ≔ +∞, π* ≔ ω(N, t), Φ ≔ ∅, m ≔ 0, N′ ≔ N \ {π*} and t′ ≔ Cmax(π*).
 2: if N′ = ∅ then
 3: Φ ≔ Φ ∪ {π*}, m ≔ 1 and the algorithm stops.
 4: end if
 5: Main step.
 6: if Lmax(ω1, t′) ≤ Lmax(π*) then
 7: π* ≔ (π*, ω1), where ω1 = ω1(N′, t′), and go to the next step;
 8: end if
 9: if Lmax(ω1, t′) > Lmax(π*) then
10: if Lmax(ω1, t′) < y then
11: find θ = θ(N′, t′, y′) using Algorithm 1.2, where y′ = Lmax(ω1, t′);
12: if θ = ∅ then
13: π* ≔ (π*, ω1) and go to the next step;
14: else
15: π′ ≔ (π*, θ)
16: if Cmax(π′_m) < Cmax(π′) then
17: m ≔ m + 1, π′_m ≔ π′, Φ ≔ Φ ∪ {π′_m}, y = Lmax(π′_m);
18: else
19: π′_m = π′ and go to next step;
20: end if
21: end if
22: if Lmax(ω1, t′) ≥ y then
23: find ω2 = ω2(N′, t′);
24: if Lmax(ω2, t′) < y then
25: π* = (π*, ω2) and go to the next step;
26: else
27: π* = π′_m and the algorithm stops.
28: end if
29: end if
30: end if
31: end if
```
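
Rather than transcribing the listing line by line, the following Python sketch captures the idea behind Algorithm 1.3: repeatedly tighten the lateness bound y and rerun Algorithm 1.2, collecting the resulting schedules and keeping the effective ones. It reuses algorithm_1_2, evaluate and effective_schedules from the earlier sketches and is a simplification, not the algorithm's exact control flow.

```python
def pareto_schedules(N, t, f, s, r, p, d, eps=1e-9):
    """Simplified sketch of the idea behind Algorithm 1.3.

    Repeatedly tightens the lateness bound y and calls Algorithm 1.2; the
    collected schedules are then reduced to the effective ones.  With integer
    data one would tighten with y = L_max - 1 instead of an epsilon.
    """
    phi = []
    y = float("inf")
    while True:
        theta = algorithm_1_2(N, t, y, f, s, r, p, d)
        if theta is None:
            break
        _, l_theta = evaluate(theta, t, r, p, d)
        phi.append(theta)
        y = l_theta - eps      # demand a strictly better maximum lateness next time
    return effective_schedules(phi, t, r, p, d)
```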

As a result of Algorithm 1.3, a set of schedules $\Phi(N, t)$ is constructed for the set of jobs $N$ starting at time $t$, for which the inequality $1 \le |\Phi(N, t)| \le n$ is true. We should note that the set $\Phi(N, t)$ for *Example 1.1* consists of two schedules, although the set $\Omega(N, t)$ consists of $2^{n/2}$ schedules:

$$\pi'_1 = (1, 2, 3, 4, \dots, n-1, n), \tag{31}$$

$$\pi'_2 = (2, 1, 4, 3, \dots, n, n-1). \tag{32}$$
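
As a usage illustration only, the sketches can be composed end to end. The two selectors below are hypothetical stand-ins (earliest-finishing job for f, earliest due date for s) — they are **not** the chapter's definitions of $f(N, t)$ and $s(N, t)$ — and the data is made up solely to exercise the code.

```python
# Hypothetical stand-in selectors -- NOT the chapter's definitions of f and s.
def f_stub(N, t):
    return min(N, key=lambda j: max(r[j], t) + p[j])   # job that would finish earliest

def s_stub(N, t):
    return min(N, key=lambda j: d[j])                  # job with the earliest due date

# Made-up data, used only to exercise the sketches.
r = {1: 0, 2: 1, 3: 4, 4: 5}
p = {1: 3, 2: 2, 3: 3, 4: 2}
d = {j: r[j] + p[j] for j in r}                        # d_j = r_j + p_j

phi = pareto_schedules(frozenset(r), 0, f_stub, s_stub, r, p, d)
for pi in phi:
    print(pi, evaluate(pi, 0, r, p, d))                # prints (C_max, L_max) per schedule
```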

Lemma 1.5 The complexity of Algorithm 1.3 does not exceed $O(n^3 \log n)$ operations.

**Proof:** At each iteration of the main step of Algorithm 1.3 we find the schedule $\omega_1$ and, if necessary, $\omega_2$, which requires $O(n \log n)$ operations according to Lemma 1.2, and also the schedule $\theta$ in $O(n^2 \log n)$ operations. As $\omega_1$ and $\omega_2$ consist of at least one job, at any iteration of the algorithm one or more jobs are added to the schedule $\pi^*$, or the algorithm stops at the last schedule $\pi'$. Therefore, the total number of iterations is at most $n$. Thus, it takes no more than $O(n^3 \log n)$ operations to execute Algorithm 1.3.

Theorem 1.5 If (9) is true for each job of the set $N$, then the schedule $\pi^*$ constructed by Algorithm 1.3 is optimal according to the criterion $L_{\max}$. Moreover, for any schedule $\pi \in \Pi(N)$ there exists a schedule $\pi' \in \Phi(N, t)$ such that $L_{\max}(\pi') \le L_{\max}(\pi)$ and $C_{\max}(\pi') \le C_{\max}(\pi)$.

**Proof:** According to Theorem 1.2, there exists an optimal (by $L_{\max}$) schedule in the set $\Omega(N, t)$. According to Lemma 1.3, all schedules of the set $\Omega(N, t)$ start with the partial schedule $\omega(N, t)$.

We denote by *θ*ð Þ *N*, *t*, *y* the schedule constructed by algorithm 1.2 starting at time *t* from the jobs of the set *N* with the maximum lateness not more than *y*. If

Theorem 1.4 Let the jobs of the set *N* satisfy conditions (9). If the algorithm 1.2

min f g *C*maxð Þ *π* : *L*maxð Þ *π* ≤*y*, *π* ∈ Πð Þ *N* . If, as a result of the algorithm 1.2 the schedule will not be generated, that is, *θ*ð Þ¼ *N*, *t*, *y* ∅, then *L*maxð Þ *π* >*y* for each *π* ∈ Πð Þ *N* . **Proof:** In case if for schedule *π* ∈ Πð Þ *N* condition *L*maxð Þ *π* ≤*y* is true, then according to Theorem 1.2 there is a schedule *π*<sup>0</sup> ∈ Ωð Þ *N*, *t* such that *L*max *π*<sup>0</sup> ð Þ≤ *L*maxð Þ *π* ≤ *y* and *C*max *π*<sup>0</sup> ð Þ≤*C*maxð Þ *π* . Therefore, the required schedule *θ* contains in set Ωð Þ *N*, *t* . According to Lemma 1.3, all schedules of the set Ωð Þ *N*, *t* start with *ω*ð Þ *N*, *t* . Let us

After *k*, *k*≥0 main steps of the algorithm 1.2 we got the schedule *θ<sup>k</sup>* and *N*<sup>0</sup> ¼

maximum completion time (*C*max) schedule *θ* starting with *θk*. According to Theorem 1.2, there is an optimal extension of the schedule *θ<sup>k</sup>* among the schedules

<sup>0</sup> ð Þ is the best by the criterion of *C*max among all feasible by maximum

<sup>0</sup> ð Þ start with either schedule *<sup>ω</sup>*<sup>1</sup> *<sup>N</sup>*<sup>0</sup>

<sup>0</sup> ¼ *C*maxð Þ *θ<sup>k</sup>* . Let us assume that there is an optimal by the criterion of

<sup>0</sup> ð Þ start with jobs of the set *<sup>ω</sup>*<sup>1</sup> *<sup>N</sup>*<sup>0</sup>

Thus, at each main step of the algorithm, we choose the fastest continuation of the partial schedule *θ<sup>k</sup>* among all those allowed by the maximum lateness. After no more than *n* main steps of the algorithm, the required schedule is constructed. Let us assume that after the *k* + 1 steps of the algorithm *L*max(*ω*1, *t*′) > *y*. If schedule *θ* could exist, that is, *θ* ≠ ∅, then *θ* would start with *θ<sup>k</sup>*. Repeating our proof as many times as the main step of algorithm 1.2 is executed (no more than *n*), we come to the truth of the statement of the theorem.

#### **3.3 Algorithm for constructing a set of Pareto schedules by criteria *C*max and *L*max**

Let us develop an algorithm for constructing a set of Pareto schedules Φ(*N*, *t*) = {*π*′1, *π*′2, …, *π*′*m*}, *m* ≤ *n*, by criteria *C*max and *L*max according to conditions (5)–(6).

**Algorithm 1.3** for constructing a set of Pareto schedules by criteria *C*max and *L*max.

1: **Initial step.** *Y* ≔ +∞, *π*∗ ≔ *ω*(*N*, *t*), Φ ≔ ∅, *m* ≔ 0, *N*′ ≔ *N* \ {*π*∗} and *t*′ ≔ *C*max(*π*∗).

2: **if** *N*′ = ∅ **then**

3: Φ ≔ Φ ∪ (*π*∗), *m* ≔ 1 and the algorithm stops.

4: **end if**

5: **Main step.**
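Both criteria used by the algorithm can be evaluated directly from a fixed job order. The following minimal Python sketch illustrates this computation; the representation of jobs as (*rj*, *pj*, *dj*) triples and the function name `evaluate` are illustrative assumptions, not notation from the chapter.

```python
from typing import List, Tuple

Job = Tuple[float, float, float]  # (release time r_j, processing time p_j, due date d_j)

def evaluate(sequence: List[Job], t: float = 0.0) -> Tuple[float, float]:
    """Return (C_max, L_max) of the schedule that processes the jobs of
    `sequence` in the given order on a single machine, starting no earlier
    than time t and never before a job's release time (no preemption)."""
    completion = t
    l_max = float("-inf")
    for r, p, d in sequence:
        start = max(completion, r)          # wait until the job is released
        completion = start + p              # completion time of this job
        l_max = max(l_max, completion - d)  # lateness C_j - d_j
    return completion, l_max
```

For instance, `evaluate([(0, 3, 2), (2, 2, 7)])` returns `(5.0, 1.0)`: the first job completes at time 3 with lateness 1, the second at time 5 with lateness −2.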

Let *π*0 = *ω*(*N*, *t*). After *k*, *k* ≥ 0, main steps of algorithm 1.3 we have a partial schedule *πk*. Suppose there is an optimal (by *L*max) schedule starting with *πk*. We denote *N*′ = *N* \ {*πk*} and *t*′ = *C*max(*πk*).

If *πk*+1 = (*πk*, *ω*1), where *ω*1 = *ω*1(*N*′, *t*′), then either *L*max(*ω*1, *t*′) ≤ *L*max(*πk*), or *L*max(*πk*) < *L*max(*ω*1, *t*′) < *y*, that is, the current value of the criterion and the maximum lateness will "appear" at the next steps of algorithm 1.3. That is, *θ*(*N*′, *t*′, *y*′) = ∅, where *y*′ = *L*max(*ω*1, *t*′). If *θ* = *θ*(*N*′, *t*′, *y*′) ≠ ∅, then we improve the current maximum lateness value: *π*′ = (*πk*, *θ*) and *y* = *L*max(*π*′) = *L*max(*ω*1, *t*′). The schedule *π*′ is added to the set of schedules Φ(*N*, *t*). Moreover, according to Theorem 1.3, the jobs of the set *ω*1 precede the jobs of the set *N*′ \ *ω*1. Thus, the schedule *ω*1 (without artificial idle times of the machine) would be the best continuation for *πk*.

If *πk*+1 = (*πk*, *ω*2), where *ω*2 = *ω*2(*N*′, *t*′), then, according to algorithm 1.3, *L*max(*ω*2, *t*′) < *L*max(*π*′) ≤ *L*max(*ω*1, *t*′). In this case the continuation *ω*2 is "better" than *ω*1. Hence, the partial schedule *πk*+1 is a part of some optimal schedule.

Repeating our proof no more than *n* times, we come to optimality (for *L*max) of the schedule *π*∗.


The set of schedules Φ(*N*, *t*) contains at most *n* schedules, since at each main step of the algorithm at most one schedule is "added" to the set Φ(*N*, *t*), and this step is executed no more than *n* times.

Suppose there is a schedule *π* ∈ Π(*N*), *π* ∉ Φ(*N*, *t*), such that either *C*max(*π*) ≤ *C*max(*π*′) and *L*max(*π*) ≥ *L*max(*π*′), or *C*max(*π*) ≥ *C*max(*π*′) and *L*max(*π*) ≤ *L*max(*π*′) for each schedule *π*′ ∈ Φ(*N*, *t*). Moreover, in each pair of inequalities at least one inequality is strict. According to Theorem 1.1, there is a schedule *π*″ ∈ Ω(*N*, *t*) such that *L*max(*π*″) ≤ *L*max(*π*) and *C*max(*π*″) ≤ *C*max(*π*). If *π*″ ∈ Φ(*N*, *t*), it becomes obvious that our assumption is not correct. Let *π*″ ∈ Ω(*N*, *t*) \ Φ(*N*, *t*). Algorithm 1.3 shows that the structure of each schedule *π*′ ∈ Φ(*N*, *t*) can be represented as a sequence of partial schedules *π*′ = (*ω*′0, *ω*′1, *ω*′2, …, *ω*′*k*′), where *ω*′0 = *ω*(*N*, *t*), *ω*′*i* is either *ω*1(*N*′*i*, *C*′*i*) or *ω*2(*N*′*i*, *C*′*i*), *N*′*i* = *N* \ {*ω*′0, …, *ω*′*i*−1}, and *C*′*i* = *C*max(*ω*′0, …, *ω*′*i*−1, *t*), *i* = 1, 2, …, *k*′. The schedule *π*″ has the same structure according to the definition of the set Ω(*N*, *t*), that is, *π*″ = (*ω*″0, *ω*″1, *ω*″2, …, *ω*″*k*″), possibly *k*″ ≠ *k*′, where *ω*″0 = *ω*′0 = *ω*(*N*, *t*), *ω*″*i* is equal to either *ω*1(*N*″*i*, *C*″*i*) or *ω*2(*N*″*i*, *C*″*i*), *N*″*i* = *N* \ {*ω*″0, …, *ω*″*i*−1}, and *C*″*i* = *C*max(*ω*″0, …, *ω*″*i*−1, *t*), *i* = 1, 2, …, *k*″.

We assume that the first *k* partial schedules of *π*″ and *π*′ are equal, that is, *ω*″*i* = *ω*′*i* = *ωi*, *i* = 0, 1, …, *k* − 1, and *ω*″*k* ≠ *ω*′*k*. If *y* = *L*max(*ω*0, …, *ωk*−1), let us construct a schedule *θ* using algorithm 1.2, *θ* = *θ*(*Nk*, *Ck*, *y*). If *θ* = ∅, then, according to algorithm 1.3, *ω*′*k* = *ω*1(*Nk*, *Ck*). Since *ω*″*k* ≠ *ω*′*k*, the schedule *ω*″*k* = *ω*2(*Nk*, *Ck*). The objective function value (*L*max) can be reached on a job from the set *Nk*, since *θ* = ∅. The whole structure of algorithm 1.3 is designed in such a way that, up to the "critical" job (according to *L*max), the jobs are ordered as "tightly" as possible; therefore we complete the schedule *ω*1, after which *C*max(*π*′) ≤ *C*max(*π*″) and *L*max(*π*′) ≤ *L*max(*π*″). If *θ* ≠ ∅, then for the schedules *π*′ and *π*″ we have *C*max(*π*′) ≤ *C*max(*π*″) and *L*max(*π*′) = *L*max(*π*″). Thus, for any schedule *π*″ ∈ Ω(*N*, *t*) \ Φ(*N*, *t*) there exists a schedule *π*′ ∈ Φ(*N*, *t*) such that *C*max(*π*′) ≤ *C*max(*π*″) and *L*max(*π*′) ≤ *L*max(*π*″). Hence, for any schedule *π* ∈ Π(*N*) there exists a schedule *π*′ ∈ Φ(*N*, *t*) such that *L*max(*π*′) ≤ *L*max(*π*) and *C*max(*π*′) ≤ *C*max(*π*). The theorem is proved.

**Figure 1** schematically shows the considered schedule.


**Figure 1.**


*The set of Pareto-optimal schedules.*

For the set of schedules Φ(*N*, *t*) = {*π*′1, *π*′2, …, *π*′*m*}, *m* ≤ *n*, the conditions (5)–(6) are true.

The schedule *π*′1 is optimal in terms of speed (*C*max), and *π*′*m* is optimal in terms of the maximum lateness (by *L*max) if the jobs of the set *N* satisfy the conditions (9).
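The non-dominance of the schedules in Φ(*N*, *t*) can also be checked directly on any finite set of candidate schedules. The sketch below is a generic quadratic-time filter over (*C*max, *L*max) pairs, given only as an illustration of non-dominance in the (*C*max, *L*max) plane; it is not the chapter's Algorithm 1.3, does not attain its complexity bound, and the function name and dictionary layout are assumptions made here.

```python
from typing import Dict, List, Tuple

def pareto_front(criteria: Dict[str, Tuple[float, float]]) -> List[str]:
    """Keep every schedule whose (C_max, L_max) pair is not dominated by the
    pair of another schedule (<= in both criteria, < in at least one)."""
    front = []
    for name, (c, l) in criteria.items():
        dominated = any(
            c2 <= c and l2 <= l and (c2 < c or l2 < l)
            for other, (c2, l2) in criteria.items()
            if other != name
        )
        if not dominated:
            front.append(name)
    return front
```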

#### **4. Conclusions**

The single machine scheduling problem with given release dates and two objective functions, which is *NP*-hard in the strong sense, is considered in this chapter. A number of new polynomially and pseudo-polynomially solvable subcases of the problem were found. For the case when

$$d\_1 \le \dots \le d\_n, \quad d\_1 - r\_1 - p\_1 \ge \dots \ge d\_n - r\_n - p\_n,\tag{33}$$

an algorithm for constructing a Pareto-optimal set of schedules by criteria *C*max and *L*max is developed. It is proved that the complexity of the algorithm does not exceed *O*(*n*<sup>3</sup> log *n*) operations.

An experimental study of the algorithm showed that it can be used to construct optimal schedules (by *L*max) even for instances not satisfying the conditions (33).
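Conditions (33) are two monotonicity requirements, on the due dates and on the differences *dj* − *rj* − *pj*, so an instance can be tested against them in linear time. A minimal sketch under the assumption that the jobs are supplied as (*rj*, *pj*, *dj*) triples already numbered in the order used in (33); the function name is an illustrative choice.

```python
from typing import Sequence, Tuple

def satisfies_condition_33(jobs: Sequence[Tuple[float, float, float]]) -> bool:
    """Check d_1 <= ... <= d_n and d_1 - r_1 - p_1 >= ... >= d_n - r_n - p_n
    for jobs given as (r_j, p_j, d_j) triples in their index order."""
    due = [d for _, _, d in jobs]
    slack = [d - r - p for r, p, d in jobs]
    non_decreasing_due = all(a <= b for a, b in zip(due, due[1:]))
    non_increasing_slack = all(a >= b for a, b in zip(slack, slack[1:]))
    return non_decreasing_due and non_increasing_slack
```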

#### **Acknowledgements**

The research was supported by RFBR (project 20-58-S52006).


### **Author details**

Alexander A. Lazarev and Nikolay Pravdivets\* Institute of Control Sciences, Moscow, Russia

\*Address all correspondence to: pravdivets@ipu.ru

© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


#### **References**

[1] Lenstra JK, Rinnooy Kan AHG, Brucker P. Complexity of machine scheduling problems. Annals of Discrete Mathematics. 1977;**1**:343-362

[2] Graham RL, Lawler EL, Lenstra JK, Rinnooy Kan AHG. Optimization and approximation in deterministic sequencing and scheduling: A survey. Annals of Discrete Mathematics. 1979;**5**: 287-326

[3] Potts CN. Analysis of a heuristic for one machine sequencing with release dates and delivery times. Operations Research. 1980;**28**:1436-1441

[4] Jackson JR. Scheduling a Production Line to Minimize Maximum Tardiness. Los Angeles, CA: University of California; 1955. Manag. Sci. Res. Project. Research Report N 43

[5] Hall LA, Shmoys DB. Jackson's rule for one-machine scheduling: Making a good heuristic better. Mathematics of Operations Research. 1992;**17**:22-35

[6] Mastrolilli M. Efficient approximation schemes for scheduling problems with release dates and delivery times. Journal of Scheduling. 2003;**6**(6):521-531

[7] Garey MR, Johnson DS, Simons BB, Tarjan RE. Scheduling unit-time tasks with arbitrary release times and deadlines. SIAM Journal on Computing. 1981;**10**:256-269

[8] Vakhania N. Single-machine scheduling with release times and tails. Annals of Operations Research. 2004; **129**(1–4):253-271

[9] Vakhania N. Dynamic restructuring framework for scheduling with release times and due-dates. Mathematics. 2019;**7**(11):1104

[10] Lawler EL. Optimal sequencing of a single machine subject to precedence

constraints. Management Science. 1973; **19**(5):544-546

[11] Simons BB. A fast algorithm for single processor scheduling. In: Proceedings of the 19th IEEE Annual Symposium on Foundations of Computer Science. New York: Ann. Arbor. Mich; 1978. pp. 246-252

[12] Baker KR, Lawler EL, Lenstra JK, Rinnooy Kan AHG. Preemptive scheduling of a single machine to minimize maximum cost subject to release dates and precedence constraints. Operations Research. 1983; **31**(2):381-386

[13] Hoogeveen JA. Minimizing maximum promptness and maximum lateness on a single machine. Mathematics of Operations Research. 1996;**21**:100-114

[14] Lazarev AA, Shulgina ON. Polynomially solvable subcases of the problem of minimizing maximum lateness. Izvestiya VUZov. Mathematics. 2000 (in Russian)

[15] Vakhania N. Scheduling a single machine with primary and secondary objectives. Algorithms. 2018;**11**(6):80

[16] Vakhania N. Fast solution of single-machine scheduling problem with embedded jobs. Theoretical Computer Science. 2019;**782**:91-106



### Section 2

## Non-Traditional Approach: Threshold Optimality


#### **Chapter 3**

## A Brief Look at Multi-Criteria Problems: Multi-Threshold Optimization versus Pareto-Optimization

*Nodari Vakhania and Frank Werner*

### **Abstract**

Multi-objective optimization problems are important as they arise in many practical circumstances. In such problems, there is no general notion of optimality, as there are different objective criteria which can be contradictory. In practice, often there is no unique optimality criterion for measuring the solution quality. The latter is rather determined by the value of the solution for each objective criterion. In fact, a practitioner seeks a solution that has an acceptable value of each of the objective functions and, in practice, there may be different tolerances to the quality of the delivered solution for different objective functions: for some objective criteria, solutions that are far away from an optimal one can be acceptable. The traditional Pareto-optimality approach aims to create all non-dominated feasible solutions with respect to all the optimality criteria. This often requires an inadmissible time. Besides, it is not evident how to choose an appropriate solution from the Pareto-optimal set of feasible solutions, which can be very large. Here we propose a new approach, called the multi-threshold optimization setting, that takes into account different requirements for different objective criteria and so is more flexible and can often be solved in a more efficient way.

**Keywords:** multi-criteria optimization, optimal solution, Pareto-optimization, multi-threshold optimization, scheduling algorithm, time complexity

### **1. Introduction**

Multi-objective optimization problems are important as they arise in many practical circumstances. In such problems, there is no general notion of optimality, as there are different objective criteria which are often contradictory: an optimal solution for one criterion may be far away from an optimal one for some other criterion. Thus, for many such real-life problems, there is no unique optimality criterion for measuring the solution quality. The latter is rather determined by the value of the solution for each objective criterion. In fact, a practitioner is not interested, generally, in optimizing a particular objective criterion, but rather seeks a solution that has an acceptable value of each of the objective functions. Furthermore, in practice, there may exist different tolerances to the quality of the

delivered solution for different objective functions. In particular, for some objective criteria, solutions far away from an optimal one can be acceptable. Such solutions can often be obtained by relatively low computational efforts even for intractable problems.


Taking into account these considerations, here we propose a new approach, called the multi-threshold optimization setting, that takes into account different requirements for different objective criteria, in contrast to the traditional Pareto-optimality approach. The Pareto-optimality concept, named after the Italian scientist Vilfredo Pareto, is a traditionally used compromise to address a complicated multi-objective scenario. It looks for a so-called Pareto-optimal frontier of the feasible solutions consisting of those solutions that are not dominated by any other feasible solution (with respect to any of the given objective functions). This often requires an inadmissible time: finding the Pareto-optimal frontier often remains an intractable (NP-hard) problem. This is always the case if at least one of the corresponding single-criterion problems is NP-hard. Finding the Pareto-optimal set of solutions may be NP-hard even if none of the single-criterion problems is NP-hard. Besides, it is not evident how to choose an appropriate solution from the Pareto-optimal set of feasible solutions, which can be very large. The multi-threshold optimization approach is more flexible since it takes into account different requirements for different objective criteria: in practice, some objective criteria can be more critical than the other ones, and hence there may exist different degrees of tolerance for the deviation of the objective value of different criteria from the optimal objective value of the corresponding single-criterion problems.

The multi-threshold optimization problem seeks for a feasible schedule whose objective values are acceptable for a given particular application for all objective functions; in particular, they do not exceed (for minimization problems) or are no smaller (for maximization problems) than the components of a threshold vector specified by the practitioner whose *i*th component is some threshold value for the *i*th objective function. As we observe, depending on the components of the above vector, it might be possible to solve the multi-threshold optimization problem in a low-degree polynomial time even if all the corresponding single-criterion problems are NP-hard. A threshold vector with specific threshold values for each objective function is supposed to have a direct practical meaning. For practically useful values of the threshold vector, the multi-threshold optimization problem might be solved in a low-degree polynomial time by a kit of heuristic algorithms, each one being designed for one of the corresponding single-criterion problems. If the kit of heuristic algorithms fails to find a feasible solution respecting the threshold vector, then the heuristics for NP-hard single-criterion problems can be replaced by implicit enumeration algorithms. In fact, the replacement can be accomplished step by step, starting from the most critical heuristics. This kind of approach may be more practical since some objective criteria can be optimized easier than other ones. Besides, as already noted, the practitioner may not be interested, in general, in the minimization of each objective function but rather in a solution of an acceptable quality for every objective function: in practice, there may be different tolerances to the quality of the delivered solution for each objective function, and different objective functions might be optimized with quite different costs.

Thus our approach may lead to a more efficient and practical solution of a multi-criteria problem than the corresponding Pareto-optimal setting. In the following sections, we give a brief comparative analysis of the Pareto-optimization approach with our multi-threshold optimization approach, illustrating its advantage on single-machine scheduling problems.

#### **2. Multi-criteria optimization problems**


For an extensive description of multi-criteria optimization problems and the solution methods, the reader may have a look at the book by T'kindt and Billaut [1] and the survey paper [2] by the same authors.

Discrete *optimization* problems emerged in the late 1940s of the last century due to the rapid growth of the industry and new rising demands for efficient solution methods. Modeled in mathematical language, such an optimization problem has a finite set of so-called feasible solutions; each feasible solution is determined by a set of mathematically formulated restrictions that naturally arise in practice. The quality of a feasible solution is measured by an objective function, whose domain is the whole set of feasible solutions. Ideally, one aims to determine a feasible solution that gives an extremal (minimal or maximal) value to the objective function, a so-called optimal solution. Since the number of feasible solutions is typically finite, theoretically, finding an optimal solution is trivial: just enumerate all the feasible solutions, calculating for each of them the value of the objective function, and select any one with the optimal objective value. The main issue here is that a complete enumeration of all feasible solutions is mostly impossible in practice.

There are two distinct classes of combinatorial optimization problems, the class *P* of polynomially solvable ones and the intractable NP-hard problems. For a problem from the class *P*, there exists an efficient (polynomial in the size of the problem) algorithm, whereas no such algorithm exists for an NP-hard problem (the number of feasible solutions of an NP-hard optimization problem grows exponentially with the size of the input). It is widely believed that it is very unlikely that an NP-hard problem can be solved in polynomial time. Hence, it is natural to develop approximation solution methods.

Multi-criteria optimization problems are optimization problems with two or more different objective criteria. For the majority of such problems, there exists no single solution which optimizes (minimizes or maximizes) all the objective functions. In this sense, different objectives are contradictory, and hence, it is not straightforward to understand which feasible solution to the problem is optimal: a multi-criteria optimization problem typically has no optimal solution. In this situation, one may look for a solution which attains an acceptable value for each objective function or a solution which is not dominated by any other solution, in the sense that there is no other feasible solution which attains better objective values for all objective functions. We shall refer to the first and second versions of the multi-criteria optimization problem as *multi-threshold optimization* and *Pareto-optimization* versions and define them more formally below.

Let the *k* objective functions over the set F of feasible solutions of a given multi-criteria optimization problem be *f* <sup>1</sup>, … , *f <sup>k</sup>*. Since these functions might be mutually contradictory, there may exist no feasible solution minimizing/ maximizing all objective functions simultaneously. Without loss of generality, let us consider from now on the minimization version of our multi-criteria optimization problem.

Let *F*∗*<sub>i</sub>* be the optimal value for a single-criterion problem with the objective to minimize function *fi*, and let *Ai* be some threshold value for the objective function *fi*, *i* = 1, …, *k*.

In the multi-threshold optimization version, we look for a feasible solution *σ* such that *fi*(*σ*) ≤ *Ai* for each *i* = 1, …, *k*.
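This acceptance test is just the conjunction of the *k* threshold inequalities. A minimal sketch, assuming the objectives are passed as callables and the thresholds as a sequence (all names here are illustrative):

```python
from typing import Callable, Sequence, TypeVar

Solution = TypeVar("Solution")

def meets_thresholds(sigma: Solution,
                     objectives: Sequence[Callable[[Solution], float]],
                     thresholds: Sequence[float]) -> bool:
    """Multi-threshold test: f_i(sigma) <= A_i for every i = 1, ..., k."""
    return all(f(sigma) <= a for f, a in zip(objectives, thresholds))
```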

A commonly used dominance relation for the Pareto-optimization version is defined as follows.

A solution *σ*1 ∈ F *dominates* a solution *σ*2 ∈ F if *fi*(*σ*1) < *fi*(*σ*2) for *i* = 1, …, *k*; in fact, we allow ≤ instead of < for all values of *i*, requiring at least one strict inequality.
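The relation translates literally into a componentwise comparison of the two objective vectors (minimization). A minimal sketch, assuming the objective values of both solutions have already been computed; the function and parameter names are illustrative.

```python
from typing import Sequence

def dominates(values1: Sequence[float], values2: Sequence[float]) -> bool:
    """True if a solution with objective values values1 dominates one with
    values2: <= in every criterion and < in at least one (minimization)."""
    return (all(a <= b for a, b in zip(values1, values2))
            and any(a < b for a, b in zip(values1, values2)))
```

For example, `dominates([3, 5], [3, 7])` is `True`, while `dominates([3, 5], [2, 9])` is `False`.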


Now *σ* ∈ F is a *Pareto-optimal solution* if no other solution from the set F dominates the solution *σ*. We shall refer to the set of all such feasible solutions as the *Pareto-optimal set*. Forming a Pareto-optimal set of feasible solutions may not be easy. For instance, let, for *k* = 2, *f*<sup>1</sup>(*σ*1) ≤ *f*<sup>1</sup>(*σ*2); then solution *σ*2 is dominated by solution *σ*1 if *f*<sup>2</sup>(*σ*1) < *f*<sup>2</sup>(*σ*2). This condition can be verified in polynomial time for any pair of solutions *σ*1 and *σ*2 (given that the corresponding optimization problem is from the class NP). However, whenever the number of feasible solutions grows exponentially with the length of the input (which is the case for NP-hard problems), the explicit evaluation of all possible pairs of feasible solutions (which is unavoidable for finding a dominant solution) would lead us to an exponential-time performance. In particular, if one of the single-criterion problems is NP-hard, finding a Pareto-optimal set for the multi-objective setting will take an exponential time.

**Theorem 1** The problem of finding a Pareto-optimal set of feasible solutions for a multi-objective optimization problem with the objective functions *f*<sup>1</sup>, …, *f*<sup>k</sup> is NP-hard if one of the corresponding single-criterion problems is NP-hard.

Proof. We basically reformulate the above reasoning. Consider a bi-criteria optimization problem with *k* = 2. Consider the set *SA* of feasible solutions with *f*<sup>1</sup>(*σ*) = *A* for all *σ* ∈ *SA* and some threshold value *A* of function *f*<sup>1</sup> (without loss of generality assume that *SA* ≠ ∅). A Pareto-optimal solution from the set *SA* must attain the minimum possible value of function *f*<sup>2</sup>, as otherwise it will be dominated by one that attains this value. Then we arrive at a single-criterion optimization problem with the objective function *f*<sup>2</sup>, which is NP-hard.

At first glance, the multi-threshold optimization version of a multi-criteria optimization problem may seem to be easier than the Pareto-optimality version. This is, in part, correct, but considering a threshold vector with arbitrary components, in general, we will also arrive at an intractable problem, as the decision version of an NP-hard single-criterion optimization problem is NP-complete. In particular, suppose that we are given a single-criterion optimization problem with the objective to minimize the function *fi* (*i* ∈ {1, …, *k*}). If this problem is NP-hard, then its decision version (whether, for a given threshold value *A* of function *fi*, there is a feasible solution *σ* ∈ F with *fi*(*σ*) ≤ *A*) is NP-complete. Hence, if one of the single-criterion optimization problems is NP-hard, then the multi-threshold optimization version of the corresponding multi-criteria optimization problem is also NP-hard.

At the same time, finding a Pareto-optimal set of feasible solutions may be NP-hard even if none of the single-criterion problems is NP-hard, i.e., they are solvable in polynomial time. Can the multi-threshold optimization version of a multi-criteria optimization problem be solved in polynomial time if all the corresponding single-criterion optimization problems are polynomial? In other words, suppose that the single-criterion problem of finding a feasible solution attaining the minimum value of the objective function *fi* for *i* = 1, …, *k* can be solved in polynomial time. Then clearly, the decision version that seeks a feasible solution *σ* ∈ F with *fi*(*σ*) ≤ *A* is also polynomially solvable.

Unlike the Pareto-optimization problem, the multi-threshold optimization problem may be solvable in polynomial time even if all the corresponding single-criterion problems are NP-hard; whether it is solvable in polynomial time or not essentially depends on the particular threshold vector **A** = (*A*1, …, *Ak*). As we shall argue in the next sections, depending on the particular threshold values for each objective function, it might be possible to solve the multi-threshold optimization problem in a low-degree polynomial time even if all the corresponding single-criterion problems are NP-hard. The given threshold values for each objective function may have a direct practical meaning. For practically useful values of the threshold vector **A**, the corresponding instance of the multi-threshold optimization problem might be solved in a low-degree polynomial time, though it may be NP-hard in general (for an arbitrary threshold vector, see Section 3).

#### **3. Some basic single-criterion scheduling problems**

In the rest of this chapter, we illustrate the Pareto-optimality and the multi-threshold optimization approaches for *scheduling problems*. For recent developments in multi-criteria optimization for scheduling problems, the reader is referred to a recent survey by Nagar et al. [3] and Parveen and Ullah [4] and, for some earlier works (approximately until the year 2005), to the earlier cited work by T'kindt and Billaut [1].

The scheduling problems arise in various practical circumstances. Examples of such problems are job shop problems in industry, scheduling of information and computational processes, and traffic scheduling and servicing of cargo trains, ships, and airplanes. There are scheduling problems of diverse types and different complexities. Saying generally, one deals with two primary notions: *job* (or *task*) and *machine* (or *processor*). A job is a part of the whole work to be done; a machine is the means for the performance of a job. A common restriction in scheduling problems is that a machine cannot handle more than one job at a time. Each job *j* is characterized by its *processing time pj* , i.e., it needs this prescribed time on a machine. A job may have other parameters as well, which may yield additional restrictions and/or can be employed by an objective function. For instance, the *release time rj* of job *j* is the time moment when job *j* becomes available (it cannot be scheduled before that time). The *due date dj* of job *j* is the desirable completion time for job *j* (there may exist a penalty for the late or for the early completion of that job). A job *preemption* might be allowed, i.e., it might be split into portions, each portion being assigned at a different time interval to the machine(s). A *(feasible) schedule* assigns each job *j* to the machine(s) at the specified time moment(s) no less than *rj* with the total duration of *pj* so that no two jobs are assigned to the machine at any time moment (i.e., the job execution intervals cannot overlap in time). A job is *late* (*on time*, respectively) if it is completed after (at or before, respectively) its due date.

In the single-machine scheduling problems, there is a single machine on which all the jobs are to be scheduled. The majority of single-machine single-criterion scheduling problems are NP-hard, although there are polynomially solvable cases as well. For instance, if the objective function is the maximum job completion time, called the *makespan* and denoted by *C*max, then the problem of minimizing *C*max, commonly abbreviated by 1∣∣*C*max according to the standard Graham's notation for scheduling problems, is straightforwardly solvable if each job *j* has a single parameter *pj* (the processing time): schedule the jobs in any order without creating machine idle time before the first scheduled job and between any pair of jobs. It is very easy to see that this list scheduling algorithm gives an optimal solution. If each job *j* has also a release time *rj* (the problem 1∣*rj*∣*C*max), then scheduling the jobs in any order may not be good, but still there is a very simple greedy way to arrange them optimally: just order the jobs by non-decreasing release times and iteratively assign the next job from the list to the machine at the completion time of the previously assigned job or at the release time of the former job, whichever magnitude is larger.
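The greedy rule just described fits in a few lines. A minimal sketch, assuming jobs are given as (*rj*, *pj*) pairs (the function name is an illustrative choice):

```python
from typing import Sequence, Tuple

def makespan_with_release_times(jobs: Sequence[Tuple[float, float]]) -> float:
    """Greedy rule for 1|r_j|C_max: order jobs by non-decreasing release time,
    then start each job at the later of its release time and the completion
    time of the previously scheduled job; return the resulting C_max."""
    completion = 0.0
    for r, p in sorted(jobs, key=lambda job: job[0]):
        completion = max(completion, r) + p
    return completion
```

For example, `makespan_with_release_times([(4, 1), (0, 3)])` returns `5.0`: the job released at time 0 occupies [0, 3], and the job released at time 4 occupies [4, 5].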

Minimizing the makespan becomes more complicated even with two machines or if each job *j* has an additional parameter called the *delivery time qj*, which is an extra amount of time needed for the full completion of job *j* *once* it has already been completed on the machine (the delivery of each job is accomplished independently of the machine, immediately after its completion on the machine). Thus, job *j* will take *pj* time on the machine and then an additional time *qj* for its full completion (during which another job might be assigned to the machine). Then the maximum job completion time in the schedule *σ* (the makespan) is:

$$C\_{\max}(\sigma) = \max\_{j \in \sigma} \left\{ s\_j(\sigma) + p\_j + q\_j \right\}. \tag{1}$$

The objective is to find a feasible schedule in which the maximum job completion time is the minimum possible one.

If there are no job release times, i.e., all jobs are released simultaneously (the problem 1∣*qj*∣*C*max), then the makespan can be minimized by the well-known Jackson heuristic [5]: first arrange the jobs in a non-increasing order of their delivery times and then schedule them without leaving machine idle times, similarly to the above versions. With job release times, however, the problem 1∣*rj*, *qj*∣*C*max becomes strongly NP-hard. Besides the *C*max criterion, there are a number of other commonly used objective functions for scheduling problems. For instance, if for every job *j* its due date *dj* is given, then several objective criteria can be used to measure the solution quality.
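Before turning to the due-date-based criteria, here is a short sketch of Jackson's rule for simultaneously released jobs as just described (our illustration, reusing the `Job` type from above); the makespan is computed as in Eq. (1).

```python
def jackson_no_release(jobs):
    """Jackson's rule for 1|q_j|C_max (all jobs released at time 0): process the jobs
    in non-increasing order of their delivery times without idle time and report the
    makespan max_j (s_j + p_j + q_j) from Eq. (1)."""
    t, cmax, schedule = 0.0, 0.0, []
    for job in sorted(jobs, key=lambda j: -j.q):
        schedule.append((job, t))             # the job starts as soon as the machine is free
        cmax = max(cmax, t + job.p + job.q)   # full completion includes the delivery
        t += job.p
    return schedule, cmax
```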

The *lateness* of a job *j* in a schedule *σ*:

$$L\_j(\sigma) = s\_j(\sigma) + p\_j - d\_j \tag{2}$$

(note that *sj*(*σ*) + *pj* is the completion time of job *j* in the schedule *σ*). One of the most commonly used due-date-oriented objective functions is the maximum job lateness

$$L\_{\max}(\sigma) = \max\_{j \in \sigma} L\_j. \tag{3}$$

The objective is to find a feasible schedule *σ* in which the maximum job lateness *L*max is the minimum possible one. This problem 1∣*rj*∣*L*max is, in fact, equivalent to the abovementioned problem 1∣*rj*, *qj*∣*C*max with job delivery times, and hence, it is also strongly NP-hard [6].

Another common due date-oriented objective function is the number of *late* jobs (the ones completed after their due date)

$$\sum\_{j \in \sigma} U\_j(\sigma),\tag{4}$$

where *Uj*(*σ*) is a 0–1 function taking the value 1 if job *j* is late in the schedule *σ* and the value 0 otherwise. The objective here is to find a feasible schedule with the minimum possible value ∑*j*∈*σ* *Uj*(*σ*), equivalently, one maximizing the throughput, i.e., the number of jobs completed by their due dates (this model is motivated by applications in real-time overloaded systems, where the job due dates are crucial in the sense that if a job is late, then it might rather be postponed for an undefined period of time in favor of other jobs that can be completed on time). Similarly to the above problems, if all jobs are simultaneously released, then the problem


1∥∑*Uj* is polynomially solvable (by the algorithm of Moore and Hodgson); however, with job release times, the problem 1∣*rj*∣∑*Uj* is again strongly NP-hard.

Hoogeveen [7] has considered the no machine idle time version in a bi-criteria setting. Instead of minimizing the lateness, he has introduced the so-called target start time *sj* of a job *j*: *sj* is the desirable starting time for job *j*, similarly as the due date *dj* is the desirable completion time for the job *j*. Together with the minimization of the maximum job lateness, the minimization of the maximum job promptness (the difference between the target and real start times of that job) can be considered. The above reference gives an algorithm that finds a Pareto-optimal set of feasible solutions for this bi-criteria scheduling problem.

#### **4. Basic multi-criteria scheduling problems**

We can combine the objective functions described in the previous section and obtain the corresponding multi-criteria scheduling problems. We consider these multi-criteria problems from the point of view of multi-threshold optimization and Pareto-optimization approaches.

We start by considering a bi-criteria problem with two objective functions, *C*max and *L*max obtained from the single-criterion problems 1∣*rj*∣*C*max and 1∣*rj*∣*L*max, respectively (note that in the first problem, no job delivery times are given).

With the Pareto-optimization approach, we need to solve two relevant problems: (1) among all feasible schedules with a given maximum job lateness, find one with the minimum makespan, and (2) vice versa, among all feasible schedules with a given makespan, find one with the minimum maximum job lateness. Both of these problems are strongly NP-hard [6].

With the multi-threshold (bi-threshold) optimization approach, we are given two threshold values *A*<sup>1</sup> and *A*<sup>2</sup> on the functions *C*max and *L*max, respectively. We would like to know if there exists a feasible schedule *σ* such that

$$C\_{\max}(\sigma) \le A^1 \tag{5}$$

$$L\_{\max}(\sigma) \le A^2. \tag{6}$$

As to condition (5), let us first construct a feasible schedule *σ*′ in which the jobs are arranged in a non-decreasing order of their release times and are scheduled in this order, leaving no avoidable machine idle time. Recall that the schedule *σ*′ (obtained in this way in *O*(*n* log *n*) time) is optimal for the problem 1∣*rj*∣*C*max. Hence, if *C*max(*σ*′) > *A*<sup>1</sup>, then there exists no (bi-threshold optimal) schedule *σ* with *C*max(*σ*) ≤ *A*<sup>1</sup>, and we return a "no" answer. Otherwise, we know that there exists a feasible schedule *σ*′ with *C*max(*σ*′) ≤ *A*<sup>1</sup>. In fact, if *C*max(*σ*′) < *A*<sup>1</sup>, then there are many such feasible schedules (we may introduce idle time intervals of the required total length into the schedule *σ*′ arbitrarily between neighboring jobs, in different ways obtaining different feasible schedules satisfying inequality (5)). Let us denote the set of these feasible schedules by *SA*<sup>1</sup>.

Now it remains to verify condition (6), i.e., we wish to know if, among all schedules from the set *SA*<sup>1</sup>, there is one satisfying condition (6). In general, it may take an exponential time to answer this question for an arbitrary value *A*<sup>2</sup>, since the corresponding decision problem is NP-complete. At the same time, it also might be possible to obtain an answer in polynomial time, depending on the value of *A*<sup>2</sup>. The easiest way is to construct a greedy solution *σ*″ to the problem, obtained, for instance, by the earlier mentioned Jackson heuristic. It is well-known that the

schedule *σ*″ minimizes the function *C*max. Hence, if *L*max(*σ*″) ≤ *A*<sup>2</sup>, then we return the schedule *σ*″ with a "yes" answer. Otherwise, the answer may be "yes" or may also be "no." In this case, we need more costly calculations to seek a feasible schedule *σ* from the set *SA*<sup>1</sup> with *L*max(*σ*) ≤ *A*<sup>2</sup>. This may take an exponential time (as the second single-criterion problem 1∣*rj*∣*L*max is NP-hard).
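The following sketch summarizes this polynomial-time bi-threshold test under our earlier assumptions (the `Job` type and `greedy_release_order` from above); `jackson_with_releases` is our illustrative, non-delay EDD implementation of the extended Jackson heuristic, and the "unknown" outcome stands for the potentially exponential search mentioned above.

```python
import heapq

def jackson_with_releases(jobs):
    """Non-delay Jackson (EDD) rule for 1|r_j|L_max: whenever the machine becomes
    free, start the released, not yet scheduled job with the earliest due date."""
    order = sorted(jobs, key=lambda j: j.r)
    schedule, ready, t, i = [], [], 0.0, 0
    while i < len(order) or ready:
        if not ready and t < order[i].r:
            t = order[i].r                        # only unavoidable idle time
        while i < len(order) and order[i].r <= t:
            heapq.heappush(ready, (order[i].d, i, order[i]))
            i += 1
        _, _, job = heapq.heappop(ready)
        schedule.append((job, t))
        t += job.p
    return schedule

def bi_threshold_test(jobs, A1, A2):
    """Test conditions (5) and (6): returns ("no", None), ("yes", schedule), or
    ("unknown", None) when a costlier search over the set S_{A^1} would be needed."""
    _, cmax = greedy_release_order(jobs)          # sigma': optimal C_max for 1|r_j|C_max
    if cmax > A1:
        return "no", None                         # condition (5) cannot be satisfied at all
    sigma2 = jackson_with_releases(jobs)          # sigma'': also attains the optimal C_max
    lmax = max(start + job.p - job.d for job, start in sigma2)
    if lmax <= A2:
        return "yes", sigma2                      # both (5) and (6) hold
    return "unknown", None
```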

Combining the objective function *C*max with ∑*Uj*, we obtain another bi-criteria problem from the single-criterion problems 1∣*rj*∣*C*max and 1∣*rj*∣∑*Uj*, respectively.

With the Pareto-optimization approach, we need to solve two relevant problems: (1) among all feasible schedules with a given number of late jobs, find one with the minimum makespan, and (2) vice versa, among all feasible schedules with a given makespan, find one with the minimum number of late jobs. Both of these problems remain strongly NP-hard.

With the bi-threshold optimization approach, we are given two threshold values *A*<sup>1</sup> and *A*<sup>3</sup> on the functions *C*max and ∑*Uj*, respectively. We would like to know if there exists a feasible schedule *σ* satisfying inequality (5) and the following inequality:

$$\sum\_{j} U\_{j}(\sigma) \le A^{3}. \tag{7}$$

Condition (5) can be treated as above. As to condition (7), we need to verify if, among all schedules from the set *SA*<sup>1</sup>, there is one satisfying this condition. As for condition (6), in general, it may take an exponential time to verify condition (7) for an arbitrary value *A*<sup>3</sup>, since the corresponding decision problem with a single objective function ∑*Uj* is NP-complete [8]. But it again might be possible to obtain an answer in polynomial time. Instead of Jackson's heuristic that we used for condition (6), now we use an extended version of the algorithm of Moore and Hodgson for the problem 1∥∑*Uj*. Recall that the latter algorithm is designed for simultaneously released jobs. It sorts all jobs in a non-decreasing order of their due dates and includes them in this order whenever the last included job completes by its due date. Otherwise, from the last block of the continuously scheduled jobs (there will be only one such block for simultaneously released jobs), it discards a longest job and repeats the same step until all jobs are considered in this way. Note that all the included jobs are completed on time. Finally, it adds the discarded jobs at the end of the resultant partial schedule in any order without leaving machine idle times (these jobs are late).
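A compact sketch of the classical Moore–Hodgson rule just described (for simultaneously released jobs; our illustration, reusing the `Job` type from above) may be written as follows.

```python
import heapq

def moore_hodgson(jobs):
    """Moore-Hodgson rule for 1||sum U_j: scan the jobs in EDD order; whenever the
    last included job would finish late, discard a longest job included so far.
    Returns the on-time jobs (in EDD order) and the discarded (late) jobs."""
    on_time, late, accepted, t = [], [], [], 0.0   # accepted: max-heap by processing time
    for idx, job in enumerate(sorted(jobs, key=lambda j: j.d)):
        heapq.heappush(accepted, (-job.p, idx, job))
        on_time.append(job)
        t += job.p
        if t > job.d:                              # the current job would be late
            neg_p, _, longest = heapq.heappop(accepted)
            on_time = [j for j in on_time if j is not longest]
            late.append(longest)
            t += neg_p                             # i.e., t -= longest.p
    return on_time, late                           # late jobs go at the end in any order
```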

We modify the above algorithm by considering the jobs in the order in which they are released, but we order each group of currently released jobs similarly by non-decreasing due dates and accomplish the same steps for each such group of the already released jobs. Although the modified algorithm, in general, does not guarantee optimality, it may typically deliver a near-optimal solution to the version 1∣*rj*∣∑*Uj* with job release times. Let us denote the schedule delivered by the extended Moore and Hodgson algorithm by *σ*‴. It can be readily verified that the schedule *σ*‴ minimizes the function *C*max. Hence, if ∑*Uj*(*σ*‴) ≤ *A*<sup>3</sup>, then we return the schedule *σ*‴ with a "yes" answer. Otherwise, the answer may be "yes" or may also be "no." In this case, we need more costly calculations to seek a feasible schedule *σ* from the set *SA*<sup>1</sup> with ∑*Uj*(*σ*) ≤ *A*<sup>3</sup>, which, similarly as for the earlier bi-criteria problem, may take an exponential time.

Finally, combining all the three objective functions *C*max, *L*max, and ∑*Uj*, we obtain a more complicated three-criteria scheduling problem. Finding the Pareto-optimal set of feasible solutions obviously remains NP-hard. The


three-threshold problem also becomes less accessible but is still more flexible than the Pareto-optimality version, again essentially depending on the threshold values. We again consider the three conditions (5), (6), and (7) that come from the corresponding single-criterion problems and the set of feasible schedules *SA*<sup>1</sup> yielded by inequality (5). Using the fact that both schedules *σ*″ and *σ*‴ are from the set *SA*<sup>1</sup>, it will suffice to verify whether

$$\sum\_{j} U\_{j}(\sigma'') \le A^3 \tag{8}$$

or

$$L\_{\max}(\sigma''') \le A^2. \tag{9}$$

Intuitively, it is clear that the closer *A*<sup>3</sup> is to *n* (the total number of jobs) and the larger *A*<sup>2</sup> is, the more probable it is that these inequalities will hold. Hence, the three-threshold problem will be solved in *O*(*n* log *n*) time (recall that the time complexity of each of the three heuristics that we use for the creation of the schedules *σ*′, *σ*″, and *σ*‴ is *O*(*n* log *n*)). If any of the conditions (6), (7), (8), or (9) is not satisfied, then an implicit enumeration algorithm that generates feasible schedules respecting the thresholds *A*<sup>2</sup> and *A*<sup>3</sup> can be applied.
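One possible reading of this *O*(*n* log *n*) test, sketched under the same assumptions as before, is given below; `extended_moore_hodgson` is a hypothetical helper standing for the release-time modification of Moore and Hodgson's algorithm described above, which we do not spell out here.

```python
def num_late(schedule):
    """Number of late jobs, sum U_j(sigma), in a schedule of (job, start) pairs."""
    return sum(1 for job, start in schedule if start + job.p > job.d)

def max_lateness(schedule):
    """Maximum lateness L_max(sigma) = max_j (C_j - d_j)."""
    return max(start + job.p - job.d for job, start in schedule)

def three_threshold_test(jobs, A1, A2, A3):
    """Check conditions (5)-(9) with the three heuristic schedules: answer "no" if
    even the optimal makespan exceeds A1, "yes" if sigma'' or sigma''' respects both
    remaining thresholds, and "unknown" if implicit enumeration would be required."""
    _, cmax = greedy_release_order(jobs)           # sigma'
    if cmax > A1:
        return "no", None
    candidates = (jackson_with_releases(jobs),     # sigma''
                  extended_moore_hodgson(jobs))    # sigma''' (hypothetical helper)
    for sched in candidates:
        if max_lateness(sched) <= A2 and num_late(sched) <= A3:
            return "yes", sched
    return "unknown", None                         # fall back to implicit enumeration
```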

#### **5. Conclusions**

We have seen that a multi-threshold optimization problem may solve practical multi-criteria problems in polynomial time while delivering a solution of acceptable quality for a given threshold vector, which reflects the real needs of a particular real-life application. We have compared the multi-threshold optimization problem with the Pareto-optimization problem for three basic multi-criteria scheduling problems on a single machine. It is clear that, in many multi-criteria applications, a practitioner may not be interested in a Pareto-optimal set of feasible solutions: an analysis of the set of Pareto-optimal solutions containing all non-dominated feasible solutions might be beyond the interest and capacity of the practitioner. In practice, a feasible solution that attains some threshold value for each objective function is required. For instance, consider an automobile manufacturer and the three objective functions *C*max, *L*max, and ∑*Uj* considered in the previous section. Clearly, the manufacturer is interested in minimizing the total production time *C*max, while imposing a maximum admissible lateness in the production of each car (which might be far above the minimum possible lateness) and a maximum admissible number of cars whose production might be late and be delayed for an indefinite amount of time (according to the current demand for the product). The two heuristic algorithms that we have considered in the previous section may, in practice, well deliver such solutions while minimizing the total production time. It is well-known that Jackson's heuristic, in practice, delivers near-optimal solutions with a value of the objective function close to the optimum [9]. At the same time, if the threshold for the criterion ∑*Uj* is not too small, the solution delivered by the heuristic may also satisfy the threshold condition for that criterion. In fact, it might be possible to combine Jackson's heuristic with that of Moore and Hodgson in such a way that the resultant heuristic would provide a solution with the desired thresholds for both objective functions with some high probability. The construction of such heuristics that deliver a solution respecting the threshold vector for two or more objective criteria is an interesting line for further research.

We have illustrated the multi-threshold optimization approach on a few single-machine scheduling problems, though the approach can obviously be applied, in general, to different kinds of multi-objective optimization problems.

#### **Author details**

Nodari Vakhania<sup>1</sup>\* and Frank Werner<sup>2</sup>

1 Centro de Investigación en Ciencias, UAEM, Mexico

2 Fakultät für Mathematik, Otto-von-Guericke-Universität Magdeburg, Germany

\*Address all correspondence to: nodari@uaem.mx

© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


#### **References**



[1] T'kindt V, Billaut J-C. Multicriteria Scheduling: Theory, Models and Algorithms. 2nd ed. Springer; 2006

[2] T'kindt V, Billaut J-C. Multicriteria scheduling: A survey. RAIRO Operations Research. 2001;**35**:143-163

[3] Nagar A, Haddock J, Heragu S. Multiple and bicriteria scheduling: A literature review. European Journal of Operational Research. 1995;**81**(1): 88-104

[4] Parveen S, Ullah H. Review on job-shop and flow-shop scheduling using multi criteria decision making. Journal of Mechanical Engineering. 2010;**41**(2):130-146

[5] Jackson JR. Scheduling a Production Line to Minimize the Maximum Tardiness. Management Science Research Project. Los Angeles, CA: University of California; 1955

[6] Vakhania N. Scheduling a single machine with primary and secondary objectives. Algorithms. 2018;**11**:80. DOI: 10.3390/a11060080

[7] Hoogeveen H. Single-machine bicriteria scheduling [Ph. D. Thesis]. Amsterdam: CWI; 1992

[8] Garey MR, Johnson DS. Computers and Intractability: A Guide to the Theory of NP–Completeness. San Francisco: Freeman; 1979

[9] Vakhania N, Perez D, Carballo L. Theoretical expectation versus practical performance of Jackson's heuristic. Mathematical Problems in Engineering. 2015;**2015**:484671. DOI: 10.1155/2015/484671

Section 3

Applications and Overviews

