Path Planning for Autonomous Vehicles - Ensuring Reliable Driverless Navigation…

Distributed Optimization of Multi-Robot Motion with Time-Energy Criterion

DOI: http://dx.doi.org/10.5772/intechopen.85668

## 1. Introduction

Research in multi-robot systems is motivated by several notions; some of this motivation is outlined in [1].


Until recently, the number of real-life implementations of multi-robot systems has been relatively small. The reason is the complexity associated with the field; the related technologies are also relatively new. The emergence of autonomous-driving technology and its market can push the boundaries of the field. As the technology develops, new venues for application will open for mainstream use rather than remaining confined to research and development labs. Due to their promising applicability, autonomous cars and vehicles (or various intelligent transportation systems in general) sit at the forefront [2, 3]. To name a few benefits, these include reducing congestion [4], increasing road safety [5], and, of course, self-driving cars [6]. Another application in civil environments relates to safety and security, such as rescue missions searching for missing people in areas that are hard for humans to operate in [7], or searching for dangerous materials or bombs [8] in an evacuated building. Multi-robot systems are also applied in the military area, where research has been done heavily in the fields of unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) [9, 10].


Many approaches have been developed to tackle the issues of multi-robot systems. Inspired by biological systems and driven by technological needs, many problems are defined as cooperative motions; cooperative motion is discussed in [11–16]. Optimization in both time and energy has been tackled in the literature [17–20]. There is an opportunity to incorporate the concept of time-energy optimization into the paradigm of multi-robot systems.

This paper investigates the solution of a time-energy optimal control problem for multiple mobile robots; namely, it studies the problem as a nonlinear programming (NLP) problem. The main idea of the solution used here is to utilize distributed optimization techniques to solve the overall optimization problem. Solving for the optimal time and energy of more than one robot adds burden to the problem, since robot interactions must also be accounted for. This paper focuses on the distributed aspect of the problem; more details about the numerical optimal control formulation can be found in [21], where the problem of controlling the motion of a single mobile robot is solved using the direct method of numerical optimal control (see [22]). That approach showed great flexibility in incorporating physical constraints and the nonlinear dynamics of the system.

The rest of this section defines the global problem formulation. Distributed optimization and the associated algorithm are discussed in Section 2. Section 3 applies the method to the multi-robot problem. Application to wheeled mobile robots and simulation examples are discussed in Section 4, followed by the conclusion.

#### 1.1 Global problem formulation

We can present the discrete time global optimization (numerical optimal control) problem for L robots as follows:

$$\begin{aligned} &\min\_{\{\mathbf{u}^i,\, t\_s^i\}\_{\forall k, \forall i}} \sum\_{\forall i} H\big(\mathbf{x}^i(N)\big) + \mathbf{z}^i(N) \\ &\text{ s.t.} \\ &\quad \mathbf{z}^i(k+1) = \mathbf{z}^i(k) + t\_s^i(k) \cdot L\big(\mathbf{x}^i(k), \mathbf{u}^i(k), t\_s^i(k)\big) \\ &\quad \mathbf{x}^i(k+1) = f\_D\big(\mathbf{x}^i(k), \mathbf{u}^i(k), t\_s^i(k)\big) \\ &\quad \mathbf{g}\big(\mathbf{x}^i(k), \mathbf{u}^i(k), t\_s^i(k)\big) \le 0 \\ &\quad \Omega^i\big(\{\mathbf{x}^i(k)\}\_{\forall i}, \{\mathbf{u}^i(k)\}\_{\forall i}, \{t\_s^i(k)\}\_{\forall i}\big) \le 0 \\ &\quad \forall k, \ \mathbf{z}^i(0) = \mathbf{0}, \ \mathbf{x}^i(0) = \mathbf{x}\_0^i, \ i = 1, 2, \dots, L \end{aligned} \tag{1}$$

with $t$ being the independent time variable, $k$ the time index in the discrete domain, $t\_s^i$ the sampling period, and $N$ the number of discrete time instants across the time horizon, i.e., $k = 0, 1, \dots, N$. The sampling period corresponds to the length of time the system input $u^i(t)$ is kept constant (zero-order hold): we assume the system input to be

$$u^i(t) = u^i(k), \text{ for } t\_k^i \le t < t\_{k+1}^i, t\_{k+1}^i = t\_k^i + t\_s^i(k), t\_0^i = 0, \forall i$$
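To make the zero-order hold concrete, the following toy sketch (with made-up sampling periods and inputs, not taken from the paper) reconstructs the piecewise-constant $u^i(t)$ from a discrete input sequence and variable sampling periods:

```python
import numpy as np

# Zero-order-hold sketch: reconstruct the piecewise-constant input u(t) from a
# discrete sequence u(k) held over variable sampling periods t_s(k).
# The values below are hypothetical, chosen only for illustration.
t_s = np.array([0.5, 0.2, 0.3])     # sampling periods t_s(k), k = 0, 1, 2
u_k = np.array([1.0, -1.0, 0.5])    # discrete inputs u(k)

# Interval edges: t_0 = 0 and t_{k+1} = t_k + t_s(k)
t_edges = np.concatenate(([0.0], np.cumsum(t_s)))

def u_of_t(t):
    """Return u(t) = u(k) for t_k <= t < t_{k+1} (zero-order hold)."""
    k = np.searchsorted(t_edges, t, side="right") - 1
    return float(u_k[min(k, len(u_k) - 1)])
```

For example, `u_of_t(0.6)` falls in the second interval $[0.5, 0.7)$ and returns $-1.0$.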

System behavior is governed by the nonlinear dynamic system $f\_D\big(\mathbf{x}^i(k), \mathbf{u}^i(k), t\_s^i(k)\big)$, with $\mathbf{x}^i(k)$ being the states of robot $i$ and the initial condition $\mathbf{x}^i(0) = \mathbf{x}\_0^i$. The above optimal control problem has a final-state objective $H\big(\mathbf{x}^i(N)\big)$, with the Lagrangian $L\big(\mathbf{x}^i(k), \mathbf{u}^i(k), t\_s^i(k)\big)$ information being embedded into a dummy state variable $\mathbf{z}^i(k)$.

The above optimization problem can be viewed as having two sets of control variables: the first set comprises the discretized system inputs $\{\mathbf{u}^i(k)\}\_{\forall k}$, and the other set consists of the variable sampling periods $\{t\_s^i(k)\}\_{\forall k}$. Let the Lagrangian for the problem be

$$L = \mathbf{x}^T \mathbf{Q} \mathbf{x} + \mathbf{u}^T \mathbf{R} \mathbf{u} + \beta,$$

with $\beta$ being the scalar weight on time. The performance is restricted by a collection of inequality constraints: robot-specific constraints $\mathbf{g}(\cdot) \le 0$ and robot-interaction constraints $\Omega^i(\cdot) \le 0$. The objective function is simply the summation of individual objectives. In this paper, as will be explained later, we consider only collision avoidance as the robot-interaction requirement; however, the above formulation can also accommodate other considerations. It can be shown that the objective function in (1) corresponds to an objective function of the form

$$\min \sum\_{i=1}^{L} \left[ H\left(\mathbf{x}^{i}(t\_f)\right) + \int\_{t\_{0}}^{t\_{f}} \left( \mathbf{x}(t)^{T} \mathbf{Q}\mathbf{x}(t) + \mathbf{u}(t)^{T} \mathbf{R}\mathbf{u}(t) + \beta \right) dt \right].$$
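To illustrate how the dummy state $\mathbf{z}^i$ in (1) encodes this integral, the following sketch simulates a hypothetical scalar single-integrator robot ($\dot{x} = u$) with illustrative weights; $z(N)$ then approximates the time-energy integral under the zero-order hold:

```python
import numpy as np

# Illustrative scalar example (not from the paper): single integrator x' = u.
Q, R, beta = 1.0, 0.1, 0.5        # quadratic weights and time weight (made up)
N = 100
t_s = np.full(N, 0.02)            # sampling periods t_s(k); constant here
u = np.full(N, -0.5)              # a fixed input sequence

x = np.zeros(N + 1)
z = np.zeros(N + 1)
x[0] = 1.0                        # initial state x(0) = x_0
for k in range(N):
    L_k = Q * x[k]**2 + R * u[k]**2 + beta   # Lagrangian L(x(k), u(k), t_s(k))
    z[k + 1] = z[k] + t_s[k] * L_k           # dummy-state update from (1)
    x[k + 1] = x[k] + t_s[k] * u[k]          # Euler-discretized dynamics f_D

# z[N] is the discrete approximation of the integral term in the objective;
# the full objective for this robot would be H(x(N)) + z(N).
```

This is why embedding the running cost in an extra state lets a purely final-state NLP objective, $H(\mathbf{x}(N)) + \mathbf{z}(N)$, represent the continuous time-energy criterion.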

## 2. Distributed optimization

In this section, the concept of distributed optimization is explored. This area tackles optimization problems of a distributed nature. Consider the following optimization problem:

$$\begin{aligned} \min\_{\{u^i\}\_{\forall i}} & \sum\_{i=1}^{L} J^i(u^i) \\ \text{s.t.} & \quad u^i \in \Omega^i, \text{ for } i = 1, \dots, L \end{aligned} \tag{2}$$

with $J^i(u^i)$ as the objective function and $\Omega^i$ the set of constraints. We can easily separate the problem into its corresponding subproblems; the $i$th subproblem is simply

$$\begin{aligned} \min\_{u^i} J^i(u^i) \\ \text{s.t.} \quad u^i \in \Omega^i \end{aligned} \tag{3}$$
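The separability of (2) into the subproblems (3) can be sketched with a toy instance (hypothetical quadratic objectives $J^i(u^i) = (u^i - c\_i)^2$ and box constraints $\Omega^i = [0, 1]$, chosen purely for illustration):

```python
import numpy as np

# With no coupling between the u^i, solving each subproblem (3) independently
# recovers the solution of the global problem (2). Toy data, for illustration.
c = np.array([-0.5, 0.3, 2.0])    # per-robot targets c_i (made up)

def solve_subproblem(c_i):
    # argmin of (u - c_i)^2 over [0, 1] is the projection of c_i onto [0, 1]
    return min(max(c_i, 0.0), 1.0)

u_star = np.array([solve_subproblem(c_i) for c_i in c])
# Stacking the independent minimizers solves the global problem exactly.
```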

Observing the global problem in (2), we can see that it is simply the combination of all the subproblems; solving each subproblem (3) individually yields the solution of the whole global problem. Now, however, consider the following problem:


$$\begin{aligned} \min\_{\{u^i\}} & \sum\_{i=1}^L J^i(u^i) \\ \text{s.t.} & g^i(u^1, u^2, \dots, u^L) \le 0, \forall i \end{aligned} \tag{4}$$


Clearly, the above problem cannot be trivially separated into subproblems due to the constraint $g^i(u^1, u^2, \dots, u^L) \le 0$. This is called a complicating constraint or a coupling constraint. In the next subsection, we discuss an optimization method that will help us solve this kind of problem.

Decomposition in mathematics is the concept of breaking a mathematical problem into smaller subproblems that can be solved independently without violating the original problem. The primary works [23, 24] discuss multiple aspects of optimization in general while also exploring specific classes; they are excellent resources for further reading. Surveying the applications of distributed optimization conveys the impression that, however different, they are mostly very similar theoretically; terms such as networked, distributed, decentralized, and cooperative have come to describe largely similar problems. Other works related to this area and to multi-agent systems can be found in [25–29].

#### 2.1 Subgradient method

Before going further, we discuss a family of methods used in solving distributed optimization problems, which will help us solve the problem of this paper. These are called subgradient methods [30]. They are similar to the popular optimization algorithms using gradient descent; however, they are extended to avoid the need for function differentiation. The works [31–33] also explore the method from the perspective of multi-agent systems.

Consider the typical problem:

$$\min\_{u} J(u) \tag{5}$$

This typical problem can be solved using any gradient descent method. At iteration m of an algorithm, a solver, or an optimizer, can be constructed as

$$u\_{(m+1)} = u\_{(m)} - \alpha\_{(m)} d\_{(m)} \tag{6}$$

with $\alpha\_{(m)}$ as a predefined step size. For a standard gradient method, the vector $d\_{(m)}$ contains the gradient information of the problem. The simplest definition is to have

$$d\_{(m)} = \nabla J(u\_{(m)}) \tag{7}$$

However, for the subgradient method [33], we will have a definition of

$$d\_{(m)} = p\_{(m)} \tag{8}$$

with $p\_{(m)}$, called a subgradient of $J(u)$, being any vector that satisfies the following:

$$J(x) - J\left(u\_{(m)}\right) \ge p\_{(m)}^T \left(x - u\_{(m)}\right), \ \forall x \tag{9}$$

The subgradient method is a simple first-order algorithm for minimizing a possibly nondifferentiable function. The above definition removes the requirement of a differentiable objective function: it asks only for some vector that moves the optimization algorithm toward a better value in a first-order-optimality sense. Of course, when a gradient $\nabla J\left(u\_{(m)}\right)$ exists, we can take the subgradient to be the gradient. As a first-order method, it may perform worse than second-order approaches; however, its advantage is that it does not require differentiation and, perhaps more importantly, it gives us the flexibility to solve problems in a distributed manner, as will be seen later.
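As a minimal sketch of updates (6), (8), and (9), consider minimizing the nondifferentiable toy function $J(u) = |u - 3|$ (an example chosen here for illustration, not from the paper); $\operatorname{sign}(u - 3)$ is a valid subgradient everywhere:

```python
import numpy as np

# Subgradient descent on J(u) = |u - 3|, which is not differentiable at u = 3.
# sign(u - 3) satisfies the subgradient inequality (9) at every point.
u = 0.0
for m in range(1, 2001):
    p = np.sign(u - 3.0)       # a subgradient p_(m) of J at u_(m)
    alpha = 1.0 / m            # diminishing step size alpha_(m)
    u = u - alpha * p          # update (6) with d_(m) = p_(m)

# u ends up close to the minimizer u* = 3 despite J being nondifferentiable there.
```

A diminishing step size is used here because, unlike gradient descent, the subgradient update is not a descent direction at every iterate.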

Now observe the following constrained optimization problem:

$$\min\_{u} J(u) \tag{10}$$

$$\text{s.t. } g(u) \le 0$$

Let $\mathbf{g}(u)$ be a vector of $M$ constraints. Then we can define the dual problem. Let us define the dual function of $\lambda$ and $u$ as

$$q(\lambda, u) = J(u) + \lambda^T \mathbf{g}(u) \tag{11}$$

The vector λ of size M corresponds to the multipliers associated with each constraint. The dual problem relaxes the constraints of the original primal problem in (10) and solves for λ to maximize the dual function:

$$\max\_{\lambda} q(\lambda, u) \tag{12}$$
 
$$\text{s.t. } \lambda \ge 0$$

The dual optimization problem is thus a pair of two optimization problems: a maximization in $\lambda$ as in (12) and a minimization in $u$. The pair resembles a maximization-minimization problem: one can visualize the solution as attacking the effect of constraint violation while concurrently solving the original minimization problem.

Now, an algorithm for solving the dual problem utilizing the subgradient method is discussed. Let us define

$$u(\lambda) = \arg\min\_{u} \left\{ J(u) + \lambda^T \mathbf{g}(u) \right\} \tag{13}$$

The above definition identifies the minimizer attained at any value of $\lambda$. So, with this definition, at iteration $m$, and denoting $u\_{(m)} = u\left(\lambda\_{(m)}\right)$, we can safely write

$$q\left(\lambda, u\_{(m)}\right) = J\left(u\_{(m)}\right) + \lambda^T \mathbf{g}\left(u\_{(m)}\right) = \min\_{u} \left\{ J(u) + \lambda^T \mathbf{g}(u) \right\} \tag{14}$$

Now, it is obvious from (14) that at iteration $m$, a subgradient of the dual function in (14), as a function of $\lambda$, can be computed as $p\_{(m)} = \mathbf{g}\left(u\_{(m)}\right)$. At iteration $m$ of the algorithm, an update for the multipliers is constructed as

$$\lambda\_{(m+1)} = P\_{\lambda \ge 0} \left\{ \lambda\_{(m)} + \alpha\_{(m)} \mathbf{g}\left(u\_{(m)}\right) \right\} \tag{15}$$

The projection operator $P\_{\lambda \ge 0}\{\cdot\}$ ensures that the value of the update $\lambda\_{(m)} + \alpha\_{(m)} p\_{(m)}$ is positive or enforced to zero. Also, observe the ascent update with the "+" sign rather than a descent update, as this is a maximization. We can assume an initial $\lambda\_{(0)} = 0$ or any other positive value. An optimal solution to the original problem in (10) is attained as $m \to \infty$, with the optimal solution value $u\_{(m)}$.
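Steps (13)–(15) can be sketched on a toy problem (chosen for illustration, not from the paper): $\min\_u u^2$ subject to $g(u) = 1 - u \le 0$, whose optimum is $u^\* = 1$. For this instance the inner minimization (13) has the closed form $u(\lambda) = \lambda/2$:

```python
# Dual subgradient method on: min u^2  s.t.  g(u) = 1 - u <= 0  (optimum u* = 1).
# Inner problem (13): argmin_u { u^2 + lam * (1 - u) } = lam / 2.
lam = 0.0                             # initial multiplier lambda_(0) = 0
alpha = 0.05                          # constant step size (adequate for this smooth toy)
for m in range(1000):
    u = lam / 2.0                     # u(lambda), Eq. (13)
    g = 1.0 - u                       # subgradient of the dual: p_(m) = g(u_(m))
    lam = max(0.0, lam + alpha * g)   # projected ascent update (15)

# lam converges to the optimal multiplier 2, and u to the primal optimum 1.
```

The `max(0.0, ...)` plays the role of the projection $P\_{\lambda \ge 0}\{\cdot\}$, and the "+" sign gives the ascent direction of the maximization in (12).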

