## 3. Sequential distributed consensus-based ADMM approach

As mentioned in Section 2, the optimization problem (1) will be solved by the sequential approach presented in this section, which separates (1) into the following subproblems:

$$\min \sum\_{i=1}^{M} f\_i(\mathbf{x}\_i(t)) \tag{25}$$


$$\text{s.t.} \sum\_{i=1}^{M} \mu\_i \mathbf{x}\_i(t) = \xi(t) \tag{26}$$

$$x\_i(t) \in \Omega\_i(t) \tag{27}$$

and solves them starting from t = 1 until t = N. Thus, (25) is well-defined and the time index t can be dropped for conciseness. To solve (25), existing methods can be utilized, e.g., gradient-based methods, the dual decomposition method, etc. Nevertheless, an approach called sequential distributed consensus-based ADMM (SDC-ADMM), which combines the advantages of the aforementioned methods [24] and avoids the drawbacks of centralized approaches, will be proposed in this section.
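The sequential structure above can be sketched as follows. Here `solve_time_slot` is a placeholder for whichever single-period solver is used (e.g., the SDC-ADMM iteration developed below); all names are illustrative, not from the chapter.

```python
# Sketch of the sequential decomposition: the N-period problem is split
# into single-period subproblems of the form (25)-(27), solved in time
# order.  `solve_time_slot` stands in for any single-period solver
# (e.g., the SDC-ADMM iteration); all names are illustrative.

def solve_horizon(solve_time_slot, xi, omega):
    """Solve the per-period subproblems for t = 1, ..., N in order."""
    schedule = []
    for xi_t, omega_t in zip(xi, omega):   # one subproblem per period t
        schedule.append(solve_time_slot(xi_t, omega_t))
    return schedule
```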

#### 3.1 Multi-agent system description for smart grid

The SDC-ADMM approach is based on MAS and consensus theory so that it can be run in parallel in all generation and consumption units. Hence, the MAS description for the smart grid needs to be introduced first. More specifically, each agent is assigned to a generation or demand unit, and the communication among agents is represented by an undirected graph ℊ. Each node represents a unit in the grid whose variable is xi or ξ, and each edge represents the communication between two nodes. For each node i, denote ℵi its neighbor set and |ℵi| the cardinality of ℵi.
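The communication graph can be stored as adjacency lists, so that each agent holds only its own neighbor set ℵi. A minimal sketch, with all names illustrative:

```python
# Minimal sketch of the MAS communication graph: an undirected graph
# stored as adjacency lists, so each agent i can read its neighbor
# set N_i and its cardinality |N_i|.  Names are illustrative.

def build_neighbor_sets(num_agents, edges):
    """edges: iterable of undirected pairs (i, j)."""
    neighbors = {i: set() for i in range(num_agents)}
    for i, j in edges:
        neighbors[i].add(j)   # undirected: store both directions
        neighbors[j].add(i)
    return neighbors

# Example: a line graph 0 - 1 - 2, where agent 1 has two neighbors
nbrs = build_neighbor_sets(3, [(0, 1), (1, 2)])
```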

#### 3.2 Reformulation of smart grid optimization problems

To develop the SDC-ADMM approach, (25) is first reformulated into the 2-block form of ADMM. The following closed convex sets are defined corresponding to the global and local constraints:

$$\Pi\_1 = \left\{ P \in \mathbb{R}^M \, : \, \sum\_{i=1}^M \mu\_i P\_i = \xi \right\}, \\ \Pi\_2 = \left\{ X \in \mathbb{R}^M \, : \, X\_i \in \Omega\_i, \; i = 1, \dots, M \right\}.$$

together with their indicator functions [24]

$$I\_1(P) = \begin{cases} 0 : P \in \Pi\_1 \\ \infty : P \notin \Pi\_1 \end{cases} \quad I\_2(X) = \begin{cases} 0 : X \in \Pi\_2 \\ \infty : X \notin \Pi\_2 \end{cases}$$

Hence, (25) can be rewritten as follows:

$$\min \sum\_{i=1}^{M} f\_i(P\_i) + I\_1(P) + I\_2(X) \tag{28}$$

$$\text{s.t.} \quad P - X = 0 \tag{29}$$

## A Distributed Optimization Method for Optimal Energy Management in Smart Grid DOI: http://dx.doi.org/10.5772/intechopen.84136

Because the indicator function of a closed, non-empty convex set is proper, closed, and convex [25], the cost functions in (28) are also proper, closed, and convex with respect to P and X. Hence, (28) and (29) are exactly in the 2-block ADMM form [24]. Next, an augmented Lagrangian associated with (28) is defined as follows:

$$L\_{\rho}(P, X, u) \triangleq \sum\_{i=1}^{M} f\_i(P\_i) + I\_1(P) + I\_2(X) + \frac{\rho}{2} \| P - X + u \|\_{2}^{2}$$

where ρ > 0 is a scalar penalty parameter and u is called the scaled dual variable or scaled Lagrange multiplier [24]. Subsequently, the optimization problem (28) and (29) can be iteratively solved, where at each iteration k = 1, 2, …, the variables P, X, u are updated by [24]:

$$P^{k+1} = \underset{P}{\text{argmin}} \, L\_{\rho} \left( P, X^k, u^k \right) \tag{30}$$

$$X^{k+1} = \underset{X}{\operatorname{argmin}} \, L\_{\rho} \left( P^{k+1}, X, u^{k} \right) \tag{31}$$

$$u^{k+1} = u^k + P^{k+1} - X^{k+1} \tag{32}$$
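Updates (30)–(32) have the standard scaled-form ADMM structure and can be sketched as below. The two argmin sub-steps are passed in as placeholders (`p_update` and `x_update` are illustrative names, not the chapter's exact routines); in the special case fi = 0 they reduce to projections onto Π₁ and Π₂.

```python
import numpy as np

# Scaled-form ADMM loop mirroring updates (30)-(32).  `p_update` and
# `x_update` stand in for the two argmin sub-steps; both names are
# illustrative placeholders, not the chapter's exact routines.

def admm(p_update, x_update, M, max_iter=100):
    P, X, u = np.zeros(M), np.zeros(M), np.zeros(M)
    for _ in range(max_iter):
        P = p_update(X - u)   # (30): argmin_P L_rho(P, X^k, u^k)
        X = x_update(P + u)   # (31): argmin_X L_rho(P^{k+1}, X, u^k)
        u = u + P - X         # (32): scaled dual update
    return P, X, u
```

For instance, with fi = 0, taking `p_update` as the projection onto the balance hyperplane and `x_update` as the projection onto interval-shaped local sets yields a dispatch satisfying both constraint blocks.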

This algorithm is stopped when the criteria $\|r^k\|\_2 \leq \varepsilon^{\text{pri}}$ and $\|s^k\|\_2 \leq \varepsilon^{\text{dual}}$ are both satisfied, where $r^k \triangleq P^k - X^k$ and $s^k \triangleq X^k - X^{k-1}$ are the primal and dual residuals at iteration k; $\varepsilon^{\text{pri}} > 0$ and $\varepsilon^{\text{dual}} > 0$ are called primal and dual feasibility tolerances that can be chosen by

$$\varepsilon^{\text{pri}} = \sqrt{M} \varepsilon^{\text{abs}} + \varepsilon^{\text{rel}} \max\left\{ \|P^k\|\_2, \|X^k\|\_2 \right\}, \\ \varepsilon^{\text{dual}} = \sqrt{M} \varepsilon^{\text{abs}} + \varepsilon^{\text{rel}} \|\rho u^k\|\_2.$$

with $\varepsilon^{\text{abs}} > 0$ and $\varepsilon^{\text{rel}} > 0$ being absolute and relative tolerances, suggested to be $10^{-3}$ or $10^{-4}$ [24]. In [26], these tolerances and stopping criteria are shown to be computable and verifiable in a distributed manner.
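The residual test translates directly into code. A sketch of a centralized check (the distributed verification of [26] is not reproduced here):

```python
import numpy as np

# Stopping test from the primal/dual residuals r^k = P^k - X^k and
# s^k = X^k - X^{k-1}, with tolerances chosen as in the text.

def converged(P, X, X_prev, u, M, rho, eps_abs=1e-3, eps_rel=1e-3):
    r = P - X                      # primal residual r^k
    s = X - X_prev                 # dual residual s^k
    eps_pri = np.sqrt(M) * eps_abs + eps_rel * max(np.linalg.norm(P),
                                                   np.linalg.norm(X))
    eps_dual = np.sqrt(M) * eps_abs + eps_rel * np.linalg.norm(rho * u)
    return np.linalg.norm(r) <= eps_pri and np.linalg.norm(s) <= eps_dual
```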

In the following, the variables P, X, u will be updated in a distributed manner using MAS and consensus theory. Observe from (32) that u is updated in a decentralized fashion once P and X have been updated. Therefore, only the updates of P and X are introduced below.
#### 3.3 P-update step

The update of P in (30) is in fact equivalent to solving the following convex optimization problem:

$$\min\_{P} \sum\_{i=1}^{M} f\_i(P\_i) + \frac{\rho}{2} \left\| P - X^k + u^k \right\|\_2^2 \tag{33}$$

$$\text{s.t.} \quad \sum\_{i=1}^{M} \mu\_i P\_i = \xi \tag{34}$$

for which strong duality holds [25]. Let λ be the Lagrange multiplier associated with (34) and $\lambda^{k+1}$ its optimal value at iteration k + 1 of the SDC-ADMM algorithm. Then the Karush-Kuhn-Tucker (KKT) conditions can be used to find $P^{k+1}$ from the following equations:

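The P-update of the SDC-ADMM algorithm minimizes the cost plus the proximal penalty (ρ/2)‖P − Xᵏ + uᵏ‖₂² subject to the balance constraint ∑ μᵢPᵢ = ξ. For an assumed quadratic cost fᵢ(Pᵢ) = aᵢPᵢ² + bᵢPᵢ + cᵢ (a common generation-cost model, but an assumption here, not the chapter's general fᵢ), the KKT conditions give a closed form: stationarity yields Pᵢ in terms of the multiplier λ, and the balance constraint pins down λ. A sketch:

```python
import numpy as np

# P-update sketch under an *assumed* quadratic cost
# f_i(P_i) = a_i*P_i^2 + b_i*P_i + c_i.  Stationarity of the
# penalized objective gives, with v = X^k - u^k,
#   P_i = (rho*v_i - b_i - lam*mu_i) / (2*a_i + rho),
# and the balance constraint sum_i mu_i*P_i = xi determines lam.

def p_update_quadratic(a, b, mu, xi, X, u, rho):
    v = X - u
    d = 2.0 * a + rho
    lam = ((mu * (rho * v - b) / d).sum() - xi) / (mu**2 / d).sum()
    return (rho * v - b - lam * mu) / d
```

Note that each agent only needs λ and its own local data (aᵢ, bᵢ, μᵢ) to compute its component, which is what makes a consensus-based distributed evaluation of this step plausible.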