**1. Introduction**

Many machine learning and deep learning algorithms have been developed for various applications, such as computer vision and natural language processing, and their performance keeps improving; the development of such algorithms has thus become mature. However, research contributions focusing on the real-world implementation of these algorithms remain relatively scarce compared with the development of the algorithms themselves.

To operate deep learning algorithms in real-world applications, real-time computation is an essential consideration. Delay handling is therefore desired, because deep learning computation generally introduces large delays [1].

In the communications and networking research literature, there exists a well-known stochastic optimization framework for utility function maximization while maintaining system stability. Here, stability is modeled with a queue, and the algorithm carries out the optimization computation while stabilizing the queue dynamics. To formalize stability, the queue is mathematically modeled with a Lyapunov drift [2].

Because this algorithm is inspired by Lyapunov control theory, it is named Lyapunov optimization theory [2]. In this chapter, the basic theory, examples, and discussions of Lyapunov optimization theory are presented. Then, the use of Lyapunov optimization theory for real-time computer vision and deep learning platforms is discussed. Furthermore, performance evaluation results with real-world deep learning framework computation (e.g., real-world image super-resolution computation with various models) are presented from various aspects. Finally, emerging applications are introduced.

*Dynamic Decision-Making for Stabilized Deep Learning Software Platforms*
*DOI: http://dx.doi.org/10.5772/intechopen.92971*

**Time-Average Penalty Function Minimization subject to Stability**

As mentioned, Lyapunov optimization theory can be used when a tradeoff between utility maximization (or penalty function minimization) and delays exists. Based on this nature, the drift-plus-penalty (DPP) algorithm [2–4] is designed for maximizing the time-average utility subject to queue stability. The problem is formulated as follows:

$$\min:\; \lim_{t\to\infty}\frac{1}{t}\sum_{\tau=0}^{t-1} P\left(\alpha[\tau]\right), \qquad (3)$$

subject to queue stability:

$$\lim_{t\to\infty}\frac{1}{t}\sum_{\tau=0}^{t-1} Q[\tau] < \infty. \qquad (4)$$

In (3), $P(\alpha[t])$ stands for the penalty function when a control action decision-making is $\alpha[t]$ at $t$; $a(\alpha[t])$ and $b(\alpha[t])$ denote the corresponding arrival and departure processes of the queue $Q[t]$.

Here, the Lyapunov function is defined as

$$L\left(Q[t]\right) \triangleq \frac{1}{2}\,Q[t]^{2},$$

and let $\Delta(\cdot)$ be a conditional quadratic Lyapunov function, which is formulated as

$$\Delta\left(Q[t]\right) \triangleq \mathbb{E}\left[\,L\left(Q[t+1]\right) - L\left(Q[t]\right) \,\middle|\, Q[t]\,\right],$$

which is called the drift on $t$. According to [2], this dynamic policy is designed to achieve queue stability by minimizing an upper bound of our considered penalty function on DPP, which is given by

$$\Delta\left(Q[t]\right) + V\,\mathbb{E}\left[P\left(\alpha[t]\right)\right], \qquad (5)$$

where $V$ is a tradeoff coefficient. The upper bound on the drift of the Lyapunov function at $t$ is derived as follows:

$$L\left(Q[t+1]\right) - L\left(Q[t]\right) = \frac{1}{2}\left(Q[t+1]^{2} - Q[t]^{2}\right) \qquad (6)$$

$$\leq \frac{1}{2}\left(a\left(\alpha[t]\right)^{2} + b\left(\alpha[t]\right)^{2}\right) + Q[t]\left(a\left(\alpha[t]\right) - b\left(\alpha[t]\right)\right). \qquad (7)$$

Therefore, the upper bound of the conditional Lyapunov drift can be derived as follows:

$$\Delta\left(Q[t]\right) \leq C + \mathbb{E}\left[\,Q[t]\left(a\left(\alpha[t]\right) - b\left(\alpha[t]\right)\right) \,\middle|\, Q[t]\,\right], \qquad (8)$$

where $C$ is a constant given by

$$\frac{1}{2}\,\mathbb{E}\left[\,a\left(\alpha[t]\right)^{2} + b\left(\alpha[t]\right)^{2} \,\middle|\, Q[t]\,\right] \leq C, \qquad (9)$$

which supposes that the arrival and departure process rates are upper bounded. Due to the fact that $C$ is a constant, minimizing the upper bound on DPP is as follows:

$$V\,P\left(\alpha[t]\right) + Q[t]\left(a\left(\alpha[t]\right) - b\left(\alpha[t]\right)\right). \qquad (10)$$

**Algorithm 1**. Stabilized Time-Average Penalty Function Minimization

**Initialize:**
1: $t \leftarrow 0$;
2: $Q[t] \leftarrow 0$;
3: Decision Action: $\forall \alpha[t] \in \mathcal{A}$
4: **while** $t \leq T$ **do** // $T$: operation time
5: &nbsp;&nbsp;Observe $Q[t]$;
6: &nbsp;&nbsp;$\alpha^{*}[t] \leftarrow \arg\min_{\alpha[t]\in\mathcal{A}} \left[\, V\,P\left(\alpha[t]\right) + Q[t]\left(a\left(\alpha[t]\right) - b\left(\alpha[t]\right)\right) \right]$;
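As a concrete illustration, the per-slot rule in (10) and the loop of Algorithm 1 can be sketched in Python. The specific model below is a hypothetical choice made only for this sketch, not part of the original formulation: a quadratic penalty $P(\alpha)=\alpha^2$, a departure rate $b(\alpha)=2\alpha$, action-independent uniform random arrivals, and a discretized action set $\mathcal{A}$. Any bounded $P(\cdot)$, $a(\cdot)$, and $b(\cdot)$ could be substituted.

```python
import random


def dpp_decision(Q, actions, V, P, a, b):
    """Per-slot DPP rule: choose the action minimizing the bound in (10),
    i.e., V*P(alpha) + Q*(a(alpha) - b(alpha))."""
    return min(actions, key=lambda alpha: V * P(alpha) + Q * (a(alpha) - b(alpha)))


def run_dpp(T=10_000, V=10.0, seed=0):
    """Sketch of Algorithm 1 under a hypothetical toy model.

    Action alpha in [0, 1] is a service intensity, P(alpha) = alpha**2 is the
    penalty (e.g., an energy cost), arrivals are uniform random, and the
    departure rate grows linearly with alpha. Returns the final queue backlog
    and the time-average penalty.
    """
    rng = random.Random(seed)
    actions = [i / 10 for i in range(11)]     # discretized action set A
    P = lambda alpha: alpha ** 2              # penalty of choosing alpha
    b = lambda alpha: 2.0 * alpha             # departure process under alpha
    Q, total_penalty = 0.0, 0.0
    for t in range(T):
        arrival = rng.uniform(0.0, 1.0)       # a(alpha[t]): action-independent here
        a = lambda alpha: arrival
        alpha = dpp_decision(Q, actions, V, P, a, b)   # step 6 of Algorithm 1
        total_penalty += P(alpha)
        # queue dynamics: Q[t+1] = max(Q[t] - b(alpha[t]), 0) + a(alpha[t])
        Q = max(Q - b(alpha), 0.0) + arrival
    return Q, total_penalty / T
```

Running this sketch with different values of $V$ exhibits the tradeoff the theory predicts: a larger $V$ weights the penalty term more heavily, so the controller tolerates a larger backlog $Q[t]$ (i.e., more delay) before the queue term dominates the decision, while the queue remains stable in the time-average sense of (4).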
