#### **3.1 Lyapunov control over departure processes**

As illustrated in **Figure 2**, stabilized real-time computer vision platforms should be equipped with queues in order to handle bursty traffic. If the queue is busy or near overflow, the departure process should be accelerated; thus, the simplest model should be used to reduce the corresponding computation. On the other hand, if the queue is empty, deep learning accuracy can be improved with a more sophisticated model, because there is enough time to conduct the computation. Thus, multiple models are desired so that one can be selected depending on the queue backlog.

In **Figure 2**, multiple models exist. The simplest model (i.e., the low-resolution model) conducts fast computation but presents low learning accuracy. On the other hand, the most sophisticated model (i.e., the high-resolution model) delivers accurate learning performance but introduces computation delays. Thus, a tradeoff exists between performance and delay.

#### **Figure 2.**

*Lyapunov control over departure processes in real-time computer vision platforms for time-average learning accuracy maximization subject to queue stability.*


*DOI: http://dx.doi.org/10.5772/intechopen.92971*


*Dynamic Decision-Making for Stabilized Deep Learning Software Platforms*


#### **Table 1.**

| Depth (# of hidden layers) | 0 | 4 | 6 | 8 | 11 | 14 | 17 | 20 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PSNR (dB) | 30.400 | 32.560 | 33.010 | 33.229 | 33.379 | 33.435 | 33.495 | 33.523 |
| SSIM | 0.8682 | 0.9100 | 0.9160 | 0.9180 | 0.9200 | 0.9200 | 0.9210 | 0.9220 |
| Processing time (CPU-only) | 0.0020 | 0.3210 | 0.5468 | 0.7725 | 0.9940 | 1.3170 | 1.6220 | 1.9600 |
| Processing time (CPU + GPU) | 0.0010 | 0.0100 | 0.0120 | 0.0152 | 0.0189 | 0.0224 | 0.0262 | 0.0305 |

*Tradeoff between utility and delay obtained from super-resolution performance measurement results (processing times were measured on 512 × 768 images).*


#### **Figure 3.**

*Lyapunov control over arrival processes in real-time computer vision platforms for time-average learning accuracy maximization subject to queue stability.*

The Lyapunov optimization theory-based dynamic model selection decision-making algorithm can then be designed as follows:

$$\alpha^\*[t+1] \leftarrow \arg\max\_{\alpha[t]\in \mathcal{A}} [V \cdot A(\alpha[t]) - Q[t] \cdot (a(\alpha[t]) - b(\alpha[t]))] \tag{23}$$

and this can be reformulated as follows, because the arrival process is not controllable:

$$\alpha^\*[t+1] \leftarrow \arg\max\_{\alpha[t]\in \mathcal{A}} [V \cdot A(\alpha[t]) + Q[t] \cdot b(\alpha[t])] \tag{24}$$

where $A(\alpha[t])$ stands for the learning accuracy when the model selection decision is $\alpha[t]$ at time $t$. Here, $\mathcal{A}$ is the set of all possible deep learning models, and $\alpha^{*}[t+1]$ is the optimal control action decision for the next time slot.
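As a concrete illustration, the decision rule in Eq. (24) can be sketched in a few lines of Python. The model names, accuracy values $A(\cdot)$, and departure rates $b(\cdot)$ below are hypothetical placeholders, not measurements from this chapter:

```python
# Drift-plus-penalty model selection (Eq. (24)): at each slot, pick the
# model that maximizes V * A(model) + Q[t] * b(model).
# Accuracy A and departure rate b per model are hypothetical values.
MODELS = {
    # name: (accuracy A(alpha), departure rate b(alpha) in units/slot)
    "low-res":  (0.80, 50.0),
    "mid-res":  (0.90, 20.0),
    "high-res": (0.95, 5.0),
}

def select_model(queue_backlog: float, V: float) -> str:
    """Return the model maximizing V * A + Q * b for the current backlog."""
    return max(MODELS, key=lambda m: V * MODELS[m][0] + queue_backlog * MODELS[m][1])

# When the queue is nearly empty, accuracy dominates; when it is long,
# the fast (high-departure) model wins.
print(select_model(queue_backlog=0.0, V=100.0))    # → "high-res"
print(select_model(queue_backlog=500.0, V=100.0))  # → "low-res"
```

The tradeoff parameter $V$ plays the same role as in the equations above: a larger $V$ biases the decision toward learning accuracy, while a large backlog $Q[t]$ biases it toward fast departures.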

#### **3.2 Lyapunov control over arrival processes**

The stabilized real-time computer vision platform in Section 3.1 is novel and scalable; however, it is burdensome because multiple deep learning models must be implemented in a single platform.

Thus, a new dynamic control algorithm with a single deep learning model is also needed for resource-limited systems. As illustrated in **Figure 3**, the system under consideration has a single computer vision and deep learning model in its computing platform, with the queue placed in front of the system. The departure process is therefore no longer controllable; instead, the arrival process should be controlled in order to keep the queue dynamics stable. The arriving image/video streams are controlled by adjusting the sample rate. If high-frequency sampling is used, more signals are generated and enqueued, so the arrival process increases. This is beneficial for computer vision performance, because more images/videos can be obtained, especially in surveillance applications. On the other hand, if low-frequency sampling is conducted, the computer vision performance can be degraded, whereas the amount of arriving data decreases, which is beneficial in terms of stability. Eventually, the tradeoff between computer vision performance and delay can be observed. Finally, the Lyapunov optimization theory-based sampling rate selection decision-making algorithm can be designed as follows:


$$\alpha^\*[t+1] \leftarrow \arg\max\_{\alpha[t]\in \mathcal{A}} \left[ V \cdot A(\alpha[t]) - Q[t] \cdot (a(\alpha[t]) - b(\alpha[t])) \right] \tag{25}$$

and this can be reformulated as follows, because the departure process is not controllable:

$$\alpha^\*[t+1] \leftarrow \arg\max\_{\alpha[t]\in\mathcal{A}} \left[ V \cdot A(\alpha[t]) - Q[t] \cdot a(\alpha[t]) \right] \tag{26}$$

where $A(\alpha[t])$ stands for the learning accuracy when the sample rate selection decision is $\alpha[t]$ at time $t$. Here, $\mathcal{A}$ is the set of all possible sample rates, and $\alpha^{*}[t+1]$ is the optimal control action decision for the next time slot.
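The arrival-side rule in Eq. (26) can be sketched the same way, here combined with the standard queue update $Q[t+1] = \max(Q[t] - b[t], 0) + a(\alpha[t])$ to show that the backlog stays bounded. The sample rates, accuracy values, and fixed service rate below are illustrative assumptions, not the chapter's measurements:

```python
# Sample-rate selection (Eq. (26)): maximize V * A(rate) - Q[t] * a(rate),
# where a(rate) is the arrival volume induced by the chosen sample rate.
# All numbers below are illustrative, not from the chapter's experiments.
RATES = {
    # rate name: (accuracy A(alpha), arrival volume a(alpha) per slot)
    "low":  (0.70, 5.0),
    "mid":  (0.85, 15.0),
    "high": (0.95, 40.0),
}
SERVICE = 20.0  # fixed departure b[t] of the single model (uncontrollable)

def select_rate(q, V):
    """Return the sample rate maximizing V * A - Q * a."""
    return max(RATES, key=lambda r: V * RATES[r][0] - q * RATES[r][1])

def simulate(T=200, V=100.0):
    """Run T slots of the queue dynamics under the dynamic rate policy."""
    q, trace = 0.0, []
    for _ in range(T):
        r = select_rate(q, V)
        q = max(q - SERVICE, 0.0) + RATES[r][1]  # queue update
        trace.append(q)
    return trace

trace = simulate()
print(max(trace))  # the backlog remains bounded under the dynamic policy
```

With these numbers the controller picks the high rate while the queue is empty and drops to the low rate as soon as the backlog grows, so the queue settles at a small finite value instead of diverging.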

#### **3.3 Performance evaluation and discussions**

In this section, the performance evaluation results of the proposed algorithm in Section 3.1 are presented. A data-intensive simulation-based evaluation is performed, and the results are presented in **Figure 4**. In addition, **Table 1** shows the performance of super-resolution depending on the number of hidden layers. When the number of hidden layers is at its maximum (i.e., 20 in this research), the PSNR and structural similarity (SSIM, one of the most widely used performance metrics in super-resolution) values are also at their maximum. However, the computation times (for both CPU-only and CPU+GPU configurations) are then the longest.
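To make the utility-delay tradeoff in Table 1 concrete, the sketch below applies the drift-plus-penalty idea to the measured PSNR values and CPU+GPU processing times. Treating the departure rate as the inverse of the processing time, and the values of $V$ and the example backlogs $Q$, are illustrative assumptions rather than the chapter's exact formulation:

```python
# Depth selection using Table 1's measurements: PSNR (dB) as utility A,
# and 1 / processing-time (CPU+GPU column) as a proxy departure rate b.
# V and the example backlogs are illustrative; the rate mapping is an assumption.
DEPTHS = [0, 4, 6, 8, 11, 14, 17, 20]
PSNR   = [30.400, 32.560, 33.010, 33.229, 33.379, 33.435, 33.495, 33.523]
T_GPU  = [0.0010, 0.0100, 0.0120, 0.0152, 0.0189, 0.0224, 0.0262, 0.0305]

def select_depth(q, V):
    """Pick the hidden-layer depth maximizing V * PSNR + q * (1 / time)."""
    scores = [V * p + q * (1.0 / t) for p, t in zip(PSNR, T_GPU)]
    return DEPTHS[scores.index(max(scores))]

print(select_depth(q=0.0,  V=1.0))   # empty queue → deepest model (20)
print(select_depth(q=10.0, V=1.0))   # heavy backlog → shallowest model (0)
```

This reproduces the qualitative behavior described above: an empty queue leaves time for the 20-layer model with the best PSNR/SSIM, while a growing backlog forces the controller down to the fast shallow model.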

As illustrated in **Figure 4**, if the models are static (i.e., *deep* or *shallow*), the curves show that neither model is efficient. The deep model cannot handle overflow situations; thus, the queue diverges. On the other hand, the shallow

*Advances and Applications in Deep Learning*

#### **Figure 4.**

*Performance evaluation: queue backlog (x-axis: time, in unit times; y-axis: queue occupancy, in bits).*


