**5.2 PD implementation using combined DDPG and deep-Q learning**

The CSUF graduate student team<sup>1</sup> proposed a hybrid concept that uses MATLAB's DDPG implementation to design a good MORL-ANN PD and then a deep Q-learning network (DQN) to make further corrections. **Figure 5** illustrates the proposed concept. The DQN uses a single, smaller neural network; it requires much less memory and, because of the agent's smaller size, can feasibly take many small, discrete actions, which makes it more efficient for stabilization purposes. Thus, starting from the initial state prediction provided by the OEP for the HPA operating temperature and IPBO, the DQN agent needs much less time because only a small number of steps is required to reach a desirable stable state. Combining the DQN with the DDPG can shorten the training time and enhance the MORL-ANN PD's performance.
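The two-stage idea can be sketched in miniature: a coarse continuous estimate (standing in for the PD parameter produced by the DDPG stage) is refined by a small agent that takes many small, discrete corrective actions (standing in for the DQN stage). The sketch below is a hypothetical toy, not the team's MATLAB implementation; the target value, step size, and tabular Q-learning stand-in for the DQN's small network are all illustrative assumptions.

```python
import numpy as np

# Toy sketch (hypothetical values): refine a coarse coefficient estimate
# with small discrete corrections learned by tabular Q-learning, a
# stand-in for the paper's DQN stage. Not the authors' implementation.

rng = np.random.default_rng(0)

TARGET = 0.70          # hypothetical "stable" PD coefficient
COARSE = 0.55          # coarse estimate, as if produced by the DDPG stage
STEP = 0.01            # size of each small discrete correction
ACTIONS = (-1, 0, +1)  # decrease, hold, or increase the coefficient

def quantize(x, step=STEP):
    """Map a continuous coefficient onto a discrete state index."""
    return int(round(x / step))

# Small Q-table indexed by discrete state (stands in for a small network).
Q = {}

def refine(start, episodes=300, horizon=60, eps=0.2, alpha=0.5, gamma=0.9):
    """Learn small corrective steps, then roll out greedily from `start`."""
    for _ in range(episodes):
        x = start
        for _ in range(horizon):
            s = quantize(x)
            q = Q.setdefault(s, np.zeros(len(ACTIONS)))
            # Epsilon-greedy choice among the discrete corrections.
            a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(q))
            x2 = x + ACTIONS[a] * STEP
            r = -abs(x2 - TARGET)  # reward: closeness to the stable point
            q2 = Q.setdefault(quantize(x2), np.zeros(len(ACTIONS)))
            q[a] += alpha * (r + gamma * q2.max() - q[a])
            x = x2
    # Greedy rollout from the coarse estimate after training.
    x = start
    for _ in range(horizon):
        x += ACTIONS[int(np.argmax(Q.get(quantize(x), np.zeros(len(ACTIONS)))))] * STEP
    return x

refined = refine(COARSE)
```

Because each correction is a small discrete step from an already-reasonable starting point, only a handful of steps separate the coarse estimate from the stable state, which is the intuition behind the shorter stabilization time claimed for the DQN stage.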
