

**Chapter 7**

**Stochastic Control for Jump Diffusions** \*

Jingtao Shi

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/45719

**1. Introduction**

In this chapter, we discuss the stochastic optimal control problem for jump diffusions. That is, the controlled stochastic system is driven by both a Brownian motion and a Poisson random measure, and the controller wants to minimize/maximize some cost functional, subject to this state equation (the stochastic control system), over the set of admissible controls. Such stochastic optimal control problems arise naturally when sudden and rare breaks take place, for example in practical stock price markets. An admissible control is called optimal if it achieves the infimum/supremum of the cost functional; the corresponding state variable and cost functional are then called the optimal trajectory and the value function, respectively.


It is well-known that Pontryagin's maximum principle (MP for short) and Bellman's dynamic programming principle (DPP for short) are the two principal and most commonly used approaches for solving stochastic optimal control problems. The maximum principle states a necessary condition for optimality, called the maximum condition, which is expressed through a Hamiltonian function. The Hamiltonian function is defined in terms of the system state variable and some adjoint variables. The equation satisfied by the adjoint variables is called the adjoint equation, which consists of one or two backward stochastic differential equations (BSDEs for short) of the type introduced in [13]. The system formed by the adjoint equation, the original state equation, and the maximum condition is referred to as a generalized Hamiltonian system. On the other hand, the basic idea of the dynamic programming principle is to consider a family of stochastic optimal control problems with different initial times and states and to establish relationships among these problems via the so-called Hamilton-Jacobi-Bellman (HJB for short) equation, which is a nonlinear second-order partial differential equation (PDE for short); for jump diffusions it is in fact a partial integro-differential equation. If the HJB equation is solvable, an optimal control can be obtained by taking the maximizer/minimizer of the generalized Hamiltonian function appearing in the HJB equation. To a great extent, these two approaches have been developed separately and independently in the research on stochastic optimal control problems.
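To make the objects named in this paragraph concrete, the following display is a sketch in standard notation for controlled jump diffusions. The coefficients $b,\sigma,\gamma$, the Lévy measure $\nu$, the running and terminal costs $f,h$, and the control set $U$ are illustrative assumptions supplied here, not the chapter's own notation, and technical conditions (integrability, smoothness of the value function) are suppressed.

```latex
% Controlled jump diffusion (illustrative notation, coefficients assumed):
%   dx(t) = b(t,x,u)dt + \sigma(t,x,u)dW(t)
%           + \int_E \gamma(t,x(t^-),u,z)\,\tilde{N}(dt,dz),
% with cost J(u) = E[\int_0^T f(t,x,u)\,dt + h(x(T))] to be minimized.

% Hamiltonian in the state x, control u, and adjoint variables (p,q,r(.)):
\[
H(t,x,u,p,q,r(\cdot)) = \langle p, b(t,x,u)\rangle
   + \mathrm{tr}\big(q^{\top}\sigma(t,x,u)\big)
   + \int_E \langle r(z), \gamma(t,x,u,z)\rangle\,\nu(dz) + f(t,x,u).
\]

% Adjoint equation: a BSDE with jumps for (p,q,r), solved backward from T:
\[
dp(t) = -H_x\big(t,x^*(t),u^*(t),p(t),q(t),r(t,\cdot)\big)\,dt
        + q(t)\,dW(t) + \int_E r(t,z)\,\tilde{N}(dt,dz),
\qquad p(T) = h_x\big(x^*(T)\big).
\]

% Maximum condition (a minimum condition for the minimization problem,
% valid under suitable convexity assumptions) along the optimal pair:
\[
H\big(t,x^*(t),u^*(t),p(t),q(t),r(t,\cdot)\big)
   = \min_{u\in U} H\big(t,x^*(t),u,p(t),q(t),r(t,\cdot)\big).
\]

% HJB equation for the value function V(t,x); the nonlocal integral term
% is what makes it a partial integro-differential equation:
\[
\partial_t V + \inf_{u\in U}\Big\{ \langle V_x, b(t,x,u)\rangle
   + \tfrac{1}{2}\,\mathrm{tr}\big(\sigma\sigma^{\top}(t,x,u)\,V_{xx}\big)
   + \int_E \big[V(t,x+\gamma(t,x,u,z)) - V(t,x)
        - \langle V_x, \gamma(t,x,u,z)\rangle\big]\,\nu(dz)
   + f(t,x,u)\Big\} = 0,
\qquad V(T,x) = h(x).
\]
```

In this sketch, the generalized Hamiltonian appearing in the HJB equation is the expression inside $\inf_{u\in U}\{\cdot\}$; a measurable selection of its minimizer, evaluated along the optimal trajectory, yields a candidate optimal feedback control.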

\*The main content of this chapter is from the following published article: Shi, J.T., & Wu, Z. (2011). Relationship between MP and DPP for the stochastic optimal control problem of jump diffusions. *Applied Mathematics and Optimization*, 63, 151–189.

> ©2012 Shi, licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.







*Stochastic Modeling and Control*, Chapter 7
