*2.2.1. Making the decision*

Assuming that for the two biggest beliefs we have $B_{i,j}(i) \ge B_{i,j}(j)$, equation (23) can be written as

$$\begin{aligned} V_{i,j}^{*}(s) = {} & \left(B_{i,j}(i) - \alpha V_{i,j}^{*}(s-1)\right) \Pr\left\{B_{i,j}(i) \ge d_{i,j}^{*}(s)\right\} + \\ & \left(B_{i,j}(j) - \alpha V_{i,j}^{*}(s-1)\right) \Pr\left\{B_{i,j}(j) \ge d_{i,j}^{*}(s)\right\} + \alpha V_{i,j}^{*}(s-1) \end{aligned} \tag{24}$$
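Read as a one-stage computation, equation (24) is straightforward to evaluate once the two threshold probabilities are known. A minimal sketch (the function name and the sample numbers are illustrative assumptions; the probabilities are passed in rather than derived):

```python
def value_stage(B_i, B_j, V_prev, alpha, p_i_geq, p_j_geq):
    """Evaluate equation (24) for one stage.

    B_i, B_j : the two biggest current beliefs, with B_{i,j}(i) >= B_{i,j}(j)
    V_prev   : V*_{i,j}(s - 1), the optimal value of the previous stage
    alpha    : the discount factor
    p_i_geq  : Pr{B_{i,j}(i) >= d*_{i,j}(s)}
    p_j_geq  : Pr{B_{i,j}(j) >= d*_{i,j}(s)}
    """
    return ((B_i - alpha * V_prev) * p_i_geq
            + (B_j - alpha * V_prev) * p_j_geq
            + alpha * V_prev)

# When both probabilities are zero (the d*(s) = 1 situation of case 1 below),
# only the discounted continuation value alpha * V*(s-1) remains.
print(value_stage(0.3, 0.2, 0.5, 0.9, 0.0, 0.0))  # 0.45
```

This makes explicit that the choice of $d_{i,j}^{*}(s)$ affects the value only through the two probability terms.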

For the decision-making problem at hand, three cases may arise.

$$\mathbf{1.}\quad B_{i,j}(i) \le \alpha V_{i,j}^{*}(s-1):$$

In this case, both $(B_{i,j}(i) - \alpha V_{i,j}^{*}(s-1))$ and $(B_{i,j}(j) - \alpha V_{i,j}^{*}(s-1))$ are negative. Since we are maximizing $V_{i,j}(s, d_{i,j}(s))$, the two probability terms in equation (24) must be minimized. This happens only when we let $d_{i,j}^{*}(s) = 1$, which makes both probability terms equal to zero. Now, since $B_{i,j}(i) < d_{i,j}^{*}(s) = 1$, we continue to the next stage.

$$\mathbf{2.}\quad B_{i,j}(j) > \alpha V_{i,j}^{*}(s-1):$$

In this case, both $(B_{i,j}(i) - \alpha V_{i,j}^{*}(s-1))$ and $(B_{i,j}(j) - \alpha V_{i,j}^{*}(s-1))$ are positive, and to maximize $V_{i,j}(s, d_{i,j}(s))$ the two probability terms in equation (24) must be maximized. This happens only when we let $d_{i,j}^{*}(s) = 0.5$. Since $B_{i,j}(i) > d_{i,j}^{*}(s) = 0.5$, we select population $i$ as the optimal subspace.

$$\mathbf{3.}\quad B_{i,j}(j) \le \alpha V_{i,j}^{*}(s-1) \le B_{i,j}(i):$$

In this case, one of the probability terms in equation (24) has a positive coefficient and the other a negative one. Therefore, in order to maximize $V_{i,j}(s, d_{i,j}(s))$, we take the derivative as follows.

Substituting equations (20) and (21) into equation (24), we have

$$\begin{aligned} V_{i,j}\left(s, d_{i,j}(s)\right) = {} & \left(B_{i,j}(i) - \alpha V_{i,j}^{*}(s-1)\right) F\!\left(h\left(1 - d_{i,j}(s)\right)\right) + \\ & \left(B_{i,j}(j) - \alpha V_{i,j}^{*}(s-1)\right)\left(1 - F\!\left(h\left(d_{i,j}(s)\right)\right)\right) + \alpha V_{i,j}^{*}(s-1) \end{aligned} \tag{25}$$

Using Dynamic Programming Based on Bayesian Inference in Selection Problems, http://dx.doi.org/10.5772/57423
Thus the following is obtained:

$$\begin{aligned} V_{i,j}^{*}(s) = {} & \left(B_{i,j}(i) - \alpha V_{i,j}^{*}(s-1)\right) \Pr\left\{\frac{\bar{p}_{j,k}}{\bar{p}_{i,k}} \le h\left(1 - d_{i,j}^{*}(s)\right)\right\} + \\ & \left(B_{i,j}(j) - \alpha V_{i,j}^{*}(s-1)\right) \Pr\left\{\frac{\bar{p}_{j,k}}{\bar{p}_{i,k}} \ge h\left(d_{i,j}^{*}(s)\right)\right\} + \alpha V_{i,j}^{*}(s-1) \end{aligned} \tag{26}$$

For determining $\Pr\left\{\frac{\bar{p}_{j,k}}{\bar{p}_{i,k}} \le h\left(1 - d_{i,j}^{*}(s)\right)\right\}$, first, using an approximation, we assume that $\bar{p}_{i,k}$ is a constant number equal to its mean; then we have:

$$\begin{aligned} & \Pr\left\{\frac{\bar{p}_{j,k}}{\bar{p}_{i,k}} \le h\left(1 - d_{i,j}^{*}(s)\right)\right\} = \Pr\left\{\bar{p}_{j,k} \le \bar{p}_{i,k}\, h\left(1 - d_{i,j}^{*}(s)\right)\right\} = \int_{0}^{\bar{p}_{i,k}\, h\left(1 - d_{i,j}^{*}(s)\right)} f_{1}\left(p_{j,k}\right) dp_{j,k} = \\ & F_{1}\left(\bar{p}_{i,k}\, h\left(1 - d_{i,j}^{*}(s)\right)\right) \\ & \frac{\partial F_{1}\left(\bar{p}_{i,k}\, h\left(1 - d_{i,j}^{*}(s)\right)\right)}{\partial d_{i,j}^{*}(s)} = \frac{\mathrm{B}\left(\alpha_{i,k-1}, \beta_{i,k-1}\right)}{\left(1 - d_{i,j}^{*}(s)\right)^{2} \mathrm{B}\left(\alpha_{j,k-1}, \beta_{j,k-1}\right)}\, f_{1}\left(\bar{p}_{i,k}\, h\left(1 - d_{i,j}^{*}(s)\right)\right) \end{aligned} \tag{27}$$

Also, $\Pr\left\{\frac{\bar{p}_{j,k}}{\bar{p}_{i,k}} \ge h\left(d_{i,j}^{*}(s)\right)\right\}$ is obtained as follows.

$$\begin{aligned} & \Pr\left\{\frac{\bar{p}_{j,k}}{\bar{p}_{i,k}} \ge h\left(d_{i,j}^{*}(s)\right)\right\} = \Pr\left\{\bar{p}_{j,k} \ge \bar{p}_{i,k}\, h\left(d_{i,j}^{*}(s)\right)\right\} = \int_{\bar{p}_{i,k}\, h\left(d_{i,j}^{*}(s)\right)}^{1} f_{1}\left(p_{j,k}\right) dp_{j,k} = \\ & 1 - F_{1}\left(\bar{p}_{i,k}\, h\left(d_{i,j}^{*}(s)\right)\right) \\ & \frac{\partial\left\{1 - F_{1}\left(\bar{p}_{i,k}\, h\left(d_{i,j}^{*}(s)\right)\right)\right\}}{\partial d_{i,j}^{*}(s)} = \frac{\mathrm{B}\left(\alpha_{i,k-1}, \beta_{i,k-1}\right)}{\left(d_{i,j}^{*}(s)\right)^{2} \mathrm{B}\left(\alpha_{j,k-1}, \beta_{j,k-1}\right)}\, f_{1}\left(\bar{p}_{i,k}\, h\left(d_{i,j}^{*}(s)\right)\right) \end{aligned} \tag{28}$$

Now the following can be obtained:

Dynamic Programming and Bayesian Inference, Concepts and Applications
$$\begin{aligned} & \frac{\partial V_{i,j}\left(s, d_{i,j}(s)\right)}{\partial d_{i,j}(s)} = 0 \Rightarrow \\ & \left(B_{i,j}(i) - \alpha V_{i,j}^{*}(s-1)\right) \frac{\mathrm{B}\left(\alpha_{i,k-1}, \beta_{i,k-1}\right)}{\left(1 - d_{i,j}^{*}(s)\right)^{2} \mathrm{B}\left(\alpha_{j,k-1}, \beta_{j,k-1}\right)}\, f_{1}\left(\bar{p}_{i,k}\, h\left(1 - d_{i,j}^{*}(s)\right)\right) = \\ & -\left(B_{i,j}(j) - \alpha V_{i,j}^{*}(s-1)\right) \frac{\mathrm{B}\left(\alpha_{i,k-1}, \beta_{i,k-1}\right)}{\left(d_{i,j}^{*}(s)\right)^{2} \mathrm{B}\left(\alpha_{j,k-1}, \beta_{j,k-1}\right)}\, f_{1}\left(\bar{p}_{i,k}\, h\left(d_{i,j}^{*}(s)\right)\right) \Rightarrow \\ & \frac{B_{i,j}(j) - \alpha V_{i,j}^{*}(s-1)}{-\left(B_{i,j}(i) - \alpha V_{i,j}^{*}(s-1)\right)} = \left(\frac{1 - d_{i,j}^{*}(s)}{d_{i,j}^{*}(s)}\right)^{2\left(\alpha_{j,k-1} + \beta_{j,k-1}\right)} \Rightarrow \\ & d_{i,j}^{1}(s) = \frac{1}{1 + \left(\dfrac{B_{i,j}(j) - \alpha V_{i,j}^{*}(s-1)}{-\left(B_{i,j}(i) - \alpha V_{i,j}^{*}(s-1)\right)}\right)^{\frac{1}{2\left(\alpha_{j,k-1} + \beta_{j,k-1}\right)}}} \end{aligned} \tag{29}$$

Now the approximate value of $d_{i,j}(s)$, say $d_{i,j}^{1}(s)$, is determined.

Second, using another approximation, we assume that $\bar{p}_{j,k}$ is a constant number equal to its mean; thus, with similar reasoning, the following is obtained:

$$d_{i,j}^{2}(s) = \frac{1}{1 + \left(\dfrac{B_{i,j}(j) - \alpha V_{i,j}^{*}(s-1)}{-\left(B_{i,j}(i) - \alpha V_{i,j}^{*}(s-1)\right)}\right)^{\frac{1}{2\left(\alpha_{i,k-1} + \beta_{i,k-1}\right)}}} \tag{30}$$

Therefore, the approximate optimal value of $d_{i,j}^{*}(s)$ can be determined from the following equation:

$$d_{i,j}^{*}(s) = \max\left\{d_{i,j}^{1}(s),\, d_{i,j}^{2}(s)\right\} \tag{31}$$
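As a numerical sanity check, equations (29)–(31) can be sketched as follows. The function names, the sample belief values, and the particular beta parameters $\alpha_{\cdot,k-1}, \beta_{\cdot,k-1}$ are illustrative assumptions, not part of the chapter; the sketch is only valid under the case-3 condition $B_{i,j}(j) \le \alpha V_{i,j}^{*}(s-1) \le B_{i,j}(i)$, which keeps the ratio positive.

```python
def d_component(B_i, B_j, alpha_V, a, b):
    """One-sided approximation of the threshold, cf. equations (29)-(30).

    alpha_V is alpha * V*_{i,j}(s-1); a, b are the beta parameters
    (alpha_{.,k-1}, beta_{.,k-1}) of whichever mean is held fixed.
    Requires the case-3 ordering B_j <= alpha_V <= B_i.
    """
    ratio = (B_j - alpha_V) / (-(B_i - alpha_V))
    return 1.0 / (1.0 + ratio ** (1.0 / (2.0 * (a + b))))

def optimal_threshold(B_i, B_j, alpha_V, a_i, b_i, a_j, b_j):
    """Equation (31): d*_{i,j}(s) = max{d1(s), d2(s)}."""
    d1 = d_component(B_i, B_j, alpha_V, a_j, b_j)  # p_bar_{i,k} held at its mean
    d2 = d_component(B_i, B_j, alpha_V, a_i, b_i)  # p_bar_{j,k} held at its mean
    return max(d1, d2)

# Case-3 example: B(j) = 0.2 <= alpha*V = 0.4 <= B(i) = 0.7.
d_star = optimal_threshold(0.7, 0.2, 0.4, a_i=4, b_i=6, a_j=3, b_j=5)
print(round(d_star, 3))  # 0.506
```

Note that the resulting threshold lies in $(0.5, 1)$, between the two extreme choices $d_{i,j}^{*}(s) = 0.5$ (case 2) and $d_{i,j}^{*}(s) = 1$ (case 1).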

Then, using the Bayesian rule, the posterior belief is:

$$B_{k}\left(OOC_{i}\right) = \Pr\left\{OOC_{i} \mid x_{k}, O_{k-1}\right\} = \frac{\Pr\left\{x_{k} \mid OOC_{i}, O_{k-1}\right\} \Pr\left\{OOC_{i}, O_{k-1}\right\}}{\Pr\left\{x_{k}, O_{k-1}\right\}} \tag{34}$$

Since the goal is to detect the variable with the maximum mean shift, only one quality characteristic can be considered out-of-control at each iteration. In this way, there are $m-1$ remaining candidates for which $m-1$ quality characteristics are in-control. Hence, one can say that the candidates are mutually exclusive and collectively exhaustive. Therefore, using the Bayes' theorem, one can write equation (34) as

$$B_{k}\left(OOC_{i}\right) = \frac{\Pr\left\{x_{k} \mid OOC_{i}, O_{k-1}\right\} \Pr\left\{OOC_{i} \mid O_{k-1}\right\}}{\sum_{j=1}^{m} \Pr\left\{x_{k} \mid OOC_{j}, O_{k-1}\right\} \Pr\left\{OOC_{j} \mid O_{k-1}\right\}} = \frac{\Pr\left\{x_{k} \mid OOC_{i}, O_{k-1}\right\}\, B_{k-1}\left(OOC_{i}\right)}{\sum_{j=1}^{m} \Pr\left\{x_{k} \mid OOC_{j}, O_{k-1}\right\}\, B_{k-1}\left(OOC_{j}\right)} \tag{35}$$

When the system is in-control, we assume the $m$ characteristics follow a multinormal distribution with mean vector $\mu = \left(\mu_{1}, \mu_{2}, \dots, \mu_{m}\right)^{T}$ and covariance matrix

$$\Sigma = \begin{bmatrix} \sigma_{1}^{2} & \sigma_{12} & \cdots & \sigma_{1m} \\ \sigma_{21} & \sigma_{2}^{2} & \cdots & \sigma_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{m1} & \sigma_{m2} & \cdots & \sigma_{m}^{2} \end{bmatrix} \tag{36}$$

In out-of-control situations, only the mean vector changes; the probability distribution and the covariance matrix remain unchanged. In the latter case, equation (35) is used to calculate the probability of shifts in the process mean $\mu$ at different iterations. Moreover, in order to update the beliefs at iteration $k$, one needs to evaluate $\Pr\left\{x_{k} \mid OOC_{i}\right\}$.

The term $\Pr\left\{x_{k} \mid OOC_{i}\right\}$ is the probability of observing $x_{k}$ if only the $i$-th quality characteristic is out-of-control. The exact value of this probability can be determined using the multivariate normal density, $A \exp\left(-\frac{1}{2}\left(x_{k} - \mu\mathbf{1}_{i}\right)^{T} \Sigma^{-1} \left(x_{k} - \mu\mathbf{1}_{i}\right)\right)$, where $\mu\mathbf{1}_{i}$ denotes the mean vector in which only the $i$-th characteristic has shifted to an out-of-control condition and $A$ is a known constant.