**3. A necessary condition for the optimal sensor rules**

In this section, we derive a necessary condition for the optimal sensor rules that minimize the Monte Carlo cost function when the fusion rule is fixed. First, we need some equivalent transformations of $P\_{H\_0^0}$; based on these transformations, the necessary condition can be obtained. In addition, an analytical result is obtained when the fusion rule is the *K*-out-of-*L* rule.

## **3.1 Necessary condition**

First, we need some equivalent transformations of $P\_{H\_0^0}$.

Lemma 1: $P\_{H\_0^0}$ can be rewritten as follows:

$$\begin{aligned} P\_{H\_0^0} = \left[ 1 - I\_j(\boldsymbol{y}\_j) \right] P\_{j1} \left( I\_1(\boldsymbol{y}\_1), \dots, I\_{j-1}(\boldsymbol{y}\_{j-1}), I\_{j+1}(\boldsymbol{y}\_{j+1}), \dots, I\_L(\boldsymbol{y}\_L); F^0; P^{ce0}, P^{ce1} \right) \\ + P\_{j2} \left( I\_1(\boldsymbol{y}\_1), \dots, I\_{j-1}(\boldsymbol{y}\_{j-1}), I\_{j+1}(\boldsymbol{y}\_{j+1}), \dots, I\_L(\boldsymbol{y}\_L); F^0; P^{ce0}, P^{ce1} \right), \end{aligned} \tag{16}$$

where for $j = 1, 2, \dots, L$,

$$P\_{j1}(\cdot) \triangleq \sum\_{k'=1}^{2^L} [1 - F(s\_{k'})] \left(1 - P\_j^{ce0} - P\_j^{ce1}\right) (1 - 2 s\_{k'}(j)) P\_{m \neq j},\tag{17}$$

$$P\_{j2}(\cdot) \triangleq \sum\_{k'=1}^{2^L} [1 - F(s\_{k'})] \left[ s\_{k'}(j) + P\_j^{ce1} (1 - 2 s\_{k'}(j)) \right] P\_{m \neq j},\tag{18}$$

$$\begin{split} P\_{m \neq j} \triangleq \prod\_{m=1, m \neq j}^{L} \big\{ (1 - P\_m^{ce0}) (1 - s\_{k'}(m)) (1 - I\_m) + P\_m^{ce0} s\_{k'}(m) (1 - I\_m) \\ + (1 - P\_m^{ce1}) s\_{k'}(m) I\_m + P\_m^{ce1} (1 - s\_{k'}(m)) I\_m \big\}. \end{split} \tag{19}$$

**Proof:** If $s\_{k'}(m) = I\_m(\boldsymbol{y}\_m)$ for all $m = 1, \dots, L$, then the continued product term $\prod\_{m=1}^{L} \left\{ \left[1 - I\_m(\boldsymbol{y}\_m)\right]\left[1 - s\_{k'}(m)\right] + I\_m(\boldsymbol{y}\_m) s\_{k'}(m) \right\} = 1$ in $P\_{H\_0^0}$; otherwise, it is 0. Thus, $P\_{H\_0^0}$ can be rewritten as $P\_{H\_0^0} = \sum\_{k'=1}^{2^L} \left[1 - F(s\_{k'})\right] P(s\_{k'} \mid I\_1, I\_2, \dots, I\_L)$, where the terms that equal zero are omitted and $P(s\_{k'} \mid I\_1, I\_2, \dots, I\_L) = \prod\_{j=1}^{L} P(s\_{k'}(j) \mid I\_j)$. Recalling the conditional probability formula (5), we rewrite $P(s\_{k'}(j) \mid I\_j)$ as $P(s\_{k'}(j) \mid I\_j) = (1 - I\_j)\left(1 - P\_j^{ce0} - P\_j^{ce1}\right)(1 - 2 s\_{k'}(j)) + s\_{k'}(j) + P\_j^{ce1}(1 - 2 s\_{k'}(j))$. Based on these transformations, $P\_{H\_0^0}$ can be decomposed as (16).
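As a sanity check, the decomposition in Lemma 1 can be verified numerically by brute-force enumeration of the $2^L$ received vectors. This is a minimal sketch: the sensor count, channel error probabilities, fusion rule, and local decisions below are assumed toy values, not from the text.

```python
from itertools import product

# Toy parameters (assumed for illustration only).
L = 3
Pce0 = [0.10, 0.20, 0.05]   # P(receive 1 | send 0) for each channel
Pce1 = [0.15, 0.10, 0.20]   # P(receive 0 | send 1) for each channel

def F(s):
    # Example fusion rule: majority, i.e., 2-out-of-3.
    return 1 if sum(s) >= 2 else 0

def p_channel(s_bit, i_bit, j):
    # P(s(j) = s_bit | I_j = i_bit); equals the closed form
    # (1 - I_j)(1 - Pce0_j - Pce1_j)(1 - 2s) + s + Pce1_j (1 - 2s) used in the proof.
    if i_bit == 0:
        return Pce0[j] if s_bit == 1 else 1 - Pce0[j]
    return Pce1[j] if s_bit == 0 else 1 - Pce1[j]

def prod_channels(s, I):
    out = 1.0
    for j in range(L):
        out *= p_channel(s[j], I[j], j)
    return out

def p_h00(I):
    # Direct form: sum over all 2^L received vectors of [1 - F(s)] * P(s | I).
    return sum((1 - F(s)) * prod_channels(s, I) for s in product((0, 1), repeat=L))

def pj1_pj2(I, j):
    # P_j1 and P_j2 from (17)-(19); note both are independent of I_j.
    pj1 = pj2 = 0.0
    for s in product((0, 1), repeat=L):
        pm = 1.0
        for m in range(L):
            if m != j:
                pm *= p_channel(s[m], I[m], m)
        w = 1 - F(s)
        pj1 += w * (1 - Pce0[j] - Pce1[j]) * (1 - 2 * s[j]) * pm
        pj2 += w * (s[j] + Pce1[j] * (1 - 2 * s[j])) * pm
    return pj1, pj2

# Decomposition (16) holds for every j and every set of local decisions.
for I in product((0, 1), repeat=L):
    for j in range(L):
        p1, p2 = pj1_pj2(I, j)
        assert abs(p_h00(I) - ((1 - I[j]) * p1 + p2)) < 1e-12
```

The assertions confirm that the single-sensor factor can always be pulled out of the product, which is exactly what makes the per-sensor optimization in the next subsection possible.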

Remark 2: Note that $P\_{j1}(\cdot)$ and $P\_{j2}(\cdot)$ are both independent of $I\_j(\boldsymbol{y}\_j)$ for $j = 1, \dots, L$. In addition, they can also be applied in the Riemann sum approximation (see, e.g., [31]). Compared with [36], the sum of $2^L$ terms over $s\_{k'}$ is eliminated, which greatly reduces the computational time. In addition, the expression for $P\_{j1}(\cdot)$ given in (17) is also a key equation in the following results.

Substituting the transformations (16) into (15), we obtain

$$\begin{split} &C\_{\rm MC}(I\_{1}(\boldsymbol{y}\_{1}),\dots,I\_{L}(\boldsymbol{y}\_{L}); F^0; P^{ce0}, P^{ce1}, N) \\ &= c + \frac{1}{N} \sum\_{i=1}^{N} \Big\{ [1 - I\_{j}(Y\_{ij})] P\_{j1}(I\_{1}(Y\_{i1}),\dots,I\_{j-1}(Y\_{i(j-1)}),I\_{j+1}(Y\_{i(j+1)}),\dots,I\_{L}(Y\_{iL}); F^0; P^{ce0}, P^{ce1}) \\ &\quad + P\_{j2}(I\_{1}(Y\_{i1}),\dots,I\_{j-1}(Y\_{i(j-1)}),I\_{j+1}(Y\_{i(j+1)}),\dots,I\_{L}(Y\_{iL}); F^0; P^{ce0}, P^{ce1}) \Big\} \cdot \frac{\hat{L}(Y\_{i})}{g(Y\_{i})}, \end{split} \tag{20}$$

where $Y\_i = (Y\_{i1}, Y\_{i2}, \dots, Y\_{iL})$. According to (20), the necessary condition for the optimal sensor rules that minimize $C\_{\rm MC}(I\_1(\boldsymbol{y}\_1), \dots, I\_L(\boldsymbol{y}\_L); F^0; P^{ce0}, P^{ce1}, N)$ is stated in the following lemma:

Lemma 2: Let $\{I\_1(\boldsymbol{y}\_1), \dots, I\_L(\boldsymbol{y}\_L)\}$ be the set of optimal sensor rules, i.e., the rules that minimize $C\_{\rm MC}(I\_1(\boldsymbol{y}\_1), \dots, I\_L(\boldsymbol{y}\_L); F^0; P^{ce0}, P^{ce1}, N)$ in the parallel Bayesian detection network; then they must satisfy the following equations:

$$I\_1(Y\_{i1}) = I\left[P\_{11}(I\_2(Y\_{i2}), I\_3(Y\_{i3}), \dots, I\_L(Y\_{iL}); F^0; P^{ce0}, P^{ce1}) \cdot \hat{L}(Y\_i)\right],\tag{21}$$

$$I\_2(Y\_{i2}) = I\left[P\_{21}(I\_1(Y\_{i1}), I\_3(Y\_{i3}), \dots, I\_L(Y\_{iL}); F^0; P^{ce0}, P^{ce1}) \cdot \hat{L}(Y\_i)\right],\tag{22}$$

$$\vdots$$

$$I\_L(Y\_{iL}) = I\left[P\_{L1}(I\_1(Y\_{i1}), I\_2(Y\_{i2}), \dots, I\_{L-1}(Y\_{i(L-1)}); F^0; P^{ce0}, P^{ce1}) \cdot \hat{L}(Y\_i)\right],\tag{23}$$

where $P\_{j1}(\cdot)$ are defined by (17) and $I[\cdot]$ is an indicator function defined as follows:

$$I[x] = \begin{cases} 1, & \text{if } x \ge 0; \\ 0, & \text{if } x < 0. \end{cases} \tag{24}$$

**Proof:** Note that both $P\_{j1}(\cdot)$ and $P\_{j2}(\cdot)$ are independent of $I\_j(\boldsymbol{y}\_j)$ for $j = 1, \dots, L$. If $I\_1(Y\_{i1})$ minimizes the Monte Carlo cost function under the given $I\_2(Y\_{i2}), \dots, I\_L(Y\_{iL})$, we only need to minimize the first term of the summation in (20), that is,

$$[1 - I\_1(Y\_{i1})]\, P\_{11}(I\_2(Y\_{i2}), I\_3(Y\_{i3}), \dots, I\_L(Y\_{iL}); F^0; P^{ce0}, P^{ce1}) \cdot \frac{\hat{L}(Y\_i)}{g(Y\_i)}.$$

Note that the value of $I\_1(Y\_{i1})$ is 0 or 1 and $g(\boldsymbol{y})$ is well defined, that is, $g(Y\_i) > 0$. Hence, $I\_1(Y\_{i1})$ should equal 1 when $P\_{11}(I\_2(Y\_{i2}), I\_3(Y\_{i3}), \dots, I\_L(Y\_{iL}); F^0; P^{ce0}, P^{ce1})\, \hat{L}(Y\_i) \ge 0$ for $i = 1, \dots, N$; otherwise it should equal 0. Therefore, we obtain (21) by the definition of $I[x]$ in (24). Similarly, we obtain (22) and (23) by minimizing (20).
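The necessary condition suggests a Gauss–Seidel-style iteration: fix all but one sensor rule and update the remaining one via (21)–(23), sweeping over the sensors until the decision tables stop changing. The sketch below uses assumed toy inputs throughout (an OR fusion rule, identical channel error probabilities, and a randomly generated stand-in for $\hat{L}(Y\_i)$); it illustrates the structure of such an iteration, not the chapter's exact algorithm.

```python
import random
from itertools import product

L = 3                      # number of sensors (toy value)
Pce0 = [0.1] * L           # assumed channel error probabilities
Pce1 = [0.1] * L

def F(s):
    # Example fusion rule: OR, i.e., 1-out-of-L.
    return 1 if sum(s) >= 1 else 0

def p_channel(s_bit, i_bit, m):
    # Channel factor P(s(m) = s_bit | I_m = i_bit), cf. (19).
    if i_bit == 0:
        return Pce0[m] if s_bit == 1 else 1 - Pce0[m]
    return Pce1[m] if s_bit == 0 else 1 - Pce1[m]

def pj1(I_full, j):
    # P_j1 from (17), using the current decisions of the other sensors;
    # the j-th entry of I_full is never used, matching Remark 2.
    out = 0.0
    for s in product((0, 1), repeat=L):
        pm = 1.0
        for m in range(L):
            if m != j:
                pm *= p_channel(s[m], I_full[m], m)
        out += (1 - F(s)) * (1 - Pce0[j] - Pce1[j]) * (1 - 2 * s[j]) * pm
    return out

random.seed(1)
N = 50
L_hat = [random.uniform(-1, 1) for _ in range(N)]  # toy signed statistic per sample
I = [[0] * L for _ in range(N)]                    # decision table I[i][j]

for sweep in range(100):                           # Gauss-Seidel sweeps
    changed = False
    for j in range(L):
        for i in range(N):
            new = 1 if pj1(I[i], j) * L_hat[i] >= 0 else 0
            if new != I[i][j]:
                I[i][j] = new
                changed = True
    if not changed:
        break
```

With the OR rule, $P\_{j1}$ reduces to a single strictly positive term (the all-zero received vector), so the iteration settles after one sweep at $I\_j(Y\_{ij}) = I[\hat{L}(Y\_i)]$, which is consistent with the analytical result of Section 3.2.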

## **3.2 An analytical result for the** *K***-out-of-***L* **rule**

When the fusion rule is a *K*-out-of-*L* rule, we can obtain an analytical result in the presence of nonideal channels. It is stated as follows:

Theorem 1.1: If the fusion rule is a *K*-out-of-*L* rule and the probabilities of channel errors are less than 0.5 (i.e., $0 < P\_j^{ce0} < 0.5$ and $0 < P\_j^{ce1} < 0.5$) for each channel, then the optimal sensor rules are $I\_j(Y\_{ij}) = I\left[\hat{L}(Y\_i)\right]$ for $i = 1, \dots, N$ and $j = 1, \dots, L$.

*Decision Fusion for Large-Scale Sensor Networks with Nonideal Channels DOI: http://dx.doi.org/10.5772/intechopen.106075*

**Proof:** From Lemma 1, we know

$$\begin{split} P\_{j1}(\cdot) &= \sum\_{k'=1}^{2^{L}} [1 - F(s\_{k'})] P\_{m \neq j} \cdot \left( 1 - P\_j^{ce0} - P\_j^{ce1} \right) [1 - 2 s\_{k'}(j)] \\ &= \left( 1 - P\_j^{ce0} - P\_j^{ce1} \right) \sum\_{k'=1}^{2^{L-1}} [(1 - F(s\_{k'}|s\_{k'}(j) = 0)) - (1 - F(s\_{k'}|s\_{k'}(j) = 1))] P\_{m \neq j} \\ &= \left( 1 - P\_j^{ce0} - P\_j^{ce1} \right) \sum\_{k'=1}^{2^{L-1}} [F(s\_{k'}|s\_{k'}(j) = 1) - F(s\_{k'}|s\_{k'}(j) = 0)] P\_{m \neq j}. \end{split} \tag{25}$$

Since $0 < P\_j^{ce0} < 0.5$ and $0 < P\_j^{ce1} < 0.5$, we have $1 - P\_j^{ce0} - P\_j^{ce1} > 0$. Obviously, $P\_{m \neq j} > 0$ holds from its definition. If $F(s\_{k'}|s\_{k'}(j) = 1) - F(s\_{k'}|s\_{k'}(j) = 0) \ge 0$, then $P\_{j1}(\cdot) \ge 0$ can be derived. When the fusion rule is a *K*-out-of-*L* rule, $F(s\_{k'}) = I\left[\sum\_{j=1}^{L} s\_{k'}(j) - K\right]$. Thus,

$$\begin{aligned} F(s\_{k'}|s\_{k'}(j) = 1) &= I \left[ \sum\_{m=1, m \neq j}^{L} s\_{k'}(m) + 1 - K \right], \\ F(s\_{k'}|s\_{k'}(j) = 0) &= I \left[ \sum\_{m=1, m \neq j}^{L} s\_{k'}(m) + 0 - K \right]. \end{aligned}$$

If $\sum\_{m=1, m \neq j}^{L} s\_{k'}(m) + 0 - K \ge 0$, then $\sum\_{m=1, m \neq j}^{L} s\_{k'}(m) + 1 - K \ge 0$, and we get $F(s\_{k'}|s\_{k'}(j) = 1) - F(s\_{k'}|s\_{k'}(j) = 0) = 0$. If $\sum\_{m=1, m \neq j}^{L} s\_{k'}(m) + 0 - K < 0$, then $F(s\_{k'}|s\_{k'}(j) = 1) - F(s\_{k'}|s\_{k'}(j) = 0) \ge 0$. In a word, $F(s\_{k'}|s\_{k'}(j) = 1) - F(s\_{k'}|s\_{k'}(j) = 0) \ge 0$ always holds, and thus $P\_{j1} \ge 0$. Moreover, it is easy to find $s\_{k'}(m)$, $m \neq j$, such that $\sum\_{m=1, m \neq j}^{L} s\_{k'}(m) + 0 = K - 1$ and $\sum\_{m=1, m \neq j}^{L} s\_{k'}(m) + 1 = K$; for this term, $F(s\_{k'}|s\_{k'}(j) = 1) - F(s\_{k'}|s\_{k'}(j) = 0) > 0$. Therefore, $P\_{j1} > 0$ is derived. Recalling the necessary condition for the optimal sensor rules, that is, $I\_j(Y\_{ij}) = I\left[P\_{j1}(\cdot)\, \hat{L}(Y\_i)\right]$, and noting that $P\_{j1} > 0$ implies $P\_{j1}(\cdot)\, \hat{L}(Y\_i) \ge 0$ if and only if $\hat{L}(Y\_i) \ge 0$, we obtain $I\_j(Y\_{ij}) = I\left[\hat{L}(Y\_i)\right]$, and the proof is completed.
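The two claims in this proof, monotonicity of the conditional fusion outputs and strict positivity of $P\_{j1}$, can be checked by enumeration. The sizes $L = 4$, $K = 2$ and the channel error probabilities below are assumed toy values chosen to satisfy the theorem's hypothesis.

```python
from itertools import product

L, K = 4, 2                             # toy K-out-of-L configuration
Pce0 = [0.10, 0.20, 0.30, 0.05]         # assumed values, all below 0.5
Pce1 = [0.20, 0.10, 0.15, 0.25]

def F(s):
    # K-out-of-L counting rule: decide H1 iff at least K sensors voted 1.
    return 1 if sum(s) >= K else 0

def p_channel(s_bit, i_bit, m):
    # Channel factor from (19).
    if i_bit == 0:
        return Pce0[m] if s_bit == 1 else 1 - Pce0[m]
    return Pce1[m] if s_bit == 0 else 1 - Pce1[m]

def pj1(I, j):
    # P_j1 from (17).
    out = 0.0
    for s in product((0, 1), repeat=L):
        pm = 1.0
        for m in range(L):
            if m != j:
                pm *= p_channel(s[m], I[m], m)
        out += (1 - F(s)) * (1 - Pce0[j] - Pce1[j]) * (1 - 2 * s[j]) * pm
    return out

# Monotonicity: F(s | s(j)=1) >= F(s | s(j)=0) for every vector and sensor.
for s in product((0, 1), repeat=L):
    for j in range(L):
        s1, s0 = list(s), list(s)
        s1[j], s0[j] = 1, 0
        assert F(s1) >= F(s0)

# Strict positivity: P_j1 > 0 for every sensor and every choice of the other
# decisions, so I[P_j1 * L_hat] reduces to I[L_hat] as in Theorem 1.1.
for I in product((0, 1), repeat=L):
    for j in range(L):
        assert pj1(I, j) > 0
```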

Remark 3: The *K*-out-of-*L* rule counts the number of sensors that vote in favor of $H\_1$ and compares it with a given threshold *K* [37]. It is also referred to as the counting rule or voting rule and is widely used in practical decision fusion [38, 39]. It encompasses a general class of fusion rules such as the AND, OR, and majority Boolean fusion rules [40]. The reason we assume that the probabilities of channel errors are less than 0.5 is a practical consideration: if a channel error probability is greater than or equal to 0.5, the channel is totally unreliable and the performance is no better than a random decision. Obviously, the analytical solution is very efficient for tackling large-scale sensor networks with dependent observations and channel errors.
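As a concrete illustration of the special cases named above (with an assumed example vote vector): $K = 1$ gives the OR rule, $K = L$ gives the AND rule, and $K = \lceil (L+1)/2 \rceil$ gives the majority rule.

```python
def k_out_of_l(votes, K):
    # Decide H1 (return 1) iff at least K sensors voted for H1.
    return 1 if sum(votes) >= K else 0

votes = [1, 0, 1, 1, 0]                     # example: 3 of 5 sensors favor H1
assert k_out_of_l(votes, 1) == 1            # OR rule
assert k_out_of_l(votes, len(votes)) == 0   # AND rule (K = L)
assert k_out_of_l(votes, 3) == 1            # majority rule for L = 5
```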
