#### 6. HCM incorporating local data and membership KL divergence

To incorporate local spatial data into the LMKLFCM objective function in (19), the following objective function has been proposed in [18].

$$\begin{split} J\_{\text{LDMKLFCM}} &= \sum\_{i=1}^{C} \sum\_{n=1}^{N} u\_{in} \left( d\_{in} + \alpha \overline{d}\_{in} \right) + \\ &\quad \gamma \left( \sum\_{i=1}^{C} \sum\_{n=1}^{N} u\_{in} \log \left( \frac{u\_{in}}{\pi\_{in}} \right) + \sum\_{i=1}^{C} \sum\_{n=1}^{N} \left( 1 - u\_{in} \right) \log \left( \frac{1 - u\_{in}}{1 - \pi\_{in}} \right) \right) \end{split} \tag{24}$$

Therefore, similar to (22) and (23), the membership function uin and the cluster-center vi are, respectively, given by [18]:

$$u\_{in} = \frac{\pi\_{in}}{\sum\_{j=1}^{C} \left( \frac{\pi\_{jn} \left( \left( 1 - \pi\_{in} \right) \exp \left( \left( d\_{in} + \alpha \overline{d}\_{in} \right) / \gamma \right) + \pi\_{in} \right)}{\left( 1 - \pi\_{jn} \right) \exp \left( \left( d\_{jn} + \alpha \overline{d}\_{jn} \right) / \gamma \right) + \pi\_{jn}} \right)} \tag{25}$$

$$v\_i = \frac{\sum\_{n=1}^{N} u\_{in} (\mathbf{x}\_n + \alpha \overline{\mathbf{x}}\_n)}{(1 + \alpha) \sum\_{n=1}^{N} u\_{in}} \tag{26}$$

It is obvious that the LDMKLFCM algorithm in (24)–(26) provides a membership that depends upon the local spatial data and membership information, while the cluster center depends upon the locally-smoothed data. Thus, the algorithm has a twofold approach to handling additive noise.
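As a concrete illustration, one coupled update of (25) and (26) can be sketched in NumPy as below. This is a minimal 1-D sketch, not the reference implementation of [18]: the function name, the 1-D three-point smoothing used to form the locally smoothed memberships, and the choice of the smoothed-data distance as the squared distance to the smoothed pixel are all assumptions.

```python
import numpy as np

def ldmklfcm_step(x, x_bar, v, u, alpha=0.5, gamma=1000.0):
    """One LDMKLFCM iteration sketch: membership update (25), center update (26).

    x: (N,) pixel intensities; x_bar: (N,) locally smoothed intensities;
    v: (C,) cluster centers; u: (C, N) current memberships.
    The locally smoothed membership pi is approximated by a 1-D three-point
    moving average (a stand-in for the 3x3 image window in the text).
    """
    C, N = u.shape
    pi = np.empty_like(u)
    for i in range(C):
        pi[i] = np.convolve(u[i], np.ones(3) / 3.0, mode="same")
    pi /= pi.sum(axis=0, keepdims=True)           # keep columns normalized

    d = (x[None, :] - v[:, None]) ** 2            # d_in
    d_bar = (x_bar[None, :] - v[:, None]) ** 2    # smoothed-data distance (assumption)

    e = np.exp((d + alpha * d_bar) / gamma)       # exp((d_in + a*d̄_in)/γ)
    s = (1.0 - pi) * e + pi                       # S_in = (1-π_in) e_in + π_in
    # eq (25): u_in = π_in / (S_in * Σ_j π_jn / S_jn); columns then sum to 1
    u_new = pi / (s * (pi / s).sum(axis=0, keepdims=True))

    # eq (26): v_i = Σ_n u_in (x_n + α x̄_n) / ((1+α) Σ_n u_in)
    v_new = (u_new * (x + alpha * x_bar)).sum(axis=1) / ((1 + alpha) * u_new.sum(axis=1))
    return u_new, v_new
```

Note that the division structure mirrors (25) exactly, so the updated memberships of each pixel sum to one by construction.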

#### 7. Simulation results

This simulation aims at examining the performance of the conventional FCM, the membership entropy-based FCM (MEFCM), the spatial distance weighted FCM (SFCM), the local membership KL divergence-based FCM (LMKLFCM), and the local data and membership KL divergence-based FCM (LDMKLFCM) algorithms. It is to be noticed that all the algorithms can be implemented in a manner similar to the pseudo-code in Table 1 by replacing steps 3 and 4 with the corresponding computation of the membership function and cluster centers of each algorithm.
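That shared structure can be sketched as a single driver that takes the algorithm-specific updates of steps 3 and 4 as a function argument; the names here and the plain FCM update (m = 2) used for illustration are assumptions of this sketch, not code from Table 1.

```python
import numpy as np

def run_clustering(x, C, update_fn, iters=100, tol=1e-6, seed=0):
    """Generic FCM-style loop: initialize U and V (steps 1-2), then repeat the
    algorithm-specific membership/center updates (steps 3-4) until the centers
    stop moving (step 5). update_fn(x, v, u) -> (u_new, v_new)."""
    rng = np.random.default_rng(seed)
    u = rng.random((C, x.size))
    u /= u.sum(axis=0, keepdims=True)        # enforce sum_i u_in = 1
    v = x.min() + (x.max() - x.min()) * rng.random(C)
    for _ in range(iters):
        u, v_new = update_fn(x, v, u)
        if np.abs(v_new - v).max() < tol:    # convergence on center movement
            return u, v_new
        v = v_new
    return u, v

def fcm_update(x, v, u):
    """Standard FCM updates for m = 2 (steps 3-4 of the conventional algorithm)."""
    d = (x[None, :] - v[:, None]) ** 2 + 1e-12       # squared distances d_in
    u_new = (1.0 / d) / (1.0 / d).sum(axis=0, keepdims=True)
    v_new = (u_new ** 2 @ x) / (u_new ** 2).sum(axis=1)
    return u_new, v_new
```

Swapping `fcm_update` for the MEFCM, SFCM, LMKLFCM, or LDMKLFCM updates reproduces the pattern described above.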

#### 7.1. Clustering validity

It is obvious from (22) that uin is proportional to πin, and the proportionality parameter δin is inversely proportional to the entity's distance din; the maximum δkn occurs when dkn = 0.

It is clear that if γ → ∞, then $u\_{in} = \pi\_{in} / \sum\_{j=1}^{C} \pi\_{jn}$. Therefore, the resultant membership is independent of the data to be clustered but dependent on the initial value of the membership matrix $U^{0}$ and on the smoothing fashion. If $u\_{in}^{0}$ is generated from a random process greater than zero, then $u\_{in}^{t}$ versus the number of iterations t converges, because of recursive averaging and normalizing, to a normally distributed variable with mean equal to $1/C = E\{u\_{in}^{t}\} = E\{\pi\_{in}\} / \sum\_{j=1}^{C} E\{\pi\_{jn}\}$, which, in this case, means a too fuzzy membership function. This has been verified experimentally by using a synthetic image of 4 clusters and $\gamma = 10^{10}$. Finally, as shown by (23), the computation of the cluster-center vi is still independent of the local original data.
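This limiting behavior is easy to verify numerically. The sketch below uses a simplified single-KL-term membership, u proportional to π exp(−d/γ), as an assumed stand-in for (22), only to illustrate the γ limits:

```python
import numpy as np

def kl_membership(d, pi, gamma):
    """Membership u_in ∝ π_in exp(-d_in/γ), normalized over clusters.
    A simplified stand-in for (22), used only to illustrate the γ limits."""
    w = pi * np.exp(-d / gamma)
    return w / w.sum(axis=0, keepdims=True)

# three clusters, three pixels, with well-separated distances
d = np.array([[0.9, 0.1, 0.5],
              [0.1, 0.8, 0.9],
              [0.5, 0.5, 0.1]])
pi = np.full((3, 3), 1.0 / 3.0)   # smoothed memberships, columns sum to 1

u_inf = kl_membership(d, pi, gamma=1e10)    # huge γ: the data is ignored
u_sml = kl_membership(d, pi, gamma=1e-2)    # small γ: nearest cluster wins
```

With γ = 10¹⁰ the memberships collapse to πin divided by its column sum (here 1/3 everywhere), matching the over-fuzzy behavior described above, while a small γ recovers near-hard assignments.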

To measure the performance of fuzzy clustering algorithms, several quantitative measures or indices have been adopted in [23, 25] and references therein. A few of these measures are the partition coefficient VPC and partition entropy VPE indices of Bezdek, and the Xie-Beni (XB) index VXB. The VPC and VPE are given, respectively, by

$$V\_{PC} = \frac{1}{N} \sum\_{n=1}^{N} \sum\_{i=1}^{C} u\_{in}^{2} \tag{27}$$

$$V\_{PE} = -\frac{1}{N} \sum\_{n=1}^{N} \sum\_{i=1}^{C} u\_{in} \log \left( u\_{in} \right) \tag{28}$$

The closer VPC is to 1, the better the performance, since the minimization is constrained by $\sum\_{i=1}^{C} u\_{in} = 1$. The closer VPE is to 0, the better the performance, since this means less fuzziness of the membership and thus well-separated clusters.
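For reference, (27) and (28) can be computed directly from the membership matrix. This sketch assumes U is stored as a C × N array and uses Bezdek's squared-membership form of VPC; a small epsilon guards the logarithm of zero:

```python
import numpy as np

def partition_coefficient(u):
    # eq (27): V_PC = (1/N) * sum_n sum_i u_in^2
    return float((u ** 2).sum() / u.shape[1])

def partition_entropy(u, eps=1e-12):
    # eq (28): V_PE = -(1/N) * sum_n sum_i u_in * log(u_in)
    return float(-(u * np.log(u + eps)).sum() / u.shape[1])
```

A crisp (0/1) membership matrix gives VPC = 1 and VPE near 0, while a fully fuzzy one (all entries 1/C) gives VPC = 1/C and VPE = log C, the two extremes discussed above.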

For synthetic images, in addition to the above clustering validity measures, several performance measures have also been used, such as the accuracy, sensitivity, and specificity, given respectively by

$$\text{Acc.} = (TP + TN)/(TP + TN + FP + FN) \tag{29}$$

$$\text{Sen.} = TP/(TP + FN) \tag{30}$$

$$\text{Spe.} = TN/(TN + FP) \tag{31}$$

where T, F, P, and N mean true, false, positive, and negative, respectively. The TP, FP, TN, and FN are computed as follows. While generating the synthetic image, the ground-truth labels are formulated as the logical matrix given by [23].

$$L\_{in} = \begin{cases} 1, & \text{if } \mathbf{x}\_n \in \text{cluster } i \\ 0, & \text{otherwise} \end{cases} \quad i = 1, 2, \dots, C, \; n = 1, 2, \dots, N. \tag{32}$$

where xn is the noise-free pixel in the synthetic image and 1 and 0 represent True and False, respectively. After the segmentation is done, the estimated labels are also formulated as logical matrices generated by [20].

$$\widehat{L}\_{kn} = \begin{cases} 1, & \text{if } k = \arg\max\_i \left( u\_{in} \right) \\ 0, & \text{otherwise} \end{cases} \quad k = 1, 2, \dots, C, \; n = 1, 2, \dots, N. \tag{33}$$

Finally, the TP, TN, FP, and FN are given by [20].

$$\begin{aligned} TP &= \sum\_{i=1}^{\mathbb{C}} \sum\_{n=1}^{N} \widehat{L}\_{in} L\_{in}; \qquad TN = \sum\_{i=1}^{\mathbb{C}} \sum\_{n=1}^{N} \overline{\widehat{L}}\_{in} \overline{L}\_{in} \\ FP &= \sum\_{i=1}^{\mathbb{C}} \sum\_{n=1}^{N} \widehat{L}\_{in} \overline{L}\_{in}; \qquad FN = \sum\_{i=1}^{\mathbb{C}} \sum\_{n=1}^{N} \overline{\widehat{L}}\_{in} L\_{in} \end{aligned} \tag{34}$$
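A compact sketch of (29)–(34), assuming a C × N membership matrix and integer ground-truth class indices; sensitivity and specificity are written in their standard forms TP/(TP + FN) and TN/(TN + FP), and the function names are illustrative only:

```python
import numpy as np

def confusion_counts(u, labels, C):
    """TP/TN/FP/FN per eqs (32)-(34) from a membership matrix u (C, N) and
    ground-truth class indices labels (N,)."""
    N = labels.size
    L = np.zeros((C, N), dtype=bool)
    L[labels, np.arange(N)] = True                 # eq (32): ground-truth matrix
    L_hat = np.zeros((C, N), dtype=bool)
    L_hat[u.argmax(axis=0), np.arange(N)] = True   # eq (33): defuzzified labels
    TP = int(np.sum(L_hat & L))                    # eq (34)
    TN = int(np.sum(~L_hat & ~L))
    FP = int(np.sum(L_hat & ~L))
    FN = int(np.sum(~L_hat & L))
    return TP, TN, FP, FN

def acc_sen_spe(TP, TN, FP, FN):
    # eqs (29)-(31)
    return ((TP + TN) / (TP + TN + FP + FN),
            TP / (TP + FN),
            TN / (TN + FP))
```

Note that the counts are summed over all C one-vs-rest comparisons, so TP + TN + FP + FN = C·N.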


Incorporating Local Data and KL Membership Divergence into Hard C-Means Clustering for Fuzzy and Noise-Robust…

http://dx.doi.org/10.5772/intechopen.74514

| Algorithm | Image | VPC | VPE |
| --- | --- | --- | --- |
| FCM | Synthetic | 0.8105 ± 0.0007 | 0.3517 ± 0.0012 |
| FCM | Simulated MR | 0.7921 ± 0.0011 | 0.3986 ± 0.0020 |
| FCM | Real MR | 0.8930 ± 0.0140 | 0.1998 ± 0.0240 |
| FCM | Lena | 0.8286 ± 0.0004 | 0.2824 ± 0.0006 |
| SFCM | Synthetic | 0.8370 ± 0.0010 | 0.3017 ± 0.0017 |
| SFCM | Simulated MR | 0.8674 ± 0.0009 | 0.2409 ± 0.0014 |
| SFCM | Real MR | 0.9204 ± 0.0006 | 0.1440 ± 0.0012 |
| SFCM | Lena | 0.8936 ± 0.0006 | 0.1786 ± 0.0009 |
| MEFCM | Synthetic | 0.8616 ± 0.0012 | 0.2271 ± 0.0019 |
| MEFCM | Simulated MR | 0.8873 ± 0.0012 | 0.1841 ± 0.0018 |
| MEFCM | Real MR | 0.9602 ± 0.0113 | 0.0650 ± 0.0183 |
| MEFCM | Lena | 0.9268 ± 0.0004 | 0.1198 ± 0.0007 |
| LMKLFCM | Synthetic | 0.9853 ± 0.0011 | 0.0270 ± 0.0028 |
| LMKLFCM | Simulated MR | 0.8958 ± 0.0088 | 0.1721 ± 0.0146 |
| LMKLFCM | Real MR | 0.9625 ± 0.0087 | 0.0441 ± 0.0128 |
| LMKLFCM | Lena | 0.9609 ± 0.0012 | 0.0643 ± 0.0020 |
| LDMKLFCM | Synthetic | 0.9874 ± 0.0011 | 0.0227 ± 0.0022 |
| LDMKLFCM | Simulated MR | 0.9234 ± 0.0030 | 0.1258 ± 0.0049 |
| LDMKLFCM | Real MR | 0.9519 ± 0.0016 | 0.0604 ± 0.0025 |
| LDMKLFCM | Lena | 0.9730 ± 0.0026 | 0.0446 ± 0.0026 |

Table 2. Clustering validation measures for synthetic and real-world images.

where the overbar denotes the logical complement.

#### 7.2. Artificial image

Figure 1. Clustering of the synthetic image: (a), noise-free image; (b), the noise-free image plus zero-mean and 0.08 variance WGN; (c), FCM; (d), MEFCM; (e), SFCM; (f), LMKLFCM; (g), LDMKLFCM. It is evident that the clustered images in (f) and (g) have a smaller number of misclassified pixels, which means that noisy pixels are rightly clustered. Clustering validation measures are summarized in Table 2.

In this simulation, the artificial or synthetic noise-free image shown in Figure 1(a) is degraded by adding zero-mean white Gaussian noise (WGN) with different variances. The noisy image

shown in Figure 1(b) is for 0.08 noise variance. We have studied the performance of the five algorithms, namely, the standard FCM, the membership entropy-based FCM (MEFCM), the spatial distance weighted FCM (SFCM), the local membership KLFCM (LMKLFCM), and the local data and membership KLFCM (LDMKLFCM) algorithms in segmenting these noisy images with m = 2 and C = 4. The parameters for the algorithms have been selected via simulation as β = 1000 for MEFCM; λ = 0.5 for SFCM; γ = 1000 for LMKLFCM; and γ = 1000 and α = 0.5 for LDMKLFCM. For the computation of the locally smoothed data x̄n, a neighborhood window of size 3×3 has been used. The same spatial window has been used for the computation of the locally smoothed membership function πin. The initial values of the membership functions U and the cluster-centers V are generated from a uniformly distributed random process with means 0.5 and equal to the image mean, respectively. We have collected results from 25 Monte Carlo runs of each algorithm. In each run, the initial values of U and V of the FCM are new random samples, while those of the other algorithms are generated by executing a few iterations of the FCM algorithm. Simulation results, not included due to space limitations, have shown that the algorithms provide further improvement with initial values generated by the FCM algorithm than with randomly generated ones. Also, in each run, a new random sample of WGN is used in generating the noisy images. Figure 1(c–g) shows the clustered images generated by the five algorithms in the case of 0.08 noise variance. These clustered images show that the LMKLFCM and the LDMKLFCM algorithms provide the ones with less noise, which means a smaller number of misclassified pixels. Moreover, the LDMKLFCM algorithm offers the superior clustered image. Table 2 summarizes the averages and standard deviations (μ ± σ) of the performance measures. The LMKLFCM and LDMKLFCM show the maximum VPC and the minimum VPE.
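The 3×3 local smoothing used for the smoothed data and the smoothed memberships can be sketched as a mean filter; replicated borders and inclusion of the center pixel are assumptions of this sketch (some variants exclude the center):

```python
import numpy as np

def local_mean_3x3(img):
    """3x3 neighborhood average, as used for the smoothed data x̄_n and the
    smoothed membership π_in (edge pixels use replicated borders)."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    H, W = img.shape
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += p[1 + di : 1 + di + H, 1 + dj : 1 + dj + W]
    return out / 9.0
```

Applying the same filter to each row of the membership matrix (reshaped to the image grid) yields the smoothed membership used in (24)–(26).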
The averages of the accuracy, sensitivity and the specificity performance measures of the five algorithms have been studied



against noise variance. Figure 2 shows these measures versus noise variance. It is clear that both the LMKLFCM and the LDMKLFCM algorithms provide the superior performance among the five algorithms and the LDMKLFCM algorithm shows more noise-robustness.

Figure 3. Clustering of simulated MRI: (a), noise-free MRI; (b), the MRI in (a) plus zero-mean WGN with 0.005 variance. Segmented images by: (c), FCM; (d), MEFCM; (e), SFCM; (f), LMKLFCM; (g), LDMKLFCM. Obviously, the segmented images in (f) and (g), provided by the LMKLFCM and the LDMKLFCM algorithms, respectively, have less noise, which means that the noisy pixels are correctly clustered. The clustering validation measures summarized in Table 2 show that the LMKLFCM and LDMKLFCM provide the maximum VPC and the minimum VPE.

#### 7.3. Magnetic resonance image (MRI)

Figure 2. The average versus noise variance of accuracy, (a); sensitivity, (b); and specificity, (c); ⊳, FCM; +, MEFCM; SFCM; LMKLFCM; LDMKLFCM. The proposed LMKLFCM and LDMKLFCM algorithms provide the superior performance among the five algorithms. The LDMKLFCM algorithm shows more noise-robust capability.

A simulated MRI of [26], illustrated by Figure 3(a), has been used as a noise-free image. It has been degraded by adding white Gaussian noise (WGN) with zero-mean and 0.005 variance to


generate the noisy MRI illustrated by Figure 3(b). This noisy MRI image has been clustered by the five algorithms. The parameters for all algorithms have been taken similar to the ones of the synthetic image simulation except, for the MEFCM algorithm, β = 200 and, for both LMKLFCM and LDMKLFCM algorithms, γ = 1000. We have also executed 25 runs of each algorithm. The initial values of uin and vi have been generated and adjusted as explained in the synthetic image simulation. Figure 3(c–g) shows the resulting clustered images provided by the five algorithms in a certain run. Table 2 shows the averages and standard deviations (μ ± σ) of the performance measures VPC and VPE of the five algorithms. It is obvious that the LMKLFCM and LDMKLFCM provide the segmented images with less noise, i.e., a smaller number of misclassified pixels, the maximum VPC, and the minimum VPE.
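The image-degradation step used throughout these experiments (zero-mean WGN of a chosen variance) can be sketched as below; the function name is an assumption of this sketch:

```python
import numpy as np

def add_wgn(img, variance, seed=None):
    """Degrade an image with zero-mean white Gaussian noise of the given
    variance, as done for the synthetic, MRI, and Lena experiments."""
    rng = np.random.default_rng(seed)
    return img + rng.normal(0.0, np.sqrt(variance), size=img.shape)
```

Each Monte Carlo run draws a fresh noise sample, matching the protocol described for the synthetic image simulation.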

A real MRI from [27], shown in Figure 4(a), has been considered as a noise-free image. To generate the noisy MRI shown in Figure 4(b), salt & pepper noise with 0.05 variance has been added. The noisy MRI has been clustered by the FCM, SFCM, MEFCM, LMKLFCM, and LDMKLFCM algorithms. The parameters for all algorithms have been taken similar to the ones of the synthetic image simulation except, for the MEFCM algorithm, β = 300 and, for both the LMKLFCM and LDMKLFCM algorithms, γ = 800. We have also obtained the results of 25 runs of each algorithm. The initial values of uin and vi have been generated and adjusted as

Figure 4. Clustering of real MRI example: (a), noise-free real MRI; (b), the image in (a) plus salt & pepper noise with 0.05 variance. Segmented images by: (c), FCM; (d), MEFCM; (e), SFCM; (f), LMKLFCM; (g), LDMKLFCM. Clearly, the segmented images in (f) and (g), generated by the LMKLFCM and LDMKLFCM algorithms, respectively, have less noise. The clustering validation coefficients summarized in Table 2 show that the LMKLFCM and LDMKLFCM provide the maximum VPC and the minimum VPE.

Figure 5. Segmentation of Lena image: (a), noise-free image; (b), the image in (a) plus WGN with zero mean and 0.05 variance. It is obvious that the images in (f) and (g) have a smaller number of misclassified pixels. The clustering validation coefficients are summarized in Table 2, which shows that the LMKLFCM and the LDMKLFCM algorithms provide the superior VPC and VPE.

mentioned in the synthetic image simulation. Figure 4(c–g) shows the segmented images provided by the five algorithms in a certain run, while Table 2 summarizes the averages and standard deviations (μ ± σ) of the performance measures. It is obvious that the proposed LMKLFCM and LDMKLFCM algorithms provide the segmented images with less noise, i.e., a smaller number of misclassified pixels, the maximum VPC, and the minimum VPE.

#### 7.4. Lena image

The popular Lena image shown in Figure 5(a) has been considered as a noise-free image example. The noisy Lena image shown in Figure 5(b) has been generated by adding WGN with zero mean and 0.01 variance. The parameters of the five algorithms have been adjusted to values similar to the ones used in the previous simulations, except C = 2; β = 1000 for the MEFCM algorithm; γ = 2000 for the LMKLFCM; and γ = 2000 and α = 0.5 for the LDMKLFCM algorithm. We have also executed 25 Monte Carlo runs of each algorithm as explained above. Figure 5(c–g) shows the resulting segmented images obtained by the five algorithms. Visual investigation of the segmented images shows that the LMKLFCM and LDMKLFCM algorithms provide the images with a smaller number of misclassified pixels. Table 2 shows the average and standard deviation (μ ± σ) of the performance measures of the five algorithms. It is also clear that these two algorithms provide the maximum VPC and the minimum VPE.

#### Acknowledgements

The author would like to thank for funding the open access publication of this Chapter. Also, the author would like to thank Prof. H. Selim, Dr. A. AbdelFattah and Eng. G. Gendy for their contribution to this work.

#### Conflict of interest

No potential conflicts of interest to report.

#### Author details

Reda R. Gharieb1,2*

1 Faculty of Engineering, Assiut University, Assiut, Egypt

2 Higher Institute of Engineering, Thebes Academy for Sciences, Cairo, Egypt

*Address all correspondence to: rrgharieb@gmail.com
