*Information Extraction and Object Tracking in Digital Video*

## **Chapter 5**

## Thresholding Image Techniques for Plant Segmentation

*Miguel Ángel Castillo-Martínez,*

*Francisco Javier Gallegos-Funes, Blanca E. Carvajal-Gámez, Guillermo Urriolagoitia-Sosa and Alberto J. Rosales-Silva*

## **Abstract**

Image-based research faces the challenge of extracting information about the objects in a scene. An image is a set of data points that can be processed with similarity-based methods, and several research fields can be merged into a single method for information extraction and pixel classification. A complete method is proposed to extract information from the data and generate a classification model capable of isolating the pixels that belong to a plant from those that do not. Quantitative and qualitative results are presented to compare information-extraction methods and select the best model. Classical and threshold-based state-of-the-art methods are grouped in the present work for reference and application in image segmentation, obtaining acceptable results in plant isolation.

**Keywords:** similarity, classification, threshold, image processing, segmentation

## **1. Introduction**

There are three fields in image-based research: Image Processing, Computer Vision and Computer Graphics [1]. This is shown graphically in **Figure 1**.

Image processing takes an image as input and performs a set of operations to create a new image that improves the visualization of features of interest. It can also isolate features that carry no meaningful information so that they can be removed from the scene. In computer-aided cancer diagnosis for dermoscopy, shown in **Figure 2**, the lesion must be bounded, but there are meaningless elements such as hair and air bubbles, so the lesion visualization must be improved by removing all those elements. One approach is a processing chain that begins with a color space transformation, continues with hair detection and finishes with image inpainting [2].

Computer vision starts with an image and provides features of the objects in the scene. These features allow a quantitative description for object interpretation. In **Figure 3**, the region of interest (ROI), the hand in this case, is taken and the seven Hu moments are calculated to describe the hand sign as a feature vector, simplifying the classification process [3].

Computer graphics generates a visual representation of the behavior of a mathematical model. The models can be visualized as anything from lines to a video that shows behavior evolving over time. **Figure 4** shows a finite-difference description representing a coil with a ferromagnetic core.

#### **Figure 1.**

*Fields in image-based research. Source: [1].*

#### **Figure 2.**

*Hair inpainting in dermoscopic images.*

#### **Figure 3.**

*Static sign recognition. Source: [3].*

The applications are not exclusive to each field of study; the fields can be merged into a hybrid system that improves the description and composition to solve a specific problem. To remove the hair in **Figure 2**, a new image containing the hair, described by a statistical operator and a thresholding rule, is generated. The inpainting takes the original and generated images and processes only the pixels that do not describe the lesion, minimizing the classification error. The solution employs all three research fields of image-based systems.

**Color indexes.** For primary color image encoding, the RGB color space is used. This allows the storage and representation of the image in the red, green and blue channels [5–7].

*Thresholding Image Techniques for Plant Segmentation DOI: http://dx.doi.org/10.5772/intechopen.104587*

**Figure 4.** *Numerical analysis for non-destructive electromagnetic test. Source: [4].*

Transforming the color space brings another description of the image that simplifies processing or improves visualization.

In matrix representation, the color transformation has the following form,

$$I = T\_S \bullet p + k \tag{1}$$

where *I* is the color space with *N* components, *T<sub>S</sub>* is the *N* × 3 transformation matrix, *p* is the RGB vector representation of the pixel, and *k* is a constant vector. This representation allows a generalized transformation to *N* channels with *N* transformation equations.

Suppose the RGB to YUV color space transformation [5],

$$Y = 0.299R + 0.587G + 0.114B$$

$$\begin{aligned} U &= 0.492(B - Y) \\ V &= 0.877(R - Y) \end{aligned} \tag{2}$$

Substituting and expanding Eq. (2),

$$Y = 0.299R + 0.587G + 0.114B$$

$$U = -0.147R - 0.289G + 0.436B$$

$$V = 0.615R - 0.515G - 0.100B\tag{3}$$

For this case, *k* is a zero vector, which allows the RGB to YUV transformation to be written in matrix form as Eq. (4)

$$
\begin{pmatrix} Y \\ U \\ V \end{pmatrix} = \begin{pmatrix} 0.299 & 0.587 & 0.114 \\ -0.147 & -0.289 & 0.436 \\ 0.615 & -0.515 & -0.100 \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix} \tag{4}
$$

Because color transformations do not consider neighboring pixels, they are known as point-to-point operations. Color spaces are designed to separate color and lighting information. Moreover, there are other methods to transform color spaces and simplify feature extraction in images, which introduces the color index concept.
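Since Eq. (1) is a point-to-point operation, the whole image can be transformed in one matrix product. A minimal sketch in Python with NumPy (an illustrative choice; the chapter does not prescribe a language), applying the RGB-to-YUV matrix of Eq. (4):

```python
import numpy as np

# RGB -> YUV transformation matrix from Eq. (4); k is a zero vector here
T_S = np.array([[ 0.299,  0.587,  0.114],
                [-0.147, -0.289,  0.436],
                [ 0.615, -0.515, -0.100]])

def transform_color_space(image, T, k=None):
    """Apply I = T . p + k (Eq. (1)) to every pixel of an H x W x 3 image."""
    if k is None:
        k = np.zeros(T.shape[0])
    # einsum applies the N x 3 matrix to the color axis of every pixel at once
    return np.einsum('nc,hwc->hwn', T, image.astype(float)) + k

rgb = np.zeros((2, 2, 3))
rgb[..., 0] = 1.0                       # a pure-red test image
yuv = transform_color_space(rgb, T_S)   # every pixel becomes (Y, U, V)
```

For a pure-red pixel the result is simply the first column of the matrix: (0.299, −0.147, 0.615).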

A color index takes the color space information and generates a new channel, improving feature visualization according to the requirements. For plant images, Wang et al. estimate the nitrogen content of rice plants in digital images [8]. The authors compute a measure called GMR (see a representation in **Figure 5**) by subtracting the red channel from the green channel, and then apply a fixed threshold for plant segmentation.

The Color Index of Vegetation Extraction (CIVE) is used to separate plants from soil, allowing growth evaluation in crops; CIVE also shows a good response in outdoor environments [9, 10]. If the color is processed with the GMR and CIVE indexes, the color transformation is defined as,

$$
\begin{pmatrix} \text{GMR} \\ \text{CIVE} \end{pmatrix} = \begin{pmatrix} -1 & 1 & 0 \\ 0.441 & -0.811 & 0.385 \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix} + \begin{pmatrix} 0 \\ 18.78745 \end{pmatrix} \tag{5}
$$
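Eq. (5) can be evaluated the same way as any color transform. The sketch below (Python with NumPy, for illustration; the sample pixel values are invented) produces the GMR and CIVE channels:

```python
import numpy as np

# GMR and CIVE as one linear transform plus offset, Eq. (5)
T_idx = np.array([[-1.0,    1.0,   0.0],     # GMR = G - R
                  [ 0.441, -0.811, 0.385]])  # CIVE row
k_idx = np.array([0.0, 18.78745])

def color_indexes(rgb):
    """Return an H x W x 2 array with the GMR and CIVE channels."""
    return np.einsum('nc,hwc->hwn', T_idx, rgb.astype(float)) + k_idx

# A green-dominant pixel: GMR is positive and CIVE is low (vegetation-like)
px = np.array([[[60.0, 120.0, 40.0]]])
out = color_indexes(px)
```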

**Similarity measure.** The Minkowski distance is a generalized similarity measure [11–14], defined as,

$$d\_n(X, Y) = \left(\sum\_{i=1}^{n} |x\_i - y\_i|^n\right)^{1/n} \tag{6}$$

where *X* = (*x*<sub>1</sub>, *x*<sub>2</sub>, ⋯, *x<sub>n</sub>*) and *Y* = (*y*<sub>1</sub>, *y*<sub>2</sub>, ⋯, *y<sub>n</sub>*) ∈ ℝ<sup>*n*</sup> are data points between which the minimum distance is sought. If *n* = 2, the Euclidean distance is measured. Substituting in Eq. (6), the following expression is obtained,

$$d\_2(X, Y) = \sqrt{(X - Y)^T (X - Y)} \tag{7}$$

Another way to measure similarity is the Mahalanobis distance [11, 15], calculated by Eq. (8),

$$d\_M(X, Y) = \sqrt{(X - Y)^T \Sigma^{-1} (X - Y)} \tag{8}$$

where Σ is the covariance matrix.

**Figure 5.** *Colored GMR response of a leaf in an image acquired with polarized light.*


$$
\Sigma = \begin{bmatrix}
\sigma\_{11}^2 & \sigma\_{12}^2 & \cdots & \sigma\_{1n}^2 \\
\sigma\_{21}^2 & \sigma\_{22}^2 & \cdots & \sigma\_{2n}^2 \\
\vdots & \vdots & \ddots & \vdots \\
\sigma\_{n1}^2 & \sigma\_{n2}^2 & \cdots & \sigma\_{nn}^2
\end{bmatrix} \tag{9}
$$

If Σ = *I*, then Eq. (8) reduces to the Euclidean distance. Thus, a weighted Euclidean distance can be calculated as

$$d\_{\Omega}(X, Y) = \sqrt{(X - Y)^T \Omega^{-1} (X - Y)} \tag{10}$$

where Ω is an *n* × *n* weight matrix. If each component is independent from the others, then it is possible to define a weight matrix *W* = Σ ∘ *I*, the element-wise product that keeps only the diagonal of Σ:

$$W = \begin{bmatrix} \sigma\_{11}^2 & 0 & \cdots & 0 \\ 0 & \sigma\_{22}^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma\_{nn}^2 \end{bmatrix} \tag{11}$$

With the analysis above, there are three cases for the weight matrix:

1. Ω = Σ: the Mahalanobis distance is calculated

2. Ω = *I*: the Euclidean distance is calculated

3. Ω = *W*: the weighted Euclidean distance is calculated.

Finding features that describe the object of interest is required to calculate the similarity for each pixel in the image. The distance measure gives the similarity between the pixel and plants, background, fruit and more.
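The three cases differ only in the matrix Ω plugged into Eq. (10). A small NumPy sketch (the data points are invented for the example):

```python
import numpy as np

def weighted_distance(x, y, omega):
    """d_Omega(x, y) = sqrt((x - y)^T Omega^{-1} (x - y)), Eq. (10)."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(np.sqrt(d @ np.linalg.inv(omega) @ d))

X = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0], [3.0, 2.0]])
sigma = np.cov(X, rowvar=False)   # full covariance -> Mahalanobis (case 1)
I2 = np.eye(2)                    # identity -> Euclidean (case 2)
W = np.diag(np.diag(sigma))       # diagonal only -> weighted Euclidean (case 3)

p, mean = X[0], X.mean(axis=0)
d_mahal = weighted_distance(p, mean, sigma)
d_euclid = weighted_distance(p, mean, I2)
d_weight = weighted_distance(p, mean, W)
```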

**Thresholding.** Thresholding is based on a simple rule to cluster data into two groups *k*<sub>0</sub> and *k*<sub>1</sub> from a threshold *T*. The groups have no meaning in this clustering; moreover, it is possible to use this rule as a supervised method (classification) [16–19]. The rule *R* to identify a data point is described below.

$$R = \begin{cases} k\_0, & \varepsilon < T \\ k\_1, & \text{otherwise} \end{cases} \tag{12}$$

where *ε* is a measure according to analysis.

For digital images, an example of pixel identification is shown in **Figure 6**, where *T* is selected as the mean of the data and the pixels are assigned to a group accordingly.

There are two challenges in using this rule: which parameters define *ε* and what conditions determine *T*. According to the analysis, *ε* could be an independent component, meaning the rule uses only one color feature; furthermore, it is possible to make a feature composition that improves the results. From another perspective, *T* can be defined by a measure that describes the data features for *ε* and allows an acceptable division between groups.

**Figure 6.**

*Detected ROI by mean thresholding. a) Original image (Green Channel). b) Histogram (blue) and mean value (red). c) Grouped pixels.*
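The mean-threshold grouping of Eq. (12), illustrated in Figure 6, reduces to a few lines. A NumPy sketch over an invented toy channel:

```python
import numpy as np

def mean_threshold(channel):
    """Assign each pixel to k0 (False) or k1 (True) using T = mean, Eq. (12)."""
    T = channel.mean()
    return channel >= T

# Toy "green channel": a bright object over a dark background
img = np.array([[10.0,  12.0, 200.0],
                [11.0, 210.0, 205.0],
                [ 9.0,  13.0, 198.0]])
mask = mean_threshold(img)   # True for the four bright object pixels
```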

Among state-of-the-art methods for threshold calculation, the Otsu method stands out [20]. This method produces thresholds with the best separability over the data. Some indexes and the separability obtained are shown in **Figure 7**.
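For reference, the Otsu criterion can be implemented exhaustively by scanning every candidate level and keeping the one that maximizes the between-class variance. A sketch in NumPy (the bimodal toy data are invented):

```python
import numpy as np

def otsu_threshold(channel, levels=256):
    """Pick the level that maximizes the between-class variance (Otsu)."""
    hist, _ = np.histogram(channel, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0, w1 = p[:t].sum(), p[t:].sum()     # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0          # class means
        mu1 = (np.arange(t, levels) * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Bimodal toy data: one cluster near 50, another near 200
data = np.concatenate([np.full(100, 50), np.full(100, 200)])
t = otsu_threshold(data)   # lands between the two clusters
```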

To measure the quality of the data isolation, the information gain is used. Information gain, well suited to decision tree generation, is an indicator of data homogeneity [21–23]. It allows a class probability measure for a dataset distribution, indicating that all classes have the same probability when the entropy is equal to one.

Information gain is calculated by Eq. (13)

$$G(X|T\_n) = H(X) - \sum\_{v \in T\_n} \frac{\left|X|T\_v\right|}{|X|} \bullet H(X|T\_v) \tag{13}$$

where *v* is each possible value of the random variable for the analyzed color index *T<sub>n</sub>*, *X*∣*T<sub>v</sub>* is the sub dataset generated for each value, |∙| is the dataset cardinality and *H*(*X*) is the dataset entropy.

For the threshold case,

$$G(X|T\_n) = H(X) - \left(\frac{\left|X|T\_{<}\right|}{|X|} \bullet H(X|T\_{<}) + \frac{\left|X|T\_{\ge}\right|}{|X|} \bullet H(X|T\_{\ge})\right) \tag{14}$$

**Figure 7.**

*CVPPP color index pixel generated dataset break up. a) B Channel from RGB space: There is no way to separate data; b) S Channel from HSV space: There is a medium separability; c) a channel from lab space: There is an acceptable break up. Green: Plant, red: Background, histogram: 64 levels.*


Entropy is a measure of the uncertainty of a random variable and is the most widely used information measure in this kind of process [24–26]. In a normalized way, the Shannon entropy is calculated as follows

$$H(\mathbf{X}) = -\sum\_{k=1}^{C} P\_k \bullet \log\_C(P\_k) \tag{15}$$

where *C* is the number of available classes in the dataset and *P<sub>k</sub>* is the corresponding probability of each class.

Considering *M* pixels where each class has *M*/*C* elements, Eq. (15) changes to,

$$H(X) = -\sum\_{k=1}^{C} \frac{1}{C} \bullet \log\_{C} \left(\frac{1}{C}\right) \tag{16}$$

For the binary case, *C* is replaced by 2, then

$$H(X) = -\sum\_{k=1}^{2} \frac{1}{2} \bullet \log\_2\left(\frac{1}{2}\right) = 1 \tag{17}$$

If *H*(*X*) = 1 with *C* > 1, then all classes have the same probability. This means that the dataset distribution for feature selection is uniform.

If the conditional entropies *H*(*X*|*T*<sub><</sub>) = *H*(*X*|*T*<sub>≥</sub>) = 0, then each sub dataset contains elements that belong to only one class *k*; this indicates total separability.
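Eqs. (13)–(17) translate directly into code. A NumPy sketch (the feature values and labels are invented; the feature is perfectly separable, so the gain reaches H(X) = 1):

```python
import numpy as np

def entropy(labels, C=2):
    """Normalized Shannon entropy of a label vector, Eq. (15)."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p) / np.log(C)))

def information_gain(values, labels, T):
    """G(X|T) via Eq. (14): split the dataset on values < T vs values >= T."""
    gain = entropy(labels)
    for part in (labels[values < T], labels[values >= T]):
        if part.size:
            gain -= part.size / labels.size * entropy(part)
    return gain

vals = np.array([0.1, 0.2, 0.3, 0.8, 0.9, 1.0])   # a color-index feature
labs = np.array([0, 0, 0, 1, 1, 1])               # 0: background, 1: plant
gain = information_gain(vals, labs, 0.5)
```

With balanced classes the prior entropy is exactly 1, and a split that isolates each class drives both conditional entropies to 0, so the gain equals 1.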

## **2. Methodology**

The proposed method consists of two main stages: feature selection and classification. Feature selection takes the features that best separate the pixels into foreground and background. Classification uses a similarity measure to compare the pixels with the plant description and assign a specific class. Both processes are described below.

To select the features that best describe plant pixels, the method requires a similarity measure. This measure allows comparing features and identifying which ones provide an acceptable separation of the data. Considering a dataset with the same number of samples per class, the dataset has the form described in **Figure 8**.

The feature *i<sub>n</sub>* is considered a continuous random variable. To simplify the generation of the classification model with simple tools, the random variable is discretized to a binary one. The resulting dataset is presented in **Figure 9**.

For data classification, the minimum distance between the pixel and the object of interest is sought. This analysis requires the features that describe the object to be compared with the features that describe the pixel. For plant image processing, some of these features are color indexes such as NCIVE, MNGRDI, GMR, etc. The feature selection depends on which of them maximize data separation.

Once the features are defined, independent statistical features are calculated. This information allows an orientation correction and the computation of the distance threshold magnitude. In this analysis all components are considered independent of each other, so the distance measure uses the third scenario (Ω = *W*). This weights the features that have more variability and provides an eccentricity adjustment. An orientation correction must be applied to the data because the Euclidean distance is rotation variant.
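The chapter does not prescribe an algorithm for the orientation correction, so the sketch below uses one plausible realization: rotating the data onto the eigenvectors of its covariance matrix (a PCA-style de-rotation), after which the components are uncorrelated and the independence assumption behind the weighted Euclidean distance holds:

```python
import numpy as np

def orientation_correction(points):
    """Rotate data so its principal axes align with the coordinate axes.

    One possible realization of the correction: take the rotation from the
    eigenvectors of the covariance matrix, so the rotated components are
    uncorrelated.
    """
    centered = points - points.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)   # columns are principal directions
    return centered @ eigvecs          # coordinates in the rotated frame

# Strongly correlated 2-D toy data; after correction the covariance is diagonal
rng = np.random.default_rng(0)
x = rng.normal(size=500)
pts = np.column_stack([x, 0.8 * x + 0.1 * rng.normal(size=500)])
rotated = orientation_correction(pts)
off_diag = np.cov(rotated, rowvar=False)[0, 1]   # ~0 after de-rotation
```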


**Figure 8.**

*Pixel dataset representation, where i is the feature vector that describes the object under study. The dataset contains N features, and Class describes the meaning of the feature vector, in this case foreground or background.*


**Figure 9.**

*Discretized variable dataset representation, where Tn indicates whether the threshold is exceeded.*

For the threshold calculation, the standard deviations *σ*<sub>11</sub>, *σ*<sub>22</sub>, ⋯, *σ<sub>nn</sub>* ∈ ℝ<sup>*n*</sup> are considered as a component vector. The magnitude of this vector is calculated with the weighted Euclidean distance and assigned to the threshold. The standard deviation is used because the variance is not expressed in the same scale as the data. This idea is expressed as follows,

$$Th = d\_{\Omega}(0, \sigma) \big|\_{\Omega = W} \tag{18}$$

Before calculating distances, an orientation correction is applied using the angle described by the data. Finally, the thresholding defines whether a pixel is plant or not with the following rule

$$Plant\_{(r,c)} = \begin{cases} True & d\_W \left( p\_{(r,c)}, R \right) < Th \\ False & otherwise \end{cases} \tag{19}$$

where *Plant*<sub>(*r*,*c*)</sub> is the classification as plant of pixel *p*<sub>(*r*,*c*)</sub> and *R* is the feature vector defined by the data in the plant class. As a result, the classification model obtained with this method is shown in **Figure 10**.
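Eqs. (18) and (19) combine into a compact classifier. A hedged NumPy sketch (the function names and toy feature values are invented; note that with W = diag(σ²), the threshold of Eq. (18) evaluates to √n):

```python
import numpy as np

def fit_plant_model(plant_features):
    """Learn a reference vector R and threshold Th from plant-class pixels.

    plant_features: M x n matrix of color-index values (e.g. a* and NCIVE)
    sampled from pixels labeled as plant. Components are assumed independent,
    so only the diagonal of the covariance is kept (case 3, Omega = W).
    """
    R = plant_features.mean(axis=0)                  # reference description
    sigma = plant_features.std(axis=0, ddof=1)       # per-feature std dev
    W = np.diag(sigma ** 2)                          # weight matrix, Eq. (11)
    Th = np.sqrt(sigma @ np.linalg.inv(W) @ sigma)   # Eq. (18); equals sqrt(n)
    return R, W, Th

def classify_pixels(pixels, R, W, Th):
    """Eq. (19): a pixel is plant when d_W(p, R) < Th."""
    d = pixels - R
    W_inv = np.linalg.inv(W)
    dists = np.sqrt(np.einsum('ij,jk,ik->i', d, W_inv, d))  # row-wise d_W
    return dists < Th

# Invented toy feature values for four plant-labeled pixels
plant = np.array([[1.0, 2.0], [1.2, 2.1], [0.9, 1.8], [1.1, 2.2]])
R, W, Th = fit_plant_model(plant)
labels = classify_pixels(np.array([[1.05, 2.025], [10.0, 10.0]]), R, W, Th)
```

A pixel at the class mean is accepted, while a distant one is rejected, reproducing the decision boundary of Figure 10.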


#### **Figure 10.**

*Pixel classification map. a) Data classification with decision boundary in Th; b) normalized classification map with decision boundary in [Th, 2Th, 3Th]. Green: Plant, red: Background, black: Boundary decision. Th: Computed threshold.*

## **3. Results**

For the following experiments, MATLAB 2020b was used on a laptop with a Xeon E3-1505M processor and 32 GB of RAM. Only matrix operations were used for data processing and image representation.

According to the study in [27], the optimal channels for separating the data are the L and a channels of the Lab color space. Some experiments were carried out according to the variability of the CVPPP dataset [28], which is the test dataset. On the one hand, comparing both results shows that the a channel matches [27] as the best index to isolate plant pixels. On the other hand, there are other indexes that provide an acceptable separability, NCIVE and MNGRDI, shown in **Tables 1** and **2**. Comparatively, the Otsu method threshold gives lower separability than the supervised entropy.


#### **Table 1.**

*Information gain of the indexes. Otsu method.*


#### **Table 2.**

*Information gain of the indexes. Supervised entropy.*

Another observation is the similarity in the ordering of the separability quality. Moreover, the supervised entropy knows the meaning of the data; consequently, its quality measure is better in most cases.

**Figure 11** shows some examples of the data separability achieved by the index thresholds: the two best cases, with acceptable separability, and the two worst cases, with no apparent way to separate the data.

Finally, the classification map for plant description pixels based on the a-NCIVE indexes is shown in **Figure 12**.

According to this model, the pixels can be classified as plant or not. Some visual results are shown in **Figure 13**.

**Figure 11.** *Thresholds and information gains for some color indexes. Left: Supervised entropy method, right: Otsu method.*


**Figure 12.**

*Classification map for plant pixel segmentation. Th: Computed threshold.*

**Figure 13.** *Visual results in plant image segmentation for CVPPP dataset.*

## **4. Conclusions**

Thresholding methods are effective when the problem data are well defined and the overlap between groups is minimal. Furthermore, they are simple methods that provide acceptable results for the segmentation problem. Combining methods makes it possible to raise the quality of the models.

Entropy, used in a supervised way, can improve the data separability. Because the Otsu method only maximizes the variance between groups, the quality of the results using supervised entropy is improved as a consequence of considering the meaning of the data. Supervised methods know the expected response and split the data into the corresponding classes, whereas Otsu can only cluster data that do not yet have a well-defined meaning.

Distance measures quantify the similarity of the compared data and only need a reference description of the studied object. In this case, the pixels defined as plant provide, in a statistical way, a reference against which new pixels are compared and classified. Thus, segmentation keeps the pixels that belong to the plant and discards those that do not.

From **Tables 1** and **2**, the best indexes have the same order in both results. Furthermore, the calculated threshold improves the data separability in the supervised entropy case. This allows the development of classification maps, like the one in **Figure 12**, that consider the indexes achieving the best pixel separation. **Figure 11** shows some separation scenarios for pixels that are plant (green distribution) and those that are not (red distribution). Finally, the classification map in **Figure 12** illustrates the best classifier obtained with the method to select pixels that belong to the plant class.

## **Acknowledgements**

To M. Minervini et al. for providing their dataset. This work was supported by the Instituto Politécnico Nacional de México (IPN) (Grant ID: 20201681) and the Consejo Nacional de Ciencia y Tecnología (CONACyT), project 240820.

## **Conflict of interest**

The authors declare no conflict of interest.

## **Author details**

Miguel Ángel Castillo-Martínez\*, Francisco Javier Gallegos-Funes, Blanca E. Carvajal-Gámez, Guillermo Urriolagoitia-Sosa and Alberto J. Rosales-Silva Escuela Superior de Ingeniería Mecánica y Eléctrica, Instituto Politécnico Nacional, Col. Lindavista, Ciudad de Mexico, Mexico

\*Address all correspondence to: macastillom@ipn.mx

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


## **References**

[1] Hunt K. Introduction. In: The Art of Image Processing with Java. United States: A K Peters/CRC Press; 2010. pp. 1–12. Available from: https://doi.org/ 10.1201/9781439865590

[2] Castillo Martínez MA, Gallegos Funes FJ, Rosales Silva AJ, Ramos Arredondo RI. Preprocesamiento de imágenes dermatoscopicas para extracción de características. Research in Computing Science. 2016;**114**(1):59-70. Available from: http://rcs.cic.ipn.mx/2016\_114/Preprocesamiento de imagenes dermatoscopicas para extraccion de caracteristicas.pdf

[3] Pérez LM, Rosales AJ, Gallegos FJ, Barba AV. LSM static signs recognition using image processing. In: 14th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE) [Internet]. Mexico City: IEEE; 2017. pp. 1-5. Available from: http://ieeexplore.ieee. org/document/8108885/

[4] Chávez-González AF, Aguila-Munoz J, Perez-Benitez JA, Espina-Hernandez JH. Finite differences software for the numeric analysis of a non-destructive electromagnetic testing system. In: 23rd International Conference on Electronics, Communications and Computing [Internet]. Cholula: IEEE; 2013. pp. 82-86. Available from: http://ieeexplore.ieee.org/document/ 6525764/

[5] Burger W, Burge MJ. Color images. In: Digital Image Processing. London: Springer; 2nd edition, 2016. pp. 291–328. Available from: http://link.springer.com/ 10.1007/978-1-4471-6684-9\_12

[6] Sundararajan D. Color image processing. In: Digital Image Processing [Internet]. Singapore: Springer

Singapore; 2017. pp. 407-438 Available from: http://link.springer.com/10.1007/ 978-981-10-6113-4\_14

[7] Gonzalez RC, Woods RE. Color image processing. In: Digital Image Processing. England: Pearson; 2018. 4th edition, pp. 399-461

[8] Wang Y, Wang D, Zhang G, Wang J. Estimating nitrogen status of rice using the image segmentation of G-R thresholding method. Field Crops Research. 2013;**149**:33-39. DOI: 10.1016/j.fcr.2013.04.007

[9] Hamuda E, Glavin M, Jones E. A survey of image processing techniques for plant extraction and segmentation in the field. Computers and Electronics in Agriculture. 2016;**125**:184-199. DOI: 10.1016/j.compag.2016.04.024

[10] Kataoka T, Kaneko T, Okamoto H, Hata S. Crop growth estimation system using machine vision. Proceedings of 2003 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2003). 2003;**2** (Aim):1079-1083

[11] Flach P. Distance-based models. In: Machine Learning [Internet]. Cambridge: Cambridge University Press; 2012. pp. 231-261. Available from: https://www. cambridge.org/core/product/identifier/ CBO9780511973000A014/type/book\_part

[12] Aldridge M. Clustering: An overview. In: Lecture Notes in Data Mining [Internet]. London: World Scientific; 2006. pp. 99-107. Available from: http://www.worldscientific.com/ doi/abs/10.1142/9789812773630\_0009

[13] Zhou ZH. Clustering. In: Machine Learning [Internet]. Singapore: Springer Singapore; 2021. pp. 211-240. Available from: https://link.springer.com/10.1007/ 978-981-15-1967-3\_9

[14] Zhang D. Image ranking. In: Fundamentals of Image Data Mining. Cham: Springer; 2019. p. 271-287. Available from: http://link.springer. com/10.1007/978-3-030-17989-2\_12

[15] Zhang Y, Li Z, Cai J, Wang J. Image segmentation based on FCM with mahalanobis distance. In: International Conference on Information Computing and Applications. Berlin: Springer; 2010. pp. 205-212. Available from: http:// link.springer.com/10.1007/978-3- 642-16167-4\_27

[16] Flach P. Machine Learning [Internet]. 2nd ed. Cambridge: Cambridge University Press; 2012. Available from: http://ebooks.cambridge .org/ref/id/CBO9780511973000

[17] Sundararajan D. Digital Image Processing [Internet]. Singapore: Springer Singapore; 2017. Available from: https://link.springer.com/book/ 10.1007/978-981-10-6113-4

[18] Burger W, Burge MJ. Point Operations. In: Digital image processing [internet]. 2nd ed. London: Springer; 2016:57-88. Available from: https://link.springer.com/chapter/ 10.1007/978-1-4471-6684-9\_4

[19] Gonzalez RC, Woods RE, Eddins SL. Image segmentation I. In: Digital Image Processing Using MATLAB. 3rd ed. United States of America: Gatesmark Publishing; 2020. pp. 633-721

[20] Otsu N. A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics. 1979;**9**(1):62-66 [Internet] Available from: http://ieeexplore.ieee.org/ document/4310076/

[21] Rokach L, Maimon O. Splitting criteria. In: Data Mining with Decision Trees [Internet]. Singapore:World Scientific Publishing; 2014. pp. 61-68. Available from: http://www.worldscientific.com/ doi/abs/10.1142/9789814590082\_0005

[22] Masud MM, Khan L, Thuraisingham B. Email worm detection using data mining. In: Techniques and Applications for Advanced Information Privacy and Security [Internet]. Hershey, PA: IGI Global; 2009. pp. 20-34. Available from: http://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/978-1-60566-210-7.ch002

[23] Omitaomu OA. Decision trees. In: Lecture Notes in Data Mining [Internet]. London: World Scientific; 2006. pp. 39-51. Available from: http://www. worldscientific.com/doi/abs/10.1142/ 9789812773630\_0004

[24] Addison PS. The Illustrated Wavelet Transform Handbook [Internet]. Biomedical Instrumentation and Technology. Boca Raton: CRC Press; 2017. p. 163. Available from: https:// www.taylorfrancis.com/books/ 9781315372556

[25] Rubinstein RY, Kroese DP. The Cross-Entropy Method [Internet]. New York, NY: Springer New York; 2004 (Information Science and Statistics). Available from: http://link.springer. com/10.1007/978-1-4757-4321-0

[26] Ito S, Sagawa T. Information flow and entropy production on Bayesian networks. In: Mathematical Foundations and Applications of Graph Entropy [Internet]. Weinheim, Germany: Wiley-VCH Verlag GmbH & Co. KGaA; 2016. pp. 63-99 Available from: http://doi.wiley. com/10.1002/9783527693245.ch3

[27] Hernández-Hernández JL, García-Mateos G, González-Esquiva JM, Escarabajal-Henarejos D, Ruiz-Canales A, Molina-Martínez JM. Optimal color space selection method for plant/soil segmentation in agriculture. Computers and Electronics in Agriculture. 2016;**122**:124-132

[28] Minervini M, Fischbach A, Scharr H, Tsaftaris SA. Finely-grained annotated datasets for image-based plant phenotyping. Pattern Recognition Letters. 2016;**81**:80-89. DOI: 10.1016/j.patrec.2015.10.013

