**3. Proxy-detection image processing for wheat ear counting**

Yield prediction from proxy-detection imagery requires the determination of the three components of the yield: the number of ears per m², the number of grains per ear and the weight per thousand seeds. Wheat ear shapes are highly variable, especially because of the different inclinations of the ears. However, wheat ears show a very rough surface, whereas leaf surfaces are smoother (figure 4).

Fig. 4. 3D surface of leaf and ear

Given this specific roughness (figure 4), texture analysis can be used to discriminate wheat ears, leaves and ground.

**3.1 Texture feature extraction**

A variety of texture feature extraction methods have been proposed in the literature and tested in practice. They fall into three families: statistical, spectral and structural methods. A successful texture classification or segmentation requires an efficient feature extraction methodology.

Two methods are discussed in this chapter. The first is based on statistical analysis through cooccurrence matrices, and the second is based on Fourier filtering.

**3.1.1 Cooccurrence matrices**

The principle of these matrices is to count how often pairs of grey levels occur in a digital image of a texture. We take into account pairs of pixels located along a given direction, defined by an angle $\theta$, and separated by a fixed distance *d*.

Let $L_I = \{1, 2, \dots, N_I\}$ and $L_J = \{1, 2, \dots, N_J\}$ be the horizontal and vertical spatial domains, respectively, and $G = \{0, 1, \dots, N_G - 1\}$ the set of grey levels.

The digital image $f : L_I \times L_J \to G$ assigns a grey level to each pixel.

Furthermore, we consider four directions ($\theta$ = 0°, 45°, 90° and 135°) along which a pair of pixels can lie. For a given distance *d*, we obtain the unnormalized frequencies $P(\gamma, \gamma', \theta, d)$.

For instance, $P(\gamma, \gamma', 0°, d)$ is defined as the cardinality of the set of pixel pairs having the following properties:

$$\begin{cases} f(i,j) = \gamma \quad \text{and} \quad f(i',j') = \gamma'\\ |i-i'| = d, \quad j = j' \end{cases} \tag{1}$$

For the other angles, the corresponding frequencies are defined similarly. In this document, we consider the distance *d* = 1, i.e. neighbouring pixels. Moreover, the texture of the wheat ears and, more generally, of the pictures, is not oriented along a common direction. Thus, for each pair of grey levels we sum the four frequencies (corresponding to the four angles) and write:

$$P_{\gamma\gamma'} = \sum_{\theta} P(\gamma, \gamma', \theta, 1) \tag{2}$$

Then $p_{\gamma\gamma'} = P_{\gamma\gamma'} \big/ \sum_{\gamma,\gamma'} P_{\gamma\gamma'}$ is the element of the normalized cooccurrence matrix.

Generally, cooccurrence matrices are not used directly because of their size. Instead, a set of measures is computed from these matrices, such as those proposed by Haralick (1973).

Figure 5 shows three samples of ear, soil and leaf textures: the wheat ear texture reveals many transitions between very different grey tones, while for leaves and soil the transitions are more progressive.

Fig. 5. Examples of the three textural patterns
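As a minimal sketch of equations (1)–(2), the summed and normalized matrix can be computed as follows (assuming an integer-valued NumPy image already posterized to `levels` grey levels; the function name is ours, not from the chapter):

```python
import numpy as np

def cooccurrence_matrix(img, levels):
    """Normalized grey-level cooccurrence matrix, summed over the four
    directions (0°, 45°, 90°, 135°) at distance d = 1, as in eq. (2)."""
    P = np.zeros((levels, levels), dtype=np.float64)
    # One (di, dj) pixel offset per angle at distance d = 1.
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1)]
    h, w = img.shape
    for di, dj in offsets:
        for i in range(h):
            for j in range(w):
                i2, j2 = i + di, j + dj
                if 0 <= i2 < h and 0 <= j2 < w:
                    P[img[i, j], img[i2, j2]] += 1
    # The |i - i'| = d condition of eq. (1) counts both orderings of a pair,
    # so we symmetrize before normalizing.
    P = P + P.T
    return P / P.sum()
```

The double loop keeps the correspondence with the set definition of equation (1) explicit; a vectorized version would shift the whole array once per offset instead.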

Thus, we compute four of Haralick's features, which characterize the particular disposition of the cooccurrence matrix elements for the three texture classes:

energy (angular second moment):

$$\sum_{\gamma,\gamma'} p_{\gamma\gamma'}^2 \tag{3}$$

inverse difference moment:

$$\sum_{\gamma,\gamma'} \frac{1}{1 + (\gamma - \gamma')^2} \, p_{\gamma\gamma'} \tag{4}$$
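These measures are cheap to evaluate once the normalized matrix is available. A sketch (NumPy; the function name is ours) of equations (3)–(4), together with the contrast and entropy defined below:

```python
import numpy as np

def haralick_features(p):
    """Four Haralick features of a normalized cooccurrence matrix p."""
    g = np.arange(p.shape[0])
    diff2 = (g[:, None] - g[None, :]) ** 2        # (gamma - gamma')^2
    energy = np.sum(p ** 2)                       # eq. (3)
    idm = np.sum(p / (1.0 + diff2))               # eq. (4)
    contrast = np.sum(diff2 * p)                  # eq. (5)
    nz = p[p > 0]                                 # skip zero cells: log(0)
    entropy = -np.sum(nz * np.log(nz))            # eq. (6)
    return contrast, energy, entropy, idm
```

Note that a high-contrast, high-entropy texture (many transitions between distant grey tones, as on ears) necessarily has low energy and a low inverse difference moment, since the matrix mass spreads away from the diagonal.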

contrast:

$$\sum_{\gamma,\gamma'} (\gamma - \gamma')^2 \, p_{\gamma\gamma'} \tag{5}$$

entropy:

$$-\sum_{\gamma,\gamma'} p_{\gamma\gamma'} \times \log\left(p_{\gamma\gamma'}\right) \tag{6}$$

The four coefficients were calculated over 17×17-pixel windows surrounding each pixel. This window size is smaller than the size of the ears and reduces noise effects. To limit the computational cost, the cooccurrence matrices were calculated on posterized pictures (32 grey levels), which also avoids obtaining hollow (sparse) matrices, given the number of pixels in each window.

Table 1 shows the values of the four features for the three samples of figure 5.

| Texture | contrast | energy | entropy | inverse difference moment |
|---------|----------|--------|---------|---------------------------|
| ears | 8.1 | 0.04 | 4.9 | 0.3 |
| leaves | 2.5 | 0.22 | 3.6 | 0.6 |
| soil | 0.3 | 0.94 | 2.0 | 0.8 |

Table 1. Values of the four Haralick's parameters associated with the three textural patterns

The features computed from the cooccurrence matrices are used in a learning step to construct a discriminant function, called here the textural function, which efficiently identifies the pixels belonging to the « ears » class.

Considering a trial set of windows, a stepwise selection is used to select a subset of the four parameters described above that produces a good discrimination model.

At the first step, we compute the variance ratio F for each feature:

$$F = \frac{\text{between-group variance}}{\text{within-group variance}} = \frac{\sum_{i} n_{i} \times \left(\overline{x}_{i.} - \overline{x}_{..}\right)^{2} / (g-1)}{\sum_{i} \sum_{k} \left(x_{ik} - \overline{x}_{i.}\right)^{2} / (n-g)} \tag{7}$$

where *g* = 2 is the number of groups (one for ears, one for leaves and ground), $n_i$ is the number of windows in group *i* (with $n = \sum_i n_i$), $\overline{x}_{i.}$ is the mean of group *i*, $\overline{x}_{..}$ is the mean of the trial set, and $x_{ik}$ is the value of window *k* belonging to group *i*.

The value of the variance ratio reflects the parameter's contribution to the discrimination if it is included in the textural function. The parameter with the largest ratio is selected first. Afterwards, at each step, the order of insertion is determined using the partial F-ratio as a measure of the importance of the parameters not yet in the equation. As soon as the partial F-ratio of the most recently entered parameter becomes non-significant, the process is stopped.

In a second stage, a discriminant analysis with the selected parameters is computed. Because there are only two groups to discriminate, a single discriminant factor *FD* (the textural function) is obtained.

**3.1.2 Results from the cooccurrence matrices**

To illustrate the efficiency of this textural function, the FD values obtained over the image may be rescaled to a grey-level scale. For example, figure 6 shows an original image and the corresponding result after this first stage.

Fig. 6. Example of the textural function FD (right) for a given wheatear image (left)

**3.1.3 Results and discussion**

To show its relevance, the textural function is used in the energy function of a Maximum A Posteriori (MAP) segmentation algorithm (Martinez de Guerenu et al., 1996). Figure 7 shows separately the wheatear and non-wheatear regions resulting from this segmentation stage. The wheatear region pictures show false positive detections (leaves or stems classified as wheatears), while the non-wheatear region pictures show false negative detections (wheatears that have been missed). Both kinds of erroneous classification are circled on the pictures.

Fig. 7. Wheatear and non-wheatear region results for MAP segmentation
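The first step of the stepwise selection can be sketched as follows (a simplified illustration under our own naming, not the authors' implementation): compute the variance ratio of equation (7) for each candidate feature over a labeled trial set of windows, then enter the feature with the largest ratio first.

```python
import numpy as np

def variance_ratio(x, labels):
    """Variance ratio F of eq. (7) for one feature over a labeled trial set."""
    groups = np.unique(labels)
    g, n = len(groups), len(x)
    grand_mean = x.mean()
    # Between-group variance: group-size-weighted spread of the group means.
    between = sum(np.sum(labels == c) * (x[labels == c].mean() - grand_mean) ** 2
                  for c in groups) / (g - 1)
    # Within-group variance: spread of the windows around their group mean.
    within = sum(np.sum((x[labels == c] - x[labels == c].mean()) ** 2)
                 for c in groups) / (n - g)
    return between / within

def first_selected(features, labels):
    """First step of the stepwise selection: the feature with the largest F."""
    return max(features, key=lambda name: variance_ratio(features[name], labels))
```

A feature that separates the « ears » windows cleanly from the rest yields a large F, whereas a feature whose group means coincide yields F close to zero, which is exactly the ordering the stepwise procedure exploits.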