**3. Machine vision**

Field plants, residue, and soil ecosystems are very complex, but machine vision technology has the potential to systematically unravel and identify plants using the optical properties, shape, and texture of leaves (Meyer et al., 1998). Considerable research has been reported on using optical or remote sensing sensors to assess crop health from the surface reflectance of green plants in agricultural fields (Gausman et al., 1973; Tucker et al., 1979; Gausman et al., 1981; Thomas et al., 1988; Storlie et al., 1989; Tarbell and Reid, 1991; Franz et al., 1991b; and others). Hagger et al. (1983, 1984) reported the first prototype reflectance-based plant sensor for spraying weeds. Hummel and Stoller (2002) evaluated a later commercial weed-sensing system and noted its problems. Tian et al. (1999) developed a simple weed seeker in Illinois. Unfortunately, subsequent optical, non-image, sensor-based weed seekers and spot sprayers have not gained commercial acceptance, for several reasons. First, single-element optical sensors change the size of their field of view with lens properties and distance to a target. Second, sensed reflectance properties may change with the spatial arrangement of target components within the field of view (Woebbecke et al., 1994). Finally, these sensors therefore may not always distinguish conclusively between crop, weed, and soil or residue background. The voltage signal originating from an optical diode or transistor, together with the Gaussian lens system used to create the field of view, presents a weighted-average problem in which the proportions of contributing reflectance and spatial contents are unknown. That problem can be solved only by spatial image analysis.

dicots (Mortensen et al., 1992; Johnson et al., 1993, 1995). Monocots differ architecturally from dicots. Most weeds are serious competitors for moisture and soil nutrients. By first classifying a weed as either a monocot or a dicot, a herbicide can be selected that most effectively controls that type of plant, resulting in better application efficiencies; most postemergent herbicides are selective in controlling one plant type or the other. Wiles and Schweizer (1999, 2002) researched the spatial distribution of weed seed banks, using soil samples to map their locations in a given field. Seed banks have been found to be distributed in a patchy manner. Using the maps as a guide, farmers could treat just the weed patches with minimal amounts of the appropriate chemical. Site-specific weed management could mean a significant reduction in herbicide use, which saves the farmer money and benefits the environment. However, a large number of soil and plant samples are needed to get an accurate map, and that can be costly.

Stubbendick et al. (2003) provided a comprehensive compendium of weedy plants found across the Great Plains of the United States. Color plates were provided of canopy architecture, sometimes with close-ups of individual leaves, flowers, and fruit, and a hand drawing of canopy architecture was also given. To recognize a particular species, one needs to understand the concept of inflorescence and various plant taxonomy terms. There are many existing plant image databases around the United States; however, their suitability as reference images for machine vision applications has yet to be determined. An important application of machine vision is site-specific or spot herbicide application to reduce the total amount of chemical applied (Lindquist et al., 1998, 2001a,b; Medlin et al., 2000). Therefore, a major need for improved weed IPM and for ecological assessment of invasive plant species is the development of a low-cost but high-resolution machine vision system to determine plant incidence, even when a plant is embedded among other plants, and to identify the species. Machine vision systems should assist in the creation of plant field maps, leading to valid action thresholds (National Roadmap for IPM, 2004).

Image analysis is a mathematical process to extract, characterize, and interpret tonal information from the digital picture elements (pixels) of a photographic image. The amount of detail available depends on the resolution and tonal content of the image. The process is iterative, starting with large features and proceeding to finer detail as needed. However, shape or textural feature extraction first requires identification of targets or regions of interest (ROIs). These regions are then simply classified as green plants or background (soil, rocks, and residue). ROIs can be identified with supervised control of the camera or field of view (Woebbecke et al., 1994; Criner et al., 1999), using a supervised virtual software window, cropping of selected areas, or unsupervised crisp or fuzzy segmentation procedures. ROIs are then binarized to distinguish target from background. Binarized images are used for shape analysis, or as boundary templates for textural feature analysis, in which the binary image is combined with tonal intensity images of the targets (Gerhards and Christensen, 2003; Meyer et al., 1999; Kincaid and Schneider, 1983; Jain, 1989; Gonzalez and Woods, 1992; and others).
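As a concrete illustration of the binarize-then-combine step described above, the sketch below thresholds a small tonal ROI and masks the tonal image with the resulting binary template. The 3×3 array and the threshold of 100 are purely illustrative assumptions, not values from the cited studies:

```python
import numpy as np

def binarize_roi(gray, threshold):
    """Binarize a tonal (grayscale) ROI: True = target (plant), False = background."""
    return gray > threshold

def masked_tonal(gray, mask):
    """Combine the binary template with the tonal image: background pixels zeroed."""
    return np.where(mask, gray, 0)

# Illustrative 3x3 tonal ROI (values 0-255) and an assumed threshold of 100.
roi = np.array([[200,  30, 180],
                [ 25, 210,  40],
                [190,  35, 205]])
mask = binarize_roi(roi, 100)
print(mask.sum())                     # prints 5: five pixels classed as plant
print(masked_tonal(roi, mask).max())  # prints 210
```

The masked tonal image is the form passed on to shape or textural feature analysis.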

Machine vision offers the best potential to automatically extract, identify, and count target plants based on color, shape, and textural features (Tillett et al., 2001). However, directing the image analysis process toward the classical botanical, taxonomic approach to plant identification has previously required considerable supervised human intervention. A major problem is the presentation of plant features, including individual leaves and canopy architecture, to a discrimination or classification system. Camargo Neto et al. (2004a,b; 2005) presented a combination of traditional image processing techniques, fuzzy clustering, pattern recognition, and a fuzzy inference neural network to identify plants based on their leaves. A particularly difficult problem was the development of an algorithm to extract individual leaves from color images of complex canopies and soil/residue backgrounds.

If image vegetation/background classification is to be useful for plant species identification, a separated plant region of interest (ROI) must be found to provide the canopy information needed to discriminate, at the very least, broadleaf versus grass species (Woebbecke et al., 1995a; Meyer et al., 1998). Four basic steps for a computerized plant species classification system were presented by Camargo Neto (2004). The first step is to create a binary image that accurately separates plant regions from background. The second step is to use the binary template to isolate individual leaves as sub-images from the original set of plant pixels (Camargo Neto et al., 2006a). The third step is to apply shape feature analysis to each extracted leaf (Camargo Neto et al., 2006b). The fourth and final step is to classify the plant species botanically, using the additional leaf venation textural features acquired during the previous steps (Camargo Neto and Meyer, 2005). Machine vision plant image analysis has been greatly enhanced by the introduction of digital cameras with automatic color balance and focusing (Meyer et al., 2004). Digital cameras run in automatic mode make their own decisions about the "best picture," and thus are extremely popular as consumer products.
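The second step above presupposes that connected plant regions in the binary template can be labeled individually before being cut out as sub-images. The sketch below is a generic 4-connected region labeling, a minimal stand-in for that idea and not Camargo Neto's actual leaf extraction algorithm:

```python
import numpy as np
from collections import deque

def label_regions(binary):
    """Label 4-connected foreground regions of a binary image.
    Each connected blob receives an integer id; 0 = background."""
    labels = np.zeros(binary.shape, dtype=int)
    count = 0
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            if binary[i, j] and labels[i, j] == 0:
                count += 1                      # start a new region
                q = deque([(i, j)])
                labels[i, j] = count
                while q:                        # breadth-first flood fill
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = count
                            q.append((ny, nx))
    return labels, count

# Illustrative binary template with two separate "plant" blobs.
binary = np.array([[1, 1, 0, 0],
                   [0, 1, 0, 1],
                   [0, 0, 0, 1]], dtype=bool)
labels, count = label_regions(binary)
print(count)  # prints 2
```

Each labeled region can then be cropped from the original pixels as its own sub-image for leaf-level analysis.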

**4. Vegetation indices**

The use of vegetation indices in remote sensing of crop and weed plants is not new; it represents the first step shown in Figure 2. Studies of crop and weed detection have been performed using different spectral bands and band combinations as vegetative indices (Woebbecke et al., 1995b; El-Faki et al., 2000a,b; Marchant et al., 2004; Wang et al., 2001; Lamm et al., 2002; Mao et al., 2003; Yang et al., 2003). Color vegetation indices utilize only the red, green, and blue spectral bands. The advantage of using color indices is that they accentuate a particular color, such as plant greenness, which should be intuitive by human comparison. Woebbecke et al. (1995a) was one of the first to test vegetation indices derived from color chromatic coordinates and modified hue for distinguishing green plant material in images from bare soil, corn residue, and wheat straw residue. Woebbecke's indices (without the row and column indices of each pixel) included:

$$\text{Color indices: } \left( r - g,\; g - b,\; \frac{g - b}{r - g},\; \text{and } 2g - r - b \right) \tag{1}$$

where: r, g, and b are known as the chromatic coordinates (Wyszecki and Stiles, 1982), given as:

$$r = \frac{R^*}{R^* + G^* + B^*},\quad g = \frac{G^*}{R^* + G^* + B^*},\quad \text{and } b = \frac{B^*}{R^* + G^* + B^*} \tag{2}$$

and: R*, G*, and B* are normalized RGB values (0 to 1), defined as:

$$R^* = \frac{R}{R_m},\quad G^* = \frac{G}{G_m},\quad \text{and } B^* = \frac{B}{B_m}$$

R, G, and B are the actual pixel values obtained from color images, based on each RGB channel or band.

Rm, Gm, and Bm = 255 are the maximum tonal values for each primary color.
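The chromatic-coordinate arithmetic of Eqs. (1) and (2) translates directly into code. The sample pixel values below are illustrative; note that the ratio index (g - b)/(r - g) is undefined for gray pixels, where r = g:

```python
import numpy as np

def chromatic_coords(R, G, B, Rm=255.0, Gm=255.0, Bm=255.0):
    """Eq. (2): chromatic coordinates r, g, b from raw RGB pixel values (0-255)."""
    Rs, Gs, Bs = R / Rm, G / Gm, B / Bm   # normalized R*, G*, B* per Eq. (2) prelude
    total = Rs + Gs + Bs
    return Rs / total, Gs / total, Bs / total

def woebbecke_indices(R, G, B):
    """Eq. (1): Woebbecke's color indices, including excess green ExG = 2g - r - b."""
    r, g, b = chromatic_coords(R, G, B)
    return {"r-g": r - g,
            "g-b": g - b,
            "(g-b)/(r-g)": (g - b) / (r - g),  # undefined when r == g (gray pixel)
            "ExG": 2 * g - r - b}

# A strongly green pixel: ExG is clearly positive.
idx = woebbecke_indices(50.0, 200.0, 40.0)
print(round(idx["ExG"], 3))  # prints 1.069
```

Because the chromatic coordinates are ratios, ExG values fall in a fixed range regardless of overall brightness, which is what makes a near-binary plant/background image possible.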

Woebbecke discovered that the excess green vegetation index (ExG = 2g - r - b) provided an interesting near-binary tonal image outlining a plant region of interest.

Fig. 1. A strategic approach to weed assessment (flowchart: two-dimensional images of sparse plant populations; separation of plant canopies from background; unraveling a canopy to find individual leaves; leaf shape and venation textural feature analysis; species identification logic and plant population; GPS coordinates locating weed patches; and a plant/weed map for problem assessment and the cost of weed control).

Woebbecke's excess green (ExG) index has been widely cited in the literature and has been tested in recent studies (Gitelson et al., 2002; Lamm et al., 2002; Mao et al., 2003; and others). ExG plant regions of interest could then be completely binarized using a contrast threshold value selected for each image; an important condition was thus the selection of the threshold value. Mao et al. (2003) subsequently tested several indices, namely ExG, the normalized difference index (NDI), and the modified hue, for separating plant material from different backgrounds (soil and withered plant residue). In that study, the ExG index was found superior to the other indices tested. A critical step was to select a manual threshold value to binarize the tonal image into a black and white image.

Other color vegetation indices have been reported for separating plants from soil and residue backgrounds in color images. For example, the normalized difference index (NDI) of Perez et al. (2000) uses only the green and red channels and is given as:

$$\text{NDI} = \frac{G - R}{G + R} \tag{3}$$

Perez's NDI was improved by adding one and then multiplying by a factor of 128, which rescales the index from the interval [-1, 1] to approximately the 0-255 tonal range. Hunt et al. (2005) developed a vegetation index known as the Normalized Green-Red Difference Index (NGRDI) for model-airplane photography used to assess crop biomass. Zhang et al. (1995) and Gebhardt et al. (2003) also used various RGB transforms for their plant image segmentation steps.
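Eq. (3) and Perez's rescaling can be sketched as follows; the two-pixel arrays are illustrative, and the small epsilon guarding against division by zero is an implementation assumption, not part of the published index:

```python
import numpy as np

def ndi(G, R, eps=1e-9):
    """Normalized difference index, Eq. (3): (G - R) / (G + R).
    eps guards against division by zero for black pixels (an assumption)."""
    return (G - R) / (G + R + eps)

def ndi_display(G, R):
    """Perez et al.'s rescaling: add 1, multiply by 128, mapping NDI's
    [-1, 1] range onto roughly the 0-255 tonal range for display."""
    return (ndi(G, R) + 1.0) * 128.0

G = np.array([200.0, 50.0])
R = np.array([50.0, 200.0])
print(ndi(G, R))  # green-dominant pixel is positive, red-dominant pixel negative
```

Like ExG, the raw NDI image is near-binary; the rescaled version is simply easier to inspect as an 8-bit grayscale image.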

Color indices have been suggested to be less sensitive to lighting variations, and may have the potential to work well for different residue backgrounds (Campbell, 1996). However, a disproportionate amount of redness from various lighting sources may overcast a digital image, making it more difficult to identify green plants with simple RGB indices (Meyer et al., 2004b). For example, image redness may be related to digital camera operation and background illumination, but may also come from the soil and residue itself. An alternate vegetative index called excess red (ExR = 1.4r - g) was proposed by Meyer et al. (1998a), but was not tested until later studies.

Meyer and Camargo Neto (2008) reported the development of an improved color vegetation index: excess green minus excess red (ExG-ExR). This index does not require a threshold, and it compared favorably to the commonly used excess green (ExG) and normalized difference (NDI) indices; the latter two used an Otsu threshold value to convert the near-binary index image to a fully binary image. The indices were tested with digital color images of single plants grown and photographed in a greenhouse, and with field images of young soybean plants. Vegetative index accuracies were compared to hand-extracted plant regions of interest using a separation quality factor algorithm; a quality factor of one represented a near-perfect binary match of the computer-extracted plant target to the hand-extracted plant region. The ExG-ExR index had the highest quality factor, 0.88 ± 0.12, over all three weeks and soil-residue backgrounds for the greenhouse set. The ExG+Otsu and NDI+Otsu indices had similar quality factors of 0.53 ± 0.39 and 0.54 ± 0.33 for the same set, respectively. Field images of young soybeans against bare soil gave quality factors for both ExG-ExR and ExG+Otsu around 0.88 ± 0.07, while the quality factor of NDI+Otsu on the same field images was 0.25 ± 0.08. ExG-ExR has a fixed, built-in plant-background threshold of zero, so it needs neither Otsu's method nor any user-selected threshold value. The ExG-ExR index worked especially well for fresh wheat straw backgrounds, where it was generally 55 percent more accurate than the ExG+Otsu and NDI+Otsu indices. Once a binary plant region of interest has been identified with a vegetation index, other advanced image processing operations may be applied, such as the identification of plant species needed for strategic weed control.
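The fixed zero threshold of ExG-ExR can be expressed compactly. The quality factor below is one plausible overlap score (intersection over union of machine and hand masks); it is an assumption for illustration, since the source does not spell out the exact algorithm, and the pixel values are likewise illustrative:

```python
import numpy as np

def exg_exr_mask(R, G, B):
    """ExG - ExR with its built-in zero threshold: a pixel is plant when
    (2g - r - b) - (1.4r - g) > 0, using chromatic coordinates r, g, b."""
    total = R + G + B + 1e-9          # epsilon guards black pixels (assumption)
    r, g, b = R / total, G / total, B / total
    exg = 2 * g - r - b
    exr = 1.4 * r - g
    return (exg - exr) > 0

def quality_factor(machine, hand):
    """Illustrative overlap score (intersection over union, not necessarily
    the authors' exact algorithm); 1.0 = perfect binary match."""
    inter = np.logical_and(machine, hand).sum()
    union = np.logical_or(machine, hand).sum()
    return inter / union if union else 1.0

# Two green pixels (left column) and two reddish/gray pixels (right column).
R = np.array([[ 40.0, 120.0], [ 35.0, 130.0]])
G = np.array([[180.0, 110.0], [170.0, 115.0]])
B = np.array([[ 30.0, 100.0], [ 25.0, 125.0]])
mask = exg_exr_mask(R, G, B)
hand = np.array([[True, False], [True, False]])   # hand-extracted reference
print(quality_factor(mask, hand))  # prints 1.0
```

No per-image threshold selection appears anywhere in the segmentation step, which is the practical advantage the ExG-ExR index claims over ExG+Otsu and NDI+Otsu.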

Fig. 2. Prototype plant species identification and enumeration using leaf features (flowchart: digital color images by GPS coordinates; plant canopy/background separation with ExG-ExR; leaf extraction, selecting the best leaf by shape and size against visually verified color images; shape feature training and test sets with rotation-corrected venation textural features; statistical analysis (SAS principal components, discriminant analysis) and a neural network classification model; plant species identification, population, and mapping).

**5. Computerized single leaf extraction**

Only a few methods of unsupervised leaf extraction from canopy images have been reported in the literature. Franz et al. (1991b) reported the use of curvature functions and the Fourier-Mellin correlation to identify completely visible and partially occluded sets of leaves. Leaf statistical features of mean, variance, skewness, and kurtosis were computed using spectral wavebands of red, green, blue, and near-infrared. These features were used to discriminate leaf types of unifoliolate soybean, ivyleaf morning glory cotyledons, velvetleaf cotyledons, foxtail, the first leaf of ivyleaf morning glory, and the first leaf of velvetleaf. Franz et al. (1995) further developed an algorithm to extract the boundaries of occluded leaves, using an edge detection technique to link the end points of leaf edge segments. User intervention was required at various steps of the algorithm. The fractions of individual leaves recovered were reported to be 0.91, 0.87, 0.95, and 0.71 for velvetleaf, soybean, ivyleaf morning glory, and foxtail, respectively.

To clarify this issue, occluded or partial fractions of leaves are probably not that useful for species identification. However, all canopies will exhibit whole individual leaves at the canopy apex, which can be seen in overhead photographs. Some leaves may stand out by

Near-infrared (NIR) bands, along with color bands, have been used in vegetative indices for satellite remote sensing applications. However, NIR is less intuitive to humans, since the human eye is not sensitive to the NIR spectrum, which begins at the red end of visible light. The human eye discerns color with retinal sensors called cones; it also contains rods, which are essentially receptive to the small amounts of blue light that may exist after sundown. NIR is also not readily available from an RGB color digital camera; it usually requires a special monochromatic camera with a silicon-based sensor that can detect light up to one micron in wavelength, fitted with an NIR band-pass filter. Hunt et al. (2011) experimented with extracting near-infrared from RGB digital cameras, developing a low-cost color and color-infrared (CIR) digital camera that detects bands in the NIR, green, and blue. The issue still remains of how to verify the accuracy of an infrared-image-based vegetative index without comparison to the vegetation observed in a corresponding visual color image. So, verifying the existence of plant material either returns to color images or relies on some other non-optical method.

Two additional problems tend to exist with previous research on vegetative indices: (a) the lack of disclosure of the manual or automatic threshold used during the near-binary to binary conversion step, and (b) generally, the lack of reporting of vegetation index accuracy. Gebhardt et al. (2003) suggested that it was not necessary to classify vegetation on a pixel basis with digital imaging. However, if too many plant pixels are mixed with background pixels, accuracy may be reduced. Hague et al. (2006) suggested a manual comparison of vegetative areas from high-resolution photographs. To date, very few vegetative index studies have reported validated accuracy of detecting plant material in independent images from other sources. This problem becomes particularly apparent when these indices are applied to the collections of photographic plant databases currently available.

Plant classification might be expanded to hyperspectral imaging (Okamoto et al., 2007). Wavelet and discriminant analyses were used to identify spectral patterns of pixel samples, achieving a 75-80 percent classification rate for five young plant species. Typically, however, hyperspectral cameras are expensive.
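A toy sketch of that idea, wavelet features computed from pixel spectra and fed to a simple discriminant, is given below. The four-band "spectra," the one-level Haar features, and the nearest-centroid classifier are all illustrative stand-ins, far simpler than Okamoto's actual wavelet and discriminant analyses:

```python
import numpy as np

def haar_features(spectrum):
    """One-level Haar wavelet decomposition of a pixel spectrum:
    pairwise averages (approximation) and differences (detail)."""
    s = np.asarray(spectrum, dtype=float)
    approx = (s[0::2] + s[1::2]) / 2.0
    detail = (s[0::2] - s[1::2]) / 2.0
    return np.concatenate([approx, detail])

def nearest_centroid(train_feats, train_labels, feat):
    """Minimal stand-in for discriminant analysis: assign the sample to the
    class whose mean feature vector is closest in Euclidean distance."""
    classes = sorted(set(train_labels))
    centroids = {c: np.mean([f for f, l in zip(train_feats, train_labels) if l == c],
                            axis=0)
                 for c in classes}
    return min(classes, key=lambda c: np.linalg.norm(feat - centroids[c]))

# Two synthetic "species" spectra (4 bands each) plus one noisy test sample.
specs = [[1, 1, 8, 8], [1, 2, 8, 7], [8, 8, 1, 1], [7, 8, 2, 1]]
labels = ["grass", "grass", "broadleaf", "broadleaf"]
feats = [haar_features(s) for s in specs]
sample = haar_features([1, 1, 7, 8])
print(nearest_centroid(feats, labels, sample))  # prints grass
```

Real hyperspectral pixels carry dozens to hundreds of bands, but the pipeline shape (transform to compact features, then discriminate) is the same.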

In summary, color image classification systems utilize the red (R), green (G), and blue (B) tonal intensity components. Color is a special form of spectral reflectance, which can be derived from spectral measurements (Wyszecki and Stiles, 1982; Murch, 1984; Jain, 1989; Gonzalez and Woods, 1992; Perry and Geisler, 2002). Perceived (human) color is based on the RGB primary colors. Woebbecke et al. (1995) discovered that the excess green index (2G-R-B) could provide excellent near-binary segmentation of weed canopies over bare soil for canopy shape feature analysis. El-Faki et al. (2000b) studied different RGB indices as potential weed detection classifiers, but possibly none as good as excess green; the best correct classification rates (CCR) found were around 62%, while some misclassification rates were less than 3%. Meyer et al. (1999, 2004) proposed an excess red index (1.3R-G), based on the physiological rod-cone proportions of red and green. This index also provides near-binary silhouettes of plants under natural lighting conditions. Marchant et al. (2004) proposed additional procedures for dealing with machine vision under natural lighting. Spectral wavebands and color components combined arithmetically in this way have been called vegetation indices; the index of Meyer and Camargo Neto (2008) is an advanced color vegetation index.
