**5. Some experimental results and analysis**

#### **5.1 Pothole detection using SFCM segmentation**

The clustering results for the RGB imagery using FCM and SFCM are presented comparatively. Where there is low spectral heterogeneity, the first Principal Components Transform image (PCT-band 1) is used in the FCM and SFCM clustering. The results in **Table 4** show that including spatial neighborhood information through the SFCM yields a more compact detection of the potholes: the potholes are segmented from the non-potholes while homogeneity is preserved within the pothole itself, since spatial cues are taken into account in the clustering. Furthermore, the SFCM performs considerably better than FCM, especially under varying lighting conditions.
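To make the FCM-versus-SFCM contrast concrete, the sketch below runs fuzzy c-means on a single-band image (such as PCT-band 1) and, for the spatial variant, averages each membership map over a 3×3 neighborhood before assigning hard labels. This is a minimal stand-in for the SFCM used in the chapter, not its exact formulation; the function names and the 3×3 box smoothing are illustrative choices.

```python
import numpy as np

def box3(a):
    """Mean over a 3x3 neighborhood (edge-replicated), used to spatially
    smooth the fuzzy membership maps."""
    p = np.pad(a, 1, mode="edge")
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def sfcm_segment(img, c=2, m=2.0, n_iter=30, spatial=True, seed=0):
    """Fuzzy c-means on a single-band image; when `spatial` is set, each
    membership map is averaged over a 3x3 window before defuzzification --
    a simple spatial-FCM (SFCM) variant."""
    rng = np.random.default_rng(seed)
    x = img.astype(float).ravel()
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                               # memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = um @ x / um.sum(axis=1)            # fuzzy cluster centroids
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=0)                           # FCM membership update
    u = u.reshape(c, *img.shape)
    if spatial:
        u = np.stack([box3(ui) for ui in u])         # spatial smoothing of memberships
        u /= u.sum(axis=0)
    return u.argmax(axis=0)                          # hard cluster labels
```

Setting `spatial=False` recovers plain FCM; the smoothing step is what suppresses isolated mislabeled pixels inside the pothole region.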

#### **5.2 Pothole depth imagery representation**

Defects on pavements are defined as surface deformations that are deeper than a threshold, as illustrated in **Figure 6(b)**. Since the captured depth data is corrupted by noise, the depth-image plane illustrated in **Figure 4** (**Figures 6(b)** and **6(c)**) is not necessarily parallel to the surface under inspection. This is solved by fitting a plane to the points in the depth image (**Figure 6(b)**) that are not farther than a threshold from the IR camera (**Figure 6(c)**). Using the random sample consensus (RANSAC) algorithm [52], the plane is fitted to these points, and the depth image is subtracted from the fitted plane, with the results shown in **Figure 6(d)**. To discriminate between the depressions (potholes) and the flat regions (non-potholes), Otsu's thresholding algorithm is used. Sample results of the depth-image segmentation are presented sequentially in **Figure 6**.
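The plane-fitting and thresholding steps described above can be sketched as follows, assuming a depth image in millimeters. The routine names, iteration count, and inlier tolerance are illustrative, and the RANSAC and Otsu implementations are minimal versions of the cited algorithms rather than the chapter's code.

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=2.0, seed=0):
    """Fit z = ax + by + c to 3-D points (N x 3) with RANSAC: repeatedly
    fit a plane to 3 random points and keep the model with the most
    inliers within `tol`; refine it by least squares on those inliers."""
    rng = np.random.default_rng(seed)
    A = np.c_[points[:, :2], np.ones(len(points))]
    best, best_inliers = None, -1
    for _ in range(n_iter):
        idx = rng.choice(len(points), 3, replace=False)
        try:
            coef = np.linalg.solve(A[idx], points[idx, 2])
        except np.linalg.LinAlgError:
            continue  # degenerate (collinear) sample
        n_in = (np.abs(A @ coef - points[:, 2]) < tol).sum()
        if n_in > best_inliers:
            best, best_inliers = coef, n_in
    inl = np.abs(A @ best - points[:, 2]) < tol
    coef, *_ = np.linalg.lstsq(A[inl], points[inl, 2], rcond=None)
    return coef  # (a, b, c)

def otsu_threshold(img, bins=256):
    """Otsu's threshold: maximize the between-class variance of the histogram."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    p = hist / hist.sum()
    w = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * np.arange(bins))    # cumulative mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * w - mu) ** 2 / (w * (1 - w))
    return edges[np.nanargmax(sigma_b) + 1]

def detect_depressions(depth, tol=2.0):
    """Subtract a RANSAC-fitted pavement plane from the depth image, then
    binarize the relative depth (positive where the surface lies farther
    from the camera than the plane) with Otsu's threshold."""
    h, w = depth.shape
    ys, xs = np.mgrid[:h, :w]
    pts = np.c_[xs.ravel(), ys.ravel(), depth.ravel()]
    a, b, c = ransac_plane(pts, tol=tol)
    rel = depth - (a * xs + b * ys + c)    # relative depth below the plane
    return rel, rel > otsu_threshold(rel)
```

Because RANSAC fits the majority (road) surface, the pothole pixels fall out as large positive residuals, which Otsu then separates from the near-zero road residuals.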

#### **5.3 Feature-based RGB-D data fusion for enhanced pothole segmentation**

In this section, the potential of fusing the depth and color images at the object or feature level is illustrated. A possible two-way fusion approach is proposed and conceptually represented in **Figure 7**, comprising either: (i) pre-detection fusion, in which the color image is enhanced with the depth image before pothole detection, or (ii) post-detection fusion of the pothole defect features independently determined from the RGB and depth images. The first approach presents a joint segmentation, similar to extracting consistent layers from the image, where each layer is segmented in terms of both color and depth. It is common for real-scene objects, such as pavement pothole surfaces, to be characterized by different intensities and a small range of depths. Incorporating the depth information into the segmentation process allows real pothole object boundaries to be detected, instead of merely coherent color regions, and the objective is to enhance the application-relevant features in the resultant fused image product.
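The joint-segmentation idea can be sketched by clustering in a combined color-and-depth feature space, so that segments must be coherent in both cues. A simple two-cluster k-means over `[R, G, B, w·depth]` vectors is used here purely for illustration; the chapter does not prescribe this particular algorithm, and `depth_weight` and the depth-based initialization are assumptions.

```python
import numpy as np

def joint_rgbd_segment(rgb, depth, depth_weight=1.0, n_iter=10):
    """Two-cluster k-means in a joint [R, G, B, w*depth] feature space,
    initialized from the nearest- and farthest-depth samples. Segments
    must therefore be coherent in both color and depth."""
    feats = np.c_[rgb.reshape(-1, 3).astype(float),
                  depth_weight * depth.ravel().astype(float)]
    # seed the two centers at the extreme depths (road vs. depression)
    centers = feats[[feats[:, 3].argmin(), feats[:, 3].argmax()]].copy()
    for _ in range(n_iter):
        d = ((feats[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)                 # nearest-center assignment
        for j in (0, 1):
            sel = labels == j
            if sel.any():
                centers[j] = feats[sel].mean(0)
    return labels.reshape(depth.shape)
```

With `depth_weight=0` this degenerates to color-only clustering; raising the weight forces the segments to respect depth boundaries even where the color is uniform.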

The potential and significance of fusing RGB and depth imagery are illustrated in **Figures 8** and **9**, using pothole edge identification from the RGB and depth image data. **Figure 8** shows a single RGB and depth (RGB-D) pavement frame acquired with the Kinect experimental setup. The RGB image is smoothed (left frame) using the median filter, while hole-filling with the joint bilateral filter is applied to the depth image (right frame). It is observed that the two images complement each other. Comparing the corrected image datasets, the depth image clearly defines the pothole edges, as compared to the fuzzy representation of the edges in the color image (**Figure 9**). This implies that it is possible to improve pothole detection from RGB imagery through fusion of the RGB and depth image datasets (feature fusion) or through post-segmentation fusion (object fusion). For this chapter, only a discussion and a potential illustration are presented.
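The hole-filling step mentioned above can be approximated with a joint bilateral filter, in which a gray-scale version of the RGB image guides the interpolation of missing depth values. The sketch below is a minimal, unoptimized version; the window radius and the two Gaussian bandwidths (`sigma_s`, `sigma_r`) are illustrative parameters, not values from the chapter.

```python
import numpy as np

def fill_depth_jbf(depth, guide, radius=3, sigma_s=2.0, sigma_r=20.0):
    """Fill zero-valued (missing) depth pixels with a joint bilateral
    filter: each hole is replaced by a weighted mean of valid depths in
    its window, weighted by spatial distance (sigma_s) and by intensity
    similarity in the gray-scale guide image (sigma_r)."""
    out = depth.astype(float).copy()
    h, w = depth.shape
    for y, x in np.argwhere(depth == 0):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        d = depth[y0:y1, x0:x1].astype(float)
        g = guide[y0:y1, x0:x1].astype(float)
        valid = d > 0
        if not valid.any():
            continue  # no valid support in the window; leave the hole
        yy, xx = np.mgrid[y0:y1, x0:x1]
        ws = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
        wr = np.exp(-((g - float(guide[y, x])) ** 2) / (2 * sigma_r ** 2))
        wgt = (ws * wr)[valid]
        out[y, x] = (wgt * d[valid]).sum() / wgt.sum()
    return out
```

Because the range weight `wr` collapses across strong guide-image edges, filled depth values do not bleed across the pothole boundary, which is exactly why the filtered depth map preserves sharper edges than the color image alone.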

**Table 4.** *Pothole detection results using FCM and SFCM.*

[Table 4 consists of image panels per test frame: the original RGB pothole image frame and the first principal components (PC) image; the FCM clustering and FCM segmentation results with Otsu thresholding; the SFCM clustered segments and SFCM segmentation results with Otsu thresholding; and the perimeter of the detected pothole.]
*On the Use of Low-Cost RGB-D Sensors for Autonomous Pothole Detection with Spatial… DOI: http://dx.doi.org/10.5772/intechopen.88877*


*Geographic Information Systems in Geospatial Intelligence*



#### **Figure 6.**

*(a) Pothole RGB image. (b) Corresponding depth data to the RGB image in (a). (c) Plane fitting using RANSAC [52]. (d) Relative depth obtained by subtracting the depth values from the fitted plane. (e) Rotated gray-scale representation of the relative depth values. (f) Detected pothole defect obtained by binarizing image (e) using Otsu's thresholding. (g) Depth map of the detected pothole with dimensions in millimeters (mm).*



#### **5.4 Evaluation of results and quantification of pothole metrology parameters**

An evaluation of the low-cost pavement pothole detection system is carried out using 55 depth image frames, comprising 35 images with potholes and 20 defect-free frames. The results of the illustrative evaluation are presented in **Tables 5** and **6**, respectively, in terms of the confusion matrix and the overall performance indices TP, TN, FP, and FN, which respectively represent the true positives, true negatives, false positives, and false negatives. In **Table 6**, accuracy is defined as the proportion of true classifications in the test dataset, while precision is the proportion of true positive classifications against all positive classifications. The overall results show that the detection rate for potholes was at an 82.8% degree of accuracy.
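The performance indices defined above follow directly from the confusion-matrix counts. A minimal helper might look like the following; the counts in the usage note are hypothetical, since the chapter's actual TP/TN/FP/FN values are those reported in Tables 5 and 6.

```python
def confusion_metrics(tp, tn, fp, fn):
    """Accuracy, precision, and recall from confusion-matrix counts,
    using the definitions given in the text for Table 6."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # proportion of true classifications
    precision = tp / (tp + fp)                   # true positives among all positives
    recall = tp / (tp + fn)                      # detection rate for potholes
    return accuracy, precision, recall
```

For example, with hypothetical counts `confusion_metrics(8, 5, 2, 1)` gives an accuracy of 0.8125 and a precision of 0.8.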


#### **Figure 7.**

*Conceptual framework for the RGB-D pothole defect detection based on pre-detection image feature fusion and post-detection object fusion.*

#### **Figure 8.**

*Comparing RGB imagery (a) and filtered depth map for pothole and non-pothole mapping on asphalt pavement.*


In terms of the pothole metrology measurements, **Table 7** presents a sample summary of the results for the metrologic data quantification, characterized by the length and width, mean depth, mean surface area, and volume of the potholes within the image frames, together with the resulting relative errors. From the results in **Table 7**, it is observed that while for some pothole defects the estimated dimensions are close to the ground-truth manual measurements, in a few cases, i.e., less than 25% of the images, the relative error is more than 20%. This error magnitude in the pothole-detection system was attributed to the shape and edge complexity of the potholes, which are mathematically complex to represent and estimate appropriately and accurately, as demonstrated in **Figure 6**.
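The metrology quantities summarized in Table 7 can be computed from the binary pothole mask and the relative-depth image. The sketch below assumes a square pixel footprint `px_mm` (millimeters per pixel) obtained from the sensor calibration; that parameter and the bounding-box definition of length and width are assumptions, not the chapter's stated procedure.

```python
import numpy as np

def pothole_metrology(rel_depth, mask, px_mm=1.0):
    """Estimate pothole metrics from the relative-depth image (mm) and
    the binary pothole mask: bounding-box length/width, mean depth,
    surface area (pixel count x pixel footprint), and volume (the
    integral of relative depth over the pothole area)."""
    ys, xs = np.nonzero(mask)
    length = (ys.max() - ys.min() + 1) * px_mm     # extent along rows, mm
    width = (xs.max() - xs.min() + 1) * px_mm      # extent along cols, mm
    depths = rel_depth[mask]
    return {"length": length,
            "width": width,
            "mean_depth": depths.mean(),           # mm
            "area": mask.sum() * px_mm ** 2,       # mm^2
            "volume": depths.sum() * px_mm ** 2}   # mm^3

def relative_error(estimated, ground_truth):
    """Relative error of an estimated metric against a manual measurement."""
    return abs(estimated - ground_truth) / ground_truth
```

Comparing each estimate against the manual ground-truth measurement with `relative_error` reproduces the kind of per-pothole relative errors tabulated in Table 7.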
