**3.2 PCA**

Principal component analysis (PCA) is a technique used to reduce the dimensionality of multivariate and multispectral datasets, such as images, while preserving as much of the relevant information as possible. PCA provides a method for reducing the redundant information present in multidimensional datasets, representing the data with much less information than the original image. The correlation between bands is minimized by mathematically transforming the multiband data into another vector space with a new basis [17]. Here, PCA was performed on the aerial photos in combination with the LiDAR nDSM raster. The result is a single multiband raster; that is, the LiDAR nDSM and aerial photos are combined into one raster dataset with five bands [18].

**3.3 Object-based image classification (OBIA)**

Object-based image classification (OBIA) is regarded as an advancement in land cover classification; its advantage lies in classifying objects represented by groups of pixels. OBIA approaches for analyzing remotely sensed data have been established and investigated since the 1970s, and object-oriented methods of image classification have become more popular in recent years due to the availability of software [19]. Object-based classification techniques start by grouping neighboring pixels into meaningful areas, which means that the segmentation and subsequent object topology generation are controlled by the resolution and the scale of the expected objects. In an object-based classified image, the elementary picture elements are no longer the pixels, but connected sets of pixels [20].
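The PCA fusion of the aerial photo bands with the LiDAR nDSM (Section 3.2) can be illustrated with a toy example. The sketch below is not the chapter's processing chain: it uses two small synthetic "bands" so that the 2 × 2 covariance matrix can be diagonalized in closed form; a real five-band stack would use a general eigendecomposition (e.g. `numpy.linalg.eigh`) instead.

```python
import math

# Two tiny synthetic "bands" (e.g. an image band and an nDSM), flattened to 6 pixels.
band1 = [10.0, 12.0, 14.0, 16.0, 18.0, 20.0]
band2 = [1.0, 1.5, 2.1, 2.4, 3.0, 3.5]   # strongly correlated with band1

n = len(band1)
mean1 = sum(band1) / n
mean2 = sum(band2) / n
d1 = [v - mean1 for v in band1]
d2 = [v - mean2 for v in band2]

# Entries of the 2 x 2 covariance matrix.
a = sum(x * x for x in d1) / n               # var(band1)
c = sum(y * y for y in d2) / n               # var(band2)
b = sum(x * y for x, y in zip(d1, d2)) / n   # cov(band1, band2)

# For a 2 x 2 symmetric matrix, the first principal axis has this angle.
theta = 0.5 * math.atan2(2 * b, a - c)
cos_t, sin_t = math.cos(theta), math.sin(theta)

# Rotate every pixel onto the two principal components.
pc1 = [x * cos_t + y * sin_t for x, y in zip(d1, d2)]
pc2 = [-x * sin_t + y * cos_t for x, y in zip(d1, d2)]

# PC1 should carry almost all of the variance; PC2 is nearly redundant.
var_pc1 = sum(v * v for v in pc1) / n
var_pc2 = sum(v * v for v in pc2) / n
print(var_pc1 > 100 * var_pc2)  # True: the two bands are highly correlated
```

Almost all of the variance ends up in PC1, which is why a few leading components of the fused raster can stand in for the original correlated bands.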

#### **3.4 Segmentation**

The segmentation process in OBIA is used to recognize, differentiate, and separate features within the image. It groups pixels into regions based on similar spectral reflectance, texture, and area. Segmentation is defined as the delineation of the entire digital image into a number of segments, or sets of pixels, with the goal of resolving the objects present in the image into something more meaningful [21]. The segmentation process depends on the scale, shape, and compactness of objects, and several tests are needed to determine the best scale to use for image segmentation.
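As a minimal illustration of the pixel-grouping idea (not the actual segmentation algorithm or software used in this chapter), the sketch below flood-fills 4-connected neighbors whose values stay within a threshold of the region seed; the threshold plays a role loosely analogous to the scale parameter.

```python
from collections import deque

def segment(raster, threshold):
    """Group 4-connected pixels whose values differ from the region seed
    by less than `threshold`; returns a label grid (labels start at 1)."""
    rows, cols = len(raster), len(raster[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for r0 in range(rows):
        for c0 in range(cols):
            if labels[r0][c0]:
                continue
            next_label += 1
            seed = raster[r0][c0]
            labels[r0][c0] = next_label
            queue = deque([(r0, c0)])
            while queue:
                r, c = queue.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < rows and 0 <= cc < cols
                            and not labels[rr][cc]
                            and abs(raster[rr][cc] - seed) < threshold):
                        labels[rr][cc] = next_label
                        queue.append((rr, cc))
    return labels

# A bright 2 x 2 "roof" in the lower-right corner of a dark background.
toy = [[10, 10, 10, 10],
       [10, 10, 10, 10],
       [10, 10, 90, 95],
       [10, 10, 92, 91]]
result = segment(toy, threshold=20)
print(result[0][0], result[2][2], result[3][3])  # 1 2 2
```

With `threshold=20`, the bright 2 × 2 "roof" separates from the background; raising the threshold merges everything into one segment, mirroring how a larger scale yields fewer, larger objects.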

#### **3.5 Feature extraction**

Feature extraction is performed after the image is segmented and involves searching for meaningful objects within the image, such as roads, vegetation, and buildings. This process isolates and extracts only the object features of interest. The computed features can be statistical, such as mean height, or geometrical, such as shape, elongation, rectangularity, and compactness; these parameters play an important role in the final output of the extraction. Spatial and spectral properties are the two important factors for extraction [21]. The features extracted from the image bands or channels are used in the supervised classification of buildings.
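A sketch of how such per-object descriptors might be computed from a segment's pixel coordinates and nDSM heights. The definitions chosen here (rectangularity as bounding-box fill, compactness as the isoperimetric quotient) are common ones but are assumptions; specific software packages may define them differently.

```python
import math

def segment_features(pixels, heights):
    """pixels: list of (row, col) in one segment; heights: matching nDSM values."""
    area = len(pixels)
    mean_height = sum(heights) / area
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    bbox_h = max(rows) - min(rows) + 1
    bbox_w = max(cols) - min(cols) + 1
    rectangularity = area / (bbox_h * bbox_w)       # 1.0 for a perfect rectangle
    elongation = max(bbox_h, bbox_w) / min(bbox_h, bbox_w)
    # Perimeter: count pixel edges not shared with another pixel of the segment.
    cells = set(pixels)
    perimeter = sum(1 for r, c in pixels
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if (r + dr, c + dc) not in cells)
    compactness = 4 * math.pi * area / perimeter ** 2  # 1.0 for a circle
    return {"area": area, "mean_height": mean_height,
            "rectangularity": rectangularity,
            "elongation": elongation, "compactness": compactness}

# A 2 x 3 rectangular "roof" segment with heights around 8 m.
pix = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
hts = [8.0, 8.2, 8.1, 7.9, 8.0, 8.3]
f = segment_features(pix, hts)
print(f["rectangularity"], f["elongation"])  # 1.0 1.5
```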

### **3.6 Training sites**

The training site step involves selecting training sites for the building classification; the building features selected are those with distinct characteristics, such as color and shape. During training site selection, additional buildings can be selected for use in the accuracy assessment of the OBIA process. Buildings selected as training sites cannot be selected again for the accuracy assessment process.

*High-Resolution Object-Based Building Extraction Using PCA of LiDAR nDSM and Aerial Photos. DOI: http://dx.doi.org/10.5772/intechopen.92640*

#### **3.7 Classification**

Classification involves a supervised classification of the buildings, for example using a support vector machine (SVM). SVM has recently received much attention as a classification method. In recent studies, support vector machines were compared with other classification methods for remote sensing imagery, such as neural network, nearest neighbor, maximum likelihood, and decision tree classifiers, and surpassed all of them in robustness and accuracy [22].

#### **3.8 Accuracy assessment**

After classification, an accuracy assessment is needed to determine the reliability of the classification process. This can be done by creating an accuracy assessment report or by visually inspecting the classification results against the original image of the study area.

#### **3.9 Rule-based classification**

No classification or extraction process is 100% accurate; therefore, improvements can be made using rule-based classification, which refines the extraction results using the attributes of the segmented layer. Geometrical rule-based classification involves selecting the desirable shape, compactness, rectangularity, and elongation of objects, whereas statistical rule-based classification involves selecting the mean height or mean NIR values from the segmented layer to improve the extraction of buildings.

#### **3.10 Regularize building outlines**

After the building extraction, the building outlines are observed to be well defined at a large scale of 1:1000, which is sufficient for many applications and scenarios. Nonetheless, when zooming in to a very large scale of 1:250, some jagged edges can be seen. These minor rough edges were eliminated by cleaning the building edges with a chosen standard precision and tolerance value to regularize the building outlines.

**4. Results**

Several experiments were completed to determine the best combination of PCA raster data for the building extraction process: in total, five experiments were run to determine the best building/roof extraction scenario over a 1 sq. km area. The aerial photograph of this area shows a total of 584 buildings; the accuracy of building extraction was therefore measured against this number. A total of 20 buildings were chosen as training sites, and these buildings were used in all five approaches. **Table 1**, provided further below, gives a quantitative analysis of the process.

For the building segmentation process (**Figure 3**), a scale of 25, a shape of 0.5, and a compactness of 0.5 were used; these parameters create much smaller segments.
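The geometric and statistical rules described in Section 3.9 amount to threshold tests on object attributes. The sketch below uses made-up attribute values and thresholds; in practice the thresholds must be tuned per dataset.

```python
# Hypothetical attribute table for segmented objects (values are illustrative).
objects = [
    {"id": 1, "mean_height": 7.5, "rectangularity": 0.91, "compactness": 0.70},  # building
    {"id": 2, "mean_height": 0.2, "rectangularity": 0.95, "compactness": 0.80},  # parking lot
    {"id": 3, "mean_height": 9.0, "rectangularity": 0.35, "compactness": 0.20},  # tree cluster
]

def is_building(obj, min_height=2.5, min_rect=0.6, min_compact=0.4):
    """Statistical rule (nDSM mean height) combined with geometric rules."""
    return (obj["mean_height"] >= min_height
            and obj["rectangularity"] >= min_rect
            and obj["compactness"] >= min_compact)

buildings = [o["id"] for o in objects if is_building(o)]
print(buildings)  # [1]: only tall AND regular-shaped objects survive
```

The height rule discards flat, roof-like surfaces at ground level (roads, parking lots), while the shape rules discard tall but irregular objects such as tree clusters.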

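Object-level accuracy against a reference count such as the 584 buildings above is often summarized as completeness and correctness; the counts in this sketch are invented for illustration and are not the chapter's results.

```python
def completeness_correctness(true_pos, false_neg, false_pos):
    """Completeness = detected share of reference buildings;
    correctness = share of extracted objects that are real buildings."""
    completeness = true_pos / (true_pos + false_neg)
    correctness = true_pos / (true_pos + false_pos)
    return completeness, correctness

# Illustrative counts only: 560 of 584 reference buildings detected,
# with 30 false extractions.
comp, corr = completeness_correctness(true_pos=560, false_neg=24, false_pos=30)
print(round(comp, 3), round(corr, 3))  # 0.959 0.949
```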

*Spatial Variability in Environmental Science - Patterns, Processes, and Analyses*






