Preface

MATLAB is a computing environment used by engineers and scientists to analyze data, develop algorithms, and create models.

This book addresses specialists interested in applications of MATLAB and its related technology. It covers practical aspects of MATLAB programming, including new methodologies, techniques, and applications, and presents new examples of MATLAB and Simulink code for technology development platforms. These applications come from domains including electronic and communication engineering, geodetic and photogrammetric engineering, control systems, digital signal processing, and power electronics. Thus, the book discusses uses of MATLAB in radar antennas, geometric segmentation, Bluetooth applications, and control of electrical drives.

The published examples highlight the capabilities of MATLAB programming in the fields of mathematical modeling, algorithmic development, data acquisition, time-domain simulation, and testing. Researchers in different domains have developed new MATLAB applications and tools that enhance human understanding and improve specialists' ability to design and implement high-performance solutions. The applications presented focus on the methodologies used, together with implementation and testing issues.

The book is divided into four sections. The first section discusses mesosphere–stratosphere–troposphere (MST) radars, a type of wind profiler designed to measure wind speed and other atmospheric parameters up to altitudes of 100 km or more, characterized by high-power transmitters and large, very-high-frequency antennas. The second section discusses geometric segmentation of scanned geometries into discrete geometric patterns, using real scanned data that are noise-affected and not well sampled. The third section discusses applications of Bluetooth, a short-range wireless technology standard used for exchanging data between fixed and mobile devices over short distances using ultra-high-frequency radio waves in the industrial, scientific, and medical (ISM) radio bands. The final section discusses modeling and simulation of electric drive control systems based on fuzzy PI speed regulators to improve control efficiency.

Chapter 1 is a brief introduction to the broad range of MATLAB/Simulink applications, reviewing the areas the software addresses and giving some basic examples of programming techniques. Chapter 2 gives an analysis quantifying the distortion in the radiation pattern due to aperture thinning in the MST radar antenna. MATLAB is used to analyze the radiation pattern, in both principal planes and for different azimuth angles, with and without thinning, viewed in both polar and rectangular forms. Chapter 3 presents an application of automated building-facade surveying from scanning data by means of MATLAB code; the chapter highlights the fundamental processing flow that point cloud data require as the basic steps of geometric segmentation in MATLAB programming. Chapter 4 presents low-energy Bluetooth applications in which measurement data are acquired by Bluetooth-compatible sensors and processed on a personal computer. The application uses MATLAB features such as endless loops, real-time display of acquired data, and quaternions to handle the 3D orientation of a device. Chapter 5 presents a MATLAB program library for modeling and simulating fuzzy speed control systems for the main electric motors used as actuators in practice, including DC motors, induction motors, and permanent magnet synchronous motors.

I wish to thank the authors for their excellent contributions. I am also grateful to the staff at IntechOpen, especially Author Service Manager Dolores Kuzelj, for their assistance throughout the publication process.

> **Constantin Voloşencu** Department of Automation and Applied Informatics, "Politehnica" University Timişoara, Timişoara, Romania

Section 1 Introduction

#### **Chapter 1**

## Introductory Chapter: Matlab and Simulink Applications

*Constantin Volosencu*

#### **1. Generalities and publications**

In the scientific and technical field there is a multitude of numerical computation programs. Some examples follow. Analytica, created by Lumina Decision Systems, is a numerical modeling environment with a visual programming language based on influence diagrams. LabVIEW, created by National Instruments, is a graphical programming environment (with textual programming through formula nodes) for process monitoring and control. Mathcad, created by Parametric Technology Corporation, is software for the verification, validation, documentation, and re-use of mathematical calculations. Matlab is a proprietary multi-paradigm programming language and numeric computing environment developed by MathWorks. It allows numerical computation and simulation with extended 2D/3D visualization and vector manipulation. Matlab supports matrix manipulation, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages. Simulink is a Matlab-based graphical programming environment for modeling, simulating, and analyzing dynamical systems from different domains.

According to the MathWorks documentation, the Matlab language fundamentals consist of syntax, operators, data types, and array indexing and manipulation. Some of the mathematics domains supported are linear algebra, differentiation and integration, the Fourier transform, and others. Users may present graphic results in two- and three-dimensional plots, images, animation, and visualization. Data can be imported and exported, analyzed, preprocessed, and visually explored. The language has many functions and supports scripting with program files, control flow, editing, and debugging. Users may develop applications using App Designer, GUIDE, or a programmatic workflow. Advanced software development is supported with object-oriented programming, code performance tools, unit testing, and external interfaces to Java and Web services, C/C++, .NET, and other languages. Matlab is a desktop environment with preferences, settings, and platform differences, and it supports third-party hardware such as webcams, Arduino, and Raspberry Pi boards. Simulink is a block-diagram environment for multidomain simulation and model-based design. It supports system-level design, simulation, automatic code generation, and continuous test and verification of embedded systems. Simulink provides a graphical editor, customizable block libraries, and solvers for modeling and simulating dynamic systems. It is integrated with Matlab, enabling users to incorporate Matlab algorithms into models and export simulation results to Matlab for further analysis. It allows modeling of time-varying systems and large-scale architectures, running systems, reviewing results, validating system behavior, and optimizing performance for specific goals. Users may extend the existing Simulink modeling functionality using Matlab, C/C++, and Fortran code.
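As a minimal illustration of these fundamentals (matrix manipulation, element-wise operations, and plotting), a short Matlab script might look like the following; the variable names and values are illustrative only.

```matlab
% Matrix manipulation: solve a small linear system A*x = b
A = [4 1; 1 3];        % coefficient matrix
b = [1; 2];
x = A \ b;             % backslash operator solves A*x = b numerically

% Element-wise operations and 2D plotting
t = linspace(0, 2*pi, 200);   % row vector of 200 sample points
y = sin(t) .* exp(-t/4);      % element-wise product of two vectors

figure;
plot(t, y, 'LineWidth', 1.5);
grid on;
xlabel('t [rad]');
ylabel('y(t)');
title('Damped sine computed element-wise');
```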

Simulink also provides support for third-party hardware, such as Arduino, Raspberry Pi, and BeagleBoard.

The software has many applications in practice, among which can be mentioned: signal processing, image processing and computer vision, control systems, test and measurement, radio-frequency and mixed-signal design, wireless communication, robotics and autonomous systems, automotive, aerospace, FPGA, ASIC, and SoC development, computational finance, and computational biology; the number of applications keeps increasing.

Over the years, numerous books have been published presenting applications of the Matlab and Simulink programs. Some examples from recent years can be highlighted as follows. Several books dedicated to students and engineers present the fundamentals of Matlab. These books introduce basic programming: data types, statement structures, control structures, functions, algebraic computation, variables, complex numbers, vectors and matrices, data processing, worked problems, and examples in chemistry and physics, but also some advanced techniques for object-oriented programming, graphical user interface design, and web applications [1–7]. More advanced issues, such as model predictive control and deep learning applications, are presented in [8] and [9], respectively.

Extensive collections of works in the field of Matlab and Simulink applications from recent years can be cited as follows [10–20]. These collections, which can be used for educational, scientific, and engineering purposes, include applications in programming, graphical user interface development, power system analysis, control system design, system modeling and simulation, parallel processing, optimization, signal and image processing, computer graphics visualization, electric machines, power electronics, genetic programming, digital watermarking, artificial neural networks, algebraic computation, data acquisition, seismology, meteorology, the natural environment, interconnected power grids, antennas, underwater vehicles, models and data identification in biology, fuzzy logic, and discrete event systems.

Papers using the Matlab and Simulink programs have appeared and continue to appear in the literature. Here are some examples from recent years. Reference [21] presents a Matlab processing toolbox for Analytical Spectral Devices (ASD) field spectroscopy data, for the generation of consistent and comparable ground spectra corrected for viewing and illumination geometries as well as other factors, such as the individual characteristics of the reference panel used during acquisition. A software development platform is used in [22] for speedy evaluation and implementation of image processing options on automatic guided vehicles. A program code written in Matlab, designed to be used inside a Simulink model in [23], allows a fuel cell model to be used in a wide variety of 1D simulation platforms by exporting the code as C/C++.

#### **2. Examples**

#### **2.1 A hyperbolic partial differential equation**

The following example is realized using the *PDE modeler* toolbox. With this application users can analyze elliptic, parabolic, and hyperbolic equations. A hyperbolic case study, wave propagation in a square domain in the plane, is presented in this example [24, 25]. The equation used in the analysis is:

$$\frac{\partial^2 u}{\partial t^2} = c\_1 \nabla \cdot (c\_2 \nabla u) + c\_3 u + c\_4 \tag{1}$$

*Introductory Chapter: Matlab and Simulink Applications DOI: http://dx.doi.org/10.5772/intechopen.98578*

where the parameters have the following values: *c*<sub>1</sub> = 1, *c*<sub>2</sub> = 1, *c*<sub>3</sub> = 0, *c*<sub>4</sub> = 10.

The analysis domain is a square of unit dimension, *l* = 1. Boundary conditions were imposed as follows: on the left, right, and front edges, Dirichlet conditions *h* = 1, *r* = 0; on the square's base, Neumann conditions *q* = 0, *g* = 0.

The optimized number and positions of the mesh elements are presented in **Figure 1**. The contour of the solution is presented in **Figure 2**. For this mesh, the approximated solution is presented in 3D in **Figure 3**.
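Outside the interactive *PDE modeler* app, the same hyperbolic problem can also be set up programmatically with the Partial Differential Equation Toolbox. The following is only a sketch under the stated coefficients (m = 1, c = 1, a = 0, f = 10) and boundary conditions; the built-in `squareg` geometry ([-1,1]×[-1,1]) stands in for the unit square, zero initial conditions are assumed, and the edge numbers should be checked with `pdegplot` for the actual geometry.

```matlab
% Hyperbolic PDE on a square: d^2u/dt^2 = div(grad u) + 10
model = createpde();
geometryFromEdges(model, @squareg);   % built-in square geometry

% PDE Toolbox form: m*u_tt - div(c*grad u) + a*u = f
specifyCoefficients(model, 'm', 1, 'd', 0, 'c', 1, 'a', 0, 'f', 10);

% Dirichlet u = 0 on three edges, Neumann on the remaining edge
% (edge numbering here is an assumption -- verify with pdegplot)
applyBoundaryCondition(model, 'dirichlet', 'Edge', [1 2 3], 'u', 0);
applyBoundaryCondition(model, 'neumann',   'Edge', 4, 'g', 0, 'q', 0);

setInitialConditions(model, 0, 0);    % u = 0 and du/dt = 0 at t = 0
generateMesh(model, 'Hmax', 0.1);

tlist  = linspace(0, 5, 31);
result = solvepde(model, tlist);

% 3D surface of the solution at the final time step
pdeplot(model, 'XYData', result.NodalSolution(:, end), ...
               'ZData',  result.NodalSolution(:, end));
```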

#### **2.2 Modeling and simulation of a control system for a second order process**

The second example presents a simple case of modeling and simulation of a basic control system for a second order process.

The process has the transfer function:

$$H\_p(s) = \frac{y(s)}{u(s)} = \frac{K\_L}{(T\_1 s + 1)(T\_s s + 1)}\tag{2}$$

where *u* is process input and *y* is process output.

The following values are chosen for the process parameters: *T*<sub>1</sub> = 0.4 s, *T*<sub>s</sub> = 0.04 s, and *K*<sub>L</sub> = 2.

The process has a disturbance *v* at its input.

A linear PI controller is chosen, with the transfer function:

$$H\_R(s) = \frac{u\_c(s)}{e(s)} = K\_R \left(1 + \frac{1}{T\_i s}\right) \tag{3}$$

**Figure 1.** *The optimized meshes.*

**Figure 2.** *Contour plotted solution.*

where *e* is the error (the difference between the reference *w* and the feedback *r*) and *u*<sub>c</sub> is the command. The controller is tuned in accordance with the Kessler version of the module criterion:


$$T\_i = T\_1 = 0.4 \text{ s}$$

$$K\_R = \frac{T\_1}{2K\_L T\_s} = 2.5 \tag{4}$$

**Figure 4** shows how to arrange the work windows for this application on the screen. First, the Matlab workspace is opened, then Simulink. A Simulink block diagram for the control system according to the above theory is developed, with *transfer function* and *integrator* blocks from the *Continuous* block library, *Add* and *Gain* blocks from *Math Operations*, *Step* and *Clock* blocks from *Sources*, and *To Workspace* and *Scope* blocks from *Sinks*. The parameters are entered symbolically in the Simulink scheme, and their values are given in Matlab. The parameter values are saved in a data file, which is loaded with the *load* command each time before the scheme is run. The values of the vectors *w*, *u*<sub>c</sub>, *v*, *u*, and *y*, calculated at the time values from vector *t*, are passed into the Matlab workspace with the *To Workspace* blocks. The time variations of the variables

**Figure 4.** *Simulink block diagram work screen.*

**Figure 5.** *Simulink diagram for fuzzy block.*

*u*<sub>c</sub> and *y* are presented on the two scopes. The time variations of the variables *w*, *v*, *u*<sub>c</sub>, *u*, and *y* are plotted using the instructions *subplot*, *plot*, *grid*, *xlabel*, *ylabel*, and *axis*.

Analyzing the output graph *y*, it can be seen that the overshoot is σ<sub>1</sub> = 4.3% and the settling time is *t*<sub>r</sub> = 8.4·*T*<sub>s</sub> = 0.336 s, in accordance with Kessler's tuning criterion.
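The closed-loop behavior described above can also be checked directly in Matlab with the Control System Toolbox; the sketch below builds the transfer functions from Eqs. (2)–(4) and reads off the step-response characteristics. The tolerance defaults of `stepinfo` are used as-is, so the reported settling time may use a different threshold than the chapter's.

```matlab
% Process and Kessler-tuned PI controller (modulus optimum)
T1 = 0.4;  Ts = 0.04;  KL = 2;        % process parameters
Ti = T1;                               % Eq. (4): Ti = T1
KR = T1 / (2 * KL * Ts);               % Eq. (4): KR = 2.5

s  = tf('s');
Hp = KL / ((T1*s + 1) * (Ts*s + 1));   % second-order process, Eq. (2)
HR = KR * (1 + 1/(Ti*s));              % PI controller, Eq. (3)

Hcl  = feedback(HR * Hp, 1);           % unity-feedback closed loop
info = stepinfo(Hcl);                  % overshoot approx. 4.3 percent

step(Hcl); grid on;
xlabel('t [s]'); ylabel('y(t)');
```

With *T*<sub>i</sub> = *T*<sub>1</sub> the controller zero cancels the slow process pole, leaving the standard modulus-optimum second-order loop with damping 1/√2, which is what produces the 4.3% overshoot.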

#### **2.3 Modeling and simulation of a fuzzy control system for the second order process**

The third example presents a simple case of modeling and simulation of a basic fuzzy control system [26–29] for the same second-order process as in the second example.

**Figure 6.** *The screen for the fuzzy control system.*

**Figure 7.** *The screen for fuzzy system design.*


The process has the same transfer function. A fuzzy PI controller is used.

**Figure 5** shows the Simulink block diagram for the fuzzy controller. In the *fuzzyLogicDesigner* window the user may set the membership functions, rule base, and inference.

**Figure 6** shows how to arrange the work windows for this application on the screen.

**Figure 7** shows the work window of *fuzzyLogicDesigner*. The Simulink block diagram uses a fuzzy controller with derivation at the input and integration at the output. Working with Matlab and Simulink is the same as in the second example.
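A fuzzy block of this kind can also be built programmatically with the Fuzzy Logic Toolbox instead of through *fuzzyLogicDesigner*. The sketch below defines a minimal two-input Mamdani system; the membership-function ranges and the rule base are illustrative assumptions, not the chapter's actual design.

```matlab
% Minimal Mamdani fuzzy block: inputs e (error) and de (error derivative),
% output du (command increment), integrated downstream as in a fuzzy PI law.
fis = mamfis('Name', 'fuzzyPI');

fis = addInput(fis, [-1 1], 'Name', 'e');
fis = addMF(fis, 'e', 'trimf', [-2 -1 0], 'Name', 'N');
fis = addMF(fis, 'e', 'trimf', [-1  0 1], 'Name', 'Z');
fis = addMF(fis, 'e', 'trimf', [ 0  1 2], 'Name', 'P');

fis = addInput(fis, [-1 1], 'Name', 'de');
fis = addMF(fis, 'de', 'trimf', [-2 -1 0], 'Name', 'N');
fis = addMF(fis, 'de', 'trimf', [-1  0 1], 'Name', 'Z');
fis = addMF(fis, 'de', 'trimf', [ 0  1 2], 'Name', 'P');

fis = addOutput(fis, [-1 1], 'Name', 'du');
fis = addMF(fis, 'du', 'trimf', [-2 -1 0], 'Name', 'N');
fis = addMF(fis, 'du', 'trimf', [-1  0 1], 'Name', 'Z');
fis = addMF(fis, 'du', 'trimf', [ 0  1 2], 'Name', 'P');

% Rule list rows: [e de du weight connective],
% e.g. the first row reads "if e is N and de is N then du is N"
rules = [1 1 1 1 1; 2 2 2 1 1; 3 3 3 1 1];
fis = addRule(fis, rules);

du = evalfis(fis, [0.2 -0.1]);   % evaluate the fuzzy block for one sample
```

The resulting `fis` object can then be referenced from a *Fuzzy Logic Controller* block in the Simulink scheme.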

### **Conflict of interest**

The author has no conflict of interest.

#### **Author details**

Constantin Volosencu "Politehnica" University, Timisoara, Romania

\*Address all correspondence to: constantin.volosencu@aut.upt.ro

© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Chapman S.J. Matlab Programming for Engineers, 6th edition, Cengage Learning, 2020.

[2] Xue D. Matlab Programming. Mathematical Problem Solutions, De Gruyter, 2020, DOI: 10.1515/9783110666953.

[3] Hahn B., Valentine D. Essential Matlab for Engineers and Scientists, Academic Press, 2019.

[4] Lee H.H. Programming and Engineering Computing with Matlab, SDC Publications, 2019.

[5] Moore H., Matlab for Engineers, 5th Edition, Pearson, 2018.

[6] Attaway S. Matlab: A Practical Introduction to Programming and Problem Solving, 5th Edition, Butterworth-Heinemann, 2018.

[7] Kattan P.I. Matlab For Beginners: A Gentle Approach, Createspace, 2008.

[8] Dittmar R. Model Predictive Control mit Matlab und Simulink, IntechOpen, London, UK, 2019, DOI: 10.5772/intechopen.86001.

[9] Paluszek M., Thomas S. Practical Matlab Deep Learning: A Project-Based Approach, Apress, 2020.

[10] Leite E.P., editor, Matlab - Modelling, Programming and Simulations, IntechOpen, London, UK, 2010, DOI: 10.5772/242.

[11] Assi A., editor, Engineering Education and Research Using Matlab, IntechOpen, London, UK, 2011, DOI: 10.5772/1532.

[12] Ionescu C., editor, Matlab - A Ubiquitous Tool for the Practical Engineer, IntechOpen, London, UK, 2011, DOI: 10.5772/82.

[13] Leite E.P., editor, Scientific and Engineering Applications Using Matlab, IntechOpen, London, UK, 2011, DOI: 10.5772/1531.

[14] Chakravarty S., editor, Technology and Engineering Applications of Simulink, IntechOpen, London, UK, 2012, DOI: 10.5772/2414.

[15] Katsikis V., editor, Matlab - A Fundamental Tool for Scientific Computing and Engineering Applications - Volume 1, IntechOpen, London, UK, 2012, DOI: 10.5772/2557.

[16] Katsikis V., editor, Matlab - A Fundamental Tool for Scientific Computing and Engineering Applications - Volume 2, IntechOpen, London, UK, 2012, DOI: 10.5772/3338.

[17] Katsikis V., editor, Matlab - A Fundamental Tool for Scientific Computing and Engineering Applications - Volume 3, IntechOpen, London, UK, 2012, DOI: 10.5772/3339.

[18] Bennett K., editor, Matlab - Applications for the Practical Engineer, IntechOpen, London, UK, 2014, DOI: 10.5772/57070.

[19] Valdman J., editor, Applications from Engineering with Matlab Concepts, IntechOpen, London, UK, 2016, DOI: 10.5772/61386.

[20] Saghafinia A., editor, Matlab - Professional Applications in Power System, IntechOpen, London, UK, 2018, DOI: 10.5772/intechopen.68720.

[21] Elmer, K.; Soffer, R.J.; Arroyo-Mora, J.P.; Kalacska, M. ASDToolkit: A Novel Matlab Processing Toolbox for ASD Field Spectroscopy Data. *Data* 2020, *5,* 96. DOI: 10.3390/data5040096.

[22] Kotze, B.; Jordaan, G. Investigation of Matlab as Platform in Navigation and Control of an Automatic Guided Vehicle Utilising an Omnivision Sensor. Sensors 2014, *14,* 15669-15686. DOI: 10.3390/s140915669.

[23] Lazar, A.L.; Konradt, S.C.; Rottengruber, H. Open-Source Dynamic Matlab/Simulink 1D Proton Exchange Membrane Fuel Cell Model. Energies 2019, *12,* 3478. DOI: 10.3390/ en12183478.

[24] Voloşencu, C., Identification of Distributed Parameter Systems, Based on Sensor Networks and Artificial Intelligence, WSEAS Transactions on Systems, 2008, Issue 6, Vol. 7, p. 785-801.

[25] Voloşencu, C. - Identification in Sensor Networks, In: Proceedings of the 9th WSEAS International Conference on Automation and Information (ICAI'08), June 24-26, 2008; Bucuresti, WSEAS Press, p. 175-183.

[26] Voloşencu, C., editor, Fuzzy Logic, IntechOpen Ltd., London, UK, 2020, DOI: 10.5772/intechopen.77460.

[27] Voloşencu, C., Introductory Chapter: Basic Properties of Fuzzy Relations, in: Fuzzy Logic, IntechOpen, London, UK, 2020, DOI: 10.5772/intechopen.77460.

[28] Voloşencu, C., Tuning Fuzzy PID Controllers, in: Theory, Tuning and Application to Frontier Areas, edited by Rames C. Panda, InTech, 2012, DOI: 10.5772/32750.

[29] Voloşencu, C., Stabilization of Fuzzy Control Systems, WSEAS Transactions on Systems and Control, 2008, Issue 10, Vol. 3, p. 879-896.

Section 2 MST Radar

#### **Chapter 2**

## Radiation Power Pattern Distortion Analysis Using MATLAB for MST Radar System

*Nali Dinesh Kumar*

#### **Abstract**

Quite often in the MST radar system, a few transmitters are non-operational due to various factors, making the linear sub-arrays corresponding to these transmitters ineffective. This results in thinning of the aperture and deviation of the excitation from the specified Taylor distribution. Due to this deviation, the array pattern is distorted when compared to the reference pattern. This chapter gives a complete analysis to quantify the distortion in the radiation pattern due to aperture thinning. MATLAB was used extensively to analyze the results. The results for the radiation pattern, in both principal planes and for different azimuth angles, with and without thinning/tilt, are presented. The radiation pattern is viewed in both polar and rectangular (2-D and 3-D) forms. Conclusions on the results obtained are presented.

**Keywords:** array factor, distortion, aperture thinning, MATLAB, phased antenna, polar form, rectangular form, side lobe levels, Taylor distribution

#### **1. Introduction**

The ever-increasing demand for software analysis of aperture-thinned radiation patterns has motivated the present work. The Indian MST radar is a highly sensitive phased antenna array operating at 53 MHz with a peak power-aperture product of 2.5 × 10<sup>10</sup> W·m<sup>2</sup>. It consists of two collocated orthogonal sets of 1024 three-element Yagi-Uda antennas, one for each polarization. They are arranged over an area of 130 m × 130 m in a 32 × 32 matrix (**Figure 1**). The complete array is illuminated by 32 distributed transmitters of varying power. Each distributed transmitter feeds a linear sub-array of 32 antennas through one of 32 parallel runs of center-fed-series-feed structures [1].

The Yagi-Uda antenna was chosen for the MST radar antenna array. The choice of an element for the MST radar, the advantages in favor of the Yagi element, the side lobe level (SLL) requirements, the antenna element design, and the modified Taylor distribution are explained in [1–4]. The amplitude distribution, illumination efficiency, and feeder efficiency are derived. Finally, the MST radar specifications are tabulated.

While planning the antenna element design, a shared-antenna-element architecture of fixed-overlap sub-arrays was also considered, as a technique to avoid grating lobes in the antenna pattern [5].

An experiment made to generate low side lobe patterns by optimizing ring radii and individual element excitations of concentric circular arrays [6] did not work for the MST radar array. The approach of representing array excitation weight vectors as complex-number chromosomes, with decimal linear crossover, is often used as a general tool for pattern synthesis of arbitrary arrays [7]. World War II accelerated the development of phased-array antennas, which have become a standard tool for RF systems [8]. The main focus then turned to G-band multifunction measuring instrument systems for land applications [9]. A coupled structural-electromagnetic model of a phased array antenna (PAA) has been developed to explain the performance of the antenna and the effects of random errors and mechanical distortion [10]. Random errors are generated throughout the manufacturing and assembly process, and mechanical distortion is caused by external loads such as high thermal gradients, vibration, and impact loads [11]. Arrays also exhibit aperture errors [12], whose determination is often neglected because it is in many cases very difficult; such errors include mutual coupling between elements of an array, scattering from and blockage due to the feed of a parabolic reflector, diffraction at a lens antenna step, etc. [13].

#### **2. Geometry of MST radar**

The Indian MST radar antenna uses a two-dimensional filled antenna array for both transmission and reception. An inter-element spacing of 0.7λ (λ being the radar wavelength) is used in both principal directions, which allows grating-lobe-free beam scanning up to an angle of about 24° from the broadside direction [1].

#### **2.1 Choice of the element**

To obtain the gain of 36 dB given in the MST radar specifications, a filled aperture of roughly 21λ to 25λ is required. The number of elements required to fill this aperture is given by

$$N = \frac{\text{Total aperture area } A\_p}{\text{Effective area of a single element } A\_e} \tag{1}$$

where *A*<sub>e</sub> = λ<sup>2</sup>*G*<sub>e</sub>/4π.


*Radiation Power Pattern Distortion Analysis Using MATLAB for MST Radar System DOI: http://dx.doi.org/10.5772/intechopen.97637*

**Table 1.**

*Different types of elements, and their number required to fill the aperture.*

**Table 1** gives the total number of elements required to fill up the aperture for different types of antenna elements. A comparative study of various antennas as potential elements in the MST radar configuration considered the following types: crossed dipole over a ground plane, coaxial collinear, three-element Yagi, and four-element Yagi.

Of these elements, the crossed dipole over a ground plane has a gain of the order of 5 dB; hence the total number of dipoles required to fill the same aperture is quite large compared to the Yagi types and would require a more complicated and expensive feed network. The coaxial collinear *(CoCo)* antenna, which again is another form of dipole over a ground plane, is apparently simple to fabricate. It can be constructed directly at the site using RG 8/U or equivalent RF cables, but maintenance and waterproofing of such an array would be difficult.

A *Yagi-Uda* array consists of several parallel dipoles with different lengths and spacings, of which only one is actively fed and the others are shorted at their feed points. Since only one of the dipoles is driven and all other elements are parasitic, the latter function as reflectors or directors. In general, the longest shorted element, with a length of the order of λ/2, is the reflector, and the shorter shorted elements are the directors. The array can be viewed as an array of dipoles in which all but the driven element (the exciter) are short-circuited. For the three-element Yagi, the voltage on element m is

$$\mathbf{V\_m} = \sum\_{\mathbf{n}=1}^{3} \mathbf{I\_n} \mathbf{Z\_{mn}} \tag{2}$$

where *I*<sub>n</sub> is the current on the *n*th element and *m* is the element number.

Setting *V*<sub>1</sub> = *V*<sub>3</sub> = 0 (the parasitic elements) and solving these equations simultaneously gives

$$\frac{I\_1}{I\_2} = \frac{Z\_{13}Z\_{32} - Z\_{12}Z\_{33}}{E} \tag{3}$$

$$\frac{I\_3}{I\_2} = \frac{Z\_{13}Z\_{12} - Z\_{23}Z\_{11}}{E} \tag{4}$$

where *E* = *Z*<sub>11</sub>*Z*<sub>33</sub> − *Z*<sub>31</sub>*Z*<sub>13</sub>.

Using these current ratios, the input impedance, gain, and radiation pattern can be calculated.
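For a given 3×3 impedance matrix, the current ratios of Eqs. (3) and (4) follow from solving V = Z·I with the parasitic-element voltages set to zero; a Matlab sketch is shown below. The impedance values used here are placeholders for illustration only, not measured Yagi data.

```matlab
% Illustrative mutual-impedance matrix for a 3-element Yagi (ohms);
% these numbers are made up purely to demonstrate the calculation.
Z = [ 75+40i  -10-30i    5+10i;
     -10-30i   73+42i  -12-28i;
       5+10i  -12-28i   70+35i ];

V = [0; 1; 0];          % only element 2 (the exciter) is driven
I = Z \ V;              % solve the simultaneous equations V = Z*I

r1 = I(1) / I(2);       % current ratio I1/I2, cf. Eq. (3)
r3 = I(3) / I(2);       % current ratio I3/I2, cf. Eq. (4)

Zin = V(2) / I(2);      % driving-point impedance of the fed element
```

Because the first and third rows of V = Z·I are exactly the two parasitic-element equations, the ratios obtained from the full solve coincide with the closed forms in Eqs. (3) and (4).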

#### **2.2 Antenna element design consideration**

The single-element gain and radiation pattern change considerably in the array environment. The physical area to which each element couples limits the element gain in an infinite array [2, 3, 14], and this limit is given by

$$g\_r = \left(4\pi\, d\_x d\_y/\lambda^2\right) \cos\theta \tag{5}$$

For *d*<sub>x</sub> = *d*<sub>y</sub> = 0.7λ (at broadside, θ = 0),

$$g\_r = 4\pi(0.49) = 6.157 = 7.89 \text{ dB} \tag{6}$$

A practical element with a gain higher than this value would lead to overlap of effective areas, without any useful addition to the array gain. The three-element Yagi appears to be a practical choice as the element of the MST array. It has a high front-to-back ratio, which is useful in minimizing ground effects, and it can be designed to have a gain between 6.5 dB and 8 dB.

Considering the isolated Yagi element gain to be 7.2 dB, the total array gain at a taper efficiency of 80% works out to 36.3 dB. This leaves a margin of 0.3 dB for gain loss due to amplitude and phase errors across the aperture, thus allowing a gain of 36 dB to be realized for the zenith beam. The diameter of the element was chosen to be 0.75 inch, a standard commercially available tube.
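The gain budget in this paragraph is easy to verify numerically; the sketch below reproduces the figures (the element-gain limit of Eq. (6), the 32 × 32 array gain, and the taper loss), treating the 7.2 dB element gain and 80% efficiency as given values.

```matlab
% Element gain limit for a 0.7-lambda square cell, Eqs. (5)-(6), at broadside
dx = 0.7;  dy = 0.7;                 % spacings in wavelengths
gr    = 4*pi*dx*dy;                  % = 6.157 (linear)
gr_dB = 10*log10(gr);                % approx. 7.89 dB

% Array gain budget for the 32 x 32 array
Ge_dB = 7.2;                         % isolated Yagi element gain [dB]
N     = 32 * 32;                     % 1024 elements
eta   = 0.8;                         % 80 percent taper efficiency

G_dB = Ge_dB + 10*log10(N) + 10*log10(eta);   % approx. 36.3 dB
```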

The following values were found to offer satisfactory performance


The expected performance of the three-element Yagi with the above parameters is tabulated below


#### **2.3 Feeder network configuration**

The feeder network of the MST radar antenna array consists of two orthogonal sets, one for each polarization. Each set consists of thirty-two parallel runs of center-fed-series-feed (CFSF) structure. Thirty-two transmitters of varying power illuminate the array, each feeding a linear sub-array of thirty-two antenna elements.

The feeder networks of all the sub-arrays are identical as far as the power distribution is concerned. The CFSF network (shown in **Figure 2**), consisting of a power divider at the center and a series of directional couplers on each side of it, connects the linear sub-array to the T/R switch, which delivers the transmitter output power to the array and the power received by the array to the corresponding low-noise amplifier. The components of the feeder network are RG 1–5/8″ and RG 7/8″ air-dielectric coaxial lines, a Wilkinson-type in-phase power divider, distributed versions of coupled-line directional couplers, and lumped versions of hybrid directional couplers [1].

Description of each of the above components is given below.


**Figure 2.** *Center-fed-series-feed (CFSF) network.*

#### **2.4 Rigid cable**

*RG 1–5/8″* cable is used to carry the output power of the high-power transmitters *(70 kW – 120 kW range)* to the CFSF network. RG 7/8″ is used to carry the output power of the low-power transmitters *(15 kW – 53 kW range)* to the sub-array input. These cables use a foam dielectric with a velocity factor of 0.89. At 53 MHz, the operating frequency, they offer an attenuation of about 0.5 dB per 100 m.

#### **2.5 Power divider/combiner**

This device acts as a divider in the transmit mode and as a combiner in the receive mode. The circuit diagram of the *Wilkinson type divider/combiner* is shown in **Figure 3**. All three ports are terminated with the characteristic impedance, Z<sub>o</sub> (50 Ω). Ports 2 and 3 are isolated. During the transmit mode the transmitter output power is fed to port-1 and divided in phase equally between the output ports 2 and 3. In the receive mode the power received by the two halves of the linear sub-array is delivered through the series feed network to ports 2 and 3, respectively, and combined in phase at port-1.

The relationship between the voltages in the transmit mode at the output and input ports is given by

$$V\_2 = V\_3 = -j\, V\_1/\sqrt{2} \tag{7}$$

**Figure 3.** *Wilkinson type divider/combiner.*

The relationship between the voltages in the receive mode at the output and input ports is given by

$$V\_1 = -j\,(V\_2 + V\_3)/\sqrt{2} \tag{8}$$

where *V*<sub>i</sub> is the voltage at port-*i*. Since the two halves are symmetric, *V*<sub>2</sub> = *V*<sub>3</sub> = *V*. Therefore,

$$V\_1 = -j\sqrt{2}\, V \tag{9}$$

#### **2.6 Distributed versions of coupled line directional coupler (DC)**

The coupled-rod coaxial directional coupler is shown in **Figure 4**. In the figure, section 1–2 is the main line and 3–4 is the auxiliary line, which is coupled to the main line. As electric current passes through section 1–2 from port-1, it produces a magnetic field around it. This magnetic field couples with conductor 3–4 and induces a current in it. Therefore, by varying the separation between the two conductors, we can control the coupling factor.

In the transmit mode port-1 is the input port, to which the power is fed. Port-2 is the direct output port and port-3 is the coupled port, through which the antenna is energized. Port-4 is isolated with respect to port-1. The relationship between the various voltages is given by

$$V\_3 = k V\_1 \qquad V\_2 = -j\sqrt{1 - k^2}\; V\_1 \qquad V\_4 = 0 \tag{10}$$

where *V*<sub>i</sub> is the voltage at port-*i*. This indicates that *V*<sub>1</sub> and *V*<sub>3</sub> are in phase and *V*<sub>2</sub> lags *V*<sub>3</sub> by 90°.

All thirty-two antennas within a sub-array should receive excitation signals with the same phase, so as to produce a main beam in the broadside direction, resulting in high gain. To achieve this, the lengths of the feeding cables (running from the coupled port to the antenna balun) are adjusted accordingly. This process is called *phase equalization*.

In the receive mode, the antenna delivers power to the coupled port (port-3) and port-2 is fed by the power coming from the adjacent coupler. In this mode, as a consequence of the phase equalization, V2 always leads V3 by 90°. The relationship between the various voltages is given by

$$V_1 = \sqrt{1 - k^2}\;V_2 + k\,V_3\tag{10a}$$

$$V_4 = \sqrt{1 - k^2}\;V_3 - k\,V_2\tag{11}$$
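The transmit- and receive-mode voltage relations of the coupler can likewise be verified numerically. This NumPy sketch (the chapter itself works in MATLAB, and the coupling factor k = 0.3 is a made-up value) checks that the lossless coupler conserves power in both modes:

```python
import numpy as np

def dc_transmit(v1, k):
    """Transmit mode, Eq. (10): coupled, direct, and isolated port voltages."""
    v3 = k * v1                         # coupled port (feeds the antenna)
    v2 = -1j * np.sqrt(1 - k**2) * v1   # direct port, lags the input by 90 deg
    v4 = 0.0                            # isolated port
    return v2, v3, v4

def dc_receive(v2, v3, k):
    """Receive mode, Eqs. (10a) and (11)."""
    v1 = np.sqrt(1 - k**2) * v2 + k * v3
    v4 = np.sqrt(1 - k**2) * v3 - k * v2
    return v1, v4

k = 0.3
v2, v3, v4 = dc_transmit(1.0, k)
print(abs(v2)**2 + abs(v3)**2 + abs(v4)**2)   # ~1.0, lossless split
v1, v4 = dc_receive(0.6, 0.8, k)
print(abs(v1)**2 + abs(v4)**2)                # ~1.0 = 0.6^2 + 0.8^2, unitary
```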

**Figure 4.** *Coupled line directional coupler.*

*Radiation Power Pattern Distortion Analysis Using MATLAB for MST Radar System DOI: http://dx.doi.org/10.5772/intechopen.97637*

#### **2.7 90° hybrid coupler (lumped version)**

This coupler comprises four quarter-wave sections, two in series and two in shunt. Each quarter-wave line is realized by an equivalent π-section of lumped elements (inductors and capacitors) [2, 3].

In this structure, diagonally opposite ports are coupled. Since the coupled signal travels a distance of two quarter-wavelengths, it is out of phase with respect to the input port. Power fed to port-1 is distributed between the direct output port-2 and the coupled port-3. Port-4 is isolated. The coupling factor depends on the normalized impedances of the series and shunt quarter-wave line sections, Zb and Za respectively, and is given by

$$k = -\,Z_b/Z_a\tag{12}$$

The condition for impedance matching is given by

$$\frac{1}{Z_b^2} - \frac{1}{Z_a^2} = 1\tag{13}$$
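Eqs. (12) and (13) can be solved in closed form for the normalized arm impedances. The sketch below (NumPy, using the standard 3 dB equal-split case as an example) is an illustration, not part of the chapter's MATLAB package:

```python
import numpy as np

def hybrid_arm_impedances(k):
    """Solve |k| = Zb/Za and 1/Zb^2 - 1/Za^2 = 1 for the normalized
    series (Zb) and shunt (Za) arm impedances; valid for |k| < 1."""
    # With x = 1/Zb^2 and y = 1/Za^2: k^2 = y/x and x - y = 1,
    # hence x = 1/(1 - k^2) and y = k^2/(1 - k^2).
    x = 1.0 / (1.0 - k**2)
    y = k**2 / (1.0 - k**2)
    return 1.0 / np.sqrt(x), 1.0 / np.sqrt(y)   # (Zb, Za)

# 3 dB (equal-split) hybrid: |k| = 1/sqrt(2)
zb, za = hybrid_arm_impedances(1 / np.sqrt(2))
print(zb, za)   # ~0.7071 and ~1.0, i.e. Zb = Zo/sqrt(2), Za = Zo
```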

#### **2.8 Excitation coefficients in the transmit mode**

The normalized amplitude (Cn) of the nth antenna element, with respect to the first antenna from the divider, is given by

$$C_n = V\,k_n \prod_{i=1}^{n-1}\sqrt{1 - k_i^2}\tag{14}$$

where $k_i$ is the coupling factor of coupler i from the center (divider) and V is the voltage fed to the first coupler.

#### **2.9 Excitation coefficients in the receive mode**

The normalized amplitude (Cn) of the nth antenna element, with respect to the first antenna from the divider, is given by

$$C_n = k_n \prod_{i=1}^{n-1}\sqrt{1 - k_i^2}\tag{15}$$

where $k_i$ is the coupling factor of coupler i from the center (divider).

#### **2.10 Illumination efficiency**

Since the weighting factors of the antenna elements are the same for both transmit and receive modes, the illumination efficiency is the same for both modes. Illumination efficiency is defined as "the ratio of the effective length of the sub-array to its physical length", which can be expressed as

$$\eta_{\rm ill} = \frac{1}{16\,C_1}\sum_{n=1}^{16} C_n\tag{16}$$

which is found to be 79%.
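Eqs. (14)–(16) are straightforward to evaluate. The NumPy sketch below (the chapter's own tooling is MATLAB) uses hypothetical coupler ratings, since Table 2's actual values are not reproduced here; modeling the 16th (end) antenna as taking all remaining power (k = 1) is also an assumption:

```python
import numpy as np

def coupling_coeffs(k):
    """Eqs. (14)-(15) with unit input voltage:
    C_n = k_n * prod_{i<n} sqrt(1 - k_i^2)."""
    k = np.asarray(k, dtype=float)
    atten = np.concatenate(([1.0], np.cumprod(np.sqrt(1.0 - k[:-1]**2))))
    return k * atten

def illumination_efficiency(c):
    """Eq. (16): ratio of effective to physical length of the 16-element half."""
    return np.sum(c) / (len(c) * c[0])

# Hypothetical coupler ratings in dB (the radar's actual values are in Table 2)
ratings_db = [9, 9.5, 10, 10.5, 11, 11.5, 12, 12.5, 13, 13.5, 14, 15, 16, 18, 21]
k = [10 ** (-r / 20.0) for r in ratings_db]
k.append(1.0)   # assumption: the end antenna takes all remaining power
c = coupling_coeffs(k)
print(illumination_efficiency(c))
# A lossless ladder delivers all power to the antennas: sum of C_n^2 is 1
print(np.sum(c**2))
```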

The feeder network consists of fifteen couplers on either side of the divider/combiner to feed the 16 antennas on either side. The coupler ratings in decibels and the corresponding coupling factors of all fifteen couplers are given in **Table 2**. The corresponding coupling coefficients (Cn) are given in **Table 3**.

#### *MATLAB Applications in Engineering*

#### **Table 2.**

*The coupler ratings and their corresponding coupling factor.*

#### **Table 3.**

*The coupling coefficients of the coupler.*

#### **2.11 Feeder network efficiency**

In the transmit mode, feeder network efficiency is defined as "the ratio of the power delivered to the sub-array by the feeder network to the power output of the transmitter feeding the sub-array". In the receive mode, it is defined as "the ratio of the power delivered to the LNA by the feeder network to the total power developed at the terminals of all thirty-two antenna elements during reception". All feeder line components are assumed to be lossless in the computation of the feeder network efficiency, and the normalized characteristic impedance of the system is taken as unity in the power calculations.

#### **2.12 Transmit mode**

When an input voltage of √2 volts is applied to the power divider, the voltage amplitude at the input of the first coupler is one volt. The total power delivered to the sub-array is given by


$$P_{\rm out} = 2\sum_{n=1}^{16} A_n^2\ \text{units}\tag{17}$$

where $A_n$ is the voltage fed to the nth antenna from the coupled port of the corresponding directional coupler. The power input to the feeder network is

$$P_T = \left(\sqrt{2}\ \text{volts}\right)^2 = 2\ \text{units}$$

The feeder efficiency η_ft is given by

$$\eta_{\text{ft}} = \frac{P_{\rm out}}{P_T}\tag{18}$$

and is found to be 100%.

#### **2.13 Receive mode**

In the receive mode, all the antennas of the sub-array receive equal powers and deliver them to the coupled ports of the feeder network. When an input voltage of 1 volt is applied to all the coupled ports, the input power to the feeder network is given by

$$P_{\rm ant} = 32 \times (1)^2 = 32\ \text{units}$$

The output voltage of the combiner (which is fed to the LNA) is:

$$V_{\text{out}} = \sqrt{2}\,V_{0,1}\tag{19}$$

where V0,1 is the output of the first coupler. The combiner output power is given by

$$P_{\rm out} = 2\,|V_{0,1}|^2\tag{20}$$

Feeder network efficiency is given by

$$\eta_{\rm fr} = \frac{P_{\rm out}}{P_{\rm ant}}\tag{21}$$

and is found to be 92.8%. Note that the rest of the power is dissipated in the isolated ports of the directional couplers.

In addition, there is a combining loss (at IF level) of 0.6 dB, which is equivalent to an efficiency of 92.3%, and amplitude imbalance introduces a further loss that should be accounted for in the overall feeder line efficiency. Hence, the total feeder efficiency of the MST radar planar phased array is 85.6%. The specifications of the MST radar are listed here:



#### **3. Aperture thinning of the MST radar antenna array**

Often, a few transmitters are non-operational due to various factors, making the linear sub-arrays corresponding to these transmitters ineffective. Even if the transmitters are operational, some elements within a sub-array may not receive the excitation signal due to weak connections or discontinuity problems in the feeder line. This results in thinning of the aperture and deviation of the excitation from the specified Taylor distribution. Due to this deviation, the array pattern is distorted from the normal pattern. In this chapter, a detailed analysis is carried out to quantify the degradation in the radiation pattern due to aperture thinning. Phase errors are assumed to be zero throughout.

#### **3.1 Array pattern of the 2N × 2N MST radar array**

If the array aperture is in the xy-plane and the sub-arrays are aligned parallel to the y-axis with a spacing dx along the x-axis, the array pattern can be expressed as


$$f(\theta,\phi) = \sum_{m=1}^{2N_x}\sum_{n=1}^{2N_y} I_{mn}\, e^{-jk\sin\theta\,(m d_x\cos\phi + n d_y\sin\phi)}\tag{22}$$

where

dx = sub-array spacing along the x-axis

dy = element spacing within a sub-array (along the y-axis)

m = sub-array number along the x-axis

n = element number along the y-axis within a sub-array

θ = field point angle from broadside

φ = azimuth angle

Imn = excitation current coefficient of the nth element in the mth row (sub-array)

2Nx = number of sub-arrays along the x-axis

2Ny = number of elements within a sub-array (along the y-axis)

k = phase constant (in free space)

For the MST radar antenna array, dx = dy = d = 0.7λ and 2Nx = 2Ny = 2N = 32. If each row has the same current distribution, even though the current levels differ from row to row, that is,

$$\frac{I_{mn}}{I_{m1}} = \frac{I_{1n}}{I_{11}}\tag{23}$$

which is true for the MST array, then the current distribution is separable and the array factor can be expressed in the form

$$f(\theta,\phi) = f_x(\theta,\phi)\, f_y(\theta,\phi)\tag{24}$$

In which

$$f_x(\theta,\phi) = \sum_{m=1}^{2N} I_m\, e^{-jmkd_x\sin\theta\cos\phi}\tag{25}$$

$$f_y(\theta,\phi) = \sum_{n=1}^{2N} I_n\, e^{-jnkd_y\sin\theta\sin\phi}\tag{26}$$

and

$$I_m = \frac{I_{m1}}{I_{11}}\tag{27}$$

$$I_n = \frac{I_{1n}}{I_{11}}\tag{28}$$

are the normalized current distributions in a row of elements parallel to the x-axis and y-axis, respectively. All thirty-two elements within a sub-array are phase-equalized by adjusting the input feed cable lengths. Hence a linear sub-array, when excited alone, produces a fan beam in the broadside direction. Beam tilting is done in the E-plane (the φ = 0° plane, or xz-plane in this case) by providing a progressive phase shift along the successive linear sub-arrays. These equations are executed in MATLAB to find the array patterns at different zenith angles.
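One separable factor of this pattern, including the progressive phase shift used for beam tilting, can be sketched as follows. The chapter's implementation is in MATLAB; this NumPy version uses uniform weights rather than the radar's actual Taylor coefficients:

```python
import numpy as np

def linear_array_factor(weights, d_lambda, theta_deg, tilt_deg=0.0):
    """One separable factor of the planar array pattern.
    A progressive phase shift steers the main beam to tilt_deg."""
    theta = np.radians(np.atleast_1d(theta_deg))
    kd = 2 * np.pi * d_lambda                  # phase constant times spacing
    n = np.arange(len(weights))
    alpha = kd * np.sin(np.radians(tilt_deg))  # progressive phase per element
    phase = np.outer(np.sin(theta), n * kd) - n * alpha
    return np.exp(-1j * phase) @ np.asarray(weights, dtype=float)

theta = np.linspace(-90, 90, 1801)             # 0.1 degree grid
af = linear_array_factor(np.ones(32), 0.7, theta, tilt_deg=10)
peak = theta[np.argmax(np.abs(af))]
print(peak)   # main beam steered close to 10 degrees
```

With d = 0.7λ and a 10° tilt, no grating lobe enters visible space, so the single peak sits at the commanded tilt angle.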

If an array aperture is not fully excited, it is said to be "thinned". When thinning occurs in the MST radar array, which is a planar array with a separable current distribution, the array pattern can be expressed as

$$f(\theta,\phi) = \sum_{m=1}^{2N} I_m\, e^{-jmkd_x\sin\theta\cos\phi} \sum_{n=1}^{2N} I_n\, e^{-jnkd_y\sin\theta\sin\phi}\tag{29}$$

where $I_m$ is proportional to the square root of the output power of the transmitters and $I_n$ is proportional to the coupling coefficients of the CFSF network. The array factors in the two principal planes, φ = 0° (E-plane) and φ = 90° (H-plane), are respectively

$$f_E(\theta) = f(\theta, 0) = \sum_{n=1}^{2N} I_n \sum_{m=1}^{2N} I_m\, e^{-jmkd_x\sin\theta}\tag{30}$$

$$f_H(\theta) = f(\theta, 90^{\circ}) = \sum_{m=1}^{2N} I_m \sum_{n=1}^{2N} I_n\, e^{-jnkd_y\sin\theta}\tag{31}$$

When some of the currents Im are zero (meaning the corresponding transmitters are off), it is clear that the shape of the H-plane pattern, fH(θ), is not affected, though its magnitude changes according to the first factor in Eq. (31). However, the E-plane pattern, fE(θ), is distorted according to the second factor of Eq. (30). So, when a few transmitters are non-operational, only the E-plane pattern is distorted. To study the effect of aperture thinning on the radiation pattern, the array pattern is computed in MATLAB by setting some of the Im to zero, which is equivalent to switching the corresponding transmitters off.
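The thinning experiment just described can be sketched numerically (NumPy rather than the chapter's MATLAB; uniform excitation is assumed instead of the actual Taylor taper, and the transmitters switched off correspond to case 15 of Table 4):

```python
import numpy as np

def e_plane_pattern(i_m, d_lambda=0.7, theta_deg=np.linspace(-90, 90, 3601)):
    """E-plane factor without the constant row sum: sum_m I_m e^{-jmkd sin(theta)}."""
    m = np.arange(len(i_m))
    phase = np.outer(np.sin(np.radians(theta_deg)), m) * 2 * np.pi * d_lambda
    return theta_deg, np.exp(-1j * phase) @ np.asarray(i_m, dtype=float)

def sidelobe_level_db(af):
    """Highest local maximum below the main-beam peak, in dB (crude SLL estimate)."""
    p = 20 * np.log10(np.abs(af) / np.max(np.abs(af)))
    local_max = (p[1:-1] >= p[:-2]) & (p[1:-1] >= p[2:])
    peaks = p[1:-1][local_max]
    return np.max(peaks[peaks < -0.5])

i_full = np.ones(32)
i_thin = i_full.copy()
i_thin[[5, 7, 8, 16, 23, 28, 29, 31]] = 0.0  # transmitters 6,8,9,17,24,29,30,32 off

_, af_full = e_plane_pattern(i_full)
_, af_thin = e_plane_pattern(i_thin)
print(np.abs(af_full).max(), np.abs(af_thin).max())  # ~32 vs ~24: gain loss
print(sidelobe_level_db(af_full))                    # about -13.2 dB (uniform taper)
print(sidelobe_level_db(af_thin))                    # distorted sidelobe structure
```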

#### **4. Results and discussions**

If a few transmitters are non-operational, the antennas in the corresponding sub-arrays do not receive the excitation signal. Though they are physically present, electrically they are not effective. If an array aperture is not fully excited, it is said to be 'thinned'. This results in deviation of the excitation from the specified Taylor distribution, and due to this deviation the array pattern is distorted. To quantify the distortion of the radiation patterns in both the E and H planes, the array pattern expressions are evaluated. Array thinning is effected by forcing some of the Im to zero, which means the corresponding transmitters are non-operational. It is clear that the shape of the H-plane pattern, fH(θ), is not affected, though its magnitude changes according to the first factor in Eq. (31). However, the E-plane pattern, fE(θ), is distorted according to the second factor of Eq. (30). Hence, we can conclude that only the E-plane pattern is distorted and needs to be examined when a few transmitters are non-operational. Programming in MATLAB helped greatly in examining these cases.

The array pattern is computed in the E-plane for different thinning configurations, that is, by making groups of transmitters (or sub-arrays) ineffective. The radiation parameters are distorted in all the cases. The two important parameters that may affect the radar performance are gain and SLL. The variation of these two parameters with different array thinning configurations is tabulated in **Table 4**. Array patterns obtained with and without tilting can be viewed in the plots shown in **Figures 5** and **6**.

The array pattern is also computed for different azimuth angles using Eq. (30), with all transmitters (or sub-arrays) effective. The variation of the SLL parameter is tabulated in **Table 5**. The array patterns computed for different azimuth angles, without thinning or tilting, can be viewed in **Figures 7–9**.

The 3-D array pattern of the antenna array obtained using MATLAB is shown in **Figure 10**. The amplitude distribution is plotted using Eq. (22). **Figure 11** shows a 3-D plot of the antenna array when fully excited. Distortions of the amplitude distribution due to array thinning are shown in **Figures 12** and **13**.



#### **Table 4.**

*Variation of parameters with different array thinning configurations.*

#### **Figure 5.**

*Array pattern for the two principal planes.*

Polar plots of both principal planes, generated in MATLAB, are shown in **Figures 14** and **15**. Radiation patterns for different azimuth angles obtained in MATLAB are shown in **Figures 16** and **17**.

#### **4.1 MATLAB package**

The entire software is developed using the MATLAB package. MATLAB was chosen for the numeric computation and visualization of the array pattern. MATLAB

#### **Figure 6.**

*Array pattern for a tilt of 10 degrees in E-plane.*


**Table 5.**

*Variation of SLL standard configuration with different azimuth angles.*

#### **Figure 7.**

*H –plane pattern – without distortion and with a tilt of 10 deg.*

is a helpful tool for developing portable, graphical-user-interface software. After launching the package, the menu options shown in **Figure 18** become visible. A new file with the rated powers of the transmitters and the standard


#### **Figure 8.**

*E plane pattern distortion due to array thinning transmitters off: 1 to 8 & 25–32 and tilt: 0 deg.*

#### **Figure 9.**

*E- plane pattern distortion due to array thinning transmitters off: 6, 8, 9, 17, 24, 29, 30 & 32 and tilt: 0 deg.*

**Figure 10.** *3-D Array pattern.*

**Figure 11.**

*3-D power distribution for MST radar antenna array.*

**Figure 12.**

*3-D power distribution for MST radar antenna array transmitters off: 1–8, & 25–32.*

**Figure 13.** *Transmitters off: 6, 8, 9, 17, 24, 29, 30 & 32.*


#### **Figure 14.**

*Radiation power pattern for both principal planes.*

**Figure 15.** *Radiation power pattern for E-plane.*

parameters of the MST radar can be selected; such files have a .dat extension. **Figure 19** shows the file-open window with options such as .m files, .mat files, etc.

#### **Figure 17.**

*Radiation power pattern for an azimuth angle of 30<sup>0</sup> .*

#### **Figure 18.**

*Array factor distortion analysis menu options.*


#### **Figure 19.**

*Array factor distortion analysis file open selection option.*


The display window has the format shown in **Figure 20**: a 6 × 8 matrix. Press the <OK> button to clear the display. The sub-menu allows the user to view as well as enter values of their choice for both the transmitters and the parameters, for further processing. The format for entering the values in the 6 × 8 matrix is shown in **Figure 21**. The Accept button is pressed to process the entered values.

The Config option offers the user two sub-menu options: 1) Tx-Power and 2) Parameters. The Tx-Power option allows the user to change the transmitter power levels with the help of the slider, as shown in **Figure 22**. Tx-Power has four sub-menu options, one for each hut: NH-1, NH-2, SH-1, and SH-2. The user can

#### **Figure 20.**

*Transmitter power ON/OFF status.*

#### **Figure 21.** *Format for entering transmitter power values.*

#### **Figure 22.**

*The transmitter power levels changes in north hut 1(NH 1).*


#### **Figure 23.**

*Option to change the antenna array parameters.*

change the parameters of the antenna array as shown in **Figure 23**, where the operational frequency in MHz, the inter-element distance in wavelengths, the number of rows and columns, and the tilt angle of the beam in degrees can be varied.

#### **5. Conclusions**

From the results obtained in MATLAB, it may be noticed that the degradation in gain and SLL due to the absence of a few low-power transmitters (**Table 4**)


(cases 1–4) is not significant. Surprisingly, the SLL improves with some low-power transmitters off (case 3), where the loss in directive gain is only marginal. Symmetrical thinning gives a higher SLL than asymmetrical thinning (cases 4 and 5). If all the low-power transmitters are off (case 7), the array is similar to a standard radar array (almost uniform distribution), giving an SLL of 13.5 dB.

On the contrary, the absence of high-power transmitters causes the SLL to increase significantly. From cases 8–11 of the table, it is clear that the absence of even two transmitters increases the SLL by 1.5–6 dB. It may be noted that the SLL depends on the positions of the transmitters that are off; the absence of central sub-arrays results in a higher SLL. Cases 12–14 demonstrate that the absence of more than four power transmitters gives an unacceptable SLL (worse than the uniform distribution case). Finally, case 15 represents the real-time status of the radar on a particular operational day, when eight transmitters were not functioning.

The radiation power pattern of the MST radar antenna array is plotted for different azimuth angles. **Table 5** shows the variation of the SLL for the standard configuration of the radar array with different azimuth values. It is observed that the SLL varies considerably with azimuth angle compared with the SLL in the two principal planes. The main focus is on the analysis of the distortion of the array pattern due to thinning.

The following facts can be concluded from the pattern obtained using MATLAB:


The work presented in this report can be further extended to the following cases.


#### **Acknowledgements**

I deem it a privilege to acknowledge my indebtedness to all those who have helped me in completing this investigation. I express my sense of gratitude and thanks to my guides Dr. P Srinivasulu, Engineer 'SG', NMST Radar Facility, Gadanki, and Dr. N C Eswar Reddy, Professor, Sri Venkateswara University, Tirupathi, for their valuable guidance, encouragement, inspiration and cooperation throughout the investigation.


### **Author details**

Nali Dinesh Kumar Vignan Institute of Technology and Science, Affiliated to JNTUH, Hyderabad, India

\*Address all correspondence to: nalidinesh@gmail.com

© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


#### **References**

[1] MST Radar manual, "Characterization of MST Radar Antenna Array", P. Srinivasulu, Scientist-F.

[2] Ricolindo L. Carino and Mark.F. Horstemeyer (July 7th 2016). Case Studies in Using MATLAB to Build Model Calibration Tools for Multiscale Modeling, Applications from Engineering with MATLAB Concepts, Jan Valdman, IntechOpen, DOI: 10.5772/62348.

[3] Bahadır Ergün and Cumhur Şahin (January 6th 2021). Laser Point Cloud Segmentation in MATLAB [Online First], IntechOpen, DOI: 10.5772/intechopen.95249

[4] Constantine A. Balanis, "Antenna Theory analysis and Design", (2nd edition). John Wiley & sons Inc. New York, 2005.

[5] Edward C. Jordon and Keith G. Balman, "Electromagnetic Waves and Radiating Systems", (2nd edition), Prentice Hall of Indian Pvt. Ltd., New Delhi, 1991.

[6] N Dinesh Kumar, "Array Factor Distortion Analysis", Verlag Publisher: LAP publishers, 1st edition.

[7] H. Steyskal, "Simple method for pattern nulling by phase perturbation", IEEE Transaction on Antennas and Propagation, vol.31, pp.163-166, 1983. 10.1109/TAP.1983.1142994

[8] V.Rajya Lakshmi and G.S.N.Raju, "Optimization of radiation patterns of array antennas", PIERS Proceedings, Suzhou, China, pp.1434-1438, 12-16 September 2011

[9] Keen-Keong Yan and Yilong Lu, "Sidelobe reduction in array-pattern synthesis using genetic algorithm", IEEE Transactions on Antennas and Propagation, vol.45, no.7, July 1997. doi:10.1109/8.596902

[10] Haupt, R. L. and Y. Rahmat-Samii, "Antenna array developments: A perspective on the past, present and future," IEEE Antennas and Propagation Magazine, Vol. 57, No. 1, 86–96, 2015. doi:10.1109/MAP.2015.2397154

[11] Farina, A. and L. Timmoneri, "Phased array systems for air, land and naval defence applications in Selex ES," 8th European Conference on Antennas and Propagation (EuCAP), 560–564, Hague, 2014. doi:10.1109/EuCAP.2014.6901818

[12] Ruze, J., "The effect of aperture errors on the antenna radiation pattern," Nuovo Cimento Suppl, Vol. 9, No. 3, 364–380, 1952. doi:10.1007/BF02903409

[13] Wang, C., M. Kang, W. Wang, B. Duan, L. Lin, and L. Ping, "On the performance of array antennas with mechanical distortion errors considering element numbers," International Journal of Electronics, Vol. 104, No. 3, 462–484, 2017. doi:10.1080/00207217.2016.1218064

[14] Wang, C., et al., "Electromechanical coupling based performance evaluation of distorted phased array antennas with random position errors," International Journal of Applied Electromagnetics and Mechanics, Vol. 51, No. 3, 285-295, 2016. doi:10.3233/JAE-150170

[15] Hsiao, J. K., "Array sidelobes, error tolerance, gain and beamwidth," Naval Research Lab Report, 8841, Washington DC. Sep. 28, 1984.

Section 3

## Geometric Segmentation

#### **Chapter 3**

## Laser Point Cloud Segmentation in MATLAB

*Bahadır Ergün and Cumhur Şahin*

#### **Abstract**

Currently, as a result of the massive continuous advancements in laser measurement technology, the possibilities of map production have broadened, loss of time and waste of material resources are largely prevented, and the accuracy and precision of the obtained results are significantly improved. From an engineering point of view, however, the big data that come from laser point clouds are increasingly used in the most significant procedures of surveying studies. Programming methods differ from study to study: as applications demand, the coding procedure must become more efficient, because the volume of data has grown and processing time with it. Coding methods must be optimized to work together, especially in big-data studies. In this section, an automated survey (building facade surveying) is produced from scanning data by means of coding in MatLAB.

**Keywords:** surveying, laser point cloud data, segmentation, object determination, coding in MatLAB

#### **1. Introduction**

Nowadays, laser scanning and modeling technology is extensively used in city documentation and cultural heritage studies, besides the imaging techniques used for global representation in internet survey and navigation applications. From an engineering point of view, however, the big data that come from laser point clouds are increasingly used in the most significant procedures of surveying studies. Programming methods differ from study to study: as applications demand, the coding procedure must become more efficient, because the volume of data has grown and processing time with it. The way big-data processing is modeled on hardware systems also requires consideration; this aspect relates to hardware construction from an electronics and computer engineering point of view, while the mathematical model and algorithm concern surveying and computer programming engineering. In this chapter, Matlab coding models including functional and stochastic properties are suggested and discussed for the operational process of laser scanning data segmentation in surveying studies.

#### **2. Process**

#### **2.1 Data structure**

Laser scanners scan the object in the horizontal and vertical directions under a certain angle as a series of points, and this allows the object to be displayed as a point cloud. In order to determine each of the laser points, scanner-centric polar coordinate measurements are made [1]. The measured quantities are the slant distance to point P, the angle α between the X-axis and the measuring line in the horizontal plane, and the inclination angle φ of the measuring line from the horizontal plane. As illustrated in **Figure 1**, the initial point of a terrestrial laser scanner is considered to be the positioned point. These measurements are based entirely on the scanner's local coordinate system.

The resulting point cloud data are processed in formats that carry the coordinate and angle information: DXF for CAD models, ASCII for surface modeling, VRML for visualization, and txt or pts. Software, which varies with the laser scanner instrumentation, can be used to obtain the point cloud data.

Laser scanners initially obtain the X, Y, Z Cartesian coordinates in a local coordinate system centered at the station point, and then scan the surface of the object. In addition to the three-dimensional coordinates, the resulting data include the density of the returning signal in terms of RGB (red, green, blue) values, depending on the structure of the surface in question and the measurement distance. Modeling of the scanned object and environment becomes easier with the recorded RGB density values. The dense data obtained by scanning are called a point cloud. **Table 1** displays the formation of txt data linked to the point cloud data.
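A minimal sketch of reading such an ASCII/pts data into arrays is shown below (Python/NumPy here, whereas the chapter codes in Matlab; the column order X Y Z intensity R G B and the sample values are assumptions that depend on the export software):

```python
import numpy as np

# Each line of a typical ASCII/pts export: X Y Z intensity R G B
# (the exact column order depends on the scanner software - an assumption here).
sample = """\
1.204 5.772 0.913 -1423 87 92 81
1.206 5.771 0.911 -1398 88 93 82
1.209 5.770 0.915 -1410 86 90 80
"""

rows = np.loadtxt(sample.splitlines())
xyz = rows[:, :3]        # Cartesian coordinates in the scanner's local frame
rgb = rows[:, 4:7]       # recorded colour/density values
print(xyz.shape, rgb.shape)   # (3, 3) (3, 3)
```

In practice the string literal is replaced by `np.loadtxt("scan.txt")` on the exported file.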

Software with target-oriented modules is available to obtain the raw data, to convert the data to a workable format, and to perform the texturing process (if necessary), etc.

**Figure 1.** *TLS local coordinate system.*


#### **Table 1.**

*Point cloud data formation in ASCII format.*

**Figure 2** shows point cloud data represented with RGB density values. A Leica HDS-3000 terrestrial laser scanner was used to scan the point cloud data.

#### **2.2 Programming flow procedure**

During the documentation of the coordinate information of laser scanner point cloud data, there is no regular data order or classification [2]. For segmentation, points with known three-dimensional coordinates must be selected from the whole point cloud. The algorithm is formalized with mathematical surface or point clustering techniques. Planar surfaces, or points carrying depth parameters, can be extracted using various methods [3–9]. In addition, the surface points of the assigned surfaces are filtered. Once the building's planar surfaces are obtained, various methods can be used to extract property boundaries.

The programming flow procedure for Matlab programming is designed as shown in **Figure 3**.

Deciding on the segmentation method is a dynamic process in the Matlab programming flow. Three different segmentation methods are used in this step.

Geometric segmentation is based on the geometric information of the point cloud data, radiometric segmentation on its radiometric information, and hybrid segmentation on all the information in the point cloud data. The

**Figure 2.** *Cyclone 5.2 point cloud image.*

**Figure 3.** *Programming flow procedure.*

mathematical model of these segmentation methods can be based not only on conventional methods but also on expert systems (fuzzy systems, SVM, etc.), as shown in **Figure 4**.

#### **3. Examples of geometric segmentation in Matlab**

#### **3.1 Point segmentation**

The algorithm of this study, which aims at filtering the laser point cloud data of parallel surfaces in indoor areas with the help of a filtering function, is shown in **Figure 5** [10].

The first step of the filtering function algorithm is the selection, by an operator, of one of the parallel indoor surfaces as the reference plane.

Distinct surfaces define the point cloud data, as illustrated in **Figure 6**. Indoor areas are generally defined by planar surfaces such as walls. **Figure 6** displays the point cloud data and various surface structures in a three-dimensional coordinate system.

The mathematical function that represents a plane surface is given in Eq. (1).

$$Ax + By + Cz + D = 0\tag{1}$$

Eq. (1) shows that the surface function consists of four parameters: A, B, C, D. The selected reference plane surface can be expressed mathematically by calculating

**Figure 5.**

*Geometric segmentation steps.*

#### **Figure 6.**

*Point cloud data structure in 3D coordinate system.*

these four parameters. An operator must read and manually enter the selected point coordinates of the reference plane; the parameters of the selected reference plane are determined in this way. A plane can be mathematically defined by four parameters, so the reference plane could in principle be defined with only four points. In practice, the operator selects more than four points on the same reference plane, and the parameters of the reference plane are determined in an adjustment process. Second, the operator manually enters the threshold value, which can be defined as the minimum depth difference considered during the filter operation. All the operations other than these two stages of the algorithm are performed automatically by Matlab-based software.

Calculating the parameters of the selected reference plane is the second step of the geometric segmentation algorithm. Once the operator manually chooses multiple (five or more) points in the first step, the Matlab-based interface automatically determines, in an adjustment computation, the four parameters that represent the reference plane. Thus, the adjusted reference plane is created in this step.
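This adjustment of the four plane parameters from five or more selected points can be sketched as a least-squares fit. The NumPy version below uses an SVD in place of the chapter's Matlab adjustment routine, and the five "operator-selected" points are invented test data:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane Ax + By + Cz + D = 0 through >= 4 points.
    Returns unit-normal parameters (A, B, C) and the offset D."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The plane normal is the direction of least variance of the
    # centred points: the last right-singular vector of the SVD.
    _, _, vt = np.linalg.svd(pts - centroid)
    a, b, c = vt[-1]
    d = -np.dot(vt[-1], centroid)
    return a, b, c, d

# Five operator-selected points, almost coplanar (hypothetical wall at z = 2)
pts = [(0, 0, 2.00), (1, 0, 2.01), (0, 1, 1.99), (1, 1, 2.00), (0.5, 0.5, 2.00)]
a, b, c, d = fit_plane(pts)
print(round(abs(c), 2), round(abs(d) / abs(c), 2))   # 1.0 2.0 -> plane z = 2
```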

The third step of the algorithm is to calculate, from the parameters obtained in the adjustment computation, the distances of all points in the laser point cloud to the reference plane.

**Figure 7** illustrates the distance of a point to a plane. The distances of all points of the laser point cloud to the adjusted reference plane are calculated with Eq. (2), which is the filtering function for the segmentation.

$$h = \frac{|Ax_1 + By_1 + Cz_1 + D|}{\sqrt{A^2 + B^2 + C^2}}\tag{2}$$

where

h: Distance of a laser point to the reference plane,

A, B, C, D: Parameters of the adjusted reference plane,

x1, y1, z1: Each laser point's three dimensional coordinates.

These distances are appended, in vector form, to the segmentation matrix S as a column. For each point in the segmentation filtering, the point distance to the reference plane is calculated separately. The Matlab operating environment is defined in the format of this matrix, which is therefore called the "segmentation matrix". Column-wise algorithms are used to make the calculations, and the final column vector provides the classification of the points on the surfaces. The first column of the segmentation matrix, shown in **Figure 8**, is the X value of the points and the second column is the Y value. The third column represents the Z value in the terrestrial laser scanning point cloud data. The fourth column holds the point distance to the reference plane. The fifth column displays the statistical

**Figure 7.** *Distance of a point to a plane.*


**Figure 8.**

*Matrix form of segmentation algorithm in Matlab software.*

differences in terms of distances. The sixth column holds the exponential values of the differences, and the seventh column holds the surface assigned to each point.

The fourth step of the algorithm is the segmentation (classification) step, which operates on the statistical column vectors created above. In this step, the number of surfaces in the vector must be detected, and this is determined from the amount of statistical deviation: the standard deviation drives the statistical analysis, and the value it is compared against is the threshold value, i.e., the minimum depth difference in question. Points whose depth differences are smaller than this minimum are taken to lie on the same surface. In the statistical analysis (5th column) the total number of distinct surfaces is detected, and all distances are mapped through an exponential function (6th column), which shifts them to positive values; this produces the data used to carry out the classification step. The exponential of the value is taken because a point's distance to the plane can be negative when the point lies on the other side of the plane; consequently, a point falling between two surfaces is assigned to the surface with the smaller absolute difference. With reference to this value, each laser point is assigned to a surface and recorded in that surface's matrix (7th column).
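The classification logic of this step can be sketched as follows (an illustrative Python simplification; the equal-width binning and the spread-based surface count are assumptions of ours — the chapter's Matlab code in Appendix A uses hand-tuned thresholds instead):

```python
import math

def classify_by_depth(distances):
    """Illustrative sketch of the fourth (classification) step: each
    point-to-plane distance is normalized by the mean distance, shifted to a
    strictly positive scale with exp(), and binned into surfaces.  The
    surface-count heuristic and equal-width bins are simplified assumptions,
    not the authors' exact thresholds."""
    mean_d = sum(distances) / len(distances)              # 5th column: statistics
    exp_vals = [math.exp(d / mean_d) for d in distances]  # 6th column: exp values
    lo, hi = min(exp_vals), max(exp_vals)
    n_surfaces = max(round(hi - lo), 1)                   # spread -> surface count
    width = (hi - lo) / n_surfaces or 1.0
    # 7th column: 1-based bin (surface) index of each point
    labels = [min(int((v - lo) / width), n_surfaces - 1) + 1 for v in exp_vals]
    return exp_vals, labels

# Two depth groups (~0 and ~10 units from the reference plane)
exp_vals, labels = classify_by_depth([0.1, 0.2, 10.0, 10.1])
```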

In the final step of the segmentation algorithm, four boundary points are defined for each surface from the laser scanning points assigned to its surface matrix. For this purpose, the minimum and maximum x, y plane coordinate values of the points in the surface matrix are used: the corners of each segment of the original segmented laser point cloud data are determined from the minX-minY, minX-maxY, maxX-minY, and maxX-maxY values. Thus, the boundary points of each surface are obtained in x, y plane coordinates. The original values, however, are not used as the height of these corner points: if each surface were created with Z values taken directly from the laser scanning data at its four corners, the surfaces would not all be parallel to each other. Instead, the height assigned to the four corner points of a segment is the average Z (height) value of all points assigned to that surface. Therefore, the Z value of the four corner points of each surface is the same average Z value. This can be seen in **Figure 9** [10].
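This corner-reduction step can be sketched as follows (illustrative Python; `segment_corners` is a hypothetical helper, not from the chapter's code):

```python
def segment_corners(points):
    """Reduce a segment's points (x, y, z) to four boundary corners
    (minX-minY, minX-maxY, maxX-minY, maxX-maxY) whose shared Z value is
    the mean height of all points in the segment."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    z_mean = sum(p[2] for p in points) / len(points)
    return [(min(xs), min(ys), z_mean),
            (min(xs), max(ys), z_mean),
            (max(xs), min(ys), z_mean),
            (max(xs), max(ys), z_mean)]

corners = segment_corners([(0, 0, 1.0), (2, 1, 3.0), (1, 3, 2.0)])
```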

At the end of the segmentation process, each surface with thousands of points is converted to a plane containing only four points, whose height is the average of all points in the segment. In order to test the filtering algorithm, the laser point cloud of a

**Figure 9.** *Four corners of the surfaces obtained by segmentation.*

classroom was scanned with a Leica HDS 3000 at GYTE (presented in **Figure 10**). The scanning resolution in this study is 5 mm.

After the data, exported in txt format from the CYCLONE (Leica) software, are obtained, an affine conversion is carried out to give the point cloud data a depth (Z) component, so that the operator can determine the reference surface more easily. An original data set composed of four different surfaces and 21,932 points is selected. There are significant depth differences, ranging from 1 cm to 20 cm, among the surfaces of doors, borders, walls and columns.

When the indoor space in **Figure 11** is examined, it is seen that surface 1 is the reference plane, which enables the testing of the filtering function on four plane

**Figure 10.** *Application site.*

**Figure 11.** *Application data. (four indoor plane surfaces with different depths).*

surfaces that are parallel to each other. **Figure 11** shows that surface 1 is the backmost and surface 4 the foremost of the surfaces. The operator chooses five points on the reference surface in the Cyclone screen in order to calculate the equation parameters of the plane. The threshold value (minimum depth difference) of the test data is 1 cm. Using the Matlab-based software, the 21,932 three-dimensional laser points are classified into four separate surfaces. This classification can be seen in **Figure 12**.

**Figure 13** shows the graph of the exponential values. Of the 21,932 original laser scanning points divided into four distinct classes, there are 3906 points in the first segment, 6588 in the second, 1951 in the third, and 9487 in the fourth (5 of which belong to surface 1). **Figure 14** presents the surfaces for each of the laser point cloud data, classified into the four classes. **Figure 14** shows that the depth difference between the second and third plane surfaces is nearly 1 cm, the threshold value; the filter nevertheless segments these two distinct surfaces without any errors. While assigning all the points of the laser point cloud to plane surfaces, it is important to record them to their respective surface matrices: the points within the matrix of each surface make it possible to find the X, Y plane coordinate values of the four corner points of the plane surface. Within each plane surface, a single Z value is assigned to all four points.

**Figure 12.** *Application data in Matlab software.*

**Figure 13.** *Segmentation clustering graph.*

The crucial consideration here is that the choice of reference surface does not impose any limitation: the operator decides which surface to define as the reference plane. In the suggested algorithm, the operator can select any surface as the reference plane, be it the closest, the farthest, or any surface in between. Exponential values are used when creating the segmentation matrix (matrix S): distances between points and the reference plane can be either negative or positive, but since exponential values are used, this does not change the results of the segmentation or classification. Thus, negative or positive distances to the reference plane do not cause any segmentation error; the sign only indicates the side of the plane on which a point lies and does not affect the result. For example, if the reference plane is the foremost surface (surface 4) and five points are selected on that surface, the first segment includes 3906 points, the second segment 6542 points, the third segment 1997 points, and the fourth segment 9487 points (the 5 selected points now originating from surface 4) [10]. The Matlab coding flowchart for this application is given in **Figure 15**.

The Matlab code of the flowchart in **Figure 15** is given in Appendix A.

**Figure 15.** *Flowchart of Matlab coding for point segmentation.*

#### **3.2 Object segmentation**

The aim of this application is to obtain, by geometric segmentation, the corner coordinate values of window gaps that are not present in the mobile laser scanning data, in order to use these values for automated visualization of the building facade survey. The sharp corners of the windows are both natural points and target points for the vertical conversion. For this reason, laser point cloud data are used to detect these points by geometric segmentation. The window corner points obtained by this segmentation are then compared numerically with the values in the original point cloud data in order to perform the analysis and obtain the results of the study.

The point cloud data used in the study were obtained from a measurement made in the Balgat district of Ankara with a vehicle from the inventory of the TOPCON Company, equipped for mobile mapping, at a speed ranging from 10 km/h to 20 km/h. A Ladybug-5 camera was used to produce the photogrammetric data. The measuring equipment is shown in **Figure 16** [11].

The point cloud data of the study area, collected with the mobile laser scanner, are shown in **Figure 17**.

In this application, the data obtained from the Cyclone program were saved into the Tamyatay.txt file. Then, in order to distinguish the building points from the ground points on which the survey process was performed, the points up to 35 cm above the lowest point of the ground were identified in the raw data, and the classification process was started.

The ground criterion was determined by the following Eq. (3) [12]:

$$Z_1 = Z_{\min} + z_k \tag{3}$$

Here, the ground height value *zk* (ground thickness), which the code asks the user to enter, can be changed. In order to distinguish the points more easily in this study, 35 cm was taken as the most appropriate value. Before starting the classification process, the scanned point clouds were plotted in Matlab as shown in **Figure 18**.

To perform the classification, first, a class value of 1 was assigned to the class column of all points. Then, a class value of 0 was assigned to points whose Z value was less than the ground criterion. The ground points matrix Matrix\_Ground was created by selecting the points assigned the value 0, and its graph was plotted. The road's center axis (Axisy, Axisx) was determined using the points having maximum

**Figure 16.** *The vehicle of application which was used to obtain point cloud data.*

**Figure 17.** *Mobil point cloud data of application Array.*

#### **Figure 18.**

*Three-dimensional laser point cloud data in Matlab.*

and minimum X and Y values from the ground points. When the center axis of the road was determined, the axis line was drawn at the ground height level *Z*<sub>1</sub>.
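The ground/building split of Eq. (3) can be sketched as follows (illustrative Python; the function and variable names are ours):

```python
def split_ground_building(points, zk=0.35):
    """Sketch of the ground criterion in Eq. (3): points with
    Z <= Z1 = Zmin + zk are classed as ground, the rest as building.
    zk = 0.35 m is the ground-thickness value used in the study."""
    z1 = min(p[2] for p in points) + zk
    ground = [p for p in points if p[2] <= z1]
    building = [p for p in points if p[2] > z1]
    return ground, building, z1

ground, building, z1 = split_ground_building(
    [(0, 0, 0.00), (1, 0, 0.10), (2, 0, 0.30), (0, 1, 2.50), (1, 1, 5.00)])
```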

These processes were calculated with the following Eqs. (4) and (5):

$$\text{Axis}_y = \text{Ground}_{y_{\min}} + \left(\frac{\text{Ground}_{y_{\max}} - \text{Ground}_{y_{\min}}}{2}\right) \tag{4}$$

$$\text{Axis}_x = \text{Ground}_{x_{\min}} + \left(\frac{\text{Ground}_{x_{\max}} - \text{Ground}_{x_{\min}}}{2}\right) \tag{5}$$
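Eqs. (4) and (5) amount to taking the midpoints of the ground points' extents (illustrative Python sketch; names are ours):

```python
def road_center_axis(ground_points):
    """Eqs. (4) and (5): the road center axis is the midpoint of the
    ground points' extents in X and Y."""
    xs = [p[0] for p in ground_points]
    ys = [p[1] for p in ground_points]
    axis_x = min(xs) + (max(xs) - min(xs)) / 2
    axis_y = min(ys) + (max(ys) - min(ys)) / 2
    return axis_x, axis_y

axis_x, axis_y = road_center_axis([(0, 2, 0), (4, 10, 0), (2, 6, 0)])
```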

Apart from the points assigned the class value 0 (ground points), the points assigned the class value 1 (building points) were saved as the Matrix\_Building\_seg matrix, and its graph was plotted in **Figure 19**.

The building points were separated using the 35 cm ground height, but a new classification was made on the basis of the road center axis mentioned above to determine whether the building was on the left or the right side of the axis. For this purpose, a value of 1 was assigned to points smaller than *Axisx* (the depth of the road axis) and a value of 2 to points greater than *Axisx*. The points assigned the value 1 form the Matrix\_Building\_Left matrix, and the points assigned the value 2 form the Matrix\_Building\_Right matrix.

It is assumed that the building lies on the side with the higher number of points. The numbers of points in the Matrix\_Building\_Left and Matrix\_Building\_Right matrices were therefore compared, and the values of the Matrix\_Building\_Left matrix were written into the Matrix\_Building\_Segment matrix, in which the building points are completely separated; the building points were then plotted. Also, even though such points were not present in our data, if there had been points exceeding the height limit (*Z*<sub>1</sub>) on the right side of the road center axis, this classification step would have been necessary to make a complete distinction of the building points in **Figure 19** as well.

After the building points were determined, they were transferred onto a reference surface in order to draw the building facade survey. A mathematical filtering function was applied at this stage.

The segmentation works on the basis of surface-dependent height differences within the maximums and minimums of the surface function that forms the mathematical model. The surface equation is expressed as Eq. (6) [13]:

$$ax_n + by_n + cz_n + d = 0 \tag{6}$$

where n is the number of points.

An exponential filtering method is used as the filtering function. The filtering function is given by Eq. (7):

$$f(x) = e^{\Delta X_{Building}} = \frac{1}{e^{\Delta_{Depth}}} \tag{7}$$

where *ΔXBuilding* and *ΔDepth* are calculated with the following Eqs. (8) and (9):

$$\Delta X_{Building} = \frac{d + bZ_{Building} + aY_{Building}}{c} \tag{8}$$

**Figure 19.** *Road center Axis and point cloud data without point cloud of road in Matlab.*

$$\Delta_{Depth} = X_{Building} - X_{\max} \tag{9}$$

Here *a, b, c, d* are the plane equation parameters, *ΔXBuilding* is the depth of a building point with respect to the reference surface, *ΔDepth* is the depth difference of a building point to the reference surface, *e*<sup>ΔXBuilding</sup> is the result value of the function, *e*<sup>ΔDepth</sup> is the exponential value of the depth difference, and *Axisy* is the y value of the road center axis.
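Eqs. (7)–(9) can be sketched as follows (illustrative Python, following the signs as printed in the equations; note that the appendix code negates the numerator of Eq. (8), so this is a sketch of the printed formulas, not of the Matlab listing):

```python
import math

def building_depth_filter(a, b, c, d, y_b, z_b, x_b, x_max):
    """Sketch of Eqs. (7)-(9): depth of a building point with respect to the
    reference plane and its exponential filter value."""
    dx_building = (d + b * z_b + a * y_b) / c   # Eq. (8), signs as printed
    delta_depth = x_b - x_max                   # Eq. (9)
    f = 1.0 / math.exp(delta_depth)             # Eq. (7)
    return dx_building, delta_depth, f

dx, dd, f = building_depth_filter(0.0, 0.0, 1.0, -2.0, 1.0, 3.0, 6.0, 10.0)
```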

For the segmentation process, the building reference surface points were determined first. Then, the limits of the reference surface were resolved using the minimum and maximum Z (height) and Y (length) values of the building points, and a rectangular surface was created. Only the maximum value of X (depth) was used. This is because the bottom window (Window 1) of the building, which was to be transferred to the surface, is located further back than the other three windows; moreover, if the minimum value of X were taken, the points behind the bottom window, with their smaller depth values, might interfere with the bottom window and building points. The specified reference surface points, written into a Building\_Reference\_Surface\_Points.txt file, are given in **Table 2**. The surface graphic is shown in **Figure 20**.

After the reference surface points are determined, the reference plane parameters of the building are calculated using these points. The reference surface drawn in Matlab is shown in **Figure 21**; the PRight matrix is then created, plotted, and written into a Building\_Reference\_Plane\_Par.txt file.


**Table 2.** *Building reference surface points.*

**Figure 20.** *Building reference surface points.*

#### **Figure 21.** *Reference surface.*

The following Eqs. (10–13) are used in the process of adjustment:

$$I = [1\ 1\ 1\ 1]^T \tag{10}$$

$$N = BY^T BY \tag{11}$$

$$n = N^{-1} I \tag{12}$$

$$X = N^{-1} I \tag{13}$$

Here, *I* is the 4 × 1 unit (ones) vector, *BY* is the matrix of the detected maximum and minimum surface points, *N* is the normal equation coefficient matrix, *n* is the constant terms vector, and *X* is the matrix of unknowns (the surface parameters), given in **Table 3**.
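The adjustment can be approximated by a standard total-least-squares plane fit. The sketch below (Python with NumPy) uses an SVD rather than the authors' normal-equation form of Eqs. (10)–(13), so it is an equivalent alternative under that assumption, not their exact computation:

```python
import numpy as np

def fit_plane(points):
    """Total-least-squares plane fit: returns (a, b, c, d) with
    a*x + b*y + c*z + d = 0 and unit normal (a, b, c)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The plane normal is the singular vector of the centered points
    # associated with the smallest singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    d = -normal @ centroid
    return normal[0], normal[1], normal[2], d

# Corner points of the vertical plane x = 5 (constant depth)
a, b, c, d = fit_plane([(5, 0, 0), (5, 1, 0), (5, 0, 1), (5, 1, 1)])
```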

Using the calculated parameters and the result values of the function (*e*<sup>ΔXBuilding</sup>), the Z and Y values of the building points are transferred to the reference plane. The X depth value (*SurfaceDepth*) of the reference plane is calculated by averaging the maximum and minimum X values, and a *SurfaceFilter* matrix, with one row per building point, is created using the unit vector. These processes are performed with the following Eqs. (14) and (15):

$$\text{Surface}_{Depth} = \frac{\text{Surface}_{X_{\max}} + \text{Surface}_{X_{\min}}}{2} \tag{14}$$

$$\text{Surface}_{Filter} = \text{Surface}_{Depth} \cdot I_{Unit} \tag{15}$$
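Eqs. (14) and (15) reduce to a midpoint depth replicated for every building point (illustrative Python; names are ours):

```python
def surface_filter(x_max, x_min, n_building_points):
    """Eqs. (14)-(15): the reference plane's depth is the mean of the extreme
    X values, replicated once per building point (the unit-vector product)."""
    surface_depth = (x_max + x_min) / 2
    return [surface_depth] * n_building_points

filt = surface_filter(10.0, 4.0, 5)  # five points, all at depth 7.0
```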

To use the coordinate values obtained by mathematical filtering in the Geometric.m code, a new point cloud matrix was created and written to the


**Table 3.** *Reference plane parameters.*

Geometric\_Filter\_Input.txt file for the reference surface. The distance between points was set to 25 cm, the Geometric.m code was executed, and 180 new points for the openings were written and graphed by creating the opening output file. As a result of this process, the window openings were filled vertically. These segmentation points are shown in **Figure 22**.

Since the gaps were calculated in blocks by the gap filtering in Matlab, a reclassification was performed to determine which window the points belonged to, using a function similar to the exponential filter function used in transferring the building points to the reference plane. The windows drawn using the coordinate values obtained from the point segmentation (distance between points: 14.5 cm) are shown in red, those from the surface segmentation in green, and those from the conventional segmentation in magenta in **Figures 23** and **24**.

To assign these points, the OP\_Pen matrix, with the same number of lines as there are building points and filled with NaN values, was created. Then, points within the 120 cm border were selected from the Matlab code matrix and written into the OP\_Pen matrix. In order to distinguish these assigned values from the NaN values, the snip.m code was run once more. The points to be optimized for each window were differentiated, and the files OP\_PP1.txt, OP\_PP2.txt, OP\_PP3.txt, OP\_PP4.txt, which

**Figure 22.**

*Window points after geometric segmentation.*

**Figure 23.** *Windows survey via two methods in MatLab.*

#### **Figure 24.**

*Window gaps result of segmentation methods in survey point cloud in MatLab for window gaps.*

#### **Figure 25.**

*Flowchart of Matlab coding for object segmentation.*

had 2132, 646, 624, and 383 points respectively, were created, and the data points to be segmented were plotted in **Figure 24**.

The Matlab coding flowchart for this application is given in **Figure 25**. The Matlab code of the flowchart in **Figure 25** is given in Appendix B.

#### **4. Conclusion**

This chapter shows that point cloud data require a fundamental process flow, namely the fundamental steps of geometric segmentation in Matlab programming. Matlab programming relies on various algorithms built on data structures, and geometric segmentation methods for point cloud data should be based on these algorithms. Process examples and some tips are given through the Matlab code in this chapter, which implements several point cloud

segmentation flowcharts, and an approach to solving fundamental geometric segmentation is presented. Generally, the coordinate system of point cloud data is not defined in a universal coordinate system. Thus, the point cloud data listed without order by the Cyclone 5.2 software are classified in a surface-based way, passing through a geometric segmentation before transformation to a universal coordinate system.

In potential upcoming studies, radiometric and hybrid segmentation methods might be used in Matlab coding. More accurate results might be obtained by using a method other than the geometric segmentation technique, or by adding a third filter to the geometric filtering or the other segmentation methods mentioned in this chapter.

#### **Appendix A**

```
clear all;
clc;
format long g;
load 'YUZEY1.txt';
load 'ham_data.txt';
I=ones(4,1);
N=transpose(YUZEY1)*(YUZEY1);
n=inv(N)*I;
X=inv(N)*(I);
[t,k]=size(ham_data);
i=1;
j=1;
matris_X=ones(t,1);
matris_Y=ones(t,1);
matris_Z=ones(t,1);
for j=1:k;
  i=1:t;
    c=ham_data(i,j);
    if j==1
       matris_X(i,1)=c;
    elseif j==2
         matris_Y(i,1)=c;
       elseif j==3
            matris_Z(i,1)=c;
         end
end
a=X(1,1);
b=X(2,1);
c=X(3,1);
d=X(4,1);
T=[ham_data];
[ns,ss]=size(T);
for i=1:ns;
```

```
Matris_Oran(i,1) =(abs(a*T(i,1)+ b*T(i,2)+ c*T(i,3)+ d))/sqrt(a*a+b*b+c*c);
end
Kesme_Kriteri = std(Matris_Oran);
%—————————————————————————————————
%—————————————————————————————————
%—————————————————————————————————
kes = Kesme_Kriteri;
gebze = max(Matris_Oran);
cayir = min(Matris_Oran);
yuksek = mean(Matris_Oran);
%sayi = ((Ayrac-1)/kes);
for i=1:ns;
 Matris_YOran(i,1) =(Matris_Oran(i,1)/yuksek);
end
ayar1 = max(Matris_YOran);
ayar2 = min(Matris_YOran);
ayar3 = mean(Matris_YOran);
ayar4 = std(Matris_YOran);
for i=1:ns;
 Matris_KOran(i,1) =exp(Matris_YOran(i,1));
end
dayar1 = max(Matris_KOran);
dayar2 = min(Matris_KOran);
dayar3 = mean(Matris_KOran);
dayar4 = std(Matris_KOran);
%determining the number of surfaces—————————————————————
say = dayar1 - dayar2;
say = round (say);
yuzey_sayisi= say;
%——cut-off criterion and classification——————————————
kes=dayar2 + 0.25;
for i=1:ns;
  if (Matris_KOran(i,1) < kes)
             Matris_KEGIT(i,1) = 1;
  else
             Matris_KEGIT(i,1) = 0;
 end
end
for i=1:ns;
 if ((2.25 < Matris_KOran(i,1))&&( Matris_KOran(i,1) < 3.25))
         Matris_KEGIT(i,1) = 2;
```

```
end
end
for i=1:ns;
 if ((3.25 <= Matris_KOran(i,1))&&( Matris_KOran(i,1) < 4.25))
        Matris_KEGIT(i,1) = 3;
 end
end
for i=1:ns;
 if ((4.25 <= Matris_KOran(i,1))&&( Matris_KOran(i,1) < 5.50))
        Matris_KEGIT(i,1) = 4;
 end
end
%—————————————————————————————————
%—————————————————————————————————
KY=ham_data(:,2);
KX=ham_data(:,1); KZ=ham_data(:,3); %Fig1=plot(KX,KY,KZ,'-rs');

T=[T,Matris_Oran,Matris_YOran,Matris_KOran,Matris_KEGIT];
Fig2 = plot(Matris_KOran,'-.or');
```
#### **Appendix B**

```
clc; clear; format long g;

%Point cloud data loading
[dosyaadi,dosyayolu] = uigetfile(...
  {'*.dat;*.txt;*.xyz;*.pts','Lazer Veri Dosyaları...(*.dat,*.txt,*.xyz,*.pts)';
  '*.dat', 'Data_Dosyalar (*.dat)';...
  '*.txt', 'Txt_Dosyalar (*.txt)';...
  '*.xyz', 'Nokta_Dosyalar (*.xyz)';...
  '*.pts', 'Nokta Bulutu_Dosyalar (*.pts)';
  '*.*', 'Tüm Dosyalar (*.*)'},...
  'Bir Lazer Tarama Veri Dosyası Seçiniz:');
 if dosyaadi~=0
h=waitbar(0,'Lazer Verisi Yükleniyor');
for i=1:10
 Ham_veri_Matris = load([dosyayolu,dosyaadi]);
 [ns, ss] = size(Ham_veri_Matris);
```

```
waitbar(i/10,h);
end
 close(h);
end
 %Segmentation starting
 X =Ham_veri_Matris(:,1);
Y =Ham_veri_Matris(:,2);
Z =Ham_veri_Matris(:,3);
 figure('Name','3D Laser Point Cloud','NumberTitle','on')
scatter3(X,Y,Z,'.');
 str1=num2str(ns);
uiwait(msgbox({'Toplam Tarama Nokta Sayısı',[str1]},'Success'));
fprintf('Toplam Tarama Nokta Sayısı %d\n',ns);
 matris_sinif=ones(ns,1);
Matris_segmentation = [X,Y,Z,matris_sinif];
 %Elevetion extraction
zemin=min(Z);
 %Elevation determined
 ifade={'Zemin Yüksekliği Değerini Giriniz !'};
baslik='Zemin Kalınlığı (birim m)';
normal={'0.35'};
zemin_kln=inputdlg(ifade,baslik,1,normal);
zemin_kln=str2double(zemin_kln);
Z1= zemin+zemin_kln;
 k=0;
for i=1:ns
    if (Ham_veri_Matris(i,3) <=Z1)
          Matris_segmentation(i,4)=0;
    k=k+1;
   end
end
for i=1:ns
  if (Matris_segmentation(i,4)== 0)
     Matris_Yer(i,1)=Ham_veri_Matris(i,1);
    Matris_Yer(i,2)=Ham_veri_Matris(i,2);
    Matris_Yer(i,3)=Ham_veri_Matris(i,3);
  end
end
```

*Laser Point Cloud Segmentation in MATLAB. DOI: http://dx.doi.org/10.5772/intechopen.95249*

```
Yer_X=Matris_Yer(:,1);
Yer_Y=Matris_Yer(:,2);
Yer_Z=Matris_Yer(:,3);
 Yer_X = nonzeros(Yer_X);
Yer_Y = nonzeros(Yer_Y);
Yer_Z = nonzeros(Yer_Z);
 Matris_Yer=[Yer_X Yer_Y Yer_Z];
 [yns,yss]=size(Matris_Yer);
 figure('Name','Zemin Noktaları','NumberTitle','on')
scatter3(Yer_X,Yer_Y,Yer_Z,'.');
hold on;
 str2=num2str(yns);
msgbox({'Zemin Noktası Sayısı',[str2]},'Success');
str3=num2str(ns-yns);
msgbox({'Bina Noktası Sayısı',[str3]},'Success');
 fprintf('Zemin Noktası Sayısı %d\n',yns);
fprintf('Bina Noktası Sayısı %d\n',ns-yns);
 %Base axes determination
 Yer_y_min=min(Yer_Y);
Yer_x_min=min(Yer_X);
Yer_y_max=max(Yer_Y);
Yer_x_max=max(Yer_X);
 Eksen_y=Yer_y_min+((Yer_y_max-Yer_y_min)/2);
Eksen_x=Yer_x_min+((Yer_x_max-Yer_x_min)/2);
Line1=[Eksen_x Yer_y_min];
Line2=[Eksen_x Yer_y_max];
Point_X=[Eksen_x
     Eksen_x];
 Point_Y=[Yer_y_min
     Yer_y_max];
Point_Z=[Z1
     Z1];
 %Segmentation matrix refreshing
 Matris_seg=zeros(size(Matris_segmentation));
k=0;
t=0;
for i=1:ns
 if (Matris_segmentation(i,4)==1)
```

```
Matris_seg(i,1)= Matris_segmentation(i,1);
Matris_seg(i,2)= Matris_segmentation(i,2);
Matris_seg(i,3)= Matris_segmentation(i,3);
```

```
Matris_seg(i,4)= Matris_segmentation(i,4);
    k=k+1;
   else if (Matris_segmentation(i,4)==0)
    Matris_seg(i,1)= 0;
    Matris_seg(i,2)= 0;
    Matris_seg(i,3)= 0;
    Matris_seg(i,4)= 0;
    t=t+1;
   end
  end
end
 %Building Points determinated
 Bina_X=nonzeros (Matris_seg(:,1));
Bina_Y=nonzeros (Matris_seg(:,2));
Bina_Z=nonzeros (Matris_seg(:,3));
 Matris_Bina=[Bina_X,Bina_Y,Bina_Z];
 [bns,bss]=size(Matris_Bina);
 sinif=ones(bns,1);
 Matris_Bina_seg=[Matris_Bina,sinif];
 %Building Points drawing
 figure('Name','Bina Noktaları','NumberTitle','on')
 scatter3(Bina_X,Bina_Y,Bina_Z,'.');hold on
plot3(Point_X,Point_Y,Point_Z, '-.o');
hold on
grid on
 %Building Points segmented
 sol=0;
sag=0;
for i=1:bns
    if (Matris_Bina(i,1) < Eksen_x )
        Matris_Bina_seg(i,4)= 1;
        sol=sol+1;
   else if (Matris_Bina(i,1) > Eksen_x )
        Matris_Bina_seg(i,4)= 2;
        sag=sag+1;
       end
  end
end
```

```
Matris_Bina_sol=zeros(bns,3);
Matris_Bina_sag=zeros(bns,3);
for i=1:bns
       if (Matris_Bina_seg(i,4)==2)
       Matris_Bina_sag(i,1)= Matris_Bina_seg(i,1);
       Matris_Bina_sag(i,2)= Matris_Bina_seg(i,2);
       Matris_Bina_sag(i,3)= Matris_Bina_seg(i,3);
       else
       Matris_Bina_sag(i,1)= 0;
       Matris_Bina_sag(i,2)= 0;
       Matris_Bina_sag(i,3)= 0;
       end
end
 Matris_Bina_Sag_X=Matris_Bina_sag(:,1);
Matris_Bina_Sag_Y=Matris_Bina_sag(:,2);
Matris_Bina_Sag_Z=Matris_Bina_sag(:,3);
 Matris_Bina_Sag_X=nonzeros(Matris_Bina_Sag_X);
Matris_Bina_Sag_Y=nonzeros(Matris_Bina_Sag_Y);
Matris_Bina_Sag_Z=nonzeros(Matris_Bina_Sag_Z);
Matris_Bina_Sag=[Matris_Bina_Sag_X,Matris_Bina_Sag_Y,Matris_Bina_Sag_Z];
for i=1:bns
      if (Matris_Bina_seg(i,4)==1)
      Matris_Bina_sol(i,1)= Matris_Bina_seg(i,1);
      Matris_Bina_sol(i,2)= Matris_Bina_seg(i,2);
      Matris_Bina_sol(i,3)= Matris_Bina_seg(i,3);
      else
      Matris_Bina_sol(i,1)= 0;
      Matris_Bina_sol(i,2)= 0;
      Matris_Bina_sol(i,3)= 0;
      end
end
 Matris_Bina_Sol_X=Matris_Bina_sol(:,1);
Matris_Bina_Sol_Y=Matris_Bina_sol(:,2);
Matris_Bina_Sol_Z=Matris_Bina_sol(:,3);
 Matris_Bina_Sol_X=nonzeros(Matris_Bina_Sol_X);
Matris_Bina_Sol_Y=nonzeros(Matris_Bina_Sol_Y);
Matris_Bina_Sol_Z=nonzeros(Matris_Bina_Sol_Z);
 Matris_Bina_Sol=[Matris_Bina_Sol_X,Matris_Bina_Sol_Y,Matris_Bina_Sol_Z];
 %Left side – right side deciding
 [bnsol,bnssol] = size(Matris_Bina_Sol);
[bnsag,bnssag] = size(Matris_Bina_Sag);
 if bnsol > bnsag
  msgbox('Bina, yol eksenine göre Sol taraftadır !','Success');
  [bina,sut_bina]=size(Matris_Bina_Sol);
Matris_sinif=ones(bina,1);
Matris_Bina_Segment=[Matris_Bina_Sol,Matris_sinif];
%Segmented Building Points drawing
figure('Name','Bina Noktaları','NumberTitle','on')
scatter3(Matris_Bina_Sol_X,Matris_Bina_Sol_Y,Matris_Bina_Sol_Z,'.');
hold on;
else
msgbox('Bina, yol eksenine göre Sağ taraftadır !','Success');
[bina,sut_bina]=size(Matris_Bina_Sag);
Matris_sinif=2*ones(bina,1);
Matris_Bina_Segment=[Matris_Bina_Sag,Matris_sinif];
end

%Reference surface determination
Bina_yuzey_X_max = max(Matris_Bina_Segment(:,1));
Bina_yuzey_X_min = min(Matris_Bina_Segment(:,1));
Bina_yuzey_Z_max = max(Matris_Bina_Segment(:,3));
Bina_yuzey_Z_min = min(Matris_Bina_Segment(:,3));
Bina_yuzey_Y_max = max(Matris_Bina_Segment(:,2));
Bina_yuzey_Y_min = min(Matris_Bina_Segment(:,2));
```

```
BY1=[Bina_yuzey_Z_max,Bina_yuzey_Y_max,Bina_yuzey_X_max,1];
BY2=[Bina_yuzey_Z_max,Bina_yuzey_Y_min,Bina_yuzey_X_max,1];
BY3=[Bina_yuzey_Z_min,Bina_yuzey_Y_max,Bina_yuzey_X_max,1];
BY4=[Bina_yuzey_Z_min,Bina_yuzey_Y_min,Bina_yuzey_X_max,1];
```

```
BG1=[Bina_yuzey_Z_max,Bina_yuzey_Y_max,Bina_yuzey_X_max];
BG2=[Bina_yuzey_Z_max,Bina_yuzey_Y_min,Bina_yuzey_X_max];
BG3=[Bina_yuzey_Z_min,Bina_yuzey_Y_max,Bina_yuzey_X_max];
BG4=[Bina_yuzey_Z_min,Bina_yuzey_Y_min,Bina_yuzey_X_max];
```

```
BG=[BG1
  BG2
  BG3
  BG4];
 BGGX=BG(:,1);
BGGY=BG(:,2);
BGGZ=BG(:,3);
BY=[BY1
  BY2
  BY3
  BY4];
%Reference surface points
GZ=[Bina_yuzey_Z_max Bina_yuzey_Z_max Bina_yuzey_Z_min Bina_yuzey_Z_min];
GY=[Bina_yuzey_Y_max Bina_yuzey_Y_min Bina_yuzey_Y_max Bina_yuzey_Y_min];
GX=[Bina_yuzey_X_max Bina_yuzey_X_max Bina_yuzey_X_max Bina_yuzey_X_max];
%Reference surface drawing
figure('Name','Bina En Büyük Referans Yüzey Noktaları','NumberTitle','on')
scatter3(GX,GY,GZ,'.');
hold on;
figure('Name','Bina En Büyük Referans Yüzeyi','NumberTitle','on')
plot3(GX,GY,GZ);
grid on;
GZ=[Bina_yuzey_Z_min Bina_yuzey_Z_max Bina_yuzey_Z_max];
KZ=[Bina_yuzey_Z_max Bina_yuzey_Z_min Bina_yuzey_Z_min];
GY=[Bina_yuzey_Y_min Bina_yuzey_Y_max Bina_yuzey_Y_min];
KY=[Bina_yuzey_Y_max Bina_yuzey_Y_min Bina_yuzey_Y_max];
GX=[Bina_yuzey_X_max Bina_yuzey_X_max Bina_yuzey_X_max];
figure('Name','Referans Yüzey','NumberTitle','on')
fill3(GX,GY,GZ,GX);
grid on
hold on
fill3(GX,KY,KZ,GX)
grid on
hold on
dlmwrite('Bina_Referans_Yuzey_noktaları.txt',BY,'delimiter','\t');
I=ones(4,1);
N=transpose(BY)*BY;
n=pinv(N)*I;
X=pinv(N)*(I);
%Reference surface parameters
a=X(1,1); b=X(2,1); c=X(3,1); d=X(4,1);
PSag=[a b c d];
dlmwrite('Bina_Referans_Düzlem_Par.txt',PSag,'delimiter','\t');
%Building surface segmentation
for i=1:bina
  Derinlik_Bina(i,1)=(-(d+b*Matris_Bina_Segment(i,2)+a*Matris_Bina_Segment(i,3))/c);
  Delta_Derinlik(i,1)= Derinlik_Bina(i,1)- Bina_yuzey_X_max;
  DDEXP_Bina(i,1) = 1/(exp(Delta_Derinlik(i,1)));
  Egim_Acisi_Bina(i,1) = atan((Matris_Bina_Segment(i,2)-Z1)/(Matris_Bina_Segment(i,1)- Eksen_y));
end
```

```
Yuzey_Der=(Bina_yuzey_X_max+Bina_yuzey_X_min)/2;
Filtre_Yuzey = Yuzey_Der*ones(bina,1);
Matris_Bina_Segment=[Matris_Bina_Segment,Derinlik_Bina,Delta_Derinlik,DDEXP_Bina,Filtre_Yuzey];
```

```
KOD=[Matris_Bina_Segment(:,2) Matris_Bina_Segment(:,3) Matris_Bina_Segment(:,8)];
dlmwrite('Geo_Filitre_Giriş.txt',KOD,'delimiter','\t');
%Geometric segmentation start
run Geometric.m;
%Geometric segmentation finish
KSM = dlmread('bosluk_output.txt');
hold on;
figure('Name','Geometric Öncesi Yüzey Noktaları','NumberTitle','on')
scatter3(Matris_Bina_Segment(:,8),Matris_Bina_Segment(:,2),Matris_Bina_Segment(:,3),'.');hold on
scatter3(KSM(:,3),KSM(:,1),KSM(:,2),'.','r');
hold on;
```

```
Ref_Nok_X=Point_X/2;
   Ref_Nok_Y=Point_Y/2;
   Ref_Nok_Z=Z1;
    %Building flat number info ask
    ifade1={'Bina Kaç Katlı ?'};
   normal1={'4'};
   baslik1='Bina Kat Sayısı';
   satir1=1;
   cevap1=inputdlg(ifade1,baslik1,satir1,normal1);
   Bina_Kat_Sayisi=str2double(char(cevap1(1,1)));
    %Flat Classing
    [kfns,kfss]=size(KSM);
   Egim_Acisi_Geo=ones(kfns,1);
   EXP_Kal=ones(kfns,1);
   for i=1:kfns
     Egim_Acisi_Geo(i,1) = atan(KSM(i,2)/10);
     EXP_Kal(i,1)=exp(Bina_Kat_Sayisi*Egim_Acisi_Geo(i,1));
   end
    std_Kal=std(EXP_Kal);
   max_Kal=max(EXP_Kal);
   min_Kal=min(EXP_Kal);
   KAL_Seg_Deg = round(std_Kal,0)/(Bina_Kat_Sayisi*0.5);
   Seg_Kal=[KSM Egim_Acisi_Geo EXP_Kal];
    for i=1:Bina_Kat_Sayisi
     eval(sprintf('Pen%d = NaN(kfns,3)', i));
   end
   for j=Bina_Kat_Sayisi:-1:1
     for i=1:kfns
       if Seg_Kal(i,5)>KAL_Seg_Deg*2^(j-2) && Seg_Kal(i,5)<KAL_Seg_Deg*2^
       (j-1)
       eval(sprintf('Pen%d(i,1)=Seg_Kal(i,1)', j));
       eval(sprintf('Pen%d(i,2)=Seg_Kal(i,2)', j));
       eval(sprintf('Pen%d(i,3)=Seg_Kal(i,3)', j));
      end
     end
   PP=snip(eval(sprintf('Pen%d', j)),nan);
   eval(sprintf('PP%d = PP', j));
    PX=[max(eval(sprintf('PP%d(:,1)', j)))
     min(eval(sprintf('PP%d(:,1)', j)))
     min(eval(sprintf('PP%d(:,1)', j)))
     max(eval(sprintf('PP%d(:,1)', j)))
     max(eval(sprintf('PP%d(:,1)', j)))];
      eval(sprintf('aax%d=PX(1,1)',j));
```

```
  eval(sprintf('bbx%d=PX(2,1)',j));
  eval(sprintf('PP%dX = PX', j));
  PY=[min(eval(sprintf('PP%d(:,2)', j)))
      min(eval(sprintf('PP%d(:,2)', j)))
      max(eval(sprintf('PP%d(:,2)', j)))
      max(eval(sprintf('PP%d(:,2)', j)))
      min(eval(sprintf('PP%d(:,2)', j)))];
  eval(sprintf('aay%d = PY(3,1)',j));
  eval(sprintf('bby%d = PY(1,1)',j));
  eval(sprintf('PP%dY = PY', j));
  PZ=[eval(sprintf('PP%d(1,3)', j))
      eval(sprintf('PP%d(1,3)', j))
      eval(sprintf('PP%d(1,3)', j))
      eval(sprintf('PP%d(1,3)', j))
      eval(sprintf('PP%d(1,3)', j))];
  eval(sprintf('PP%dZ = PZ', j));
end
 KODX=[max(KOD(:,1))
  min(KOD(:,1))
  min(KOD(:,1))
  max(KOD(:,1))
  max(KOD(:,1))];
 KODY=[min(KOD(:,2))
  min(KOD(:,2))
  max(KOD(:,2))
  max(KOD(:,2))
  min(KOD(:,2))];
 KODZ=[KOD(1,3)
  KOD(1,3)
  KOD(1,3)
  KOD(1,3)
  KOD(1,3)];
 %Draw the results
```
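In the classification loop above, each point's height coordinate is mapped to a slope angle, atan(z/10), then to an exponential weight exp(n·angle), and floor j collects the weights lying strictly between KAL_Seg_Deg·2^(j−2) and KAL_Seg_Deg·2^(j−1). The binning rule can be traced outside MATLAB with a compact Python sketch (the function name and sample values are illustrative, not the chapter's data):

```python
import math

def classify_floors(points, n_floors, seg_threshold):
    """Bin 3-D points into floors using the chapter's exponential rule.

    points: iterable of (x, y, z) tuples; z drives the slope angle.
    seg_threshold: plays the role of KAL_Seg_Deg in the MATLAB code.
    Returns a dict mapping floor index j -> list of assigned points.
    """
    floors = {j: [] for j in range(1, n_floors + 1)}
    for (x, y, z) in points:
        slope = math.atan(z / 10.0)           # Egim_Acisi_Geo
        weight = math.exp(n_floors * slope)   # EXP_Kal
        for j in range(n_floors, 0, -1):      # highest floor checked first
            lo = seg_threshold * 2 ** (j - 2)
            hi = seg_threshold * 2 ** (j - 1)
            if lo < weight < hi:              # intervals are disjoint,
                floors[j].append((x, y, z))   # so at most one bin matches
                break
    return floors
```

Because consecutive intervals share only their endpoints and the comparisons are strict, a point whose weight falls exactly on a bin boundary is left unassigned, matching the MATLAB `>`/`<` tests.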

```
figure('Name','Röleve (Survey Drawing)','NumberTitle','on')
for i=1:Bina_Kat_Sayisi
  line(eval(sprintf('PP%dX', i)),eval(sprintf('PP%dY', i)),eval(sprintf('PP%dZ', i)),'Color','red'); hold on
end
line(KODX,KODY,KODZ); hold on
```
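Each five-element PX/PY/PZ vector (and likewise KODX/KODY/KODZ) lists the corners of an axis-aligned rectangle with the first vertex repeated at the end, so that `line()` traces a closed outline at constant height. The corner ordering can be sketched in Python as follows (the helper name is illustrative, not from the chapter):

```python
def closed_rectangle(xs, ys, z):
    """Return the 5-vertex closed outline used by the chapter's line() calls.

    Corners are ordered (max_x, min_y) -> (min_x, min_y) -> (min_x, max_y)
    -> (max_x, max_y) -> back to (max_x, min_y), all at constant height z.
    """
    x_max, x_min = max(xs), min(xs)
    y_max, y_min = max(ys), min(ys)
    X = [x_max, x_min, x_min, x_max, x_max]
    Y = [y_min, y_min, y_max, y_max, y_min]
    Z = [z] * 5  # the rectangle lies in a horizontal plane
    return X, Y, Z
```

Repeating the first vertex is what closes the polyline; without it, `line()` would leave one edge of each floor rectangle open.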
### **Author details**

Bahadır Ergün\* and Cumhur Şahin, Department of Geomatics Engineering, Gebze Technical University, PK 141, Gebze 41400 Kocaeli, Türkiye

\*Address all correspondence to: bergun@gtu.edu.tr

© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Lichti D, Gordon SJ. Error propagation in directly georeferenced terrestrial laser scanner point clouds for cultural heritage recording. In: Proceedings of FIG Working Week; 22-27 May 2004; Athens, Greece; 2004

[2] Ergun B. A novel 3D geometric object filtering function for application in indoor area with terrestrial laser scanning data. Optics & Laser Technology. 2010;**42**: 799-804

[3] Lerma JL, Biosca JM. Segmentation and filtering of laser scanner data for cultural heritage. In: Proceedings of CIPA 2005 XX International Symposium; 26 September-01 October 2005; Torino, Italy; 2005

[4] Dold C, Brenner C. Automatic matching of terrestrial scan data as a basis for the generation of detailed 3D city models. In: Proceedings of International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences; 12-23 July 2004; Istanbul, Turkey; 35(B3):1091-1096

[5] Pu S, Vosselman G. Automatic extraction of building features from terrestrial laser scanning. In: Proceedings of International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences; 25-27 September 2006; Dresden, Germany; 36(5):5 pages

[6] Biosca JM, Lerma JL. Unsupervised robust planar segmentation of terrestrial laser scanner point clouds based on fuzzy clustering methods. ISPRS Journal of Photogrammetry and Remote Sensing. 2008;**63**(1):84-98

[7] Bauer J, Karner K, Schindler K, Klaus A, Zach C. Segmentation of building models from dense 3D point-clouds. In: Proceedings of 27th Workshop of the Austrian Association for Pattern Recognition; 5-6 June 2003; Laxenburg, Austria; 2003. pp. 253-258

[8] Boulaassal H, Landes T, Grussenmeyer P, Tarsha-Kurdi F. Automatic segmentation of building facades using terrestrial laser data. In: Proceedings of International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Workshop on Laser Scanning 2007 and SilviLaser 2007; Espoo, Finland. Vol. 36 (3/W52). 2007. pp. 65-70

[9] Boulaassal H, Landes T, Grussenmeyer P. Automatic extraction of planar clusters and their contours on building façades recorded by terrestrial laser scanner. International Journal of Architectural Computing. 2009;**7**(1): 1-20

[10] Sahin C. Planar segmentation of indoor terrestrial laser scanning point clouds via distance function from a point to a plane. Optics and Lasers in Engineering. 2015;**64**:23-31. DOI: 10.1016/j.optlaseng.2014.07.007

[11] Ergun B, Sahin C, Aydin A. Two-dimensional (2-D) Kalman segmentation approach in mobile laser scanning (MLS) data for panoramic image registration. Lasers in Engineering. 2020;**48**:121-150

[12] Grewal MS, Andrews A. Kalman Filtering Theory and Practice Using MATLAB. 2nd ed. Canada: Wiley; 2001

[13] Ay E. 3D geometric object filtering application in terrestrial laser scanning data. [Master's thesis]. Kocaeli, Türkiye: Gebze Technical University; 2009

Section 4
