**6. UWB for map building and localization**

This section deals with the problem of building a map of the surrounding area using the bat-type scenario introduced in Section 2.1. This scenario is characterized by the fact that no supporting infrastructure is used and no external information about the location of the mobile, robot-like sensor is needed. Our goal is to build a map of the surroundings for the robot, while at the same time the robot localizes itself relative to the map. In the field of robotics, this problem is known as simultaneous localization and mapping (SLAM).

Many different approaches have been presented to solve the complete SLAM problem, e.g. in [62], but they rely on well-established sensor technologies such as laser scanners or optical cameras that would not work in the envisaged scenario and cannot exploit the unique capabilities of UWB radar (see Section 2.1). Other solutions are based on WLAN [53], RFID [32], or other external sources of information and thus must also be discarded.

There are other approaches for indoor localization and/or map building using UWB technology, but they are restricted to estimating the 2D dimensions of a strictly rectangular room [14] or need a priori information about the positions of walls to calculate virtual anchor nodes [39]. The solution presented here is more general and copes with arbitrary room shapes as long as adjacent walls are straight and orthogonal to each other.

The main advantage of this approach in comparison with the object recognition and imaging methods of Sections 7 and 8 is that it can deliver a solution with a far lower number of measurements.

In the following section, a solution to the SLAM problem using a UWB radar in the bat-type scenario is described. It uses measurement models incorporating three typical room characteristics: straight walls, corners, and edges, together with a state-space description of the room and the robot. Algorithms for dynamic state estimation are used to calculate the desired states. Data association of measured propagation times and room features is vital here and is dealt with in great detail. Results using simulated and measured data then show the feasibility of the concept. Special requirements for the antennas are also discussed.

of the factors, the sensor network provides more degrees of freedom for the system designer, compared with a single sensor system.

**Figure 18.** The motion and topology factors given at a certain frequency *f*. For the motion factor, *v* · *PRT* = 5*c*/ *f*, where *v* is the sensor speed, *PRT* is the pulse repetition time, and *c* is the signal propagation speed. For the topology factor, the sensor element interval is 5*c*/ *f*, and the number of sensor elements on each array is 30. (a) Motion factor: Track 1; (b) Motion factor: Track 2; (c) Topology factor.

### **6.1. Measurement model**
**Figure 19.** Three room features used in the state-space measurement equation.

To detect and localize the features, the bat-type UWB radar is used to measure the round-trip times between the transmitting antenna, features of the surroundings, and the receiving antennas; these times are extracted from UWB impulse responses and stored in the measurement vector **z**. A two-dimensional geometrical model of the real world is used in the algorithm, similar to [4]. Walls and corners are represented as single and double reflections, respectively. Edges and small objects are represented as scatterers. Schematic illustrations of the propagation models are shown in Fig. 19.

Using an estimated initial position and orientation of the antenna array and an estimated initial position of a feature, an expected time of flight between transmitter, feature, and receiver can be calculated. This can be used for dynamic state estimation, for example in an extended Kalman filter or a particle filter, to iteratively improve the estimate of the positions. By measuring at different positions or rotating the antenna array, it is possible to distinguish the different features and calculate an initial estimate of their positions. To do this, a state-space description of the room and the robot is needed.
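
The expected times of flight for two of the feature models in Fig. 19 can be sketched as follows, assuming 2D geometry, a point-scatterer model for edges, and a mirror-image construction for the single wall reflection. Function names and the collocated transmitter/receiver in the example are illustrative assumptions, not the chapter's implementation.

```python
import numpy as np

C = 299_792_458.0  # signal propagation speed in m/s (free space)

def tof_scatterer(tx, rx, p):
    """Round-trip time TX -> point scatterer p -> RX (edge / small-object model)."""
    tx, rx, p = (np.asarray(a, dtype=float) for a in (tx, rx, p))
    return (np.linalg.norm(p - tx) + np.linalg.norm(rx - p)) / C

def tof_wall(tx, rx, wall_point, wall_normal):
    """Round-trip time for a single specular wall reflection, obtained by
    mirroring the transmitter across the wall line."""
    tx, rx = np.asarray(tx, dtype=float), np.asarray(rx, dtype=float)
    n = np.asarray(wall_normal, dtype=float)
    n = n / np.linalg.norm(n)
    d = np.dot(tx - np.asarray(wall_point, dtype=float), n)  # signed distance TX -> wall
    tx_mirror = tx - 2.0 * d * n  # image of TX behind the wall
    return np.linalg.norm(rx - tx_mirror) / C
```

For a collocated transmitter and receiver 2 m in front of a wall, `tof_wall` yields the travel time of a 4 m round-trip path.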

### **6.2. State-space description of the room and the robot**

To solve the SLAM problem, a state-space description is used. The state vector **x** to be estimated consists of three different parts.

$$\mathbf{x} = \begin{bmatrix} \mathbf{x}_{robot}, \mathbf{x}_{sensor}, \mathbf{x}_{map} \end{bmatrix}^T \tag{7}$$

**x**<sub>*robot*</sub> contains the information about the robot position in *x* and *y* direction, *p<sub>x</sub>* and *p<sub>y</sub>*, as well as the velocity of the robot, represented by the heading angle *p<sub>φ</sub>* and the speed magnitude *v*. All values refer to the local coordinate system.

$$\mathbf{x}_{robot} = \begin{bmatrix} p_x, p_y, p_\phi, v \end{bmatrix}^T \tag{8}$$

In **x**<sub>*sensor*</sub>, relevant information about the sensors is stored, such as the bias vector **b** or the orientation *φ<sub>array</sub>* of the antenna array relative to the robot.

$$\mathbf{x}_{sensor} = \begin{bmatrix} \mathbf{b}, \phi_{array} \end{bmatrix}^T \tag{9}$$

Cooperative Localization and Object Recognition in Autonomous UWB Sensor Networks 203


The map consists of the coordinates of recognized features of the surroundings, called landmarks, which are stored in **x**<sub>*map*</sub>.

$$\mathbf{x}_{map} = \begin{bmatrix} x_{landmark_1}, y_{landmark_1}, x_{landmark_2}, y_{landmark_2}, \dots \end{bmatrix}^T \tag{10}$$

This is the largest part of the state vector.
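
As a minimal sketch, the three sub-vectors of Eqs. (7)–(10) could be stacked into one flat state vector as follows; the helper name and the landmark layout are illustrative assumptions:

```python
import numpy as np

def make_state(px, py, p_phi, v, biases, phi_array, landmarks):
    """Stack the robot, sensor, and map sub-states of Eqs. (7)-(10) into one vector."""
    x_robot = [px, py, p_phi, v]                 # Eq. (8)
    x_sensor = list(biases) + [phi_array]        # Eq. (9)
    x_map = [c for lm in landmarks for c in lm]  # Eq. (10): (x, y) per landmark
    return np.array(x_robot + x_sensor + x_map)
```

With two landmarks and one bias, the map part already contributes four of the ten state components, and it grows by two entries for every newly recognized landmark.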

To estimate the current state **x**<sub>*k*</sub> of the system in time step *k*, first the a priori state estimate **ˆx**<sup>−</sup><sub>*k*</sub> is calculated from the previous state **ˆx**<sub>*k*−1</sub> using the system transition function *g*

$$\hat{\mathbf{x}}_k^- = g\left(\mathbf{u}, \hat{\mathbf{x}}_{k-1}\right) \tag{11}$$

where **u** is a control vector used to model external influences like movement commands to the robot.
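
As an illustration, *g* could be a simple constant-velocity model in which the control vector adjusts heading and speed while the sensor and map states stay static. The time step `dt` and the control layout are assumptions of this sketch, not the chapter's model:

```python
import numpy as np

def g(u, x_prev, dt=0.1):
    """Constant-velocity transition for the robot part of the state vector;
    sensor biases and landmark coordinates are modeled as static."""
    x = np.array(x_prev, dtype=float)
    d_phi, d_v = u                  # control vector: movement commands
    p_phi = x[2] + d_phi            # new heading angle
    v = x[3] + d_v                  # new speed
    x[0] += v * dt * np.cos(p_phi)  # p_x
    x[1] += v * dt * np.sin(p_phi)  # p_y
    x[2], x[3] = p_phi, v
    return x
```

Only the first four components change; everything from the sensor block onward passes through untouched, which is what makes Rao-Blackwellization of the map part attractive.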

In a second step, the state estimate **ˆx**<sub>*k*</sub> is updated from **ˆx**<sup>−</sup><sub>*k*</sub> using the measurements from the UWB radar. In what follows, the index *k* is often omitted for readability.

The remaining problem is that of data association discussed in the next subsection.

### **6.3. Data association**

A major task in employing the UWB radar for SLAM is data association, in this case the task of assigning the time-of-flight measurements of the radar to corresponding features of the surroundings. The solution presented in this subsection uses two different grouping algorithms, one working in the state space, the other working in the measurement space. In both cases, the estimation of the map is done using a Rao-Blackwellized particle filter, as presented in [13].

The principal challenge we are facing is the fact that in indoor environments there is always an abundance of echoes to deal with. It is not always obvious which particular echoes belong together comparing the two impulse responses from the left and the right receiver channel. It is even harder to identify the feature which caused a particular pair of echoes. In order to improve the current state estimate, the task of determining which pair of impulses belongs to which already identified landmark has to be performed. This is achieved by applying different data association algorithms.

### *6.3.1. Data association in state space*

The first method is a probabilistic method in state space. For a given data association vector **c**, an importance distribution *π<sub>j</sub>*(*i*) is calculated for all impulses *i* and landmarks *j*, using the a priori state estimate **ˆx**<sup>−</sup> and the times of flight *z<sub>i</sub>*.

$$\pi_j(i) = p\left(z_i \mid \hat{\mathbf{x}}^-, c_i = j\right) p\left(c_i = j\right) \tag{12}$$

This is possible because the a priori state estimate **ˆx**<sup>−</sup> contains the position and orientation of the antenna array as well as the positions of the landmarks, so an expected time of flight can be calculated using the measurement functions presented earlier. Here, a Gaussian distribution with known covariance is assumed. *p*(*c<sub>i</sub>* = *j*) represents the probability of impulse *z<sub>i</sub>* being associated with landmark *j*.

From the normalized importance distribution *π* the data association vector **c** is drawn by means of the Monte Carlo method. In this way, even in the case of false measurements being closer to the predicted measurement than the correct measurement, there is a chance that the right one is chosen. It is important to note that the opposite case, a false measurement chosen over a correct one, is also possible.
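
This sampling step can be sketched as follows, assuming a Gaussian likelihood with a known standard deviation and a uniform association prior unless one is supplied; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_association(z, z_pred, sigma, prior=None):
    """Draw a data association vector c by Monte Carlo sampling from the
    normalized importance distribution of Eq. (12): one landmark index per
    measurement, Gaussian likelihood with known standard deviation sigma."""
    z = np.asarray(z, dtype=float)[:, None]            # measurements as column
    z_pred = np.asarray(z_pred, dtype=float)[None, :]  # predicted TOF per landmark
    if prior is None:  # uniform p(c_i = j) over landmarks
        prior = np.full(z_pred.shape[1], 1.0 / z_pred.shape[1])
    pi = np.exp(-0.5 * ((z - z_pred) / sigma) ** 2) * prior  # unnormalized pi_j(i)
    pi /= pi.sum(axis=1, keepdims=True)                # normalize per measurement
    return np.array([rng.choice(pi.shape[1], p=row) for row in pi])
```

Because the association is drawn rather than maximized, an unlikely landmark is occasionally chosen, which is exactly the behavior described above.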

So, at first glance, this method has no advantage over a simple Nearest Neighbor method, where only the measurement closest to the prediction is used. The Monte Carlo method even tends to produce slightly worse results than simply choosing the association with the highest probability. It pays off, however, when not just one state but many hypotheses of possible states are estimated, which is exactly what a particle filter can handle.

The particle filter tracks many hypotheses. These hypotheses are depicted as points, or particles, in the multi-dimensional state space. Each particle *s<sup>l</sup>* is composed of the estimated state **ˆx**<sup>*l*</sup>, the covariance matrix **P**<sup>*l*</sup> of the state, a data association variable **c**<sup>*l*</sup> in which the association between measurements and landmarks is stored, and a weight *w<sup>l</sup>*.

$$\mathbf{s}^l = \begin{bmatrix} \hat{\mathbf{x}}^l, \mathbf{P}^l, \mathbf{c}^l, w^l \end{bmatrix} \tag{13}$$

The index *l* denotes the *l*th particle. The weight *w<sup>l</sup>* is an indicator of the likelihood that a certain hypothesis holds true. It is calculated using the weight from the previous iteration. Thus, the weight serves as a memory. In the long run, the hypothesis with the highest weight will be the one that best approximates the real world.

$$w_k^l = w_{k-1}^l \, p\left(\mathbf{z}_k \mid \mathbf{x}_k^l, \mathbf{c}_k^l\right) \tag{14}$$
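
The update of Eq. (14) could be sketched as follows, again assuming independent Gaussian measurement likelihoods; the normalization over all particles is an addition for numerical convenience, not part of the equation:

```python
import numpy as np

def update_weights(w_prev, z, z_pred, sigma):
    """Per-particle weight update of Eq. (14): the previous weight is scaled by
    the measurement likelihood p(z | x, c), here an independent Gaussian per
    associated measurement, and the result is normalized over the particles."""
    w = np.asarray(w_prev, dtype=float).copy()
    for l, pred in enumerate(z_pred):  # z_pred[l]: predicted measurements of particle l
        diff = np.asarray(z, dtype=float) - np.asarray(pred, dtype=float)
        w[l] *= np.exp(-0.5 * np.sum((diff / sigma) ** 2))
    return w / w.sum()
```

A particle whose predicted measurements match the observations keeps a high weight, while mismatched hypotheses decay over the iterations, as the memory interpretation above suggests.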

### *6.3.2. Data association in measurement space*


**Figure 20.** Left: Cluttered sample measurements over time, right: Associated measurement curves.

One problem that arises in using a particle filter is that, due to the probabilistic nature of the algorithm, the number of particles can grow very large. This is because the number of possible data associations increases with every measurement step. Using more particles or discarding unlikely hypotheses by resampling can only partially solve this problem.

To reduce the necessary number of particles, we use a second approach: measurements are not directly associated with landmarks. Instead, they are first grouped in the measurement space. To do this, we exploit the fact that the antenna array moves only in small steps between consecutive measurements, so measurements originating from the same feature also change only slightly. By employing a simple Kalman filter in the measurement space, it is possible to predict and group measurements that belong to the same feature. Only whole groups of measurements are passed to the particle filter, which greatly reduces the number of hypotheses and therefore the number of particles needed. Figure 20 shows a simulation of this process. In the left figure, measurements are taken as the robot travels through the environment; dots indicate extracted times of flight. The measurements are cluttered, but almost continuous echoes originating from room features can be made out. The right figure shows the result of the grouping algorithm.
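
A minimal sketch of such a grouping stage uses one scalar Kalman filter per echo track with a simple 3σ gate; the random-walk model and all tuning constants below are assumptions, not the chapter's parameters:

```python
import numpy as np

class TofTrack:
    """Scalar Kalman filter tracking one slowly varying time-of-flight curve,
    used to group raw measurements before they reach the particle filter."""

    def __init__(self, z0, p0=1.0, q=0.01, r=0.04):
        self.x, self.p, self.q, self.r = float(z0), p0, q, r  # state, variance, noise
        self.members = [float(z0)]

    def gate(self, z, n_sigma=3.0):
        """Accept z only if it lies within n_sigma of the predicted measurement."""
        s = self.p + self.q + self.r  # innovation variance
        return abs(z - self.x) <= n_sigma * np.sqrt(s)

    def update(self, z):
        self.p += self.q                # predict (random-walk model)
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)      # correct with measurement
        self.p *= (1.0 - k)
        self.members.append(float(z))

def group(measurement_sets):
    """Greedily assign each new time of flight to the nearest gating track."""
    tracks = [TofTrack(z) for z in measurement_sets[0]]
    for zs in measurement_sets[1:]:
        for z in zs:
            cands = [t for t in tracks if t.gate(z)]
            if cands:
                min(cands, key=lambda t: abs(t.x - z)).update(z)
            else:
                tracks.append(TofTrack(z))  # clutter or a newly visible feature
    return tracks
```

Two slowly drifting echoes are separated into two tracks, while an isolated clutter value opens a third track that stays short and can later be discarded.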

The disadvantage of this procedure is that the grouping introduces a time delay into the system. Moreover, it requires measurements to be made more frequently, which partly weakens one advantage of the room reconstruction algorithm: the low number of measurements it needs.

**Figure 21.** Radargram at position 16 of the simulated room.

Figure 22 shows the impulse responses for the left and right channel at point 16 of the scenario described in Section 2.2. On the x-axis, the time *t* times the speed of light *c* indicates the distance the pulse has travelled. Peaks reflected or scattered from different room characteristics can easily be separated.

**Figure 22.** Impulse responses at position 16 of the simulated room, facing 0 degrees. (a) Left receiver channel; (b) right receiver channel.

### **6.5. Measurements**

To further verify the results, measurements were made in a laboratory room. The room included furniture, some metal pipes on the walls, and was filled with assorted laboratory equipment at one end of the room. The sensor array consisted of three double ridged horn antennas 0.46 m apart, similar to the simulation.

The antenna array was placed in the middle of the room and rotated manually. Pictures of the room can be seen in Fig. 25. As in the simulation, no information about the current angle of
