**5. Evaluation and optimization of the topology via ambiguity function analysis**

As discussed in this chapter, sensor network imaging is one of the important applications of UWB sensors. In a UWB sensor network, there are stationary sensors (e.g. anchors) and mobile sensors. From the perspective of radar imaging, the spatial distribution of the stationary sensors forms a "real array", while the movement of the mobile sensors generates a "virtual array" (i.e. a synthetic aperture). The beam patterns of the real and virtual arrays depend strongly on their spatial configurations (topologies); in other words, the resolving performance of the real/virtual array depends strongly on the topology itself.

Beyond the topologies of the real and virtual arrays, signal parameters such as the waveform and bandwidth also affect the resolving performance of the system. That is, the overall resolving performance is jointly determined by the topologies of the real and virtual arrays and by the signal parameters, which makes the analysis of the topology even more challenging.

Sensor networks are designed to be highly accurate for their intended purpose, so designers and engineers need to know what level of resolution to expect from a particular sensor configuration. To evaluate and optimize the topologies, ambiguity function analysis is introduced in this section. Via ambiguity functions, we can see how the topology of the real/virtual array (i.e. the array formed by the stationary/mobile sensors) contributes to the resolving performance of the system, and then optimize it further [30, 69].

Generally, in the far field, the ambiguity function can be factorized into several factors: the signal-related factor, the topology factor (associated with the "real array"), and the motion factor (associated with the "virtual array") [69], as shown in Fig. 16. The combination of these factors determines the overall resolving ability of the system. Theoretically, each individual factor can be used to evaluate a certain aspect of the resolution characteristics, or to optimize certain parameters, instead of working with the complicated ambiguity function of the system as a whole.
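As a rough numerical sketch of this factorization (our own simplification, not the formulation of [69]): at a single frequency and in the far field, each factor can be computed as a normalized array pattern, and the overall ambiguity is their product. The geometry, frequency, and function names below are illustrative assumptions.

```python
import numpy as np

c = 3e8                     # propagation speed (m/s)
f = 5e9                     # one frequency of the UWB band (Hz)
k = 2 * np.pi * f / c       # wavenumber

def pattern_factor(positions, direction):
    """Normalized far-field pattern of an array: mean of unit phasors."""
    phases = k * (positions @ direction)
    return np.exp(1j * phases).mean()

# "Real array": stationary receivers, half-wavelength spacing (topology factor)
rx = np.stack([(np.arange(30) - 14.5) * 0.03, np.zeros(30)], axis=1)

# "Virtual array": transmitter positions sampled along a track (motion factor)
tx = np.stack([(np.arange(50) - 24.5) * 0.03, np.full(50, 10.0)], axis=1)

def ambiguity(direction):
    signal_factor = 1.0     # flat-spectrum placeholder for the waveform term
    return abs(signal_factor
               * pattern_factor(rx, direction)    # topology factor
               * pattern_factor(tx, direction))   # motion factor

theta = np.linspace(-np.pi / 2, np.pi / 2, 181)
pattern = [ambiguity(np.array([np.sin(t), np.cos(t)])) for t in theta]
print(max(pattern))         # peak of the product pattern
```

Because each factor is a normalized sum of unit phasors, the product peaks at 1 in the reference direction; reshaping one factor (e.g. the receiver spacing) changes the overall pattern without touching the others.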

As described in the scenario, a number of UWB sensors are deployed to image the environment in order to provide the necessary information for further applications. As shown in Fig. 17, the network can consist of a number of moving transmitters (*Ti* ∈ {*T*1, *T*2, ..., *TM*}) and a number of stationary receivers (*Rj* ∈ {*R*1, *R*2, ..., *RN*}). In this way, a UWB sensor network is constructed. The transmitters can move along predefined tracks (e.g. Track 1 & 2) to probe


**Figure 16.** Factorization of the MIMO ambiguity function and its potential applications. Factors: the signal factor; the topology factor (receiver placement within the receiver array); the motion factor (transmitter placement within the transmitter array, and the relative position between the transmitter array and the receiver array). Applications: evaluation (to evaluate the contribution of each individual factor) and optimization (to optimize a certain group of parameters via the corresponding factor).


**Figure 17.** UWB MIMO imaging scenario. "Track 1 & 2" are the transmitter tracks; the triangles indicate transmission positions along nonlinear tracks.

the environment. The receivers collect the backscattered probing signals to produce an image of the environment. Meanwhile, they could also serve as anchor sensors to support other applications, such as localizing and tracking the position of the moving transmitters [61].

The sensor motion factor is shown in Fig. 18 (a) and (b) for the different tracks (Track 1 and Track 2, as defined in Fig. 17). In the figures, the ripples are clearly narrower in the direction of L1 than in the direction of L2. This indicates that the resolving performance in the direction of L1 is better than in the direction of L2, because the total angular rotation in the direction of L1 is far greater than that in the direction of L2 with respect to the reference **x**0. For similar reasons, the resolving performance of Track 1 is better than that of Track 2 in the corresponding directions. In addition, Fig. 18 (a) shows that a "ghost" object occurs in the direction of L2, due to insufficient illumination of the object. In general, such a ghost produces a false object in the image and consequently degrades the image quality.

In Fig. 18 (a) and (b), the motion factors are given for linear tracks. In practice, however, the sensors do not necessarily move along linear tracks; more practical, irregular tracks are also possible, as shown in Fig. 17, where the triangles indicate the transmission positions. The irregular movement of the sensors can improve ghost suppression, since irregular tracks illuminate the environment more completely than linear tracks.
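This effect can be checked numerically. The sketch below is our own illustration (the spacings and the random perturbation are assumptions, not the chapter's parameters): it compares the motion factor of a regularly sampled linear track, whose undersampling produces a grating-lobe "ghost", with an irregularly perturbed track of the same extent.

```python
import numpy as np

c, f = 3e8, 5e9
lam = c / f
k = 2 * np.pi / lam

def motion_factor(track_x, angles):
    """Magnitude of the coherent sum over transmission positions for each
    probed direction (far field, single frequency, two-way propagation)."""
    return np.array([abs(np.exp(1j * 2 * k * track_x * np.sin(a)).mean())
                     for a in angles])

angles = np.deg2rad(np.linspace(-25, 25, 2001))
side = np.abs(angles) > np.deg2rad(2)        # everything off the mainlobe

# Linear track sampled every 5 wavelengths -> undersampled virtual array
linear = np.arange(30) * 5 * lam

# Irregular track: same overall extent, randomly perturbed sample positions
rng = np.random.default_rng(0)
irregular = linear + rng.uniform(-2 * lam, 2 * lam, size=30)

lin_pat = motion_factor(linear, angles)
irr_pat = motion_factor(irregular, angles)

# The regular track shows a near-full-strength ghost (grating lobe) off the
# mainlobe; the irregular track spreads that energy into a low pedestal.
print(lin_pat[side].max(), irr_pat[side].max())
```

Run over candidate tracks, the same comparison gives a simple way to score a track for ghost suppression before deploying it.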

According to the sensor topology in Fig. 17, the topology factor is given in Fig. 18 (c). In the figure, the ghost image is partially suppressed: suppression residuals remain at the ghost position, but they are not as strong as the real object. Theoretically, the ghost image can be further suppressed by optimizing the spatial placement of the sensors.
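The interplay of the two factors can likewise be sketched numerically (again with our own toy geometry, not the configuration of Fig. 17): a virtual array whose coarse sampling produces ghosts is multiplied by a receiver topology factor with a different element spacing, so the ghosts of one pattern fall where the other pattern is low.

```python
import numpy as np

c, f = 3e8, 5e9
lam = c / f
k = 2 * np.pi / lam

def factor(positions_x, angles):
    """Normalized far-field pattern of element positions on the x axis
    (single frequency, two-way propagation)."""
    return np.array([abs(np.exp(1j * 2 * k * positions_x * np.sin(a)).mean())
                     for a in angles])

angles = np.deg2rad(np.linspace(-25, 25, 2001))
side = np.abs(angles) > np.deg2rad(2)

tx_track = np.arange(30) * 5 * lam     # undersampled virtual array -> ghosts
rx_array = np.arange(30) * 3 * lam     # receiver topology, different spacing

motion = factor(tx_track, angles)
topology = factor(rx_array, angles)
combined = motion * topology           # overall pattern is the product

# The motion factor alone has near-unity ghosts; in the product they are
# strongly suppressed because the receiver pattern is low at those angles.
print(motion[side].max(), combined[side].max())
```

Only residuals of the ghosts survive in the product, mirroring the partial suppression seen in Fig. 18 (c).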

Figure 18 (a), (b) and (c) indicate the contributions of the sensor motions and the sensor placement topology to the overall resolution. As shown in Fig. 16, the overall performance of the system is the combination of all of the individual factors involved. This implies that we can try to achieve a better overall resolving performance by (i) optimizing each individual factor, or (ii) trading off between related factors. For example, in order to suppress the ghost image, we can on the one hand optimize the movement tracks via the motion factor and the sensor placement via the topology factor; on the other hand, a compromise can be made between the motion factor and the topology factor. In this sense, due to the interaction of the factors, the sensor network provides more degrees of freedom for the system designer than a single-sensor system.

**Figure 18.** The motion and topology factors given at a certain frequency *f*. For the motion factor, *v* · *PRT* = 5*c*/ *f*, where *v* is the sensor speed, *PRT* is the pulse repetition time, and *c* is the signal propagation speed. For the topology factor, the sensor element interval is 5*c*/ *f*, and the number of sensor elements on each array is 30.

**6.1. Measurement model**

To detect and localize the features, the bat-type UWB radar is used to measure the round-trip times between the transmitting antenna, the features of the surroundings, and the receiving antennas; these times are extracted from UWB impulse responses and stored in the measurement vector **z**. A two-dimensional geometrical model of the real world is used in the algorithm, similar to [4]. Walls and corners are represented as single and double reflections, respectively; edges and small objects are represented as scatterers. Schematic illustrations of the propagation models are shown in Fig. 19.

**Figure 19.** Three room features used in the state-space measurement equation: (a) single reflection, (b) double reflection, (c) scatterer.

Using an estimated initial position and orientation of the antenna array and an estimated initial position of a feature, an expected time-of-flight between transmitter, feature and receiver can be calculated. This can be used for dynamic state estimation, for example in an extended Kalman filter or a particle filter, to iteratively improve the estimates of the positions. By measuring at different positions or rotating the antenna array, it is possible to distinguish the different features and calculate an initial estimate of their positions. To do this, a state-space description of the room and the robot is needed.

**6.2. State-space description of the room and the robot**

To solve the SLAM problem, a state-space description is used. The state vector **x** to be estimated consists of three different parts:

**x** = (**x***robot*, **x***sensor*, **x***map*)*<sup>T</sup>* (7)

**x***robot* contains the information about the robot position in the *x* and *y* directions, *px* and *py*, as well as the speed of the robot, represented as the movement angle *pφ* and the absolute value of the speed *v*. All values are given in the local coordinate system:

**x***robot* = (*px*, *py*, *pφ*, *v*)*<sup>T</sup>* (8)
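The three propagation models can be turned into expected round-trip times for the measurement vector **z**. The sketch below is an illustrative implementation of the standard image-source construction; function names, units, and geometry are our own assumptions, not taken from the chapter.

```python
import numpy as np

C = 0.2998  # propagation speed in m/ns

def tof_scatterer(tx, rx, p):
    """Round-trip time Tx -> point scatterer p -> Rx (edge or small object)."""
    return (np.linalg.norm(p - tx) + np.linalg.norm(rx - p)) / C

def mirror(pt, a, n):
    """Mirror a point across the line through a with unit normal n."""
    return pt - 2 * np.dot(pt - a, n) * n

def tof_wall(tx, rx, a, n):
    """Single specular reflection at a wall: image-source method,
    the path length equals the distance from the mirrored Tx to Rx."""
    return np.linalg.norm(rx - mirror(tx, a, n)) / C

def tof_corner(tx, rx, a1, n1, a2, n2):
    """Double reflection at a corner of two walls: mirror Tx across both."""
    return np.linalg.norm(rx - mirror(mirror(tx, a1, n1), a2, n2)) / C

tx = np.array([0.0, 0.0])               # transmitting antenna
rx = np.array([1.0, 0.0])               # one receiving antenna
wall_a, wall_n = np.array([0.0, 2.0]), np.array([0.0, 1.0])   # wall y = 2

print(tof_scatterer(tx, rx, np.array([0.0, 2.0])))  # point echo, in ns
print(tof_wall(tx, rx, wall_a, wall_n))             # wall echo, in ns
```

In the filter, such a function plays the role of the measurement equation h(**x**): the predicted times are compared with the entries of **z**, and the innovation drives the state update.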
