UWB Technology-Based Applications

#### **Chapter 4**

## Ranging and Positioning with UWB

*Jerome Henry*

#### **Abstract**

Indoor location is one of the key use cases enabled by UWB for navigation and asset tracking. The 802.15.4a and 802.15.4z standards describe several techniques for determining the distance between a mobile device client and a set of static anchors. Two of them (SS-TWR and DS-TWR) use a bidirectional exchange between the client and the anchor, with the resulting distance being calculated on the initiating side (SS-TWR) or both sides (DS-TWR). OWR does not require an exchange and simply relies on the comparison of arrival times of signals (TDoA). With UL-TDoA, the time of arrival of the client signal is compared on several anchors, drawing distance hyperbolae from which the client location is deduced. With DL-TDoA, the reverse happens: the client compares the time of arrival of signals from several anchors and deduces its position. A third family of techniques is not described in the Standard but is commonly implemented in the field: AoA, where a comparison of the phase of the signal among two or more antennas is used to compute the direction of the sender. From these elements, a location engine computes the mobile device's position. This chapter examines these techniques in detail.

**Keywords:** UWB, 802.15.4a, 802.15.4z, TWR, OWR, TDoA, SS-TWR, SDS-TWR, DS-TWR, PRF, RDEV, ERDEV, ToF, RRMC IE, RRTI IE, RMI IE

#### **1. Introduction**

It is often said that outdoor localization is on its way to being solved, thanks to progress in fusing GPS, dead reckoning, and cellular techniques. Indoors, however, the challenge remains. Although they vary widely, all indoor localization use cases revolve around the idea of determining the position of a known object relative to known landmarks, like room numbers or other locally significant markers. Sensing techniques allow for the detection of a moving body, but the accuracy of such determination is limited. Precision can greatly increase if the object incorporates a radio frequency (RF) technology that allows it to interact with other RF objects whose locations are known. These conditions bring the problem closer to its outdoor counterpart, and many technologies attempt to solve it.

Among them, Ultra-Wide Band (UWB) technologies have emerged as solutions of choice. "Ultra-wide" is a term characterizing any radio transmission occurring over a large channel (> 500 MHz). In most cases, these technologies face the risk of interfering with other transmitters and are allowed to transmit this type of large signal only at very low power (thus limiting the interference they cause to others).

There have been many proposals for UWB communications, in many forums. However, one of them, first defined in the IEEE 802.15.4a Standard in 2007 [1], then refined in the IEEE 802.15.4z Standard in 2020 [2], has emerged as a key player for indoor localization, because of its claim of high accuracy. Experiments in controlled environments report localization accuracy down to 3 cm (about 1 inch), and commercial deployments now claim less than 30 cm (1 foot) of error. This precision is made possible by the characteristics of the UWB transmission, as defined in the IEEE 802.15.4 family of standards, and augmented by industry-wide certifications like FiRa. This chapter examines the principles and components that guide UWB ranging and make it the de facto solution for indoor ultra-precise localization.

#### **2. Ranging with UWB**

#### **2.1 UWB ranging claim to accuracy**

UWB may appear as just one of many radio frequency-based (RF) technologies interested in the localization use case. However, its design makes it particularly well-adapted for accurate ranging. Most radio technologies have attempted, in one way or another, to measure the distance between a sender and a receiver but have found the endeavor to be challenging.

A simple approach is to translate the received signal strength value into a distance estimate, using standard free-space path loss equations. One obvious limitation of such a technique is that obstacles may cause the received signal to be weaker than it would be traveling along an unobstructed path. Reflections, and multipath in general, may also cause the received signal to be stronger (constructive interference) or weaker (destructive interference) than it would be in open space. For these reasons, signal strength-based techniques for distance estimation are used, but not preferred, as they have a reputation for large inaccuracy.
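To illustrate why this approach is fragile, here is a minimal sketch (with hypothetical values) of the signal strength-to-distance conversion under the free-space path loss model. Any obstacle or multipath component shifts the received power and directly corrupts the distance estimate.

```python
import math

C = 299_792_458.0  # speed of light, m/s


def fspl_distance(rssi_dbm: float, tx_power_dbm: float, freq_hz: float) -> float:
    """Estimate distance from received signal strength, assuming an
    unobstructed free-space path (the assumption that fails indoors)."""
    path_loss_db = tx_power_dbm - rssi_dbm
    # FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c); solve for d
    exponent = (path_loss_db - 20 * math.log10(freq_hz)
                - 20 * math.log10(4 * math.pi / C)) / 20
    return 10 ** exponent


# A 6.5 GHz signal transmitted at 0 dBm and received at -68.7 dBm
# corresponds to roughly 10 m of free-space travel.
d = fspl_distance(rssi_dbm=-68.7, tx_power_dbm=0.0, freq_hz=6.5e9)
```

A mere 6 dB of extra attenuation from an obstacle would double the estimated distance, which is why the chapter treats this family of techniques as a last resort.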

Other techniques measure the angle of the received signal on multiple receivers (or the angle from multiple transmitters, whose positions are known). By using these combined angles, the transmitter or the receiver location can be deduced with standard geometrical tools. UWB allows for this technique, as we will see later in this chapter, and its accuracy compares to that of other technologies. One requirement of this approach is cooperation between several senders or several receivers to construct a geometrical object before the location can be found (while signal strength-based techniques directly translate the signal at a single receiver into a distance).

A third family of techniques attempts to measure the time-of-flight (ToF), that is, the time taken by a signal to travel between the transmitter and the receiver. This approach has found favor with many radio technology families, including IEEE 802.11 Fine Timing Measurement (FTM), Bluetooth Low Energy (BLE) High-Accuracy Distance Measurement (HADM), and UWB, because it can lead to very accurate results. However, it requires the protocol designers and implementers to solve several technical difficulties. In addition to the challenge of agreeing on a common time reference between the transmitter and the receiver, which we will examine further in the next section, ToF requires that the receiver be able to precisely determine the time of arrival of the signal. For UWB, just as for most other techniques, this time is the precise time of arrival of the beginning of the signal, often referred to as the first pulse of the first symbol of the header of a PPDU (Physical Protocol Data Unit). The PPDU includes the physical layer of the transmitted frame,

#### *Ranging and Positioning with UWB DOI: http://dx.doi.org/10.5772/intechopen.109750*

which typically starts with a form of the preamble (called the SYNC field in 802.15.4), a simple rhythmic structure that precedes the body of the frame, where the real data will be found. The preamble serves three core purposes:


Determining the exact time of the arrival of the first part of the preamble is more challenging than it would seem to the untrained eye. The receiver needs to measure the energy over a range of interesting frequencies (the channel to which the receiver is set) at regular intervals. To measure the rise and fall of energy of a signal that follows the structure of a sinusoid wave, the sampling typically needs to occur (at least) twice as fast as the channel bandwidth. For example, if the channel is 1 Hertz wide (one peak and one trough per second), measuring the energy twice per second is sufficient. Similarly, if the channel is 80 MHz wide (e.g., with IEEE 802.11ac), sampling the channel 160 million times per second is needed. This second case represents taking a sample once every 6.25 nanoseconds (ns). Practically, this method means that the receiver takes a sample at a time t0 and measures no notable energy beyond the noise floor. Then at time t1, 6.25 ns later, another sample is taken, and this time significant energy is detected (and thus a preamble is likely arriving at the antenna). However, the preamble may have started arriving at any time between t0 and t1. Light and any other RF energy travel at approximately 299,792,458 m per second in the air and therefore cover a bit more than 1.80 m (or 6 ft) in a 6.25 ns interval. This sampling method makes it difficult to measure a distance with a precision greater than 1.80 m over our example 80 MHz-wide channel. Technologies leveraging wider channels mechanically obtain better ranging accuracy with the ToF technique, because they sample more often, reducing the time-of-arrival uncertainty. UWB commonly uses ultra-wide channels, 500 MHz or wider, and thus benefits from a clear advantage in this domain.
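The arithmetic above can be condensed into a short sketch relating channel bandwidth to the time-of-arrival ambiguity, and thus to the best achievable distance resolution of this naive sampling approach:

```python
C = 299_792_458.0  # propagation speed, m/s


def range_uncertainty_m(bandwidth_hz: float) -> float:
    """Distance traveled by light during one Nyquist sampling interval:
    the time-of-arrival ambiguity of a receiver sampling at 2x bandwidth."""
    sample_period_s = 1.0 / (2.0 * bandwidth_hz)
    return C * sample_period_s


# 80 MHz channel (802.11ac): one sample every 6.25 ns -> ~1.87 m ambiguity
wifi = range_uncertainty_m(80e6)
# 500 MHz UWB channel: one sample every 1 ns -> ~0.30 m ambiguity
uwb = range_uncertainty_m(500e6)
```

This is the "mechanical" advantage the text refers to: widening the channel from 80 to 500 MHz shrinks the sampling ambiguity by the same factor, before any of the preamble-correlation refinements described below.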

Technologies that focus on high data rates need to implement a rich modulation structure, where many symbols are sent in parallel over the width of the channel. The advantage is data transmission speed, but the downside is another challenge for efficient ToF. To receive the data part of the signal properly, the receiver needs to measure the energy of each segment of interest (where each symbol may be found) over the entire channel width. However, segments with a strong energy peak may overwhelm neighboring segments with low energy. One important requirement for these types of technology is therefore to mandate a low Peak to Average Power Ratio (PAPR), to ensure that no segment blinds its neighbors by being much above or below the average energy of all segments taken together. This necessity means that many of these complex modulations allow for sidelobes of energy on each segment, which smooth the average energy value of the segment. Unfortunately, a side effect of this structure is that a sidelobe may reach the receiver before the useful part of the segment. This event is inconsequential for proper reception of the data associated with the segment (the receiver can recognize the peak and demodulate its associated symbol), but it may lure the receiver into believing that the signal started being received before it actually did, causing the system to conclude on unrealistically short distances. ToF is challenging for these modulation-rich technologies.

UWB was designed to avoid both the sidelobe and the bandwidth issues. In the time domain, the UWB signal is composed of very short pulses (2 ns each). The sequence of pulses encodes the message to transmit. The interval between each pulse (usually represented by the term pulse repetition frequency [PRF]) determines how much data can be sent per unit of time (**Figure 1**). Because each pulse is very short, it can easily be recognized (no issue of a sidelobe confusing the receiver). Because the receiver recognizes the pattern of the preamble, it can recompose the first pulse (and its exact arrival time) even if the receiver did not sample perfectly at the right point in time. This structure gives UWB the claim of a range accuracy of the order of 6 cm, or about 2 inches [3]. We will see below that there are still some technical issues to solve to get to that level.

Another advantage of the UWB transmission structure is that the pulse is sent, in the frequency domain, over the large 500-MHz channel. The amount of energy transmitted is large enough that the receiver can recognize each pulse. However, the amount of energy per unit of bandwidth is very small (e.g., in the domains regulated under the American Federal Communications Commission [FCC], −41.3 dBm per MHz maximum; by contrast, some channels for 802.11 transmissions allow 11 dBm/MHz, i.e., a signal more than one hundred thousand times more powerful per MHz of bandwidth than UWB). This spread of the transmitted energy means that narrower systems (for example, an 802.11 device listening to "only" 80 MHz of the 500 MHz-wide transmissions) will barely detect the UWB signal in the general noise. However, the UWB receiver, capturing the full 500 MHz, will read each pulse with ease.

#### **2.2 Single-sided two-way ranging**

#### *2.2.1 Ranging terminology*

The UWB system of short pulses allows for good ranging precision. The UWB ranging frame itself is not very special. What distinguishes it from any other UWB frame is that one bit of the Physical (PHY) Layer header (the Ranging bit) is set. The rest of the header, and the rest of the frame, can be of any of the formats allowed by the 802.15.4a or 802.15.4z Standards. If the frame is solely intended for ranging purposes,

**Figure 1.** *UWB pulse structure.*

it is in most cases as short as possible (to limit the amount of airtime consumed by each ranging frame). In its simplest expression, when the frame is built for ranging between 2 devices, it only contains the header (no payload, no source or destination addresses). In more complex environments (multiple possible senders and receivers), additional fields are added as needed.

The simplest ranging process then consists of an exchange of two frames between 2 ranging-capable devices (RDEVs when implementing 802.15.4a, or enhanced ranging capable devices - ERDEVs, when implementing improved modes defined in 802.15.4z and described below). The two-frame exchange is simply called Two-Way Ranging (TWR) in 802.15.4a and was renamed Single-Sided Two-Way Ranging (SS-TWR) in 802.15.4z (SS is added in that revision of the Standard to avoid any confusion with another mode, DS-TWR, described in the next section).

One of the devices is called the initiator (in 802.15.4z, or the originator in 802.15.4a, but the functions are the same in both Standards for the basic TWR case), and the other is the responder.

#### *2.2.2 Basic SS-TWR mode*

At time t0, the initiator (A in **Figure 2**) sends a ranging frame and starts its ranging counter.

The frame travels to the responder (B in **Figure 2**), consuming a time-of-flight tp that is proportional to the distance between both devices.

Upon receiving the first pulse of the preamble, at time t1, the responder starts its ranging counter. The responder then receives the rest of the frame, interprets it, and realizes that it is a ranging frame and that it should respond. It then builds a ranging response frame and sends it to the initiator, at time t2. The responder includes in the frame a payload value called *treplyB*, which expresses the time from the reception of the first pulse at the responder's antenna (time t1, at which the responder's ranging counter was started) to the time at which the first pulse of the responder's ranging frame is set to leave the responder's antenna (t2). The responder stops its ranging counter upon sending the response frame.

The frame travels to the initiator, consuming the same time of flight tp as the first ranging frame (in UWB, one assumption is that devices do not move fast enough from each other for their distance to have significantly changed during the exchange).

The initiator receives the first pulse of the preamble, notes the time from its ranging counter (t3), then receives the rest of the frame.

**Figure 2.** *SS-TWR choreography.*

At this point (**Figure 2**), the initiator has the time it took to perform the entire round of exchange (*troundA* = *t3* − *t0*), and the time consumed by the responder's activity (*treplyB*). As frames traveled both ways, the ToF between devices is simply estimated as:

$$
\hat{ToF} = \frac{1}{2} \left( t\_{roundA} - t\_{replyB} \right). \tag{1}
$$

Eq. (1) supposes that both devices' crystals count time at the same speed, as the initiator subtracts *treplyB* from *troundA*. But *troundA* is measured by A, while *treplyB* is measured by B. In most cases, crystals are imperfect and there is a time offset, or drift, between devices. There are multiple proprietary and many common methods to reconcile these differences. For example, one may notice that the PHY header indicates the PRF, and that the duration of each pulse is also known. Therefore, the responder could simply measure each pulse duration and the PRF in the frame received from the initiator, find that they do not perfectly align with its interpretation of the pulse duration and the PRF, and deduce the time offset between its clock and the clock of the initiator.
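As a minimal illustration of Eq. (1) and of why the drift matters, the following sketch (all numbers hypothetical) shows how a modest 20 ppm crystal offset, applied over a long reply time, dwarfs the time-of-flight itself:

```python
C = 299_792_458.0  # m/s


def ss_twr_tof(t_round_a: float, t_reply_b: float) -> float:
    """Eq. (1): the initiator's single-sided ToF estimate, in seconds."""
    return 0.5 * (t_round_a - t_reply_b)


# Hypothetical scenario: devices 10 m apart, B replies after 200 us,
# and B's crystal runs 20 ppm fast relative to A's.
tof_true = 10.0 / C                      # ~33.4 ns of true flight time
t_reply_true = 200e-6                    # reply time as A's clock would count it
t_round_a = 2 * tof_true + t_reply_true  # measured by A, in A's time
t_reply_b = t_reply_true * (1 + 20e-6)   # B reports 20 ppm too many "seconds"

tof_est = ss_twr_tof(t_round_a, t_reply_b)
error_m = (tof_est - tof_true) * C       # drift-induced error, in meters
```

With these (hypothetical) numbers the ranging error is on the order of 0.6 m, entirely caused by the uncompensated drift over *treplyB*, which motivates the 802.15.4z refinements described next.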

#### *2.2.3 802.15.4z reply time improvements*

802.15.4z allows B to make good use of finding the time offset difference, regardless of how B made that determination. When both A and B support these refinements, they are called ERDEVs.

In an efficient embodiment (SS-TWR with fixed reply time), A sends a ranging frame as in the SS-TWR mode, but A and B agreed in advance (through out-of-band or previous UWB messages) to a specific *treplyB* value. B receives A's ranging frame and waits for *treplyB*, then replies. In a sound implementation, B uses the pulse duration and PRF estimation as detailed above to estimate A's clock speed and adjust its *treplyB* value based on that understanding, therefore attempting to determine *treplyB* the way A would have calculated it (i.e., "in A's time"). This common understanding limits the effect of the drift.

In another form (SS-TWR with embedded time result, **Figure 3**, left), A, in its ranging request frame, inserts a Ranging Request Measurement and Control Information Element (RRMC IE), that sets a Reply Time Request bit. B then understands that

#### **Figure 3.**

*802.15.4z SS-TWR with embedded time result (left) and 802.15.4z SS-TWR with deferred time result (right).*


it must compute the drift directly. Then, in its response frame, B adds a Ranging Reply Time Instantaneous Information Element (RRTI IE), which expresses the calculated time offset. A can then choose to incorporate this drift in its calculation. In the real world, it is unlikely that B would be able to make such a drift determination on the fly, but it could compute the offset with better accuracy at each new round. Then, the distance accuracy increases as more rounds are performed between the devices.

In the third and last form (SS-TWR with deferred reply time result, **Figure 3**, right), B wants to provide the offset value in near real-time but does not have the computing capability to calculate its value on the fly. Therefore, upon receiving the ranging request from A, with the Reply Time Request bit set in the RRMC IE, B responds with an acknowledgment frame that allows A to compute *troundA*. Then, in a subsequent frame, B sends a Ranging Measurement Information (RMI) Information Element that includes both *treplyB* and the estimated offset. The offset between A and B, *Coffs*, is then incorporated into Eq. (1), which becomes:

$$
\hat{ToF} = \frac{1}{2} \left( t\_{roundA} - t\_{replyB} \left( 1 - C\_{offs} \right) \right). \tag{2}
$$

These improvements allow the ranging exchange to complete with a time error well below 1 ns (often in the 100-picosecond range), allowing a ranging accuracy in the order of 3 cm.

#### **2.3 Double-sided two-way ranging**

One limitation of SS-TWR is that responder B does not benefit from the exchange. Its role is merely to respond to A. In the days of 802.15.4a, there was also a concern that clock drifts could not be properly accounted for. The 802.15.4a Standard then devised an additional mode, called Symmetric Double-Sided Two-Way Ranging (SDS-TWR). In this mode, A starts with a ranging frame, as in the SS-TWR basic mode, and B responds with its ranging frame that includes *treplyB*, but keeps its ranging counter running. Upon receiving the response, A, instead of directly computing the ToF, responds with its ranging frame and processing time (*treplyA*). B receives that response, measures its arrival time, and stops the ranging counter. B also measures its own *troundB*, which starts when B sends its ranging response frame and stops when it receives A's response. The ToF value is now present 4 times (**Figure 4**): twice in *troundA*, measuring the interval between the departure of the first frame from A and the arrival of the response from B, and twice in *troundB*, measuring the interval between the departure of the response frame from B and the arrival of the response from A.

In this configuration, A can still compute its interpretation of the ToF using Eq. (1). B can also compute its interpretation of the ToF, with the additional advantage that the error is now reduced, as B can compare *treplyA* to its measurements and therefore estimate the relative drifts. The estimation remains precisely that, an estimation. However, the process reduces the error. Even with a crystal accurate to 80 ppm, the error in the ToF estimated by B commonly drops well below 10 picoseconds.

802.15.4z added an enhancement to this mode (**Figure 5**), called Double-Sided Two-Way Ranging (DS-TWR, thus without the "symmetric" word of the 802.15.4a version). In this variation, well adapted to scenarios where the conditions of the channel or other parameters cause *treply* to be large on either or both sides (with the consequence that the drift is more difficult to evaluate), A first sends a ranging request, including the RRMC IE and its support for the DS-TWR exchange.

#### **Figure 5.**

*802.15.4z DS-TWR with deferred reply time result (left), DS-TWR with embedded ranging information (right).*

B responds with an Acknowledgement frame, allowing A to measure *troundA*. But then, after the round, B initiates its own TWR exchange, indicating in the RRMC IE that this is a continuation of the previous exchange. A responds with an Acknowledgement frame, allowing B to measure *troundB*. Then, in a subsequent frame, A sends another frame to B that includes the RMI IE and the values for *troundA* and *treplyA*. From all these elements, B can minimize the effects of the drifts and compute the ToF. B can then, in turn, send a frame to A with the RMI IE that contains *troundB* and *treplyB*. A can then also compute its estimation of the ToF. In both cases, the estimation becomes:

$$\hat{ToF} = \frac{t\_{roundA} \, t\_{roundB} - t\_{replyA} \, t\_{replyB}}{t\_{roundA} + t\_{replyA} + t\_{roundB} + t\_{replyB}}\tag{3}$$

Here again, the error is reduced to less than 10 picoseconds in most cases, and both sides compute the ToF estimate. The main downside of this method is that it requires up to 6 frames to complete. A reduced version of this method is also allowed by


802.15.4z and is called DS-TWR with embedded ranging information. In this variation, B does not respond to A's ranging request frame with an Acknowledgement frame, but directly with a ranging request frame that includes the RRMC IE to show that the response is a continuation of the exchange. A then responds with a ranging frame that includes the RMI IE and the value of *troundA*, but also an RRTI IE that includes the value of *treplyA*. B can then directly compute its estimation of the ToF. If A also wants to perform this computation, it can set, in the RRMC IE, a field called ToF Request. This causes B, at the end of its ToF estimation, to share the calculated value in a new frame with A (carrying the ToF estimate in the RMI IE).
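The drift-canceling property of Eq. (3) can be sketched as follows (all timing values are hypothetical). Because the round and reply times appear as a ratio, the first-order effect of a crystal offset largely cancels:

```python
C = 299_792_458.0  # m/s


def ds_twr_tof(t_round_a, t_reply_a, t_round_b, t_reply_b):
    """Eq. (3): double-sided ToF estimate; the ratio form cancels most
    of the first-order clock drift between the two devices."""
    return ((t_round_a * t_round_b - t_reply_a * t_reply_b) /
            (t_round_a + t_reply_a + t_round_b + t_reply_b))


# Hypothetical timings: devices 10 m apart, B's crystal 40 ppm fast.
tof = 10.0 / C
reply_b, reply_a = 150e-6, 180e-6        # true reply times (ideal clock)
drift = 1 + 40e-6                         # B's clock rate relative to A's

t_round_a = 2 * tof + reply_b             # measured by A (reference clock)
t_reply_a = reply_a                       # measured by A
t_round_b = (2 * tof + reply_a) * drift   # measured by B, on B's clock
t_reply_b = reply_b * drift               # measured by B, on B's clock

tof_est = ds_twr_tof(t_round_a, t_reply_a, t_round_b, t_reply_b)
error_ps = (tof_est - tof) * 1e12         # residual error, in picoseconds
```

In this sketch the residual error is well under a picosecond, compared with the nanosecond-scale error the same 40 ppm drift would cause in basic SS-TWR, which is the point of the double-sided exchange.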

#### **2.4 Time difference of arrival**

All the techniques associated with TWR variants suppose a form of initial configuration between devices, so they know their respective role and place in the sequence of messages. This requirement raises the natural question of the final goal of such a ranging exercise. In many cases, the purpose of the measurement is not to find the relative distance between two objects, but to determine the location of one of them. The use case may then be navigation (a mobile device needs to establish its location, and commonly display it on the local screen over a local map), or asset tracking (a backend management platform needs to record and/or display the location of assets, for example, parts on a factory floor). If the purpose is solely navigation or solely asset tracking, then it becomes practical to deploy a set of devices (now called anchors) at static, known positions, and configure them permanently for navigation or tracking purposes. In this case, the ranging messaging structure can be transformed so that only one side (the mobile device or the static anchors) sends the ranging messages.

This possibility is lightly described in 802.15.4a, and more formally specified in 802.15.4z (under the umbrella name One-Way Ranging [OWR]).

#### *2.4.1 TDoA for navigation*

For navigation purposes, the mobile device needs to establish its distance to each known anchor and deduce its location by comparing these distances. In 802.15.4a, the anchors would be statically configured for this purpose. Then, at regular intervals, they would send a message with the Ranging bit set, the Acknowledgement bit not set [so the mobile device knows not to answer], and the identifier of the sending anchor (**Figure 6**). All anchors would send the ranging message at the same time. Because the anchors would be at different distances, the mobile device would receive the messages at different times and would use the time difference of arrival (TDoA) to deduce its location. We will look at this last step in the next section.

**Figure 6.** *802.15.4a TDoA Mode 1.*

Practically, however, this technique (called TDoA mode 1) made several assumptions that were not always realized:


802.15.4z describes a variation of this method. In this version, the anchors send their message one after the other, with a precise (and known) offset between transmissions (**Figure 7**). This difference avoids message collisions.

The anchors' clocks still need to be carefully synchronized, and the standard suggests an over-the-wire or over-the-air method, without further details. There are certainly many possible proprietary methods for such exchanges. A practice long established in the industry consists of designating one primary anchor that sends, at regular intervals (e.g., every 100 ms), a broadcast message (called a 'sync' message) that includes its time. The others (called the secondary anchors) receive this message and, knowing their distance to the primary anchor (anchors are static in position and configured by a network administrator), re-align their clock to the primary's time. After a few of these exchanges, the secondary anchors can learn their mean drift over the sync message intervals and re-align their clocks between sync messages as well. In stable conditions (e.g., no brutal change of temperature or other operating conditions), the system can reach an accuracy in the 20 to 50 picosecond range. The sync message may also include an ordered list of anchors and an interval. Using that information, the receiving anchors would know which anchor is supposed to send the ranging message first, and how long each next anchor needs to wait before sending its ranging frame. In the real world, as the list of anchors is static, and as one anchor may disappear for any reason, it is common for the network administrator to simply configure each anchor with a time offset ("send your ranging frame *Xμs* after detecting anchor *i*'s ranging frame, and/or *Yμs* after detecting anchor *j*'s ranging frame"). Such a static configuration avoids consuming airtime to repeat a sequence
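The primary/secondary sync scheme described above can be sketched as follows. The class and its interface are hypothetical (the standard leaves the method unspecified); the sketch only shows the two ingredients the text names: compensating for the known propagation delay, and learning the mean drift across sync intervals.

```python
class SecondaryAnchorClock:
    """Hypothetical sketch of a secondary anchor aligning its clock to a
    primary anchor's periodic 'sync' broadcasts."""

    def __init__(self, prop_delay_s: float):
        self.prop_delay = prop_delay_s  # known: anchors are static
        self.last_local = None
        self.last_primary = None
        self.rate = 1.0                 # local seconds per primary second

    def on_sync(self, local_rx_time: float, primary_tx_time: float):
        # The sync arrives prop_delay after it was stamped by the primary
        primary_rx_time = primary_tx_time + self.prop_delay
        if self.last_local is not None:
            # Mean drift over the sync interval, reused between syncs
            self.rate = ((local_rx_time - self.last_local) /
                         (primary_rx_time - self.last_primary))
        self.last_local = local_rx_time
        self.last_primary = primary_rx_time

    def to_primary_time(self, local_time: float) -> float:
        """Translate a local timestamp (e.g., a blink arrival) to primary time."""
        return self.last_primary + (local_time - self.last_local) / self.rate


# Hypothetical check: local crystal 50 ppm fast, offset 1.234 s,
# primary anchor 30 m away, syncs 100 ms apart.
PROP = 30.0 / 299_792_458.0
local = lambda p: 1.234 + p * (1 + 50e-6)   # local clock reading at primary time p

clk = SecondaryAnchorClock(PROP)
clk.on_sync(local(PROP), 0.0)               # sync sent at primary time 0
clk.on_sync(local(0.1 + PROP), 0.1)         # sync sent at primary time 0.1
blink_primary_time = clk.to_primary_time(local(0.15))
```

After two syncs, the secondary can place a local observation on the primary's timeline; in a real deployment, temperature-driven drift changes between syncs set the residual error, hence the 20 to 50 picosecond figure quoted above.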

**Figure 7.** *802.15.4z downlink TDoA.*

that is unlikely to change. It also avoids the chain rupture effect of the next anchor never sending a ranging frame, because it is waiting for the previous anchor to send, while that previous anchor was disconnected for some reason.

#### *2.4.2 TDoA for asset tracking*

Both 802.15.4a and 802.15.4z describe the asset tracking case with similar methods (802.15.4a calls this case TDoA mode 2; 802.15.4z simply describes it as a second case of TDoA utilization). In this scenario, the mobile device (called the initiator) sends ranging messages (called 'blinks') at regular intervals (**Figure 8**). The header has the Ranging bit set and the Acknowledgement bit not set (no response needed), and the frame can be limited to carrying a form of identifier for the initiator (a MAC address or a simpler identifier), along with a message number. Each anchor individually receives the message, notes the time of arrival, then forwards the message number, initiator identifier, and time of arrival to an external system (usually a Real-Time Location Service [RTLS] server). The server collects such messages from all detecting anchors and uses the time difference of arrival to compute the initiator location.
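The server-side location step can be sketched with a small least-squares solver (a hypothetical illustration; production RTLS engines are considerably more robust). Each time difference, multiplied by the speed of light, constrains the initiator to a hyperbola; the solver finds the point that best satisfies all of them:

```python
import math

C = 299_792_458.0  # m/s


def locate_tdoa(anchors, tdoas, guess=(0.0, 0.0), iters=40):
    """2D position from TDoA measurements via Gauss-Newton least squares.
    tdoas[i] = arrival time at anchor i+1 minus arrival time at anchor 0."""
    x, y = guess
    for _ in range(iters):
        d = [math.hypot(x - ax, y - ay) for ax, ay in anchors]
        rs, J = [], []
        for i in range(1, len(anchors)):
            # Residual of the hyperbola r_i = (d_i - d_0) - c * tdoa_i
            rs.append((d[i] - d[0]) - C * tdoas[i - 1])
            gx = (x - anchors[i][0]) / d[i] - (x - anchors[0][0]) / d[0]
            gy = (y - anchors[i][1]) / d[i] - (y - anchors[0][1]) / d[0]
            J.append((gx, gy))
        # Solve the 2x2 normal equations (J^T J) delta = -J^T r by hand
        a = sum(g[0] * g[0] for g in J)
        b = sum(g[0] * g[1] for g in J)
        c2 = sum(g[1] * g[1] for g in J)
        u = -sum(g[0] * r for g, r in zip(J, rs))
        v = -sum(g[1] * r for g, r in zip(J, rs))
        det = a * c2 - b * b
        if abs(det) < 1e-12:
            break
        x += (u * c2 - b * v) / det
        y += (a * v - b * u) / det
    return x, y


# Hypothetical 20 m x 20 m room: four anchors, initiator at (6, 11),
# noiseless measurements for the sake of the illustration.
anchors = [(0.0, 0.0), (20.0, 0.0), (0.0, 20.0), (20.0, 20.0)]
tag = (6.0, 11.0)
dist = [math.hypot(tag[0] - ax, tag[1] - ay) for ax, ay in anchors]
tdoas = [(dist[i] - dist[0]) / C for i in range(1, 4)]
x, y = locate_tdoa(anchors, tdoas, guess=(10.0, 10.0))
```

Note that only time *differences* enter the computation, which is why anchor synchronization (not initiator synchronization) is the binding requirement of this mode.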

802.15.4a makes no assumption about the anchors' clocks, but 802.15.4z recognizes that they must still be synchronized. This is because they need to report the time of arrival of each blink. If the clocks are not set to the same time reference, these times of arrival cannot be compared. Thus, the asset tracking case still often leverages sync messages sent from a primary anchor to the secondary anchors. A conceptual difficulty associated with this requirement is that the sync messages serve no direct location purpose. They are just part of the necessary mechanics to keep the clocks aligned. Yet they consume airtime, which is then not available for the blink messages. Therefore, more frequent sync messages increase the accuracy of the TDoA measurement, but also reduce the possible density of blink messages (thus the number of devices tracked in each space, or the frequency of their blink updates). In most scenarios, a trade-off is made to limit the sync messages to match the inaccuracy tolerance of the location calculation derived from the clock drifts.

#### *2.4.3 TDoA hybrid modes*

The binary opposition between asset tracking (the mobile device is the one sending frames used for ranging, the anchors send messages to synchronize their clocks, but are otherwise passively receiving the ranging frames) and navigation (the anchors are the ones sending frames used for ranging, the mobile device is potentially entirely passive and thus invisible to the infrastructure) makes sense in a specialized scenario. However, the real world is often more complex. A department store, for example,

**Figure 8.**

*802.15.4a TDoA Mode 2, 802.15.4z uplink TDoA.*

may want to offer ultra-accurate navigation services for its customers, but also track goods and staff in the store. A smartphone might be in the hands of either side (customer or staff). Additionally, the store may want to ensure customer anonymity or may request permission to identify the users using the navigation service. This is where the Standards (802.15.4a and 802.15.4z) stop and where other organizations, like FiRa, define common use cases and specifications among vendors so that the anchors and the mobile device can recognize their operating scenario in the field.

In most cases, the specifications recognize that UWB is one component of a more complex system that includes an operating system and possibly other radio technologies (e.g., Wi-Fi or BLE). There is therefore a possibility of signaling (possibly out-of-band, e.g., with BLE or Wi-Fi) between the infrastructure (where the anchors reside) and the mobile device to indicate the scenario:

In the case of anonymous navigation, the infrastructure merely needs to signal the operating parameters (channel and others).

In the case of asset tracking, the infrastructure may provide a form of identifier (this is store X), preferably verifiable by the mobile device (e.g., a hash), and the tracking parameters (interval between frames and others). The mobile device operating system can then parse the message, compare the request to its configuration, and start emitting ranging frames if it is an asset of the store (while a customer mobile device would simply ignore the request).

In a hybrid case, the infrastructure may want to track the device while offering navigation services. Tracking may be a generic analytic need, to observe general movements in the store without identifying any specific device, or be more specific by tracking individual devices (for example, because some customers with a store-specific app may request coupons when in proximity to some type of merchandise). Here again, the infrastructure can signal one or both scenarios. Some mobile devices may then be configured to ignore the request. Others may be configured to only provide anonymous ranging. In that case, the device may (passively) perform navigation, and at random intervals send short series of ranging frames with a temporary (randomized) identifier. As the series are short and sporadic and the identifier randomized, the infrastructure would not obtain more than small snippets of directions, which would not be very usable individually, but would be sufficient, at scale, to provide an understanding of the general movements of people through the store. Other devices may have a specific store app and be configured by the user to provide an accurate location.

There may also be some cases where device tracking becomes mandatory, for example in hazardous areas. Here again, the infrastructure can signal such zones, requesting all devices in the zone to signal their presence. Mobile devices may then be configured to either respond to such requests or ignore them.

#### **2.5 Angle of arrival (AoA)**

802.15.4a, published in 2007, did not consider angles. However, several proprietary implementations leveraged the angle of arrival of the signal to deduce its likely direction [2, 3]. This determination has several advantages:

Triangulation or multiangulation (leveraging three, or multiple angles) can complement trilateration, or multilateration (leveraging three, or multiple distances). This point will be explained further in the next section.

When the distance to a single device is evaluated over multiple samples, the observation of the matching angles allows the system to determine whether the signal direction is stable. In an ideal LoS scenario, the source sends a series of frames (and associated pulses) that all reach the receiver with the same intensity and from the same direction. In an indoor nLoS scenario, some frames may reach the receiver through an LoS path (and their power level and angle can be measured), while others reflect on obstacles; the LoS signal may be too weak to be detected, and the frames may therefore reach the receiver with different power levels and angles. By comparing the angles of the pulses from one frame to the next, it is possible to conclude whether the channel is stable and LoS, or unstable and/or nLoS. Although leveraging this piece of information in real time may be difficult, a properly designed system may relate the change of angle to the change in the calculated range value and deduce the most likely angle of arrival for the LoS component, if it can be found.

While UWB implementations started leveraging the angle of arrival (under the name Phase Difference of Arrival, PDoA) as soon as 802.15.4a was published, other radio technologies also began, in the same period, to consider the angle of arrival, either at the protocol-definition level (e.g., BLE) or in practical implementations (e.g., Wi-Fi).

This uncoordinated development led to an imprecise terminology that still confuses researchers today. In its most common implementation, UWB (along with BLE or Wi-Fi) considers the angle of a single signal received on two (or more) antennas of a single receiver. The frequency of the signal is known, and consequently its wavelength. For example, UWB channel 9 has its center frequency *f* set to 7987.2 MHz, and its wavelength (*c/f*, where *c* is the speed of light, usually written *λ*) is therefore approximately 0.0375 m. By observing the point in the wave cycle (the phase) at which one antenna receives the signal and comparing it to the phase at which another antenna (whose distance to the first antenna is known) receives the same signal, it is then a matter of basic trigonometry to deduce the incident angle of that signal (**Figure 9**).

Several radio technologies call this angle the "signal angle of arrival". Some UWB experts, however, call it "phase difference of arrival" (PDoA), because it is obtained by comparing the phase between antennas. Unfortunately, several other technologies use PDoA for the observation of a signal on a single antenna, comparing the phase of one primary component to the phase of one or more other components (e.g., subcarriers, reflections, etc.) [4]. Still other technologies use PDoA for the difference in phase of two different signals received on a single antenna [5]. This variability in terminology means the term PDoA is no longer preferred, and AoA may be a safer choice when in doubt.
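The trigonometry described above can be sketched in a few lines, assuming a two-antenna receiver and a far-field (plane-wave) signal; the function name and values are illustrative:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def aoa_from_phase(delta_phi_rad, antenna_spacing_m, freq_hz):
    """Estimate the incident angle (radians) of a plane wave from the
    phase difference measured between two antennas.

    delta_phi_rad     : phase difference between the two antennas (rad)
    antenna_spacing_m : distance between the antennas (m), ideally <= lambda/2
    freq_hz           : carrier frequency (Hz)
    """
    lam = C / freq_hz                       # wavelength = c / f
    # Path-length difference implied by the phase offset
    path_diff = delta_phi_rad * lam / (2.0 * math.pi)
    # Basic trigonometry: sin(theta) = path_diff / antenna spacing
    return math.asin(path_diff / antenna_spacing_m)

# UWB channel 9: f = 7987.2 MHz, so lambda is approximately 0.0375 m
lam9 = C / 7987.2e6
theta = aoa_from_phase(math.pi / 4, lam9 / 2, 7987.2e6)
print(round(math.degrees(theta), 1))  # a pi/4 phase shift on a lambda/2 array -> 14.5
```

Note that the arcsine is ambiguous between the front and back half-planes, which is one reason practical arrays use more than two antennas.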

Because consideration of the angle became, in the 2010s, an active contributor to many radio technologies seeking location accuracy, 802.15.4z integrates these values. The standard does not define how to measure the angle and merely observes that a system may be able to calculate its value. In all the two-way ranging techniques considered in 802.15.4z, the initiator can request, as an option in the ranging frame, the AoA (azimuth and elevation) at which the frame was received by the responder. The responder indicates these elements (in radians, with a possible range of [-*π*, *π*] for the azimuth and [-*π*/2, *π*/2] for the elevation) in the RMI IE in the response.

**Figure 9.** *The angle of arrival determination.*

#### **2.6 Protection of UWB exchanges**

The UWB frames are not encrypted, and an observer could read the timestamps they carry. By observing the full TWR exchange, the observer may be able to deduce the distance between two ranging UWB devices. The issue is not critical in many settings where the use case is navigation or asset tracking. However, UWB TWR is also used for accurate ranging in security-sensitive cases, for example, to automatically open a door when a user is near, or to unlock a car. In these cases, an attacker could replay or hijack one side's ranging frames, lead the other side to conclude that the distance is shorter than the real physical distance between the initiator and the responder, and thus open the door or unlock the car while the user's device is still far away.

802.15.4z designed two mitigation techniques against such an attack. The first is mutual authentication between the initiator and the responder, which leads to encrypted exchanges. This mechanism protects from eavesdropping and is well-adapted to scenarios where the initiator and responder know each other (e.g., a car and its key fob). However, an attacker can still hijack the exchanged frames at the PHY level and lure the receiver into concluding on a short ToF. A second mechanism was designed to mitigate this risk, in the form of a Scrambled Timestamp Sequence (STS) field. The STS can be inserted in the physical header of the ranging frames and consists of pseudo-randomized pulses organized in blocks (one to four blocks of 512 chips [≈1 *μ*s], or 128 bits, each, separated by silences, or 'gaps'). The STS relies on keys that are exchanged in advance between the initiator and the responder, and on nonces (numbers used once), from which the transmitter generates a unique value used as the STS to timestamp the ranging frame. The receiver, having the same information, generates the same value and accepts the ranging frame only if its STS matches the receiver's generated value. Because of the large number of pulse sequences that can be generated for the STS field, the probability that an attacker could generate the right sequence, and thus lure the receiver into concluding that the relayed frame did originate from the expected sender at the expected distance, is minuscule.

Although the STS scheme is not expected to be impossible to attack, it has proven to be robust [6].
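The generate-and-compare logic can be illustrated with a toy sketch. 802.15.4z specifies an AES-128-based deterministic random bit generator for the STS; the SHA-256 derivation below is a stand-in chosen only to keep the example dependency-free, and all names and values are hypothetical:

```python
import hashlib

def sts_bits(key: bytes, nonce: bytes, n_bits: int = 512) -> str:
    """Illustrative stand-in: derive a deterministic pseudo-random pulse
    sequence from a pre-shared key and a nonce. (The real STS uses an
    AES-128-based DRBG; SHA-256 here only keeps the sketch self-contained.)"""
    out = b""
    counter = 0
    while len(out) * 8 < n_bits:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    bits = "".join(f"{byte:08b}" for byte in out)
    return bits[:n_bits]

key, nonce = b"pre-shared-key", b"nonce-0001"
tx_sequence = sts_bits(key, nonce)   # transmitter timestamps the frame with this STS
rx_expected = sts_bits(key, nonce)   # receiver regenerates the same value locally
print(tx_sequence == rx_expected)    # True: frame accepted
print(sts_bits(key, b"nonce-0002") == rx_expected)  # False: stale/forged STS rejected
```

An attacker without the key must guess the whole pseudo-random sequence, which is why the success probability stays minuscule.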

#### **3. Finding a location with UWB**

Once an estimation of the distance (and possibly angle) between a mobile device and several anchors has been found, the next step is to compare the values and deduce the location of the object. The terminology in this field tends to distinguish the position, which is the conclusion of the comparison of distances (or angles) to known points, from the location, which is the position projected on a known set of references. Thus, for example, one would say that the position of a particular object is at the intersection of three circles of radii 5.7, 8.1, and 7.4 meters from anchors A, B, and C respectively. Then, once the position of the object and the anchors have been projected onto a map, one could declare that the location of the object is on the second floor of the hotel, in the upper left corner of room 242.

*Ranging and Positioning with UWB DOI: http://dx.doi.org/10.5772/intechopen.109750*

Finding the location of the mobile object thus supposes that the location of the anchors is known. For the asset tracking use case, the RTLS server can be configured with such information. In the navigation case, the 802.15.4 Standard does not describe how the mobile object should learn the anchors' location. Conceptually, such information may be embedded in the UWB frame payload. However, the anchors and the mobile device would need to agree on the location format. Industry certifications like the one driven by FiRa define a common format and suggest that the location information should be expressed out-of-band (for example using Bluetooth Low Energy [BLE]).

Regardless of the use case, the first step toward determining location is to compute a position.

#### **3.1 Establishing a position from ranges**

#### *3.1.1 Localization in the TWR case*

In the TWR cases, ranges are evaluated between a mobile device and a set of anchors. Without angle information, determining that the mobile is *x* meters away from an anchor places the mobile on a circle of radius *x*, centered on that anchor. On a plane (i.e., supposing an ideal 2D environment), two circles intersect at two points (when they intersect at all), and three anchors are needed to hope for a unique solution (if all three circles intersect). The action of finding the position from three distances is called trilateration. In three dimensions, the circles become spheres, and three spheres intersect at two points, thus causing position uncertainty. If all anchors are on the same plane, the two intersection points are typically above and below each other. When the position then needs to be projected onto a 2D map, this uncertainty may be acceptable. When 3D representations are needed, and no assumption can be made about the object's height, four anchors or more are needed to hope to obtain a single intersection point (multilateration).

In an ideal world, the intersection can be simply calculated by solving a set of equations representing the distance of the object to each anchor. If $m = (x_m, y_m, z_m)$ is a mobile object of unknown coordinates, and $a_i = (x_i, y_i, z_i)$ is an anchor of known coordinates, the distance between the mobile and the anchor is expressed by the straightforward Euclidean distance equation:

$$d\_i^2 = (x\_i - x\_m)^2 + (y\_i - y\_m)^2 + (z\_i - z\_m)^2 \tag{4}$$

Eq. (4) can be re-written in an alternate form:

$$\left(x\_m^2 + y\_m^2 + z\_m^2\right) - 2\left(x\_i x\_m + y\_i y\_m + z\_i z\_m\right) = d\_i^2 - \left(x\_i^2 + y\_i^2 + z\_i^2\right) \tag{5}$$

As more anchors (*j*, *k*, etc.) are added, it becomes possible to obtain linear equations in $x_m$, $y_m$, and $z_m$ by subtracting equations pairwise. When four such equations are available, the system is overdetermined and a single best-fit solution can be found. However, even if UWB allows for high ranging precision, each measurement is not mathematically perfectly accurate (the experimenter observes an estimation of the range, $\hat{d}_i$, that differs from the true range by an unknown error $\epsilon_i$, so that $\hat{d}_i = d_i + \epsilon_i$), and in the real world, a perfect solution cannot be found.
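The pairwise subtraction can be sketched as follows: subtracting the first anchor's instance of Eq. (5) from the others yields a linear system that a least-squares solver can handle directly. The coordinates below are hypothetical:

```python
import numpy as np

def multilaterate(anchors, d):
    """Linearized multilateration: subtract the first anchor's range
    equation (Eq. 5) from the others and solve the resulting linear
    system for the mobile position in a least-squares sense.

    anchors : (N, 3) array of known anchor coordinates
    d       : (N,) array of measured ranges to each anchor
    """
    anchors = np.asarray(anchors, float)
    d = np.asarray(d, float)
    a0, d0 = anchors[0], d[0]
    # 2*(a_i - a_0) . m = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2
    A = 2.0 * (anchors[1:] - a0)
    b = d0**2 - d[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2)
    # Overdetermined when N > 4; lstsq returns the best-fit position
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

anchors = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10), (10, 10, 3)]
true_m = np.array([3.0, 4.0, 1.5])
ranges = [np.linalg.norm(true_m - np.array(a)) for a in anchors]
print(np.round(multilaterate(anchors, ranges), 3))  # recovers the true position [3, 4, 1.5]
```

With noisy ranges, `lstsq` returns the position minimizing the squared residuals of the linearized system, which is a common starting point for the iterative methods discussed next.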

#### *3.1.2 Localization in the TDoA case*

The same type of issue can be observed in the TDoA case. With OWR, the detecting side never measures distances directly. Instead, the observation is merely that a given mobile signal arrives at one anchor *n* μs earlier than at another anchor (or, equivalently, that the mobile received the signal from one anchor *n* μs earlier than the signal from another anchor). As the speed of light and of RF signals is known, this observation is translated into the conclusion that one anchor is *x* cm closer than the other, but the distance itself is not known. This relationship translates into a hyperbolic line between anchors (**Figure 10**). TDoA measurements between two anchors form one hyperbola.

Conceptually, three anchors result in three hyperbolae. However, in the real world, the TDoAs are compared against a primary anchor (e.g., anchor 1), and the hyperbola that compares anchor 2 to anchor 3 adds no independent information. Therefore, three anchors result in two usable hyperbolae, which intersect at a single point on a 2D plane. For 3D determination, at least four anchors are therefore needed. The comparison is then translated into a matrix of compared distances, from which the true Euclidean distances can be found iteratively [4]. Here again, distances in the real world are noisy, and no perfect solution can be found. TDoA also suffers from the additional difficulty that, contrary to circles, hyperbolae are asymptotic to a line. Practically, this means that, as the observed compared distance deviates from the ground truth, the position error grows faster (up to infinity) than in the TWR counterpart.
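Under these definitions, each TDoA measurement constrains the mobile to the set of points where the range difference to an anchor pair equals the speed of light times the measured time difference. A sketch of the corresponding residual function (names and coordinates illustrative):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tdoa_residuals(m, anchors, tdoa_s):
    """Residuals of the hyperbolic equations: for each secondary anchor i,
    |m - a_i| - |m - a_0| should equal c * TDoA_i, where TDoA_i is the
    arrival-time difference measured against the primary anchor a_0."""
    m = np.asarray(m, float)
    anchors = np.asarray(anchors, float)
    dists = np.linalg.norm(anchors - m, axis=1)
    return (dists[1:] - dists[0]) - C * np.asarray(tdoa_s, float)

anchors = [(0.0, 0.0), (20.0, 0.0), (0.0, 20.0), (20.0, 20.0)]
true_m = np.array([6.0, 9.0])
d = np.linalg.norm(np.asarray(anchors) - true_m, axis=1)
tdoa = (d[1:] - d[0]) / C          # ideal measured arrival-time differences
print(tdoa_residuals(true_m, anchors, tdoa))  # all ~0 at the ground-truth position
```

A solver (least squares, gradient descent, etc.) then searches for the *m* that drives these residuals toward zero; with noise, the asymptotic shape of the hyperbolae makes the residual surface flatter, and thus the solution less stable, than in the TWR case.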

#### **3.2 Least square solutions**

To solve the issue of reconciling noisy distances, a natural approach is to attempt to determine the measurement errors and minimize them. Mathematically, with *N* anchors, this requirement is expressed as

$$\min\_{m} \sum\_{i=1}^{N} \left( ||m - a\_i|| - \hat{d}\_i \right)^2 \tag{6}$$

Naturally, the experimenter does not know *m* but can find its coordinates iteratively, with techniques like gradient descent, where the derivative of Eq. (6) with respect to each component of $m = (x_m, y_m, z_m)$ is calculated, and an iterative process then updates each component until the *m* that minimizes Eq. (6) (i.e., where the derivative vanishes) is found.

**Figure 10.** *Hyperbolae formed from OWR measurements.*
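A minimal sketch of this gradient-descent minimization of Eq. (6), assuming noisy range estimates to four 2D anchors (all names and values illustrative):

```python
import numpy as np

def ls_position(anchors, d_hat, m0, lr=0.05, iters=500):
    """Minimize Eq. (6), sum_i (||m - a_i|| - d_hat_i)^2, by gradient
    descent from an initial guess m0."""
    anchors = np.asarray(anchors, float)
    d_hat = np.asarray(d_hat, float)
    m = np.asarray(m0, float).copy()
    for _ in range(iters):
        diff = m - anchors                    # (N, dim) vectors anchor -> m
        dists = np.linalg.norm(diff, axis=1)  # ||m - a_i||
        # Gradient of Eq. (6): 2 * sum_i (||m - a_i|| - d_hat_i) * (m - a_i) / ||m - a_i||
        grad = 2.0 * np.sum(((dists - d_hat) / dists)[:, None] * diff, axis=0)
        m -= lr * grad
    return m

anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]
true_m = np.array([3.0, 7.0])
rng = np.random.default_rng(0)
d_hat = [np.linalg.norm(true_m - np.array(a)) + rng.normal(0, 0.05)
         for a in anchors]                    # noisy range estimates d_hat_i
print(np.round(ls_position(anchors, d_hat, m0=[5.0, 5.0]), 2))  # close to [3, 7]
```

In practice, Gauss-Newton or Levenberg-Marquardt steps converge faster, but the gradient form above follows Eq. (6) most directly.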

This least square (LS) method has been well established for noisy distance resolution when the errors to all anchors are comparable. In some environments, however, some anchors are in direct LoS to the measuring mobile device, while some others are behind obstacles. In this type of scenario, fusion techniques appear, where LS is complemented with steps that either estimate the nLoS deviation [7] to allow anchorto-anchor comparison or use fingerprinting techniques [8] to place the mobile at the position of best likelihood.

#### **3.3 Bayesian framework solutions**

One key aspect of the LS approaches is that they examine each measurement set individually. However, it may be tempting to reason that a mobile device is by nature moving, and that the position at time $t_{n+1}$ may be related to the position at time $t_n$. Therefore, localization determination often borrows from Bayesian filtering techniques, where the state of a dynamic system is evaluated from noisy measurements and compared to the conclusion of previous measurements. There are naturally many techniques falling into that family, and this chapter only underlines the most commonly used ones.

#### *3.3.1 Kalman filter*

Among the techniques leveraging the Bayesian framework, the most common is the Kalman filter (KF). The technique has been used successfully since the 1960s for trajectory and position estimation. In essence, the standard KF approach compares the estimation at time $t_n$ with the prediction built from past observations and predictions [9]. When the new observation is noisy, the algorithm trusts the prediction more (assigns it a higher weight). When the new observation's noise is small, the algorithm assigns a higher weight to that observation than to the prediction.
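This weighting behavior can be illustrated with a one-dimensional update step (a toy sketch, not a full filter; all values hypothetical):

```python
def kf_update(x_pred, p_pred, z, r):
    """One scalar Kalman update step: blend the predicted state x_pred
    (with variance p_pred) with a new observation z (with noise
    variance r). The Kalman gain k is the weight given to the observation."""
    k = p_pred / (p_pred + r)      # gain tends to 1 when observation noise is small
    x = x_pred + k * (z - x_pred)  # corrected estimate
    p = (1.0 - k) * p_pred         # reduced uncertainty after the update
    return x, p, k

# Low-noise observation: the filter trusts the measurement
x, p, k = kf_update(x_pred=10.0, p_pred=4.0, z=12.0, r=0.1)
print(round(k, 3), round(x, 2))   # 0.976 11.95 -> pulled close to the observation

# Very noisy observation: the filter trusts the prediction
x, p, k = kf_update(x_pred=10.0, p_pred=4.0, z=12.0, r=100.0)
print(round(k, 3), round(x, 2))   # 0.038 10.08 -> stays near the prediction
```

The full filter applies this same gain logic in matrix form, alternating predict and update steps over the position and velocity state.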

One key requirement of the KF is that the underlying equations must be linear. The noise statistics must also have a Gaussian distribution. For location estimation, the distance equations are commonly not linear, and researchers have proposed several extensions to the KF for these cases. The most common variants are the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). The EKF handles the nonlinearity by linearizing it around the current estimate with a first-order Taylor expansion, while the UKF instead propagates a small set of deterministically chosen sample points (sigma points) through the nonlinear functions (the unscented transform). EKF is commonly used when the noise figure is small (variance differences are not large, mostly in LoS environments). UKF appears often in scenarios where the noise is high (environments dominated by nLoS scenarios).

The KF family has received the favor of implementers of indoor localization algorithms because, despite its complexity, the KF relies on matrix operations that most operating systems integrate natively. Thus, the computation can be done efficiently on most systems, in near real-time, and KF techniques are widely successful for UWB-based localization [10–12]. However, you should be aware of several limitations:

The KF methods require an initial position estimation, which in most cases is not available (so a random or arbitrary value is fed into the system). The algorithm then converges as more observations are made. The pace of these observations has a direct influence on the convergence speed. In other words, if your UWB method observes one position per second, the convergence to an accurate-enough position will be much slower than if your system generates 50 measurements per second. A common implementation practice is therefore to ignore the first *n* estimations to give the system time to converge.

The KF methods are sensitive to sudden changes. By their very nature, they assign a low weight to measurements that are very far from the estimated positions. But these positions are based on past observations. A direct consequence is that if the trajectory has been linear and the mobile device suddenly changes direction (for example, the user turns a corner in a corridor), the KF methods tend to overshoot, estimating that the new observation is likely inaccurate. On a map, you would then see the device trajectory continuing for a little while (possibly through a wall) before slowly turning and catching up to the user's real position. Here again, the pace of the sampling dictates the duration and span of this negative effect.

#### *3.3.2 Particle filters*

Particle filter (PF) is the name given to a family of techniques that implement the Monte Carlo approach within the Bayesian framework [13]. It is well adapted to non-Gaussian, non-linear estimations, and thus often used for indoor ranging problems. This is because, in LoS conditions, the observations may display noise forming a Gaussian distribution around some value, but in nLoS conditions, walls and multipath tend to inflate the observed values, causing a long tail of distance overestimations that negates the Gaussian assumption.

At its core, the PF incorporates two models. A motion model reads a set of values and deduces a possible state. In the case of indoor location, this first set may be obtained, for example, from odometry (readings from the device's internal sensors) to estimate the device's new position. In most cases, this technique alone is not sufficient to compute the path, because the sensors operate at a small scale and their accuracy suffers at a larger scale. For example, suppose a smartphone gyroscope estimates an 81-degree left turn, but the user turned 92 degrees left. After walking 10 more meters, the system sees the device about 2 meters to the right of its real position. As the user moves, the gyroscope, accelerometer, and all the other sensors receive multiple and frequent inputs, and their errors tend to build on one another. Odometry can therefore be quite accurate at a small scale (it is called a local technique), for small movements, but it is not a great tool to compute a trajectory at a large scale unless the real position of the mobile is rectified at intervals, using another source of truth (called a global technique) that may not be very accurate at a small scale but provides better visibility at these larger scales. Outdoors, that source can be GPS. Indoors, a ranging method is a great second source.

PF, therefore, includes, in addition to the motion model, a sensor model that measures the distance to some reference points, with the goal of re-positioning the mobile device from this second source. Naturally, the second source alone is not sufficient (otherwise there would be no need for the first model), as measurements are also noisy. So, PF operates by collecting a set of observations (in our case, distances to anchors), that are called particles, and establishing their probability density function (PDF) to determine which of the observations are most likely to be correct. PF is not very well-adapted to high-dimension problems, but works well for indoor localization, and is therefore widely used with UWB TWR [14, 15], either alone, or in combination (or in comparison) with KF [16, 17].
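A compact 2D sketch of this two-model loop, with a trivial motion model and Gaussian range likelihoods as the sensor model (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def pf_step(particles, weights, motion, anchors, ranges, sigma=1.0):
    """One particle-filter cycle: motion model (propagate particles with
    noise), sensor model (weight by the likelihood of the measured
    anchor ranges), then resample in proportion to the weights."""
    # Motion model: odometry displacement plus process noise
    particles = particles + motion + rng.normal(0, 0.1, particles.shape)
    # Sensor model: Gaussian likelihood of each measured anchor range
    for a, r in zip(anchors, ranges):
        d = np.linalg.norm(particles - a, axis=1)
        weights = weights * np.exp(-0.5 * ((d - r) / sigma) ** 2)
    weights = weights / weights.sum()
    # Resampling concentrates particles where the likelihood is high
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

anchors = np.array([(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)])
true_m = np.array([4.0, 6.0])
ranges = np.linalg.norm(anchors - true_m, axis=1)   # ideal UWB range measurements
particles = rng.uniform(0, 10, (2000, 2))           # initial belief: anywhere in the room
weights = np.full(2000, 1.0 / 2000)
for _ in range(5):
    particles, weights = pf_step(particles, weights, np.zeros(2), anchors, ranges)
print(np.round(particles.mean(axis=0), 1))          # mean settles near [4, 6]
```

In a real system, `motion` would come from the odometry readings at each step, and the range likelihood would use a skewed (long-tailed) model to account for nLoS overestimation.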

#### *3.3.3 Machine learning methods*

The methods examined so far rely on the laws of physics to find the best range estimation for each anchor and deduce the best position from the combination of ranges available. With multiple samples from multiple anchors, the number of parameters becomes large enough that statistical methods can be successfully substituted for physical methods. These complementary approaches help address two types of issues:

nLoS detection: LoS measurements are closer to the ground-truth distance than nLoS measurements. Detecting nLoS conditions (and the stretch they induce on the measured distance) has been an active field of research, where unsupervised techniques can dramatically help group similar nLoS scenarios [18, 19] and reduce the effect of that stretch.

Insufficient contributions: when there are not enough anchors to range against, or when they are all nLoS, supervised techniques allow the operator to sample measurements in different known locations, then deduce the location matching a new set of measurements by comparing it to the sample values. This technique is commonly called fingerprinting and is used on its own [20], or in combination with other techniques [21].
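As a simple illustration, fingerprinting can be reduced to a nearest-neighbor search over surveyed measurement vectors (all values hypothetical):

```python
import numpy as np

def knn_fingerprint(db_features, db_positions, measurement, k=3):
    """Fingerprinting: compare a new measurement vector (e.g., ranges to
    a fixed anchor set) against surveyed samples and average the
    positions of the k closest matches."""
    db_features = np.asarray(db_features, float)
    db_positions = np.asarray(db_positions, float)
    dist = np.linalg.norm(db_features - np.asarray(measurement, float), axis=1)
    nearest = np.argsort(dist)[:k]          # indices of the k closest fingerprints
    return db_positions[nearest].mean(axis=0)

# Surveyed database: measured range vectors at known calibration points
db_features = [(2.0, 8.0), (3.0, 7.0), (8.0, 2.0), (7.0, 3.0)]
db_positions = [(1.0, 1.0), (2.0, 2.0), (8.0, 8.0), (7.0, 7.0)]
pos = knn_fingerprint(db_features, db_positions, measurement=(2.4, 7.6), k=2)
print(pos)  # averages the two closest calibration positions: [1.5, 1.5]
```

Production systems typically weight the neighbors by similarity and use richer feature vectors (CIR samples, RSS from several radios), but the comparison principle is the same.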

#### **4. Conclusion**

This chapter examined the evolution of UWB Standards for ranging. First defined in the IEEE 802.15.4a amendment in 2007, at a time when other groups also claimed ultra-wideband transmissions, UWB initially focused on the simple case of TWR, where one initiator would range against one responder. The integration into a larger localization solution implied a static configuration of roles and relied on implementers to fill the elements undefined in the Standard.

As UWB proved an efficient technology for accurate ranging, multiple proprietary implementations appeared that leveraged the basic tools defined in the protocol, but also added improvements and new modes to better fulfill the different location use cases. In 2020, IEEE 802.15.4z integrated many of these elements, to better address the challenges of TWR implementation, and account for the most popular augmentations, namely TDoA and AoA.

The Standard defines elements of the Physical and the Data Link layers and is silent on how UWB should be used in an end-to-end localization solution. It is therefore not sufficient for practical implementation. Organizations like FiRa integrate the IEEE protocol into a larger landscape, addressing the various use cases and the required communication structure above the two bottom layers.

This combination has made UWB very successful for localization with ultra-high accuracy. The techniques of converting a series of ranges, angles, or time of arrival differences into a position are not specific to UWB. However, the precision allowed by the structure of the UWB signal makes it a prime candidate to solve complex indoor navigation and asset localization problems, especially in the world of robotics and the Internet of Things (IoT).

The journey is far from over. IEEE 802.15.4z assumes that many elements required by the ranging exchange are sent out-of-band, or they consume in-band airtime that could be used to perform more or better ranging. There is therefore still a need to refine the technique and to integrate into the standard more tools that simplify the communication or perfect the accuracy of the obtained ranges. The IEEE 802.15.4ab task group has been formed to tackle this task, with the ambition to complete its work by the end of 2025.

#### **Author details**

Jerome Henry Cisco Systems, Research Triangle Park, USA

\*Address all correspondence to: jhenry@ieee.org

© 2023 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### **References**

[1] IEEE. 802.15.4a. Part 15.4: Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specifications for Low-Rate Wireless Personal Area Networks (WPANs), Amendment 1: Add Alternate PHYs. New York. 2007

[2] IEEE. 802.15.4z. Part 15.4: Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specifications for Low-Rate Wireless Personal Area Networks (WPANs), Amendment 1: Enhanced Ultra Wideband (UWB) Physical Layers (PHYs) and Associated Ranging Techniques. New York. 2020

[3] Malajner M, Planinšič P, Gleich D. UWB ranging accuracy. 2015 International Conference on Systems, Signals and Image Processing (IWSSIP). London. UK. 10-12 Sept. 2015. pp. 61-64. DOI: 10.1109/IWSSIP.2015.7314177

[4] Sackenreuter B, Hadaschik N, Faßbinder M, Mutschler C. Lowcomplexity PDoA-based localization. 2016 International Conference on Indoor Positioning and Indoor Navigation (IPIN). Madrid. Spain. 4-7 Oct. 2016. pp. 1-6. DOI: 10.1109/IPIN.2016.7743692

[5] Naz A, Asif HM, Umer T, Kim B-S. PDOA based indoor positioning using visible light communication. IEEE Access. **6**:7557-7564

[6] UWB Secure Ranging in FiRa [Internet]. Available from: https://www. firaconsortium.org/sites/default/files/ 2022-09/FIRA-Whitepaper-UWB-Secure-Ranging-August-2022\_0.pdf

[7] Yu K, Wen K, Li Y, Zhang S, Zhang K. A novel NLOS mitigation algorithm for UWB localization in harsh indoor environments. IEEE Transactions on Vehicular Technology. 2019;**68**(1): 686-699

[8] Poulose A, Eyobu OS, Kim M, Han DS. Localization error analysis of indoor positioning system based on UWB measurements. 2019 Eleventh International Conference on Ubiquitous and Future Networks (ICUFN). Zagreb. Croatia. 2-5 Jul. 2019. pp. 84-88. DOI: 10.1109/ICUFN.2019.8806041

[9] Welch GF. Kalman filter. In: Computer Vision. Cham (SW): Springer; 2020. DOI: 10.1007/978-3- 030-03243-2\_716-1

[10] Fu J, Fu Y, Xu D. Application of an adaptive UKF in UWB indoor positioning. 2019 Chinese Automation Congress (CAC). Hangzhou. China. 22-24 Nov. 2019. pp. 544-549. DOI: 10.1109/CAC48633.2019.8996692

[11] Feng D, Wang C, He C, Zhuang Y, Xia X. Kalman-filter-based integration of IMU and UWB for high-accuracy indoor positioning and navigation. IEEE Internet of Things Journal. 2020;**7**(4): 3133-3146

[12] Cano J, Chidami S, Ny J. A Kalman filter-based algorithm for simultaneous time synchronization and localization in UWB networks. 2019 International Conference on Robotics and Automation (ICRA). Montreal. Canada. 20-24 May 2019. pp. 1431-1437. DOI: 10.1109/ ICRA.2019.8794180

[13] Elfring J, Torta E, van de Molengraft R. Particle filters: A hands-on tutorial. Sensors. 2021;**21**(2):438

[14] Yang W, Zhang W, Li F, Shi Y, Nie F, Huang Q. UAPF: A UWB aided particle filter localization for scenarios with few features. Sensors. 2020;**20**(23):6814

[15] Li Z, Wu J, Kuang Z, Zhang Z, Zhang S, Dong L, et al. Moving target tracking algorithm based on improved resampling particle filter in UWB environment. Wireless Communications and Mobile Computing. 2022;**2022**: 9974049

[16] Petukhov N, Zamolodchikov V, Zakharova E, Shamina A. Synthesis and comparative analysis of characteristics of complex Kalman filter and particle filter in two-dimensional local navigation system. 2019 Ural Symposium on Biomedical Engineering, Radioelectronics and Information Technology (USBEREIT). Yekaterinburg. Russia. 25-16 Apr. 2019. pp. 225-228. DOI: 10.1109/ USBEREIT.2019.8736595

[17] Li X, Wang Y, Liu D. Research on extended Kalman filter and particle filter combinational algorithm in UWB and foot-mounted IMU fusion positioning. Mobile Information Systems. 2018;**2018**: 1587253

[18] Krishnan S, Xenia Mendoza Santos R, Ranier Yap E, Thu Zin M. Improving UWB based indoor positioning in industrial environments through machine learning. 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV). Singapore. 18-21 Nov. 2018. pp. 1484-1488. DOI: 10.1109/ ICARCV.2018.8581305

[19] Fan J, Awan A. Non-line-of-sight identification based on unsupervised machine learning in ultra wideband systems. IEEE Access. 2019;**7**: 32464-32471

[20] Che F, Ahmed A, Ahmed QZ, Zaidi S, Shakir M. Machine learning based approach for indoor localization using ultra-wide bandwidth (UWB) system for industrial internet of things (IIoT). 2020 International Conference on UK-China Emerging Technologies

(UCET). Glasgow. UK. 20-21 Aug. 2020. pp. 1-4. DOI: 10.1109/UCET51115.2020. 9205352

[21] Poulose A, Dong S. UWB indoor localization using deep learning LSTM networks. Applied Sciences. 2020; **2020**(10):62-90

#### **Chapter 5**

## Toward UWB Impulse Radio Sensing: Fundamentals, Potentials, and Challenges

*Jonas Ninnemann, Paul Schwarzbach and Oliver Michler*

#### **Abstract**

Radio sensing is a rapidly emerging research field. It focuses on designing an integrated communication system that can also perform localization and radar functionalities, sharing the same transmit signals and potentially the same hardware. Ultra-wideband (UWB) impulse radio is a promising technology for radio sensing because it offers a high range resolution and direct access to the channel impulse response (CIR) to observe the multipath components (MPCs) of the wideband channel caused by scattering at target objects. This approach enables a wide range of functionalities and applications, especially in the field of mobility and transportation. The foundation is given by the signal propagation and channel modeling of the UWB channel, which is briefly revisited in this chapter. Based on the CIR and estimated MPCs, the target object can be localized like a multistatic passive radar. The influence of geometry in a passive target localization system is studied by calculating the geometric dilution of precision (GDOP). In addition to passive localization, more tasks and functionalities of radio sensing are briefly introduced, including detection, tracking, imaging, counting, and classification. The chapter concludes with further research directions and challenges in UWB radio sensing, especially for real-world use in the context of mobility applications.

**Keywords:** multipath-assisted radio sensing (MARS), channel model, channel impulse response (CIR), multipath components (MPCs), impulse radio (IR), radio sensing for intelligent transportation systems (ITS)

#### **1. Introduction**

Ultra-wideband (UWB) impulse radio (IR) is extensively researched and used as a technology for indoor positioning systems [1, 2] and short-range communication [3]. Such systems are enabled by key features of the UWB physical layer (PHY) such as high bandwidth, the transmission of very short impulses, and low power. These features lead to a high range resolution and usually dense networks, which in general makes UWB perfectly suitable for radio sensing tasks and applications.

The term radio sensing first emerged in the context of cellular networks and refers to the usage of existing radio signals to passively sense the environment [4–6]. The goal is to perform communication, localization, and radar functionalities by sharing the same transmit signals and potentially the same hardware. The analysis of the radio signal itself enables new features and functions of the communication systems such as localization, tracking, imaging, detection, or classification of passive target objects without the need for dedicated hardware or specialized measurement setups. In the past, this approach was applied to UWB-IR mostly for tracking [7, 8], imaging [9], and people counting [10]. In contrast to such radar systems, radio sensing follows a more integrated approach by combining communication, localization, and sensing functionalities with the same radio signal and hardware, to be more cost and spectrum efficient. This way, the IR could become an integrated part of future communication networks to fulfill sensing tasks [5].

UWB and its IR nature allow direct access to the channel impulse response (CIR) in the time domain as the fundamental signal parameter for channel estimation. The CIR measurement includes information about the different propagation paths of the signal (direct path and echo paths). For passive target localization, the CIR is measured, and the multipath components (MPCs) are extracted to fulfill the different tasks of radio sensing. The high bandwidth of UWB (≥ 500 MHz) allows MPCs in the CIR to be distinguished, even if the time delays of the propagation paths are relatively close together [11].

There are many use cases of radio sensing ranging from smart cities, smart homes, vehicular networks, and health to drones. In terms of mobility applications, radio sensing based on UWB could be helpful and game-changing for intelligent transportation systems (ITS). **Table 1** lists possible use cases of radio sensing for the various modes of transport and tasks of radio sensing.

UWB is a promising technology for implementing these use cases with radio sensing. This chapter therefore transfers radio sensing approaches to UWB for different tasks and functionalities. In particular, it provides the necessary fundamentals and proposes a UWB radio sensing approach based on the multipath channel model. Limitations in terms of range resolution and sensor arrangement are discussed, as well as further research directions and challenges.

The rest of this chapter is organized as follows: Section 2 gives an overview of the research field of radio sensing, the different scientific and technological influences, and the current state of the art. In Section 3, the basics of signal propagation are introduced to derive the wideband multipath channel model. Based on this, a UWB radio sensing approach is described in Section 4 for passive target localization. In addition, the influence of network geometry is investigated for different transceiver constellations, and other tasks of radio sensing are briefly introduced. In Section 5, challenges and further research directions for UWB-based radio sensing are discussed. The chapter concludes with a summary of the key contributions in Section 6.

#### **Table 1.**

*Use cases of UWB radio sensing for intelligent transportation systems.*

#### **2. Taxonomy of radio sensing**

The research field of radio sensing is composed of different scientific directions and is only made possible by the fusion of these influences and ideas. The three main pillars are wireless communication systems, localization based on RF signals, and radar systems. The different terms and concepts in this highly interconnected research field are presented in a word cloud in **Figure 1**.

Wireless communication systems [12, 13] have long been used not only for networking but also for the localization of devices. Especially in the context of indoor positioning systems (IPS) [1, 2], where no global navigation satellite system (GNSS) is available, localization enables new services and applications. To estimate the location of a mobile sensor (tag), a wireless sensor network (WSN) [1] consisting of fixed sensors (anchors) with known positions is placed in the environment. Based on different channel parameters and positioning principles, the position of the mobile tag is estimated. Because the target object or user needs to wear an active sensor, this technique is referred to as active localization. UWB is one technology for such localization systems and is perfectly suited due to its high positioning accuracy.

The same radio signal used for active localization can also be analyzed to enable device-free passive localization (DFPL) [14, 15]. Here, the target object does not need to carry a sensor; instead, the position is estimated by evaluating different channel and propagation effects. DFPL is one possible task of a radio sensing system, but based on wireless radio technology and channel estimation procedures, many tasks can potentially be accomplished: for example, detection of the target object, mapping of the environment, tracking, classification of different scenes, or counting of objects and people.

#### **Figure 1.**

*Taxonomy of radio sensing as a fusion of different research directions in the context of integrated wireless communication systems. New abbreviations are joint communication and sensing (JCAS), frequency-modulated continuous wave (FMCW) radar, and real-time locating systems (RTLS).*

In the past, such tasks were accomplished by dedicated radar systems with specific hardware, spectrum, and techniques for the measurement of radar parameters. In principle, there are two different types of radars: continuous wave (CW) and pulse radar. One specific implementation of a CW radar is the frequency-modulated continuous wave (FMCW) radar. There are different geometrical configurations of radar systems, such as monostatic, bistatic, or multistatic systems. A radar system can detect objects by transmitting a pulse or CW signal toward the target object and analyzing the reflected signal to estimate different signal parameters like the time-of-flight to the object [16].

More recent research focuses on the integration of these dedicated radar systems into cellular communication systems such as 5G/6G. The integration can be achieved at different levels, starting from a better spectral coexistence of radar and communication systems, over uniform hardware, RF frontends, and waveform design, to true perceptive networks in the future. Different terms are used in research to describe such systems, such as Joint Communication and Sensing (JCAS) [5], Integrated Sensing and Communication (ISAC) [17], or radar communication (RadCom) [18]. This integration is only made possible by the trend toward higher frequency spectrum and bandwidth in cellular networks and the resulting higher range resolution for sensing applications.


#### **Table 2.**

*Different approaches and measurement principles for UWB radio sensing.*

*Toward UWB Impulse Radio Sensing: Fundamentals, Potentials, and Challenges DOI: http://dx.doi.org/10.5772/intechopen.110040*

In terms of UWB, the IR is already part of the PHY concept. UWB sends very short pulses over a large bandwidth and uses pulse position modulation to transfer information. This concept enables active localization based on time-of-arrival estimation. Based on the transmitted pulses and the impulse response of the wideband channel, the UWB PHY is perfectly suitable for different sensing tasks. In research, different approaches, algorithms, and use cases for UWB sensing are currently discussed (**Table 2**).

#### **3. Signal propagation and channel model**

#### **3.1 Propagation phenomena**

A radio communication system emits an electromagnetic wave that experiences different propagation phenomena and effects before reaching the receiver (RX). In general, the energy of the radio signal is reduced even in free space, where no obstacle is located between the transmitter (TX) and the RX (**Figure 2**). The Friis transmission model [28] states that the received power *Pr* is given as follows [29]:

$$P\_r = P\_t G\_t G\_r \left(\frac{\lambda}{4\pi d}\right)^2\tag{1}$$

where *Pt* is the transmitted power, and *Gt* and *Gr* are the antenna gains of the TX and RX. The free-space path loss (FSPL) depends on the wavelength *λ* and the traveled distance *d* of the signal.

In addition to the FSPL, the received power is further reduced, and the signal direction is influenced, by three other basic propagation phenomena: reflection, diffraction, and scattering (**Figure 2**). These effects occur when the electromagnetic wave with wavelength *λ* encounters an object of size *A*. Reflections appear when the size of the object is very large compared to the wavelength: *A* ≫ *λ*. The angle of the incident wave with respect to the surface normal is equal to the angle that the reflected wave makes with the same normal. Reflections lead to a decrease in received power due to absorption or transmission of part of the wave energy by the encountered object. Diffraction arises when the wave hits an object with a size on the order of the wavelength (*A* ≈ *λ*) and is explained by the Huygens principle. Scattering is the result of an encounter with a very small object compared to the wavelength: *A* ≪ *λ*. Scattering occurs on objects with rough surfaces, whereby the incident wave is redirected in many directions [29, 30].

**Figure 2.** *Signal propagation and phenomena.*

#### **3.2 Channel effects**

The wireless channel is affected and characterized by the variation of the channel strength or energy level over time and frequency. All effects combined are called fading, and the summed attenuation of all effects is the path loss between the TX and the RX. Fading can mainly be categorized into large-scale fading and small-scale fading, according to **Figure 3**. Large-scale fading characterizes the variations in path loss over distance (FSPL, log-normal), as well as shadowing (slow fading) or even blockage by large objects. These effects are typically frequency independent. Small-scale fading, on the other hand, describes the constructive or destructive interference of multiple signal paths between the TX and RX. This is caused by scattering and generally leads to a time-varying channel [13, 31, 32].

Small-scale fading in particular must be considered for wideband channel models targeting sensing applications. Small-scale fading results in either a flat fading channel or a frequency-selective channel. The coherence bandwidth *Bc* is the bandwidth over which the channel can be regarded as flat. This means all signals passing through the channel experience similar attenuation and phase shifts. The root mean square (RMS) delay spread *τrms* is inversely proportional to *Bc*, which means a larger *τrms* results in a more frequency-selective fading channel. If the signal bandwidth *Bs* is higher than *Bc*, the channel is considered a frequency-selective fading channel. The time interval over which the wireless channel is constant is called the coherence time *Tc* [29, 30].
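The inverse relation between delay spread and coherence bandwidth can be illustrated numerically. The sketch below computes the RMS delay spread of a discrete power delay profile and a common rule-of-thumb estimate *Bc* ≈ 1/(5 *τrms*); the three-path profile and the factor 5 are illustrative assumptions, not values from this chapter:

```python
import numpy as np

def rms_delay_spread(delays_s, powers_lin):
    """RMS delay spread of a discrete power delay profile (PDP)."""
    t = np.asarray(delays_s, dtype=float)
    p = np.asarray(powers_lin, dtype=float)
    mean_delay = np.sum(p * t) / np.sum(p)        # mean excess delay
    second_moment = np.sum(p * t**2) / np.sum(p)  # second moment of the PDP
    return np.sqrt(second_moment - mean_delay**2)

# Hypothetical three-path profile: delays in seconds, linear power weights
tau_rms = rms_delay_spread([0e-9, 50e-9, 120e-9], [1.0, 0.5, 0.1])
B_c = 1.0 / (5.0 * tau_rms)   # rule-of-thumb coherence bandwidth
```

For this profile, *τrms* is a few tens of nanoseconds, so any UWB signal with *Bs* ≥ 500 MHz is far above *Bc* and the channel is strongly frequency selective.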

#### **3.3 Multipath propagation channel model**

The path loss is the basis for empirical channel models, which model the received power at a reference distance according to the carrier frequency and the environment [29]. Empirical models consider different types of propagation environments and are

**Figure 3.** *Types of channel fading effects.*


based on real-world measurements [33]. Flat fading channels are modeled with statistical models. For example, the Rayleigh fading distribution describes a statistical time-varying model for the propagation of electromagnetic waves, while the Rice distribution models a channel with one strong LOS component [29]. These models are based on measurements of the channel statistics for different predefined environment categories but fail to resolve individual propagation paths. In comparison to statistical and empirical models, a deterministic model, in which individual propagation paths are calculated based on the aforementioned channel effects, is more suitable for sensing approaches and applications.

A multipath wideband frequency-selective channel can be modeled as a linear time-varying system. We assume that the attenuation and propagation delay do not depend on the frequency within the coherence bandwidth of the channel. We can generalize the system to an arbitrary input *x*(*t*) and compute the received signal *y*(*t*) as follows:

$$y(t) = \sum\_{i=1}^{N} a\_i(t) x(t - \tau\_i(t))\tag{2}$$

where *N* is the number of different propagation paths with attenuation *ai*(*t*) and propagation delay *τi*(*t*) at time *t* [13].

Because the channel is linear, it can be described by an impulse response *h*(*τ*, *t*) [13]. The CIR of a time-varying multipath channel is given as [29]:

$$h(\tau, t) = \sum\_{i=1}^{N} a\_i(t)\, \delta(\tau - \tau\_i(t)) \tag{3}$$

where *ai*(*t*) is the attenuation and *δ*(*τ* − *τi*(*t*)) the delayed Dirac impulse of the *i*th propagation path. The Fourier-transformed impulse response of the system results in the following frequency response *H*(*f*; *t*) in the frequency domain *f* [13]:

$$H(f;t) \coloneqq \int h(\tau, t)e^{-j2\pi f\tau}d\tau = \sum\_{i=1}^{N} a\_i(t)e^{-j2\pi f\tau\_i(t)}\tag{4}$$

The fading multipath channel is now described by an input/output relation as an impulse response of a linear time-varying system. The system can be interpreted as a linear finite impulse response (FIR) filter and is also referred to as the tapped delay line model. An example of such an FIR-based channel model is illustrated in **Figure 4** with three different reflected paths and the LOS path, resulting in a four-tap FIR filter. Each tap corresponds to a propagation path with an amplitude *ai*(*t*) and a corresponding delay *τi*(*t*).

In the stationary case, where *ai*(*t*) and *τi*(*t*) do not depend on the time *t*, we can model the channel as a usual linear time-invariant (LTI) system with the CIR corresponding to the following equation:

$$h(\tau) = \sum\_{i=1}^{N} a\_i \delta(\tau - \tau\_i) \tag{5}$$

Multipath propagation causes different propagation effects depending on the propagation paths and shadowing. The direct path between the signal TX and RX is referred to as line-of-sight (LOS) propagation. In contrast, an obstructed or reflected

**Figure 4.** *Tapped-delay-line representation of the time-variant multipath channel model.*

transmission path is called non-line-of-sight (NLOS). The term multipath reception applies if the signal reaches the RX via multiple paths caused by different propagation phenomena. This results in a received signal composed of attenuated, delayed, and phase-shifted replicas of the transmitted signal. These components can take different paths in the environment before reaching the RX and are thus called multipath components (MPCs) [30, 32].

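The stationary model of Eq. (5) can be sketched as a tapped delay line operating on sampled signals. In the following minimal example, the amplitudes, delays, and sampling rate are invented for illustration; delays are rounded to whole samples, so sub-sample delays would need interpolation in practice:

```python
import numpy as np

def tapped_delay_line(x, taps, fs):
    """Apply a static multipath channel h(tau) = sum_i a_i * delta(tau - tau_i).

    x    : sampled input signal
    taps : list of (a_i, tau_i) amplitude/delay pairs (delay in seconds)
    fs   : sampling rate in Hz
    """
    y = np.zeros(len(x))
    for a, tau in taps:
        n = int(round(tau * fs))          # delay expressed in samples
        if n < len(x):
            y[n:] += a * x[:len(x) - n]   # shifted, scaled replica of the input
    return y

fs = 2e9                                  # 2 GHz sampling rate (assumed)
x = np.zeros(64)
x[0] = 1.0                                # unit impulse -> output is the sampled CIR
# LOS path plus two echoes with illustrative amplitudes/delays
y = tapped_delay_line(x, [(1.0, 0.0), (0.5, 5e-9), (0.2, 12e-9)], fs)
```

Feeding a unit impulse through the filter returns the sampled CIR itself, which makes the equivalence between the channel model and an FIR filter explicit.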

#### **3.4 Bandwidth and range resolution**

The next step in channel modeling is to convert the time-continuous channel to a time-discrete channel with limited bandwidth *Bs*. In the case of UWB, the input waveform of the channel, i.e., the transmitted signal, is a Gaussian pulse with a pulse duration *Td* = 1/*Bs* and a signal bandwidth of *Bs* ≥ 500 MHz. The rectangular shape in the frequency domain corresponds to the sinc function in the time domain. Based on the sampling theorem, we can sample the CIR following the Whittaker-Shannon interpolation formula [13]:

$$\tilde{h}(\tau) = \sum\_{i=1}^{N} a\_i \, \mathrm{sinc}(B\_s(\tau - \tau\_i)) \tag{6}$$

where sinc(·) denotes the sinc function defined by sinc(*x*) = sin(*πx*)/(*πx*). The sum of normalized sinc functions, one for every tap in the time-continuous signal, allows the reconstruction of the CIR for the band-limited channel.


#### **Figure 5.**

*Qualitative correlation between four different bandwidths and time resolution for band-limited received signals: MPCs are marked as black Diracs, and the signal is modeled as the sum of all sinc functions according to Eq. (6). The axes are scaled with min–max scaling, as this is only a qualitative representation of the different bandwidths.*

The argument of the sinc function in Eq. (6) is proportional to the used bandwidth. A larger signal bandwidth leads to a narrower sinc function; thus, more individual MPCs in the CIR can potentially be resolved [34]. The achievable range resolution Δ*d* for radio sensing is therefore determined by the signal bandwidth *Bs* [35]:

$$
\Delta d = \frac{c}{2B\_s} \tag{7}
$$

where *c* denotes the speed of light.

The range resolution is a key metric for many types of radio sensing tasks and describes the ability to separate different MPCs from each other in the CIR. **Figure 5** shows an example of a channel with three multipath components and the band-limited reconstructed CIRs for different signal bandwidths. For the reconstruction, the signal is interpolated using a sinc kernel according to Eq. (6). The qualitative comparison of different bandwidths shows that individual MPCs cannot be resolved if the bandwidth of the signal is not sufficient.
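The band-limited reconstruction of Eq. (6) and the range resolution of Eq. (7) can be sketched in a few lines. The path amplitudes, delays, and time grid below are illustrative choices; `np.sinc` implements the normalized sinc used in Eq. (6):

```python
import numpy as np

def bandlimited_cir(t, amps, delays, Bs):
    """Band-limited CIR per Eq. (6): sum of sinc kernels of bandwidth Bs."""
    h = np.zeros_like(t, dtype=float)
    for a, tau in zip(amps, delays):
        h += a * np.sinc(Bs * (t - tau))   # np.sinc(x) = sin(pi*x)/(pi*x)
    return h

c = 299_792_458.0                          # speed of light in m/s
delta_d = c / (2 * 500e6)                  # Eq. (7): ~0.3 m at 500 MHz

# Two example paths at 5 ns and 9 ns, reconstructed with Bs = 500 MHz
t = np.linspace(0, 20e-9, 2001)
h = bandlimited_cir(t, [1.0, 0.6], [5e-9, 9e-9], 500e6)
```

Re-running the reconstruction with a smaller `Bs` widens the sinc kernels until the two paths merge into a single lobe, which is exactly the effect illustrated qualitatively in Figure 5.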

#### **4. UWB radio sensing: Approach and tasks**

#### **4.1 Problem formulation**

The channel model and impulse response from Eq. (5) can be translated into the spatial domain. The propagation delay *τi* of the *i*th MPC is the time the electromagnetic wave travels from the TX, bouncing off the scatter point (SP), to the RX. The SP is located somewhere on the target object that is to be localized or detected. The delay can be converted to a distance *di* by multiplying with the speed of light *c* [36]:

$$d\_i = \tau\_i \cdot c = R\_{\mathrm{TX,SP}} + R\_{\mathrm{SP,RX}} + e\_i \tag{8}$$

where *R*TX,SP is the geometric distance between TX and SP, and *R*SP,RX is the distance between SP and RX. The ranging error is represented by *ei*. The measured distance *di* is the length of the propagation path between TX and RX.

For the target localization, we consider a wireless sensor network (WSN) with multiple sensor nodes. We can then estimate *K* different propagation delays *τi*,*k* for the different channels and thus obtain different propagation path lengths *di*,*k*. We assume the WSN consists of an RX at position $\mathbf{X}_{\mathrm{RX}} = [0, 0, 0]^{\intercal}$ and *k* = 1, … , *K* different TX at positions $\mathbf{X}_{\mathrm{TX},k} = [x_{\mathrm{TX},k}, y_{\mathrm{TX},k}, z_{\mathrm{TX},k}]^{\intercal}$. The SP at the target object is located at $\hat{\mathbf{X}}_{\mathrm{SP}} = [\hat{x}_{\mathrm{SP}}, \hat{y}_{\mathrm{SP}}, \hat{z}_{\mathrm{SP}}]^{\intercal}$ (**Figure 6**).

$$\begin{split} R\_{\mathrm{bi},k} &= R\_{\mathrm{TX},k,\mathrm{SP}} + R\_{\mathrm{SP},\mathrm{RX}} \\ &= \sqrt{(x\_{\mathrm{TX},k} - \hat{x}\_{\mathrm{SP}})^2 + (y\_{\mathrm{TX},k} - \hat{y}\_{\mathrm{SP}})^2 + (z\_{\mathrm{TX},k} - \hat{z}\_{\mathrm{SP}})^2} + \sqrt{\hat{x}\_{\mathrm{SP}}^2 + \hat{y}\_{\mathrm{SP}}^2 + \hat{z}\_{\mathrm{SP}}^2} \end{split} \tag{9}$$

The bistatic range *R*bi,*k* can be obtained or estimated from the measured propagation path *di*,*k* and is the sum of the transmitter-target range *R*TX,*k*,SP and the target-receiver range *R*SP,RX, according to Eq. (9) [37].

This is a non-linear optimization problem, and therefore the solution is not directly obvious. One approach is to use iterative methods such as Taylor series linearization [38]. Another option is to estimate the target position with a closed-form solution [37, 39] like spherical-interpolation (SI) [40] or spherical-intersection (SX) [41].

#### **4.2 Non-linear least squares estimation**

To solve the non-linear equation system given in Eq. (9), the function is linearly approximated at a working point using Taylor's theorem. The solution of the resulting linear least-squares approach is used to adjust the position estimate in an iterative process [38, 42]. A first estimate (*x*0, *y*0, *z*0) of the target object position is used to initialize the Taylor series at this point. The innovation (*δx*, *δy*, *δz*) of this first estimate allows the adjustment of the estimate and is calculated as follows:

**Figure 6.** *The geometric configuration of the target and sensors for the localization.*


$$\hat{x}\_{\mathrm{SP}} = x\_0 + \delta\_x \qquad \hat{y}\_{\mathrm{SP}} = y\_0 + \delta\_y \qquad \hat{z}\_{\mathrm{SP}} = z\_0 + \delta\_z \tag{10}$$

The first-order Taylor polynomial *Tk* is used to calculate the linear approximation of Eq. (9) at the first estimation of the target object position:

$$\begin{split} T\_{k} &= R\_{\mathrm{bi},k} + a\_{k,x}\delta\_{x} + a\_{k,y}\delta\_{y} + a\_{k,z}\delta\_{z} \\ &\approx R\_{\mathrm{bi},k}(x\_{0}, y\_{0}, z\_{0}) + \left. \frac{\partial R\_{\mathrm{bi},k}(x,y,z)}{\partial x} \right|\_{x=x\_{0}, y=y\_{0}, z=z\_{0}} \delta\_{x} + \left. \frac{\partial R\_{\mathrm{bi},k}(x,y,z)}{\partial y} \right|\_{x=x\_{0}, y=y\_{0}, z=z\_{0}} \delta\_{y} \\ &\quad + \left. \frac{\partial R\_{\mathrm{bi},k}(x,y,z)}{\partial z} \right|\_{x=x\_{0}, y=y\_{0}, z=z\_{0}} \delta\_{z} \end{split} \tag{11}$$

where *ak*,*<sup>x</sup>*, *ak*,*<sup>y</sup>*, and *ak*,*<sup>z</sup>* are the partial derivatives of the Eq. (9):

$$\begin{array}{rcl} a\_{k,\mathbf{x}} &=& \frac{\mathbf{x}\_{\text{TX},k} - \mathbf{x}\_{0}}{\sqrt{\left(\mathbf{x}\_{\text{TX},k} - \mathbf{x}\_{0}\right)^{2} + \left(\mathbf{y}\_{\text{TX},k} - \mathbf{y}\_{0}\right)^{2} + \left(\mathbf{z}\_{\text{TX},k} - \mathbf{z}\_{0}\right)^{2}}} + \frac{\mathbf{x}\_{0}}{\sqrt{\mathbf{x}\_{0}^{2} + \mathbf{y}\_{0}^{2} + \mathbf{z}\_{0}^{2}}} \\ a\_{k,\mathbf{y}} &=& \frac{\mathbf{y}\_{\text{TX},k} - \mathbf{y}\_{0}}{\sqrt{\left(\mathbf{x}\_{\text{TX},k} - \mathbf{x}\_{0}\right)^{2} + \left(\mathbf{y}\_{\text{TX},k} - \mathbf{y}\_{0}\right)^{2} + \left(\mathbf{z}\_{\text{TX},k} - \mathbf{z}\_{0}\right)^{2}}} + \frac{\mathbf{y}\_{0}}{\sqrt{\mathbf{x}\_{0}^{2} + \mathbf{y}\_{0}^{2} + \mathbf{z}\_{0}^{2}}} \\ a\_{k,\mathbf{z}} &=& \frac{\mathbf{z}\_{\text{TX},k} - \mathbf{z}\_{0}}{\sqrt{\left(\mathbf{x}\_{\text{TX},k} - \mathbf{x}\_{0}\right)^{2} + \left(\mathbf{y}\_{\text{TX},k} - \mathbf{y}\_{0}\right)^{2} + \left(\mathbf{z}\_{\text{TX},k} - \mathbf{z}\_{0}\right)^{2}}} + \frac{\mathbf{z}\_{0}}{\sqrt{\mathbf{x}\_{0}^{2} + \mathbf{y}\_{0}^{2} + \mathbf{z}\_{0}^{2}}} \end{array} \tag{12}$$

In matrix notation this equals:

$$\mathbf{A} = \begin{bmatrix} a\_{1,x} & a\_{1,y} & a\_{1,z} \\ a\_{2,x} & a\_{2,y} & a\_{2,z} \\ \vdots & \vdots & \vdots \\ a\_{k,x} & a\_{k,y} & a\_{k,z} \end{bmatrix} \qquad \boldsymbol{\delta} = \begin{bmatrix} \delta\_x \\ \delta\_y \\ \delta\_z \end{bmatrix} \qquad \mathbf{b} = \begin{bmatrix} d\_1 - R\_{\mathrm{bi},1} \\ d\_2 - R\_{\mathrm{bi},2} \\ \vdots \\ d\_k - R\_{\mathrm{bi},k} \end{bmatrix} \tag{13}$$

where the matrix **A** represents the geometry or Jacobian matrix containing the partial derivatives with respect to the variables. The vector **δ** contains the three-dimensional error components of the object position estimate. The vector **b** is the difference between the measured length of the reflection path *di* and the function value *R*bi,*k*(*x*0, *y*0, *z*0) at the estimated object position. The linear equation system can then be solved using the least squares approach [38]:

$$\boldsymbol{\delta} = (\mathbf{A}^{\mathsf{T}} \mathbf{A})^{-1} \mathbf{A}^{\mathsf{T}} \mathbf{b} \tag{14}$$

The calculated correction *δ* of the position estimate is used to adjust the estimate (Eq. (15)), which is also the starting point for the next iteration of the method:

$$x\_0 \gets x\_0 - \delta\_\mathbf{x}, \qquad y\_0 \gets y\_0 - \delta\_\mathbf{y}, \qquad z\_0 \gets z\_0 - \delta\_\mathbf{z} \tag{15}$$
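The iteration of Eqs. (10)–(15) is essentially a Gauss-Newton loop. The sketch below writes the Jacobian rows directly as the gradient of the bistatic range (equivalent to Eq. (12) up to sign convention, with the correction then simply added to the estimate); all sensor positions, the target, and the noise-free measurements are invented for this 2D illustration:

```python
import numpy as np

def bistatic_range(tx, p, rx):
    """Length of the reflection path TX -> target p -> RX."""
    return np.linalg.norm(tx - p) + np.linalg.norm(p - rx)

def localize(txs, rx, d, p0, iters=50, tol=1e-10):
    """Iterative linearized least squares for passive target localization."""
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        # Jacobian rows: gradient of the bistatic range w.r.t. the target position
        A = np.array([(p - tx) / np.linalg.norm(p - tx)
                      + (p - rx) / np.linalg.norm(p - rx) for tx in txs])
        # Residuals: measured minus predicted path lengths (vector b in Eq. (13))
        b = np.array([dk - bistatic_range(tx, p, rx) for tx, dk in zip(txs, d)])
        delta = np.linalg.lstsq(A, b, rcond=None)[0]   # least-squares step, Eq. (14)
        p += delta                                     # adjust the estimate
        if np.linalg.norm(delta) < tol:
            break
    return p

rx = np.array([0.0, 0.0])
txs = [np.array(v) for v in ([6.0, 0.0], [0.0, 6.0], [6.0, 6.0], [-4.0, 3.0])]
target = np.array([2.0, 3.0])
d = [bistatic_range(tx, target, rx) for tx in txs]     # noise-free measurements
est = localize(txs, rx, d, p0=[1.0, 1.0])
```

With exact measurements and a reasonable initial estimate, the loop converges to the true target position in a handful of iterations; with noisy measurements it returns the least-squares solution instead.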

#### **4.3 Theoretical bounds and geometric dilution of precision**

To assess the influence of the geometric constellation between network nodes and scatter points the Cramer-Rao lower bound (CRLB) is computed. The CRLB is the

theoretical limit on the performance and accuracy (error variance) of any unbiased estimator and can be derived from the inverse of the Fisher information matrix (FIM) **J**. If the FIM is positive definite (non-singular), the inverse of **J** exists and the CRLB can be written as [43]:

$$\text{CRLB} = \text{J}^{-1} \tag{16}$$

The geometric dilution of precision (GDOP) is the ratio of the accuracy limitation of the localization to the accuracy of measurements and is calculated based on the CRLB as follows [44]:

$$\text{GDOP} = \sqrt{\text{tr}(\text{CRLB})} = \sqrt{\text{tr}(\mathbf{J}^{-1})} \tag{17}$$

where tr(·) denotes the trace of the CRLB matrix. If all measurement errors are considered zero-mean independent and identically distributed Gaussian variables in the positioning system, the GDOP is [45]:

$$\mathrm{GDOP} = \sqrt{\mathrm{tr}\left((\mathbf{A}^{\mathsf{T}} \mathbf{A})^{-1}\right)} \tag{18}$$

where **A** represents the Jacobian matrix from Eq. (13).

The CRLB of the positioning accuracy depends also on the ranging error or error in the MPC extraction [46]. This error also results partly from the limits in the range resolution of UWB (Section 3.4).

**Figure 7** presents the calculated GDOP map for every possible target position inside a grid around the sensors for two different sensor arrangements. The GDOP is calculated following Eq. (18) based on the Jacobian matrix **A** in Eq. (13) for position candidates represented by an equidistant grid. The resulting GDOP values are indicated by the color scale. The first geometric constellation has some areas with degraded GDOP values, while the second, symmetrical sensor arrangement around the target results in lower GDOP values and thus better accuracy [43].

#### **Figure 7.**

*GDOP map for localization with two different sensor constellations with three TX (black) and one RX (gray). The GDOP (color scale) is calculated for each target position within an equidistant grid sampled at 0.1 m.*
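Evaluating such a map reduces to applying Eq. (18) at every grid point. The sketch below computes the GDOP for a single candidate target position; the sensor coordinates are invented for illustration, and the Jacobian rows are again written as gradients of the bistatic range:

```python
import numpy as np

def gdop(txs, rx, p):
    """GDOP per Eq. (18) from the bistatic-range Jacobian (cf. Eq. (13))."""
    A = np.array([(p - tx) / np.linalg.norm(p - tx)
                  + (p - rx) / np.linalg.norm(p - rx) for tx in txs])
    return np.sqrt(np.trace(np.linalg.inv(A.T @ A)))

# Hypothetical 2D constellation: three TX around an RX at the origin
rx = np.array([0.0, 0.0])
txs = [np.array(v) for v in ([5.0, 0.0], [0.0, 5.0], [-5.0, 0.0])]
g = gdop(txs, rx, np.array([1.0, 2.0]))
```

Looping `gdop` over an equidistant grid of candidate positions and color-coding the result reproduces the kind of map shown in Figure 7; near-singular geometries show up as very large (or infinite) GDOP values.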

#### **4.4 Tasks and functionalities of radio sensing**

Radio sensing fulfills different tasks and functionalities depending on the use case (cf. **Table 1**). These tasks include, but are not limited to, localization, tracking, mapping/imaging, presence detection, counting, and classification, and are enabled by specific algorithms. The data processing to achieve the different sensing tasks with respect to UWB specifics is outlined in **Figure 8**. The input for all UWB radio sensing tasks and functionalities is the CIR, from which the clutter is removed beforehand (Section 5.2). Mapping, classification, and counting use all values in the CIR, whereas detection and localization are based on the MPCs extracted at the target object.

**Figure 9** depicts a collection of results of algorithms and methods that enable radio sensing. All subfigures show original content and were created based on the frameworks and algorithms detailed in [47–49]. The software used to create the illustrations is indicated in the backmatter.

The goal of passive localization is to estimate the position of the target object in a multistatic sensor network based on the measured bistatic ranges between multiple TX and RX. The bistatic range corresponds to the length of the reflection path and can be estimated as an MPC from the CIR. Each constant bistatic range defines an ellipse on which the target lies, with its foci located at the TX and RX positions. The target position can either be estimated by setting up the ellipse equations and calculating the intersections of these ellipses, or by solving the non-linear equation system (9) with a Taylor series linearization as outlined in Section 4.2. **Figure 9a** shows the localization with the elliptical model with four sensors (three TX and one RX) and the target at the intersection point of the ellipses [42].

Mapping or imaging is accomplished by using the whole CIRs obtained between all TX and the RX. For that, a single CIR is spatially mapped based on the elliptical model, resulting in a family of ellipses, one for every distance value in the CIR, with the corresponding amplitude values as magnitudes. After interpolating the ellipses and combining the resulting grid with the other mapped CIRs in the sensor network, a heatmap of the environment is obtained. The heatmap in **Figure 9b** highlights regions with reflections at the target objects in yellow. The map could also be used to estimate the position of the target objects, or even of multiple objects [47].

**Figure 8.** *The connection between data processing and the various sensing tasks.*

**Figure 9.**

*Tasks of radio sensing: (a) localization with elliptical model, (b) mapping and imaging, (c) detecting and counting with MPC extraction, (d) classification of CIRs with k-nearest neighbor (kNN) algorithm.*

The next task for sensing is the detection of objects, or even counting multiple objects or people. Detection is achieved by analyzing only the CIR. As shown in **Figure 9c**, the CIR is filtered to remove the static background using the reference method described in Section 5.2. Then, the MPC from the wanted target is extracted using a simple threshold detector [48]. Further challenges for clutter removal and MPC extraction are discussed in Section 5.2. The CIR and the extraction of MPCs can also be used for counting objects or people. Here, the different peaks and local maxima in the CIR are clustered, probability filtered, and the number of targets is extracted by a maximum likelihood estimator [26].

The whole CIR is used for classification, for example based on the k-nearest neighbor (kNN) algorithm, to distinguish between different states. For this, the test data is compared with pre-recorded training data by finding the minimum distance determined by a specific metric. The recorded CIR can thus be assigned to a state/class, for example, to derive a detection status. **Figure 9d** shows an assembled time series from two measured CIRs of the test dataset (red) compared to the three closest time series of the training dataset (black). The kNN algorithm uses a Euclidean metric and the three closest neighbors in the training dataset to determine the state/class of the test sample. The example in **Figure 9d** shows measured data from seat occupancy detection inside a connected aircraft cabin using UWB. The static background from

the CIR is removed beforehand and the CIRs from the different sensors are combined into a unified time series [49].
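The kNN classification described above can be sketched with a Euclidean metric and majority vote. The toy training CIRs and the class labels "occupied"/"empty" below are invented stand-ins for the pre-recorded cabin datasets, not the chapter's actual data:

```python
import numpy as np
from collections import Counter

def knn_classify(train_cirs, train_labels, test_cir, k=3):
    """Assign a state/class by majority vote of the k nearest training CIRs."""
    dists = np.linalg.norm(train_cirs - test_cir, axis=1)   # Euclidean metric
    nearest = np.argsort(dists)[:k]                         # k closest neighbors
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy data: class "occupied" has a reflection peak at tap 3, "empty" is flat
train = np.array([[0, 0, 0, 1.0, 0],
                  [0, 0, 0, 0.9, 0],
                  [0, 0, 0, 0.0, 0],
                  [0.1, 0, 0, 0, 0]])
labels = ["occupied", "occupied", "empty", "empty"]
state = knn_classify(train, labels, np.array([0, 0, 0, 0.8, 0.1]))
```

In practice, the background-subtracted CIRs from several sensors would first be concatenated into one feature vector per sample, as done for the unified time series in Figure 9d.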

#### **5. Challenges and research directions**

In this section, selected challenges for UWB-based radio sensing are discussed and further research directions are derived. Since the focus in this chapter is on transport and mobility, the challenges are mainly highlighted based on the various use cases in this area.

#### **5.1 Performance indicators and metrics**

Moving from purely communication purposes toward perceptive networks fundamentally changes how wireless systems are evaluated. State-of-the-art performance metrics, such as the received signal strength (RSS) or the signal-to-interference-plus-noise ratio (SINR), are not sufficient to evaluate radio sensing systems. Instead, specific metrics for the different sensing tasks from other research domains should be applied. For example, vision technologies such as LiDAR and cameras commonly use the probability of detection [50] to determine the detectability of objects, IPS apply the root mean square error (RMSE) as an accuracy metric, and classification tasks consider the accuracy of predicting the label of an object. To further elaborate, **Table 3** proposes and briefly describes performance metrics for different sensing tasks. One challenge is to select the right metric for the desired task and to combine and weigh different metrics.

#### **5.2 Clutter removal and MPC extraction**

Clutter consists of MPCs from the static background environment. In general, these signals are not of interest for sensing applications; instead, only the signals reflected from the target object or detected people need to be considered. The task of clutter removal is to remove or suppress these MPCs in the CIR. One approach is the background subtraction of multipath signals originating from permanent or long-period static objects. There are two methods for background subtraction: the reference method and the dynamic method [5, 7, 51].

**Table 3.**

*Possible performance indicators and metrics for different sensing tasks.*

The reference method subtracts an averaged reference signal $\overline{h}^{\text{ref}}(t)$, recorded without the target object, from the measurement with the target object $h(t)$, following Eq. (19). This method is best suited for static environments in which calibration with the reference signal is possible beforehand [7, 50]:

$$
\overline{h}^{\text{sub}}(t) = |h(t) - \overline{h}^{\text{ref}}(t)| \tag{19}
$$

The dynamic method subtracts the static, time-invariant background based on exponential averaging. The background $b_t$ is computed from the previous background estimate $b_{t-1}$ and the newly received CIR $h_t$ [52]:

$$
b_t = \alpha b_{t-1} + (1 - \alpha) h_t \tag{20}
$$

The constant scalar weighting factor $\alpha$ between 0 and 1 determines whether recent or long-term events are emphasized. The clutter-removed signal $\overline{h}^{\text{sub}}(t)$ is then also obtained by subtraction:

$$
\overline{h}^{\text{sub}}(t) = |h_t - b_t| \tag{21}
$$

In non-stationary scenarios, clutter removal is much harder, especially in dynamic environments, and can lead to missed detection of the target during the measurement time. Also, background subtraction is not effective in dense multipath scenarios [25].
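Both background-subtraction methods of Eqs. (19)-(21) can be sketched in a few lines of Python. The function names, the choice of α, and the toy CIR frames are illustrative assumptions:

```python
import numpy as np

def reference_subtraction(h, h_ref):
    """Reference method, Eq. (19): subtract an averaged empty-scene CIR."""
    return np.abs(h - h_ref)

def dynamic_subtraction(cirs, alpha=0.9):
    """Dynamic method, Eqs. (20)-(21): exponential averaging of the background.

    alpha close to 1 emphasizes long-term (static) clutter; a smaller
    alpha adapts faster to recent changes."""
    b = cirs[0].astype(float)               # initial background estimate
    out = []
    for h in cirs:
        b = alpha * b + (1.0 - alpha) * h   # Eq. (20)
        out.append(np.abs(h - b))           # Eq. (21)
    return np.array(out)

# Toy CIRs: static clutter tap at index 1; a target tap appears at index 3
clutter = np.array([0.0, 1.0, 0.0, 0.0, 0.0])
target = np.array([0.0, 0.0, 0.0, 0.5, 0.0])
frames = np.stack([clutter] * 10 + [clutter + target] * 5)

print(reference_subtraction(frames[-1], clutter))  # Eq. (19): recovers the target tap
sub = dynamic_subtraction(frames, alpha=0.9)
print(sub[-1])  # clutter tap suppressed; target tap partially absorbed into b_t
```

The last line also illustrates the trade-off discussed above: with the dynamic method, a target that stays still long enough leaks into the background estimate and its tap shrinks over time, which is one reason detections can be missed in non-stationary scenarios.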

In addition, the extraction of the MPCs with their corresponding time delays from the CIR is of utmost importance to fulfill the different sensing tasks. First, a basic maximum or threshold detector can be used to find the MPC peaks in the CIR. More advanced methods cluster the different peaks in the CIR into MPC clusters, which originate from a single target object. Other methods used for MPC extraction from UWB signals are detectors of distributed targets (the so-called (*N*,*k*)-detector), the interperiod-correlation processing (IPCP) detector, and the constant false alarm rate (CFAR) detector [53].

Froehle [36] uses an MPC extraction algorithm consisting of three steps to tackle this challenge. First, all peaks in the CIR are searched with a high-resolution peak search; then a weighting factor is applied before the estimated MPC of the strongest scatterer is detected and canceled out. The process is repeated for the weaker scatterers and MPCs [36].

The precondition for MPC extraction and delay estimation is the distinctness of the MPCs in the CIR. This is especially challenging when the reflected signal strength from a long propagation path is very weak, when the target is hidden or shadowed behind other objects, or when the MPCs cannot be isolated due to limited range resolution and bandwidth [25, 47].
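The basic threshold detector mentioned above can be sketched with `scipy.signal.find_peaks`, which the chapter's toolchain already includes. This is only a stand-in for the more advanced (*N*,*k*), IPCP, or CFAR detectors; the function name, threshold ratio, and toy CIR are illustrative assumptions:

```python
import numpy as np
from scipy.signal import find_peaks

def extract_mpc_delays(cir_mag, dt, threshold_ratio=0.3):
    """Basic threshold peak detector: return delays (in seconds) and
    amplitudes of CIR peaks exceeding a fraction of the strongest tap."""
    height = threshold_ratio * np.max(cir_mag)
    peaks, _ = find_peaks(cir_mag, height=height)
    return peaks * dt, cir_mag[peaks]

# Toy magnitude CIR sampled at 1 ns: a LOS tap, one reflection, noise floor
cir = np.array([0.0, 0.05, 1.0, 0.1, 0.02, 0.4, 0.03, 0.01])
delays, amps = extract_mpc_delays(cir, dt=1e-9)
print(delays)  # detected MPC delays in seconds
```

A detector like this fails exactly in the cases listed above: when a weak reflection falls below the threshold or two MPCs merge into one unresolved peak.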

#### **5.3 Channel model and propagation simulation**

Commonly, the channel for communication systems is modeled with empirical or statistical models such as 3GPP TR 38.901 for 5G [33]. These models are based on measurements of channel statistics for different predefined environment categories such as indoor, urban, or rural. In comparison, deterministic channel models like the multipath channel model described in Section 3 use individual propagation paths or rays as their modeling foundation. This approach is well suited for sensing applications because every ray and MPC can be modeled individually.

*Toward UWB Impulse Radio Sensing: Fundamentals, Potentials, and Challenges DOI: http://dx.doi.org/10.5772/intechopen.110040*

Often, ray tracing is used to calculate the delay of every propagation path or ray. In addition, the attenuation of each path is determined by taking FSPL, reflection, scattering, and diffraction losses into account. Ray tracing is a computationally intensive operation and much more complex than statistical models [54].
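The FSPL term of such a per-ray link budget follows directly from the Friis formula [28]. A minimal sketch (the function name is an illustrative assumption), evaluated at the 6.5 GHz UWB center frequency used in the simulation below:

```python
import numpy as np

C = 299792458.0  # speed of light in m/s

def fspl_db(distance_m, frequency_hz):
    """Free-space path loss (Friis) in dB for a single propagation path."""
    return 20.0 * np.log10(4.0 * np.pi * distance_m * frequency_hz / C)

# Loss of a 10 m path at 6.5 GHz
print(round(fspl_db(10.0, 6.5e9), 1))  # dB
```

Reflection, scattering, and diffraction losses are then added on top of this term for each computed ray.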

An example of a radio sensing mapping task is to estimate the occupancy state of a parking lot in a car park for smart parking and ticketing, as depicted in **Figure 10**. The car park environment is geometrically modeled, and the material properties are applied. Sensors are deployed at different locations (**Figure 10a**), and the signal parameters comply with UWB regulations [54]: the center frequency is set to 6.5 GHz and the transmit power spectral density to −41.3 dBm/MHz. Then, the individual propagation paths with their corresponding received signal power and propagation delay are computed with the radio propagation simulation software *Altair Feko/WinProp 2022* [55] using the deterministic ray tracing model (**Figure 10a**). After the simulation, the propagation delays and amplitudes are used to obtain the bandlimited reconstructed CIR with the help of Eq. (6) in a custom software framework. Since the simulation is performed both without and with the target vehicle, static background subtraction is applied based on the reference method described in Section 5.2. The subtracted CIRs between all sensors are mapped with the elliptical method described in Section 4.4 (**Figure 10b**). Interpolating and combining the maps of all sensor combinations results in a heatmap of the environment highlighting the reflections of the target object for two different antenna configurations (**Figure 10c** and **d**) [56].

#### **Figure 10.**

*Antenna comparison for a radio sensing application: (a) radio propagation simulation with Altair Feko/WinProp 2022 [55] for parking lot occupancy detection in a car park with an example of computed rays between two sensors, (b) mapping of a CIR between two sensors (black circles) as a family of ellipses, resulting interpolated heatmap with (c) omnidirectional antennas and (d) directional/sector antennas.*
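The elliptical mapping step can be illustrated with a small sketch: one extracted MPC delay between a TX and an RX sensor constrains the reflector to an ellipse whose foci are the two sensors, since all points on it share the same bistatic range. The function name, the Gaussian kernel width, and the toy scene are illustrative assumptions, not the chapter's exact implementation:

```python
import numpy as np

C = 299792458.0  # speed of light in m/s

def ellipse_heatmap(grid_x, grid_y, tx, rx, delay, width=0.2):
    """Map one MPC delay of a TX-RX pair onto the plane: points whose
    bistatic range d(p,tx) + d(p,rx) equals c*delay lie on an ellipse
    with the sensors as foci. A Gaussian kernel of `width` meters
    softens the ellipse; summing maps over all sensor pairs and delays
    yields the combined heatmap."""
    X, Y = np.meshgrid(grid_x, grid_y)
    d_tx = np.hypot(X - tx[0], Y - tx[1])
    d_rx = np.hypot(X - rx[0], Y - rx[1])
    bistatic_range = C * delay
    return np.exp(-((d_tx + d_rx - bistatic_range) ** 2) / (2 * width ** 2))

# Toy scene: two sensors 10 m apart, reflecting target at (5, 3)
tx, rx = (0.0, 0.0), (10.0, 0.0)
delay = (np.hypot(5, 3) + np.hypot(5, 3)) / C   # reflected-path delay
gx = gy = np.linspace(0, 10, 101)
heat = ellipse_heatmap(gx, gy, tx, rx, delay)
iy, ix = np.unravel_index(np.argmax(heat), heat.shape)
print(gx[ix], gy[iy])  # a point on the ellipse through the target
```

With a single pair, the target is only localized to the ellipse; intersecting the ellipses of several sensor pairs, as in **Figure 10**, concentrates the heatmap at the target position.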

In the future, a combination of deterministic ray tracing and statistical models will be needed to evaluate integrated communication systems with both data transmission and radio sensing capabilities [18].

#### **5.4 Directional antennas**

The sensing performance can be enhanced not only by improved signal processing but also directly by the antennas and the RF signal itself. Omnidirectional antennas radiate power equally in all azimuth directions, whereas directional antennas have a much higher antenna gain in a specific direction. This means that the signal from this direction is amplified, whereas signals from other directions are much more attenuated. In terms of radio sensing, the reflected signal strength from the desired direction is much stronger in the CIR, so the MPC can be isolated more accurately to find the position of the target. In addition, sector antennas combine multiple directional antenna elements and can be switched to a desired sector and direction. Beamforming even allows spatial selectivity of an antenna array by dynamically controlling the phase and relative amplitude of the signal.

To investigate the influence of directional antennas on the sensing performance, the radio propagation simulation based on ray tracing with *Altair Feko/WinProp 2022* [55] can be used. **Figure 10** compares the mapping/imaging of a car park to estimate the occupancy state of a parking lot with omnidirectional antennas against directional antennas. Due to the selectivity of the antenna and the antenna gain in the direction of the target vehicle, reflections are much stronger and highlighted at the boundaries of the vehicle from all sides.

The antenna choice directly affects the radio sensing performance and should be investigated and considered further, alongside algorithm and signal processing improvements.

#### **6. Conclusions**

The main contribution of this chapter is the transfer of UWB IR to the emerging research field of radio sensing from a mobility and transportation applications perspective. To this end, the wideband multipath channel model was introduced, and the theoretical bounds for range resolution and GDOP were derived. Different tasks and functionalities of radio sensing were described, ranging from detection and localization to mapping/imaging, counting, and classification. The approaches and algorithms are promising and enable a wide range of radio sensing use cases in ITS across all modes of transport, for example, smart parking and ticketing, the connected aircraft cabin, vehicular networks, and automotive radar.

However, some challenges need to be addressed in further research to implement UWB-based radio sensing systems in real-world applications. These include adaptability to the environment, scalability to larger sensor networks, and the reliability and robustness of the discussed approaches and algorithms. One approach is machine learning (ML) algorithms to classify the different detection states. This does not require prior MPC extraction. Instead, the entire CIR is used to train and test the classifier with predefined detection scenes and labels to improve the robustness of the detection. Another promising research direction is the integration of radio sensing into a UWB RTLS to improve the integrity of the localization and to enable detection or localization when the object does not carry an active sensor. A fully integrated solution may also use a SLAM approach for anchor mapping, active mobile tag localization, and radio sensing for detection and other tasks.

#### **Software**

All calculations, algorithms, and plotting of figures were performed with Python (version 3.10) [57] using the following additional packages: matplotlib (3.6.2) [58], numpy (1.22.4) [59], and scipy (1.9.0) [60]. The radio propagation simulation is carried out with Altair Feko/WinProp 2022 [55].

### **Author details**

Jonas Ninnemann\*†, Paul Schwarzbach† and Oliver Michler

Transport Systems Information Technology, Institute of Traffic Telematics, Technische Universität Dresden, Dresden, Germany

\*Address all correspondence to: jonas.ninnemann@tu-dresden.de

† These authors contributed equally.

© 2023 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Zafari F, Gkelias A, Leung KK. A survey of indoor localization systems and technologies. IEEE Communications Surveys & Tutorials. 2019;**21**(3): 2568-2599. DOI: 10.1109/ COMST.2019.2911558

[2] Shi G, Ming Y. Survey of indoor positioning systems based on ultrawideband (UWB) technology. In: Zeng QA, editor. Wireless Communications, Networking and Applications. Lecture Notes in Electrical Engineering. India: Springer; 2016. pp. 1269-1278. DOI: 10.1007/978-81- 322-2580-5\_115

[3] Lee G, Park J, Jang J, Jung T, Kim TW. An IR-UWB CMOS transceiver for high-data-rate, low-power, and short-range communication. IEEE Journal of Solid-State Circuits. 2019;**54**(8):2163-2174. DOI: 10.1109/JSSC.2019.2914584

[4] Barneto CB, Turunen M, Liyanaarachchi SD, Anttila L, Brihuega A, Riihonen T, et al. High-accuracy radio sensing in 5G new radio networks: Prospects and self-interference challenge. In: 53rd Asilomar Conference on Signals, Systems, and Computers; 03–06 November 2019. Pacific Grove, CA, USA: IEEE; 2019. pp. 1159-1163. DOI: 10.1109/IEEECONF44664.2019.9048786

[5] Zhang JA, Rahman ML, Wu K, Huang X, Guo YJ, Chen S, et al. Enabling joint communication and radar sensing in mobile networks - A survey. IEEE Communications Surveys & Tutorials. 2021;**24**(1):306-345. DOI: 10.1109/COMST.2021.3122519

[6] Chen Y, Zhang J, Feng W, Alouini MS. Radio sensing using 5G signals: Concepts, state of the art, and challenges. IEEE Internet of Things Journal. 2022;**9**(2):1037-1052. DOI: 10.1109/JIOT.2021.3132494

[7] Ledergerber A, D'Andrea R. A multistatic radar network with ultrawideband radio-equipped devices. Sensors. 2020;**20**(6):1599. DOI: 10.3390/ s20061599

[8] Dong J, Guo Q, Liang X. Through-Wall moving target tracking algorithm in multipath using UWB radar. IEEE Geoscience and Remote Sensing Letters. 2021;**19**:3503405. DOI: 10.1109/ lgrs.2021.3050501

[9] Le C, Dogaru T, Nguyen L, Ressler MA. Ultrawideband (UWB) radar imaging of building interior: Measurements and predictions. IEEE Transactions on Geoscience and Remote Sensing. 2009;**47**(5):1409-1420. DOI: 10.1109/TGRS.2009.2016653

[10] Choi JH, Kim JE, Kim KT. People counting using IR-UWB radar sensor in a wide area. IEEE Internet of Things Journal. 2021;**8**(7):5806-5821. DOI: 10.1109/JIOT.2020.3032710

[11] Cimdins M, Schmidt SO, Bartmann P, Hellbrück H. Exploiting ultra-Wideband Channel impulse responses for device-free localization. Sensors. 2022;**22**(16):6255. DOI: 10.3390/s22166255

[12] Goldsmith A. Wireless Communications. Cambridge: Cambridge University Press; 2005. pp. 27-98. DOI: 10.1017/ CBO9780511841224

[13] Tse D, Viswanath P. Fundamentals of Wireless Communication. Cambridge: Cambridge University Press; 2005. pp. 10-48. DOI: 10.1017/ CBO9780511807213


[14] Youssef M, Mah M, Agrawala A. Challenges: Device-free passive localization for wireless environments. In: Proceedings of the 13th Annual ACM International Conference on Mobile Computing and Networking - MobiCom '07. September 2007. Montréal, Québec, Canada: ACM Press; 2007. pp. 222-229. DOI: 10.1145/1287853.1287880

[15] Jovanoska S, Zetik R, Thoma R, Govaers F, Wilds K, Koch W. Device-free indoor localization using a distributed network of autonomous UWB sensor nodes. In: 2013 Workshop on Sensor Data Fusion: Trends, Solutions, Applications (SDF); 09–11 October 2013. Bonn, Germany: IEEE; 2013. pp. 1-6. DOI: 10.1109/SDF.2013.6698264

[16] Mahafza BR, Winton SC, Elsherbeni AZ. Handbook of Radar Signal Analysis. Boca Raton, US: CRC Press; 2021. pp. 93-126

[17] Liu F, Cui Y, Masouros C, Xu J, Han TX, Eldar YC, et al. Integrated sensing and communications: Towards dual-functional wireless networks for 6G and beyond. IEEE Journal on Selected Areas in Communications. 2022;**40**(6): 1728-1767. DOI: 10.1109/ JSAC.2022.3156632

[18] Wild T, Braun V, Viswanathan H. Joint Design of Communication and Sensing for beyond 5G and 6G systems. IEEE Access. 2021;**9**:30845-30857. DOI: 10.1109/access.2021.3059488

[19] Cimdins M, Schmidt SO, Hellbrück H. MAMPI-UWB—Multipath-assisted device-free localization with magnitude and phase information with UWB transceivers. Sensors. 2020;**20**(24):7090. DOI: 10.3390/s20247090

[20] Schmidhammer M, Siebler B, Gentner C, Sand S, Fiebig UC. Bayesian multipath-enhanced device-free localisation: Simulation-and measurement-based evaluation. IET Microwaves, Antennas & Propagation. 2022;**16**(6):327-337. DOI: 10.1049/ mia2.12244

[21] Gentner C, Jost T, Wang W, Zhang S, Dammann A, Fiebig UC. Multipath assisted positioning with simultaneous localization and mapping. IEEE Transactions on Wireless Communications. 2016;**15**(9):6104-6117. DOI: 10.1109/TWC.2016.2578336

[22] Bocus MJ, Piechocki R. A comprehensive ultra-wideband dataset for non-cooperative contextual sensing. Scientific Data. 2022;**9**(1):650. DOI: 10.1038/s41597-022-01776-7

[23] Leitinger E, Meissner P, Rüdisser C, Dumphart G, Witrisal K. Evaluation of position-related information in multipath components for indoor positioning. IEEE Journal on Selected Areas in Communications. 2015;**33**(11): 2313-2328. DOI: 10.1109/ JSAC.2015.2430520

[24] Froehle M, Meissner P, Witrisal K. Tracking of UWB multipath components using probability hypothesis density filters. In: 2012 IEEE International Conference on Ultra-Wideband; 17–20 September 2012. Syracuse, NY, USA: IEEE; 2012. pp. 306-310. DOI: 10.1109/ ICUWB.2012.6340452

[25] Li C, Tanghe E, Fontaine J, Martens L, Romme J, Singh G, et al. Multi-static UWB radar-based passive human tracking using COTS devices. IEEE Antennas and Wireless Propagation Letters. 2022;**21**(4):695-699. DOI: 10.1109/LAWP.2022.3141869

[26] Choi JW, Yim DH, Cho SH. People counting based on an IR-UWB radar sensor. IEEE Sensors Journal. 2017;**17**(17):5717-5727. DOI: 10.1109/JSEN.2017.2723766

[27] Qorvo. DW1000 – Qorvo. 2022. Available from: https://www.qorvo.com/ products/p/DW1000

[28] Friis HT. A note on a simple transmission formula. Proceedings of the IRE. 1946;**34**(5):254-256. DOI: 10.1109/ JRPROC.1946.234568

[29] Kim H. Wireless Communications Systems Design. Chichester, West Sussex. United Kingdom: John Wiley & Sons, Ltd; 2016. pp. 32-51

[30] Pätzold M. Mobile Radio Channels. Chichester, West Sussex. United Kingdom: John Wiley & Sons, Ltd; 2011. pp. 338-342

[31] Hari KVS. Channel models for wireless communication systems. In: Kennington J, Olinick E, Rajan D, editors. Wireless Network Design: Optimization Models and Solution Procedures. International Series in Operations Research & Management Science. New York, NY: Springer; 2011. pp. 47-64. DOI: 10.1007/978-1- 4419-6111-2\_3

[32] Molisch AF. Ultra-wide-band propagation channels. Proceedings of the IEEE. 2009;**97**(2):353-371. DOI: 10.1109/ JPROC.2008.2008836

[33] 3GPP Radio Access Network Working Group. Study on channel model for frequencies from 0.5 to 100 GHz (Release 16). Vol. 16. 3GPP TR 38.901; 2020

[34] Ulmschneider M. Cooperative Multipath Assisted Positioning [Ph.D. thesis]. Hamburg, Germany: Technischen Universität Hamburg; 2021. DOI: 10.15480/882.3299

[35] Wang Z, Han K, Shen X, Yuan W, Liu F. Achieving the performance bounds for sensing and Communications in Perceptive Networks: Optimal bandwidth allocation. IEEE Wireless Communications Letters. September 2022;**11**(9):1835-1839. DOI: 10.1109/LWC.2022.3183235

[36] Froehle M, Meissner P, Gigl T, Witrisal K. Scatterer and virtual source detection for indoor UWB channels. In: 2011 IEEE International Conference on Ultra-Wideband (ICUWB); 14–16 September 2011. Bologna, Italy: IEEE; 2011. pp. 16-20. DOI: 10.1109/ ICUWB.2011.6058819

[37] Malanowski M, Kulpa K. Two methods for target localization in multistatic passive radar. IEEE Transactions on Aerospace and Electronic Systems. 2012;**48**(1):572-580. DOI: 10.1109/TAES.2012.6129656

[38] Zaied S. UWB Localization of People - Accuracy Aspects. [Master Thesis]. Ilmenau, Germany: Ilmenau University of Technology; 2009

[39] Noroozi A, Sebt MA. Comparison between range-difference-based and Bistatic-range-based localization in multistatic passive radar. In: 2015 16th International Radar Symposium (IRS); 24–26 June 2015. Dresden, Germany: IEEE; 2015. pp. 1058-1063. DOI: 10.1109/IRS.2015.7226218

[40] Smith J, Abel J. Closed-form least-squares source location estimation from range-difference measurements. IEEE Transactions on Acoustics, Speech, and Signal Processing. 1987;**35**(12):1661-1669. DOI: 10.1109/TASSP.1987.1165089

[41] Mellen G, Pachter M, Raquet J. Closed-form solution for determining emitter location using time difference of arrival measurements. IEEE Transactions on Aerospace and Electronic Systems. 2003;**39**(3):1056-1058. DOI: 10.1109/TAES.2003.1238756

[42] Kocur D, Švecová M, Rovňáková J. Through-The-Wall localization of a moving target by two independent ultra wideband (UWB) radar systems. Sensors. 2013;**13**(9):11969-11997. DOI: 10.3390/s130911969

[43] Godrich H, Haimovich AM, Blum RS. Cramer Rao bound on target localization estimation in MIMO radar systems. In: 2008 42nd Annual Conference on Information Sciences and Systems; 19–21 March 2008. Princeton, NJ, USA: IEEE; 2008. pp. 134-139. DOI: 10.1109/CISS.2008.4558509

[44] Zhang J, Lu J. Analytical evaluation of geometric dilution of precision for three-dimensional angle-of-arrival target localization in wireless sensor networks. International Journal of Distributed Sensor Networks. 2020;**16**(5):1-14. DOI: 10.1177/ 1550147720920471

[45] Lv X, Liu K, Hu P. Geometry influence on GDOP in TOA and AOA positioning systems. In: 2010 Second International Conference on Networks Security, Wireless Communications and Trusted Computing. Vol. 2. Wuhan, China: IEEE; 2010. pp. 58-61. DOI: 10.1109/NSWCTC.2010.150

[46] Jing H, Pinchin J, Hill C, Moore T. An adaptive weighting based on modified DOP for collaborative indoor positioning. The Journal of Navigation. 2016;**69**(2):225-245. DOI: 10.1017/ S037346331500065X

[47] Ninnemann J, Schwarzbach P, Jung A, Michler O. Lab-based evaluation of device-free passive localization using Multipath Channel information. Sensors. 2021;**21**(7):2383. DOI: 10.3390/ s21072383

[48] Ninnemann J, Schwarzbach P, Michler O. Multipath-assisted radio sensing and occupancy detection for smart in-house parking in ITS. In: WiP Proceedings of the Eleventh International Conference on Indoor Positioning and Indoor Navigation - Work-in-Progress Papers (IPIN-WiP 2021); 29 November - 02 December 2021. Lloret de Mar, Spain: CEUR-WS; 2021. pp. 1-15. DOI: 10.48550/ arXiv.2201.06128

[49] Ninnemann J, Schwarzbach P, Schultz M, Michler O. Multipath-assisted radio sensing and state detection for the connected aircraft cabin. Sensors. 2022; **22**(8):2859. DOI: 10.3390/s22082859

[50] Arnold M, Bauhofer M, Mandelli S, Henninger M, Schaich F, Wild T, et al. MaxRay: A raytracing-based integrated sensing and communication framework. In: 2022 2nd IEEE International Symposium on Joint Communications & Sensing (JC&S); 09–10 March 2022. Seefeld, Austria: IEEE; 2022. pp. 1-7. DOI: 10.1109/JCS54387.2022.9743510

[51] Shaikh SH, Saeed K, Chaki N. Moving Object Detection Using Background Subtraction. SpringerBriefs in Computer Science. Cham, Germany: Springer International Publishing; 2014. pp. 15-16. DOI: 10.1007/978-3- 319-07386-6\_3

[52] Jovanoska S, Thomä R. Multiple target tracking by a distributed UWB sensor network based on the PHD filter. In: 2012 15th International Conference on Information Fusion; 09–12 July 2012. Singapore: IEEE; 2012. pp. 1095-1102

[53] Rovnakova J, Svecova M, Kocur D, Nguyen TT, Sachs J. Signal processing for through wall moving target tracking by M-sequence UWB radar. In: 2008 18th International Conference Radioelektronika; 24–25 April 2008. Prague, Czech Republic: IEEE; 2008. pp. 1-4. DOI: 10.1109/RADIOELEK.2008.4542694

[54] Ußler H, Porepp K, Ninnemann J, Schwarzbach P, Michler O. Demo: Deterministic radio propagation simulation for integrated communication Systems in Multimodal Intelligent Transportation Scenarios. In: 2022 IEEE International Conference on Communications Workshops (ICC Workshops); 16–20 May 2022. Seoul, Republic of Korea: IEEE; 2022. pp. 1-2. DOI: 10.1109/ICCWorkshops53468. 2022.9915017

[55] Altair Engineering Inc. Altair Feko Applications [Internet]. 2022. Available from: https://www.altair. com/feko-applications [Accessed: April 1, 2023]

[56] Schwarzbach P, Ninnemann J, Michler O. Enabling radio sensing for multimodal intelligent transportation systems: From virtual testing to immersive testbeds. In: 2022 2nd IEEE International Symposium on Joint Communications & Sensing (JC&S); 09–10 March 2022. Seefeld, Austria: IEEE; 2022. pp. 1-6. DOI: 10.1109/ JCS54387.2022.9743504

[57] Van Rossum G, Drake FL. Python 3 Reference Manual. Scotts Valley, CA: CreateSpace; 2009

[58] Hunter JD. Matplotlib: A 2D graphics environment. Computing in Science & Engineering. 2007;**9**(3):90-95. DOI: 10.1109/MCSE.2007.55

[59] Harris CR, Millman KJ, van der Walt SJ, Gommers R, Virtanen P, Cournapeau D, et al. Array programming with NumPy. Nature. 2020;**585**(7825):357-362. DOI: 10.1038/ s41586-020-2649-2

[60] Virtanen P, Gommers R, Oliphant TE, Haberland M, Reddy T, Cournapeau D, et al. SciPy 1.0: Fundamental algorithms for scientific computing in python. Nature Methods. 2020;**17**(3):261-272. DOI: 10.1038/ s41592-019-0686-2

### *Edited by Rafael Vargas-Bernal*

Ultra-wideband (UWB) is a radio frequency communication technology that transmits data stably and quickly within a short range. Due to its unprecedented accuracy, speed and reliability, it is an ideal technology for the indoor location of moving targets in space-sensitive and complex environments. This book disseminates some of the latest scientific and technological contributions by different researchers around the world in the development of both devices and applications based on UWB technology. Antennas, filters, resonators, delay cells, and transmitters are some of the useful electronic devices or systems described. This book aims to serve as a source of inspiration for continuing scientific research on commercial, industrial, and military applications of UWB technology. The book will be a valuable source of information for undergraduate and graduate students, as well as for experts and researchers around UWB technology.

Published in London, UK © 2023 IntechOpen

UWB Technology - New Insights and Developments
