**Meet the editor**

Dr Salah Bourennane received his PhD degree from the Institut National Polytechnique de Grenoble, France. He is currently a full Professor at the Ecole Centrale Marseille, France, where he is the Dean of Research, and he heads the Multidimensional Signal Processing Group of the Fresnel Institute. His research interests are statistical signal processing, remote sensing, telecommunications, array processing, image processing, multidimensional signal processing and performance analysis. He has published numerous papers in leading journals.

Contents

Chapter 1 **A Novel Bio-Inspired Acoustic Ranging Approach for a Better Resolution Achievement**
Said Assous, Mike Lovell, Laurie Linnett, David Gunn, Peter Jackson and John Rees

Chapter 2 **Array Processing: Underwater Acoustic Source Localization**
Salah Bourennane, Caroline Fossati and Julien Marot

Chapter 3 **Localization of Buried Objects in Sediment Using High Resolution Array Processing Methods**
Caroline Fossati, Salah Bourennane and Julien Marot

Chapter 4 **Adaptive Technique for Underwater Acoustic Communication**
Shen Xiaohong, Wang Haiyan, Zhang Yuzhi and Zhao Ruiqin

Chapter 5 **Narrowband Interference Suppression in Underwater Acoustic OFDM System**
Weijie Shen, Haixin Sun, En Cheng, Wei Su and Yonghuai Zhang

Chapter 6 **CI/OFDM Underwater Acoustic Communication System**
Fang Xu and Ru Xu

Chapter 7 **Iterative Equalization and Decoding Scheme for Underwater Acoustic Coherent Communications**
Liang Zhao and Jianhua Ge

## **A Novel Bio-Inspired Acoustic Ranging Approach for a Better Resolution Achievement**

Said Assous<sup>1</sup>, Mike Lovell<sup>1</sup>, Laurie Linnett<sup>2</sup>, David Gunn<sup>3</sup>, Peter Jackson<sup>3</sup> and John Rees<sup>3</sup>
<sup>1</sup>*Ultrasound Research Laboratory, University of Leicester*, <sup>2</sup>*Fortkey Ltd*, <sup>3</sup>*Ultrasound Research Laboratory, British Geological Survey, United Kingdom*

#### **1. Introduction**

Bats and dolphins use sound to survive, and their capabilities greatly exceed those of current technology with regard to resolution, object identification and material characterisation. Some bats can resolve acoustic pulses thousands of times more efficiently than current technology (Thomas & Moss, 2004). Dolphins are capable of discriminating between different materials on the basis of acoustic energy, again significantly out-performing current detection systems. Not only are these animals supreme in their detection and discrimination capabilities, they also demonstrate excellent acoustic focusing characteristics, both in transmission and reception. The enormous potential for acoustic engineering, if it could approach the efficiencies of bat and cetacean systems, has been widely recognised. Whilst some elements of animal systems have been applied successfully in engineered systems, the latter have come nowhere near the capabilities of the natural world. Recognising this potential, we present in this chapter a breakthrough in high-resolution acoustic imaging and physical characterisation based on a bio-inspired time-delay estimation approach. A critical limitation inherent to all current acoustic technologies is that detail, or resolution, is compromised by the total energy of the system. Instead of using higher-energy signals, which result in poorer sound quality, random noise and distortion, we use specifically designed, adaptable, lower-energy 'intelligent' signals. There are around 1000 species of bats alive in the world today. These are broken down into the megabats, which include the large fruit bats, and the microbats, which cover a range of species, both small and large, that eat insects, fruit, nectar, fish, and occasionally other bats. With the exception of one genus, none of the megabats use echolocation, while all of the microbats do.
Echolocation is the process by which the bat sends out a brief ultrasonic sound pulse and then waits to hear whether there is an echo. Knowing the time of flight of the sound pulse, the bat can work out the distance to the target, either prey or an obstacle. That much is easy, and this type of technology has long been adopted by engineers to sense objects at a distance using sound and to work out how far away they are. However, bats can do much more than this; the full extent of their ability to sense the world around them is largely unknown, and the research is often contradictory. Some experiments have shown that bats can time pulses, and hence work out the distance to objects

with far greater accuracy than is currently possible, even by engineers. Sonar is a relatively recent human invention for locating objects under water using sound waves. However, locating objects in water and air has evolved in the biological world to a much higher level of sophistication. Echolocation, often called biosonar, is used by bats and cetaceans (whales, manatees, dolphins, etc.) using sound waves at ultrasonic frequencies (above 20 kHz). Based on the frequencies in the emitted pulses, some bats can resolve targets many times smaller than should be possible; they are clearly processing the sound differently to current sonar technology. Dolphins are capable of discriminating between different materials based on acoustic energy, again significantly out-performing current detection systems. A complete review of these capabilities can be found in (Whitlow, 1993). Not only are these animals supreme in their detection and discrimination capabilities, they also demonstrate excellent acoustic focusing characteristics, both in transmission and reception. What we can gain from these animals is how to learn to see using sound. This approach may not lead us down the traditional route of signal processing in acoustics, but it may let us explore different ways of analyzing information; in a sense, to ask the right question rather than look for the right answer.

This chapter presents a bio-inspired approach for ranging based on the use of phase measurement to estimate distance (or time delay). We introduce the technique with examples for sound in air, then describe validation experiments in a water tank. The motivation for this comes from the fact that bats have been shown to have very good resolution with regard to target detection when searching during flight. Jim Simmons has estimated (Whitlow & Simmons, 2007) that bats using a pulse signal with a centre frequency of about 80 kHz (bandwidth 40 kHz) can achieve a pulse/echo distance resolution in air approaching a few microns. At this frequency, the wavelength (*λ*) of sound in air is about 4 mm, and taking the half wavelength (*λ*/2) as the usual guide for resolution, the bat's resolution is about 200 times finer than this limit. We demonstrate in this chapter how the bat and the signal it uses (a chirp) inspired us to achieve a better resolution for distance measurement by examining the phase difference of two frequency components.

#### **2. Time delay and distance measurement using conventional approaches**

Considering a constant speed of sound in a medium, any improvement in distance measurement based on acoustic techniques will rely on the accuracy of the time-delay, or time-of-flight, measurement. Time-delay estimation is also a fundamental step in source localization and beamforming applications. It has attracted considerable research attention over the past few decades across different technologies, including radar, sonar, seismology, geophysics, ultrasonics, communications and medical ultrasound imaging. Various techniques are reported in the literature (Knapp & Carter, 1976; Carter, 1979; 1987; Boucher & Hassab, 1981; Chen et al., 2004) and a complete review can be found in (Chen et al., 2006). Chen *et al.* in their review consider critical techniques, limitations and recent advances that have significantly improved the performance of time-delay estimation in adverse environments. They classify these techniques into two broad categories: correlator-based approaches and system-identification-based techniques. Both categories can be implemented using two or more sensors; in general, more sensors lead to increased robustness due to greater redundancy. When the time delay is not an integer multiple of the sampling interval, however, it is necessary either to increase the sampling rate or to use interpolation, both of which have significant limitations. Interpolating with a parabolic fit to the correlation peak usually yields a biased estimate of the time delay, with both the bias and the variance of the estimate dependent on the location of the delay between samples, the SNR, the signal and noise bandwidths, and the prefilter or window used in the generalized correlator. Increasing the sampling rate is not desirable for practical implementation, since sampling at lower rates allows analog-to-digital converters (ADCs) that are more precise and consume less power. In addition, keeping the sampling rate low reduces the load on both the hardware and further digital processing units.
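To make the correlator-based category concrete, the following short sketch illustrates cross-correlation delay estimation with the parabolic peak interpolation discussed above. This is our own illustrative Python code, not from the chapter; the signal parameters (a windowed 5 kHz tone burst sampled at 1 MHz) are assumed purely for the demonstration.

```python
import numpy as np

def estimate_delay(x, y, fs):
    """Correlator-based time-delay estimate with parabolic peak refinement."""
    n = len(x)
    r = np.correlate(y, x, mode="full")   # lags run from -(n-1) to +(n-1)
    k = int(np.argmax(r))
    lag = float(k - (n - 1))
    # A parabolic fit through the peak and its two neighbours refines the
    # integer-lag estimate; as noted in the text, this interpolation is
    # generally biased when the true delay falls between samples.
    if 0 < k < len(r) - 1:
        denom = r[k - 1] - 2.0 * r[k] + r[k + 1]
        if denom != 0:
            lag += 0.5 * (r[k - 1] - r[k + 1]) / denom
    return lag / fs

# Demonstration: a windowed tone burst delayed by 23 samples at fs = 1 MHz.
fs = 1_000_000
burst = np.sin(2 * np.pi * 5000 * np.arange(2048) / fs) * np.hanning(2048)
x = np.zeros(4096)
y = np.zeros(4096)
x[:2048] = burst
y[23:23 + 2048] = burst          # true delay: 23 samples = 23 microseconds
tau = estimate_delay(x, y, fs)
```

Resolving a delay that falls between samples is exactly where the interpolation (and its bias) enters; the phase-based approach of Section 3 sidesteps this step entirely.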

In this chapter, we present a new phase-based approach to estimate the time-of-flight using only the phase information of the received signal, without the need for a reference signal. Other phase-based approaches often rely on a reference signal provided by a coherent local oscillator (Belostotski et al., 2001) to count the number of cycles taken by the signal to travel a given distance. Ambiguities in such phase measurements, due to the inability to count the integer number of cycles (wavelengths), are resolved using the Chinese Remainder Theorem (CRT) from number theory, where the wavelengths are selected to be pairwise relatively prime (Belostotski et al., 2001; Towers et al., 2004). However, the CRT is not robust, in the sense that small errors in its remainders may induce a large error in the integer it determines. The CRT with remainder errors has been investigated in the literature (Xiang et al., 2005; Goldreich et al., 2000). Another phase-based measurement approach, adopted to ensure accurate positioning of commercial robots, uses two or more frequencies on a decade scale in the transmitted signal; the phase shift of the received signal with respect to the transmitted signal is exploited for ranging (Lee et al., 1989; Yang et al., 1994). However, this approach is valid only when the maximum path length/displacement is less than one wavelength, otherwise a phase ambiguity appears. The time-delay estimation approach proposed here is based on local phase differences between specific frequency components of the received signal.
Using this approach overcomes the need to cross-correlate the received signal with either a reference signal or the transmitted signal. The novel approach developed here for time-delay estimation, and hence for distance and speed-of-sound measurement, outperforms the conventional correlation-based techniques, overcomes the 2*π*-phase ambiguity of the phase-based approaches, and accommodates most practical situations (Assous et al., 2008; 2010).
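For background, the cycle-count reconstruction that the CRT provides can be sketched in a few lines. This toy Python example is our own (the chapter does not itself use the CRT), with unit-wavelength numbers chosen purely for illustration.

```python
from math import prod

def crt(remainders, moduli):
    """Solve x = r_i (mod m_i) for pairwise relatively prime moduli m_i.

    The solution is unique modulo the product of the moduli.
    """
    M = prod(moduli)
    x = 0
    for r, m in zip(remainders, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): inverse of Mi mod m
    return x % M

# Toy ranging analogy: wavelengths of 3 and 5 units with whole-cycle
# remainders 2 and 3; the distance 8 is recovered uniquely below 3*5 = 15.
d = crt([2, 3], [3, 5])
```

As the text notes, small errors in the remainders can throw the reconstructed integer far off, which is one reason the chapter pursues a different route.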

**3. Distance measurement using the received signal phase differences between components: new concept**

Fig. 1. (a) Bat pulse. (b) Time-frequency plot of the bat pulse.





Consider the time-frequency plot of a single bat pulse shown in Fig. 1; note that at any particular time within the pulse there are essentially two frequencies present. The pulse length is about 2 ms and the frequency bandwidth is about 40 kHz. Let us describe in the following how, using two or more frequencies, we may infer a distance.

To explain the concept, consider a scenario where an acoustic pulse contains a single frequency component *f*<sup>1</sup> with an initial zero phase offset. This pulse is emitted through the medium, impinges on a target, is reflected and returns back. The signal is captured and its phase measured relative to the transmitted pulse. Given this situation, we cannot estimate the distance to and from an object greater than one wavelength away (hence, usually, we would estimate the time of arrival of the pulse and assume a value for the velocity of sound in the medium to estimate the distance to the target).

For simplicity, assume the pulse contains a single cycle of frequency *f*<sup>1</sup> of wavelength *λ*1.

The distance *D* to the target can be expressed as

$$D = n\_1 \lambda\_1 + r\_1 \tag{1}$$

where *λ*<sup>1</sup> = *v*/ *f*1, *n*<sup>1</sup> is an integer, *r*<sup>1</sup> is a fraction of the wavelength *λ*<sup>1</sup> and *v* is the speed of sound in the medium.

*r*<sup>1</sup> can be expressed as follows

$$r\_1 = \lambda\_1 \times \frac{\phi\_1}{360} \tag{2}$$

where *φ*<sup>1</sup> is the residual phase angle in degrees. Combining equations (1) and (2) and rearranging

$$\begin{split}D &= n\_1 \lambda\_1 + \lambda\_1 \frac{\phi\_1}{360} \\ &= n\_1 \frac{v}{f\_1} + \frac{\phi\_1}{360} \frac{v}{f\_1} \\ D &= \frac{v}{f\_1} (n\_1 + \frac{\phi\_1}{360}) \end{split} \tag{3}$$

If we transmit a second frequency component *f*<sup>2</sup> within the same pulse, then it will also have associated with it a wavelength *λ*<sup>2</sup> and a residual phase *φ*2, similarly:

$$D = \frac{v}{f\_2}(n\_2 + \frac{\phi\_2}{360})\tag{4}$$

From (3) and (4) we have *D*/*λ*<sup>1</sup> = *n*<sup>1</sup> + *φ*<sup>1</sup>/360 and *D*/*λ*<sup>2</sup> = *n*<sup>2</sup> + *φ*<sup>2</sup>/360; subtracting the first from the second and rearranging gives

$$D = (\frac{\lambda\_1 \lambda\_2}{\lambda\_1 - \lambda\_2})((n\_2 - n\_1) + \frac{(\phi\_2 - \phi\_1)}{360})$$

$$D = (\frac{\lambda\_1 \lambda\_2}{\lambda\_1 - \lambda\_2})(\Delta n + \frac{\Delta \phi}{360})\tag{5}$$

Using *v* = *f* × *λ* we obtain

$$D = \frac{v}{f\_2 - f\_1}((n\_2 - n\_1) + \frac{(\phi\_2 - \phi\_1)}{360}) = \frac{v}{\Delta f}(\Delta n + \frac{\Delta \phi}{360})\tag{6}$$

Knowing *D* = *v* × *t* we deduce the time delay *t* as

$$t = \frac{1}{\Delta f} (\Delta n + \frac{\Delta \phi}{360}) \tag{7}$$

If we impose the condition that Δ*n* ≤ 1 then (6) can be solved. This restriction on Δ*n* is imposed as follows:

• A distance *D* is chosen within which we require an unambiguous range measurement.
• Select a frequency *f*<sup>1</sup> within the bandwidth of the system, and its corresponding wavelength *λ*<sup>1</sup>, so that *n*<sup>1</sup> = *D*/*λ*<sup>1</sup> (from (1)).
• Similarly, using (1), select frequency *f*<sup>2</sup> with its corresponding wavelength *λ*<sup>2</sup> such that the number of cycles is *n*<sup>2</sup> = *n*<sup>1</sup> + 1.
Considering (6), the maximum range achieved by this approach, when Δ*n* = 1, is

$$R = \frac{v}{\Delta f} \tag{8}$$

Therefore, R is the maximum unambiguous range that can be achieved using two frequencies *f*<sup>1</sup> and *f*<sup>2</sup> as described above, where any distance within the range R can be determined unambiguously.
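The estimator defined by (6)–(8) can be sketched in a few lines of Python. This is our own illustrative code (the function name and interface are assumed); it takes the two residual phases as already measured and wraps a negative Δ*φ* by 360°, which absorbs the Δ*n* = 1 case.

```python
def range_from_phases(phi1_deg, phi2_deg, delta_f, v):
    """Distance from the residual phases of two frequency components.

    Implements D = (v / delta_f) * (dn + dphi/360) from (6); with dn
    restricted to 0 or 1, wrapping a negative dphi by +360 degrees makes
    the estimate unambiguous within R = v / delta_f, as in (8).
    """
    dphi = phi2_deg - phi1_deg
    if dphi < 0.0:
        dphi += 360.0          # the dn = 1 case
    return (v / delta_f) * (dphi / 360.0)

# Phases from the example in Section 3.1: v = 1.5 mm/us,
# delta_f = 1 kHz = 0.001 MHz, so R = 1500 mm.
d = range_from_phases(125.923, 5.953, 0.001, 1.5)   # approx. 1000.12 mm
```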

#### **3.1 Example**

• Defining an unambiguous range *R* = 1500 mm and assuming the speed of sound in water *v* = 1500 m/s = 1.5 mm/*μ*s, selecting *f*<sup>1</sup> = 200.0 kHz gives *λ*<sup>1</sup> = 7.5 mm.
• Using (1), for this range *R* = *D*, *n*<sup>1</sup> = 200.0 cycles and *r*<sup>1</sup> = 0. Ensuring Δ*n* = 1, from (6), requires *n*<sup>2</sup> = 201.0 cycles, and *f*<sup>2</sup> = 201.0 kHz.
• Consider a distance to target *d* = 1000.1234 mm, which we wish to estimate (*d* is unknown but lies within the unambiguous range *R*).
• Using frequencies *f*<sup>1</sup> and *f*<sup>2</sup> defined above, and equations (1) and (6), gives
 **–** *n*<sup>1</sup> = 133, *r*<sup>1</sup> = 0.349786 cycle ⇐⇒ *φ*<sup>1</sup> = 0.349786 × 360 = 125.923◦


 **–** *n*<sup>2</sup> = 134, *r*<sup>2</sup> = 0.0165356 cycle ⇐⇒ *φ*<sup>2</sup> = 0.0165356 × 360 = 5.953◦
 **–** Thus, Δ*φ* = *φ*<sup>2</sup> − *φ*<sup>1</sup> = 5.953 − 125.923 = −119.970◦.

We use this value in the formula given in (6). However, since Δ*φ* is negative (meaning Δ*n* = 1), we add 360◦, giving 240.0296◦. (If Δ*φ* were positive we would have used the value directly.)

• Now, with *v* = 1.500 mm/*μ*s and Δ*f* = 1 kHz, substituting into (6) gives a first estimate of the range ˆ*d*<sub>*f*1*f*2</sub> = 1000.1233 mm.

The Unambiguous Range R (8) is independent of the frequencies used, depending only on the difference in frequency Δ*f* .

• Note that in practice such resolution may not be achievable, and limitations must be considered. For example, if the uncertainty in estimating the phase is within ±0.5◦, then the phases in the example above become *φ*<sup>1</sup> = 126.0◦ and *φ*<sup>2</sup> = 6.0◦, giving *d* = 1000.0 mm, implying an error of 0.1234 mm.
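The worked example can be checked numerically. The short script below is our own verification, assuming ideal noise-free phase measurements: it generates the residual phases from the true distance via (1)–(2) and recovers the distance via (6).

```python
v = 1.5                     # speed of sound in water, mm/us
f1, f2 = 0.200, 0.201       # frequencies in MHz, so delta_f = 1 kHz
d_true = 1000.1234          # target distance in mm (unknown in practice)

# Residual phase of each component after travelling d_true, from (1)-(2).
phases = []
for f in (f1, f2):
    cycles = d_true * f / v                 # n + phi/360
    phases.append((cycles % 1.0) * 360.0)   # residual phase phi in degrees

dphi = phases[1] - phases[0]
if dphi < 0.0:
    dphi += 360.0                           # negative difference: dn = 1
d_est = (v / (f2 - f1)) * (dphi / 360.0)    # (6); here R = 1500 mm
```

With exact phases, the distance is recovered to machine precision; quantizing the phases to the nearest 0.5◦, as in the note above, reproduces the 0.1234 mm error.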

Considering two frequencies *f*<sup>1</sup> = 200.0 kHz and *f*<sup>2</sup> = 201 kHz, assuming the speed of sound in water (v =1.5 mm/*μ*s), from (8), the unambiguous range *R* = 1500 mm and Δ*n* is 0 or 1.


Consider two distances *d*<sup>1</sup> and *d*<sup>2</sup>, corresponding to two "times" *t*<sup>1</sup> and *t*<sup>2</sup>, such that the number of cycles *n* is the same for both frequencies over these distances, and assume that the phase measured at each frequency includes an unknown phase offset for that frequency. As an example, suppose the unknown phase offset for *f*<sup>1</sup> is 10◦ and for *f*<sup>2</sup> is 30◦, and assume *d*<sup>1</sup> = 100 mm. From *D* = *v* × *t* and (3), the time of flight at each frequency is

$$t = \frac{D}{v} = \frac{n + \phi/360}{f} \tag{9}$$

From (9), the term (*n*<sup>1</sup> + *φ*<sup>1</sup>/360) would be calculated as 13.3333 cycles, where *φ*<sup>1</sup> = 120◦. The 'measured' *φ*<sup>1</sup> = 120 + 10 = 130◦ (*φ*<sub>1,measured</sub> = *φ*<sub>1,distance</sub> + *φ*<sub>1,offset</sub>). Similarly, for *f*<sup>2</sup> we obtain (*n*<sup>2</sup> + *φ*<sup>2</sup>/360) = 13.40 cycles, where *φ*<sup>2</sup> = 144◦; the 'measured' *φ*<sup>2</sup> = 144 + 30 = 174◦. From (7), *t*<sup>1</sup> = Δ*φ*/(360 × Δ*f*) = (174 − 130)/(360 × Δ*f*) = 122.222 *μ*s, whereas the actual time should be 66.666 *μ*s.

Assume a second distance *d*<sup>2</sup> = 200 mm. Using (9), for *f*<sup>1</sup> we obtain (*n*<sup>1</sup> + *φ*<sup>1</sup>/360) = 26.6666 cycles, which gives *φ*<sup>1</sup> = 240◦; the 'measured' *φ*<sup>1</sup> = 240 + 10 = 250◦. For *f*<sup>2</sup> we obtain (*n*<sup>2</sup> + *φ*<sup>2</sup>/360) = 26.80 cycles, which gives *φ*<sup>2</sup> = 288◦; the 'measured' *φ*<sup>2</sup> = 288 + 30 = 318◦. Thus, using (7), *t*<sup>2</sup> = Δ*φ*/(360 × Δ*f*) = (318 − 250)/(360 × Δ*f*) = 188.888 *μ*s, whereas the actual time should be 133.333 *μ*s.

Plotting d on the x-axis and t on the y-axis, we obtain a linear relationship

$$t = 0.6666 \times d + 55.555 \tag{10}$$

where the slope (0.6666 = 1/1.5) is the reciprocal of the speed of sound, i.e. the speed of sound is measured as 1 mm per 0.6666 *μ*s, or 1/0.6666 = 1.5 mm/*μ*s. The intercept (55.555 *μ*s) is a measure of the relative phase between *f*<sup>1</sup> and *f*<sup>2</sup>. Since Δ*f* = 1 kHz, one cycle is 1000 *μ*s long; consequently the offset of 55.555 *μ*s ≡ 360 × (55.555/1000) = 20◦, which is equal to the relative phase (30 − 10) between the two frequencies. If we had known the phase offset between the two frequencies (20◦), then in the calculation of the times we would have obtained for *t*<sup>1</sup> a new phase difference of (174 − 130 − 20) = 24◦, giving *t*<sup>1</sup> = 66.666 *μ*s. Similarly, for *t*<sup>2</sup> we obtain a new phase difference of (318 − 250 − 20) = 48◦, giving *t*<sup>2</sup> = 133.333 *μ*s. Both *t*<sup>1</sup> and *t*<sup>2</sup> are now correct.

Considering the above, note that:

• If we assumed *d*<sup>1</sup> was 100 mm but it was actually, say, 120 mm, and that *d*<sup>2</sup> was 200 mm but actually 220 mm, then we would obtain the phase offset as 15.2◦; the slope of (10), however, is unaffected. Such uncertainty may arise, for example, if the distance traveled by the wave within the transducers is not taken into consideration.

• If the temperature changes and so *v* changes, this changes the slope of (10) but not the time intercept or the phase offset. For example, if *v* = 1.6 mm/*μ*s, then equation (10) becomes *t* = (1/1.6) × *d* + 55.555 = 0.625 × *d* + 55.555.

#### **3.2 Using multiple frequencies through a "Vernier approach"**

In (6), we imposed the condition that Δ*n* ≤ 1. The values of the frequencies *f*1 and *f*2 were chosen to ensure this condition and to obtain a first estimate of the distance, ˆ*d*f1f2, from (6), and an estimate of the time delay, ˆ*t*f1f2, from (7).

Introducing a third frequency *f*<sup>3</sup> = 210 kHz, such that ( *f*<sup>3</sup> − *f*<sup>1</sup> = 10 × (*f*<sup>2</sup> − *f*1)); *f*<sup>2</sup> differs from *f*<sup>1</sup> by 1 kHz and *f*<sup>3</sup> differs from *f*<sup>1</sup> by 10 kHz.

Again from (1), for *f*<sup>3</sup> and d=1000.1234 mm, *n*<sup>3</sup> = 140 cycles and *r*<sup>3</sup> = 0.017276. Thus, *φ*<sup>3</sup> = 6.219◦ which we would measure as 6.5◦. Thus, Δ*φ*<sup>13</sup> = *φ*<sup>3</sup> − *φ*<sup>1</sup> = 6.5 − 126 = −119.5◦. We add 360◦ to give 240.5◦. However, Δ*n*<sup>13</sup> between frequencies *f*<sup>1</sup> and *f*<sup>3</sup> is now 7 (in fact 6, since we have already added in 360◦ to make the phase difference positive).

Using (6) with Δ*φ*13 and different values of Δ*n*13 (0 to 6) gives different distance estimates ˆ*d*f1f3.

Applying (6) recursively for Δ*n*13 = 0, ..., 6 gives candidate estimates ˆ*d*<sup>*k*</sup>f1f3 for *k* = 0, ..., 6. Selecting the candidate closest in value to ˆ*d*f1f2 as the optimum gives ˆ*d*f1f3 = 1000.2083 mm (at *k* = 6), and hence a new best time delay estimate ˆ*t*f1f3.

Note that the new best distance estimate has an error of 0.0849 mm. If a 4th frequency *f*4 = 300.0 kHz is introduced, such that Δ*f* = *f*4 − *f*1 = 100.0 kHz, using (1) again gives *n*4 = 200 cycles and *r*4 = 0.02468, corresponding to *φ*4 = 8.8848°, which we measure as 9°. Thus, Δ*φ* = 9 − 126 = −117°, which gives Δ*φ* = 243° after adding 360°. Note that Δ*n* = 66 in this case.

Similarly, select the estimate ˆ*d*f1f4 (Δ*n* = 0, 1, 2, ..., 66) closest in value to ˆ*d*f1f3. This occurs at Δ*n* = 66, giving ˆ*d*f1f4 = 1000.125 mm. Taking this as the best estimate, the final error is 0.0016 mm = 1.6 microns.

Thus, this example is reminiscent of the operation of a Vernier gauge, as follows:

• Δ*φ*12, related to the frequencies *f*1 and *f*2, gives the first estimate of the distance ˆ*d*f1f2, and hence ˆ*t*f1f2.

• A higher frequency *f*3 is then used (decade difference) to measure the same range but with a finer resolution, so a more accurate approximation to the measured range, ˆ*d*f1f3, is obtained.

• Similarly, the measured range ˆ*d*f1f4, corresponding to Δ*φ*14 between *f*1 and *f*4, will give the ultimate estimate of the measured range d.

• Consequently, the maximum distance and the minimum resolution achieved are determined by the choice of the frequencies *f*1, *f*2, *f*3 and *f*4.
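As a sketch only (hypothetical Python, not code from the chapter: the helper names, the 0.5° phase quantisation and the candidate-search loop are assumptions), the Vernier-style refinement of the worked example can be reproduced numerically:

```python
def phase_deg(f_hz, d_mm, v_mm_per_s=1.5e6, quant=0.5):
    """Phase (degrees) of a tone of frequency f after travelling d, quantised."""
    cycles = f_hz * d_mm / v_mm_per_s
    phi = (cycles % 1.0) * 360.0
    return round(phi / quant) * quant

def refine(f1, fk, d_prev, d_mm, v_mm_per_s=1.5e6):
    """Pick the Delta-n in eq. (6) whose distance candidate is closest to d_prev."""
    dphi = (phase_deg(fk, d_mm) - phase_deg(f1, d_mm)) % 360.0
    df = fk - f1
    best, dn = None, 0
    while True:
        cand = v_mm_per_s * (dn + dphi / 360.0) / df
        if best is None or abs(cand - d_prev) < abs(best - d_prev):
            best = cand
        if cand > d_prev + v_mm_per_s / df:   # one full ambiguity past d_prev
            break
        dn += 1
    return best

d_true = 1000.1234                             # mm
f1, f2, f3, f4 = 200e3, 201e3, 210e3, 300e3    # Hz
d12 = refine(f1, f2, 0.0, d_true)              # coarse estimate, ~1000.0 mm
d13 = refine(f1, f3, d12, d_true)              # finer estimate
d14 = refine(f1, f4, d13, d_true)              # finest estimate, ~1000.125 mm
```

With these assumed rounding rules the final pair lands on the chapter's 1000.125 mm value; intermediate estimates differ slightly with the exact quantisation chosen.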


#### **3.3 Phase offset measurement calibration**

In the numerical example above, it is assumed the phases are faithfully transmitted and received, with no phase error on transmission or reception. It is also assumed that all frequencies have zero phase offset with respect to each other. In practice this is almost certainly not the case and such phase offsets between frequencies should be accounted for as discussed below.

Considering two frequencies *f*<sup>1</sup> = 200.0 kHz and *f*<sup>2</sup> = 201 kHz, assuming the speed of sound in water (v =1.5 mm/*μ*s), from (8), the unambiguous range *R* = 1500 mm and Δ*n* is 0 or 1. Considering the above

$$t = \frac{D}{v} = \frac{n + \phi/360}{f} \tag{9}$$

Consider two distances *d*1, *d*<sup>2</sup> corresponding to two "times" *t*<sup>1</sup> and *t*<sup>2</sup> such that the number of cycles *n* is the same for both frequencies over these distances, and assume the phase measured includes a phase offset for that frequency. As an example, suppose the unknown phase offset for *f*<sup>1</sup> is 10◦, for *f*<sup>2</sup> is 30◦ and assume *d*<sup>1</sup> = 100 mm.

From (9), the term (*n*<sup>1</sup> + *φ*1/360) would be calculated as 13.3333 cycles, where *φ*<sup>1</sup> = 120◦. The 'measured' *φ*<sup>1</sup> = 120 + 10 = 130◦ (*φ*1*measured* = *φ*1*distance* + *φ*1*o f f set*).

Similarly, for *f*2 we obtain (*n*2 + *φ*2/360) = 13.40 cycles, where *φ*2 = 144°. The 'measured' *φ*2 = 144 + 30 = 174°; from (7), *t*1 = Δ*φ*/(360 × Δ*f*) = (174 − 130)/(360 × Δ*f*) = 122.222 *μ*s. The actual time should be 66.666 *μ*s.

Assume a second distance *d*2= 200 mm. Using (9), for *f*<sup>1</sup> we obtain (*n*<sup>1</sup> + *φ*1/360) = 26.6666 cycles, which gives *φ*<sup>1</sup> = 240◦, the 'measured' *φ*<sup>1</sup> = 240 + 10 = 250◦. For *f*<sup>2</sup> we obtain (*n*<sup>2</sup> + *φ*2/360) =26.80 cycles, which gives *φ*<sup>2</sup> = 288◦. The 'measured' *φ*<sup>2</sup> = 288 + 30 = 318◦.

Thus, using (7), *t*2 = Δ*φ*/(360 × Δ*f*) = (318 − 250)/(360 × Δ*f*) = 188.888 *μ*s. The actual time should be 133.333 *μ*s.

Plotting d on the x-axis and t on the y-axis, we obtain the linear relationship

$$t = 0.6666 \times d + 55.555 \tag{10}$$

where the slope (0.6666 = 1/1.5) is the speed of sound, measured as 1 mm per 0.6666 *μ*s, i.e. 1/0.6666 = 1.5 mm/*μ*s. The intercept (55.555 *μ*s) is a measure of the relative phase between *f*1 and *f*2. Since Δ*f* = 1 kHz, 1 cycle is 1000 *μ*s long; consequently the offset of 55.555 *μ*s ≡ 360 × (55.555/1000) = 20°, which is equal to the relative phase (30 − 10) between the two frequencies. If we had known the phase offset between the two frequencies (20°), then in the calculation of times we would have obtained for *t*1 a new phase difference of (174 − 130 − 20) = 24°, giving a time for *t*1 = 66.666 *μ*s.

Similarly, for *t*2 we obtain a new phase difference of (318 − 250 − 20) = 48°, giving a time for *t*2 = 133.333 *μ*s. Both *t*1 and *t*2 are now correct.
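The calibration above can be sketched numerically (a hypothetical Python fragment, assuming Δ*f* = *f*2 − *f*1 = 1 kHz and the 'measured' phases quoted in the example; variable names are invented):

```python
# 'Measured' phases (deg) at the two calibration distances, including the
# unknown per-frequency offsets (10 deg on f1, 30 deg on f2).
df_hz = 1e3                                   # f2 - f1 = 201 kHz - 200 kHz
meas = {100.0: (130.0, 174.0), 200.0: (250.0, 318.0)}  # d_mm: (phi1, phi2)

# eq. (7): t = delta_phi / (360 * delta_f), converted to microseconds
times = {d: (p2 - p1) / (360.0 * df_hz) * 1e6 for d, (p1, p2) in meas.items()}

(d1, t1), (d2, t2) = sorted(times.items())
slope = (t2 - t1) / (d2 - d1)                 # us/mm; its inverse is the sound speed
intercept = t1 - slope * d1                   # us; encodes the inter-frequency offset
speed = 1.0 / slope                           # -> 1.5 mm/us
offset_deg = 360.0 * intercept / (1e6 / df_hz)  # one delta-f cycle = 1000 us -> 20 deg
```

Recomputing from the phases in this way yields the slope, intercept and 20° offset discussed above.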

Note that:




## **4. Application**

## **4.1 Experiment**

To demonstrate this approach, a series of measurements were performed in a water tank measuring 1530 × 1380 × 1000 mm³. Two broadband ultrasonic transducers were used, with a centre frequency between 100 kHz and 130 kHz. They operate as both transmitters (Tx) and receivers (Rx) of ultrasound with a beam width of around 10 degrees at the centre frequency, and a −3 dB bandwidth of 99 kHz (72 kHz to 171 kHz). The transducers were mounted on a trolley, moveable in the Y-direction, which was in turn mounted on a rail, moveable in the X-direction. The experimental set-up is illustrated in Fig. 2.

Fig. 2. Schematic diagram of the experimental setup

For this purpose, linear encoders were used to measure displacement of the rails in the x-direction. Software written in Visual Basic provided readouts of transducer positions. The temperature in the tank was measured by thermocouples calibrated against a quartz thermometer traceable to a national standard. They were positioned on the four side panels of the tank and recorded 19.84, 19.89, 19.89 and 19.88 °C during the experiment.

The transmitter was driven directly by a 20 V peak-to-peak waveform consisting of four sine waves with zero phase offset (70 kHz, 71 kHz, 80 kHz and 170 kHz) added together. A modular system comprising a 16-bit arbitrary waveform generator (Ztec ZT530PXI) and a 16-bit digital storage oscilloscope (Ztec ZT410PXI) was used to transmit and receive signals. A program written in C++ was used to control the signal transmission and acquisition. Distances between Tx and Rx of 612.859 mm, 642.865 mm, 702.876 mm, 762.875 mm, 822.847 mm, 882.875 mm, 942.863 mm and 999.804 mm were chosen to be within the unambiguous range *R* ≈ 1500 mm (8), as set by the linear encoders. A set of 10 signals was transmitted for each distance described above. Note that, before transmitting, each signal was multiplied by a Tukey window (a cosine-tapered window with a 0.1 taper ratio) to reduce the "turn on" and "turn off" transients of the transducers.
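The transmit waveform described above can be sketched in a few lines (a hypothetical reconstruction, not the authors' code: the inline Tukey-window helper and the scaling to a 10 V amplitude are assumptions):

```python
import numpy as np

def tukey(m, alpha=0.1):
    """Cosine-tapered (Tukey) window with taper ratio alpha."""
    n = np.arange(m)
    w = np.ones(m)
    edge = int(np.floor(alpha * (m - 1) / 2.0))
    ramp = 0.5 * (1 + np.cos(np.pi * (2.0 * n[: edge + 1] / (alpha * (m - 1)) - 1)))
    w[: edge + 1] = ramp
    w[m - edge - 1:] = ramp[::-1]
    return w

Fs = 10e6                            # 10 MHz sampling rate
t = np.arange(int(Fs * 3e-3)) / Fs   # 3 ms pulse
tones = [70e3, 71e3, 80e3, 170e3]    # the four zero-offset sine components
x = sum(np.sin(2 * np.pi * f * t) for f in tones)
x *= tukey(len(x), alpha=0.1)        # soften the "turn on"/"turn off" transients
x *= 10.0 / np.max(np.abs(x))        # scale to a 20 V peak-to-peak drive
```

The tapered ends bring the drive voltage smoothly from zero and back, which is what suppresses the transducer transients.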


At each distance, 3 repetitive pulses were transmitted and received. Furthermore, 6 repetitive pulses were transmitted and received while keeping the distance constant at 999.804 mm to assess the repeatability of the system (see Fig. 3). Each pulse was 3 ms long and contained the 4 added frequency components described above. The sampling frequency Fs was set to 10 MHz, giving N = 20000 samples. A Discrete Fourier Transform (DFT) was then applied to the received pulses to obtain the magnitude and phase information for each of the 4 frequency components, using a window of [*N*/2 + 1 : *N*] samples of each received signal. This gave a resolution of *Fs*/(*N*/2) = 1 kHz, which was consistent with the smallest step between the 4 frequencies.

Fig. 3. Examples of transmitted and received signals.

Fig. 3 shows the original transmitted (left) and received (right) signals when Tx and Rx were 999.804 mm apart.

Note the DFT reports phase with respect to cosine, whereas sine waves were used in this experiment. Sine waves are returned with a phase of −90◦ relative to cosine waves by the DFT. This was not an issue, since relative phase differences were used.
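The phase-extraction step can be illustrated as follows (a hypothetical, noise-free sketch: the tone frequencies sit on exact 1 kHz bins as in the experiment, so each component's phase is read directly from its bin):

```python
import numpy as np

Fs, N = 10_000_000, 20_000
tones = [70e3, 71e3, 80e3, 170e3]
t = np.arange(N) / Fs
x = sum(np.sin(2 * np.pi * f * t) for f in tones)  # idealised received pulse

half = x[N // 2:]                    # window [N/2 + 1 : N] -> N/2 samples
spec = np.fft.rfft(half)
bin_hz = Fs / (N // 2)               # 1 kHz bin spacing
phases = {f: np.degrees(np.angle(spec[int(f / bin_hz)])) for f in tones}
# each sine component reads ~ -90 deg relative to the DFT's cosine reference
```

Because every component completes an integer number of cycles in the window, there is no leakage between bins, and the −90° cosine-vs-sine convention mentioned above appears directly in the extracted phases.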


| Distance (*mm*) | Estimated time (*μs*) |
|---|---|
| 612.859 | 446.557 |
| 642.865 | 466.779 |
| 702.876 | 507.267 |
| 762.875 | 547.739 |
| 822.847 | 587.975 |
| 882.875 | 628.386 |
| 942.863 | 668.855 |
| 999.804 | 707.183 |

Table 1. The estimated time delays for the eight distances

#### **4.2 Results and discussion**

Using the phase difference for each distance, the phase-based time delay approach was applied to obtain the corresponding estimated times for each phase difference Δ*φ*12, Δ*φ*13, Δ*φ*14, Δ*φ*23, Δ*φ*24 and Δ*φ*34, for the pairs *f*1 *f*2, *f*1 *f*3, *f*1 *f*4, *f*2 *f*3, *f*2 *f*4 and *f*3 *f*4, respectively. Note that the DFT must be applied carefully when calculating the phase for each component: every frequency component must span an integer number of cycles and must fall on the frequency bins defined by the smallest frequency difference between components. A first estimate ˆ*t*12 = Δ*φ*12/(360 × (*f*2 − *f*1)) was computed using (7), and likewise the estimated times ˆ*t*13, ˆ*t*14, ˆ*t*23, ˆ*t*24 and ˆ*t*34. For each distance, ˆ*t*14 should be the best estimate (i.e. the pair with the greatest Δ*f*), and it is taken as the best time delay estimate in Table 1. Fig. 4 shows the estimated ˆ*t*14 for the distance between Tx and Rx of 582.893 mm for 5 repeat pulses. The repeatability is shown to be within 8 ns.

Fig. 4. Repeated time delay estimations for a transducer separation of 582.893 mm showing a maximum variability of almost 8 ns in the time delays estimated (617.8±0.0081 *μ*s).
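The times in Table 1 can themselves be fitted against distance, in the spirit of the Section 3.3 calibration, to recover the sound speed and a residual system offset (a sketch; this fit is an illustration, not an analysis reported in the chapter):

```python
import numpy as np

# (distance mm, estimated time us) pairs from Table 1
data = np.array([
    [612.859, 446.557], [642.865, 466.779], [702.876, 507.267],
    [762.875, 547.739], [822.847, 587.975], [882.875, 628.386],
    [942.863, 668.855], [999.804, 707.183],
])
slope, intercept = np.polyfit(data[:, 0], data[:, 1], 1)  # least-squares line
speed = 1.0 / slope    # mm/us; ~1.48 mm/us, plausible for ~19.9 C water
# intercept (us) absorbs any fixed system delay / inter-frequency phase offset
```

The fitted slope corresponds to a sound speed close to the tabulated value for water at the measured tank temperature, while a non-zero intercept indicates a fixed delay that a calibration as in Section 3.3 would remove.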

#### **5. Conclusion**

In this chapter, a high resolution time delay estimation approach based on phase differences between components of the received signal has been proposed. There is thus no need for a cross-correlation between the transmitted and the received signal, as all the information is contained in the local phase differences between components of the received signal. Within an unambiguous range defined by the smallest and the highest frequency in the signal, any distance can be estimated. This time delay technique also leads to high resolution distance and speed of sound measurements. The approach is tolerant to additive Gaussian noise as it relies on local phase difference information. The technique is cost effective as it is implemented in software, with no need for a local oscillator to resolve the phase ambiguity by counting the integer number of cycles in the received signal, as is done by conventional methods. As noted above, Whitlow & Simmons (2007) concluded that a bat can achieve a resolution of 20 *μ*m in air; if we can measure the phase to an accuracy of 1 degree, then this would allow us to achieve a resolution of 4 mm/360 = 11 *μ*m using the same wavelength as the bat does.

## **6. Acknowledgment**


This work was undertaken as part of the Biologically Inspired Acoustic Systems (BIAS) project that is funded by the RCUK via the Basic Technology Programme grant reference number EP/C523776/1. The BIAS project involves collaboration between the British Geological Survey, Leicester University, Fortkey Ltd., Southampton University, Leeds University, Edinburgh University and Strathclyde University.

#### **7. References**

Assous, S.; Jackson, P.; Hopper, C.; Gunn, D.; Rees, J.; Lovell, M. (2008). Bat-inspired distance measurement using phase information. *J. Acoust. Soc. Am.*, Vol. 124, page numbers (2597).

Assous, S.; Hopper, C.; Gunn, D.; Jackson, P.; Rees, J.; Lovell, M. (2010). Short pulse multi-frequency phase-based time delay estimation. *J. Acoust. Soc. Am.*, Vol. 01, page numbers (609-611).

Belostotski, L.; Landecker, T.L.; Routledge, D. (2001). Distance measurement with phase stable CW radio link using the Chinese remainder theorem. *Electronics Letters*, Vol. 37, No. 8, page numbers (521-522).

Boucher, R.E. & Hassab, J.C. (1981). Analysis of discrete implementation of generalised cross-correlator. *IEEE Trans. Acoust., Speech, Signal Processing*, Vol. ASSP-29, page numbers (309-315).

Carter, G.C. (1979). Sonar signal processing for source state estimation. *IEEE Pub. 79CH1476-1-AES*, page numbers (386-395).

Carter, G.C. (1987). Coherence and time delay estimation. *Proc. of the IEEE*, Vol. 75, No. 2, page numbers (236-255).

Chen, J.; Benesty, J.; Huang, Y.A. (2004). Time delay estimation via linear interpolation and cross correlation. *IEEE Trans. Speech and Audio Processing*, Vol. 12, No. 5, page numbers (509-519).

Chen, J.; Benesty, J.; Huang, Y.A. (2006). Time delay estimation in room acoustic environments: An overview. *Eurasip Journal on Applied Signal Processing*, Vol. 2006, Article ID 26503, page numbers (1-19).

Knapp, G.H. & Carter, G.C. (1976). The generalised correlation method for estimation of time delay. *IEEE Trans. Acoust., Speech, Signal Processing*, Vol. ASSP-24, No. 4, page numbers (320-327).

Thomas, J.A. & Moss, C.F. (2004). Echolocation in bats and dolphins, *University of Chicago Press*.

Towers, C.E.; Towers, P.D.; Jones-Julian, D.C. (2004). The efficient Chinese remainder theorem algorithm for full-field fringe phase analysis in multi-wavelength interferometry. *Optics Express*, Vol. 12, No. 6, page numbers (1136-1143).

Whitlow, W.L. (1993). The Sonar of Dolphins, *Springer*.

Whitlow, W.L. & Simmons, J.A. (2007). Echolocation in dolphins and bats, *Physics Today*, page numbers (40-45).


**2**

**Array Processing: Underwater Acoustic Source Localization**

Salah Bourennane, Caroline Fossati and Julien Marot

*Institut Fresnel, Ecole Centrale Marseille, France*

**1. Introduction**

Array processing is used in diverse areas such as radar, sonar, communications and seismic exploration. Usually the parameters of interest are the directions of arrival (DOA) of the radiating sources. High-resolution subspace-based methods for DOA estimation have been a topic of great interest. The subspace-based methods developed so far require a fundamental assumption: that the background noise is uncorrelated from sensor to sensor, or known to within a multiplicative scalar. In practice this assumption is rarely fulfilled, and the noise received by the array may be a combination of multiple noise sources such as flow noise, traffic noise, or ambient noise, which is often correlated along the array (Reilly & Wong, 1992; Wu & Wong, 1994). Typically, the spatial noise is estimated by measuring the spectrum of the received data when no signal is present. The data for parameter estimation is then pre-whitened using the measured noise. The problem with this method is that the actual noise covariance matrix varies as a function of time in many applications. At low signal-to-noise ratio (SNR) the deviations from the assumed noise characteristics are critical, and the degradation of the localization result may be severe. In this chapter, we present an algorithm to estimate noise with a band covariance matrix. This algorithm is based on the noise subspace spanned by the eigenvectors associated with the smallest eigenvalues of the covariance matrix of the recorded data. The goal of this study is to investigate how perturbations in the assumed noise covariance matrix affect the accuracy of the narrow-band signal DOA estimates (Stoica et al., 1994). A maximum likelihood algorithm is presented in (Wax, 1991), where the spatial noise covariance is modeled as a function of certain unknown parameters. A maximum likelihood estimator is also analyzed in (Ye & DeGroat, 1995).

The problem of incomplete pre-whitening or colored noise is circumvented by modeling the noise with a simple descriptive model. There are other approaches to the problem of spatially correlated noise: one is based on the assumption that the correlation structure of the noise field is invariant under a rotation or a translation of the array, while another is based on a certain linear transformation of the sensor output vectors (Zhang & Ye, 2008; Tayem et al., 2006). These methods do not require the estimation of the noise correlation function, but they may be quite sensitive to deviations from the invariance assumption made, and they are not applicable when the signals also satisfy the invariance assumption.

**2. Problem formulation**

Consider an array of *N* sensors which receive the signals in one wave field generated by *P* (*P* < *N*) sources in the presence of an additive noise. The received signal vector is sampled

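The noise-subspace separation that the algorithm above builds on can be sketched as follows (hypothetical Python: the array geometry, source directions and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, T = 8, 2, 2000                      # sensors, sources, snapshots
theta = np.deg2rad([10.0, 25.0])          # assumed source directions
A = np.exp(-1j * np.pi * np.outer(np.arange(N), np.sin(theta)))  # ULA steering matrix
S = rng.standard_normal((P, T)) + 1j * rng.standard_normal((P, T))
noise = 0.1 * (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T)))
X = A @ S + noise                         # received sensor outputs

R = X @ X.conj().T / T                    # sample covariance matrix
w, vecs = np.linalg.eigh(R)               # eigenvalues in ascending order
noise_subspace = vecs[:, : N - P]         # eigenvectors of the N-P smallest eigenvalues
# noise_subspace is (nearly) orthogonal to the steering vectors in A
```

The smallest N − P eigenvalues cluster around the noise power, and their eigenvectors span the noise subspace from which the noise covariance structure is then estimated.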
