#### **3.2.2 Matrix Gaussian distribution**

Let $\mathbf{X}\in\mathbb{R}^{m\times n}$ be a random matrix distributed as $\mathbf{X} \sim N_{m,n}\left(\mathbf{M},\,\boldsymbol{\Xi}^{(1)},\,\boldsymbol{\Xi}^{(2)}\right)$, where $\mathbf{M}\in\mathbb{R}^{m\times n}$ is the expectation matrix, $\boldsymbol{\Xi}^{(1)}\in\mathbb{R}^{m\times m}$ is the covariance matrix across the rows, and $\boldsymbol{\Xi}^{(2)}\in\mathbb{R}^{n\times n}$ is the covariance matrix across the columns. Hence, the pdf of $\mathbf{X}$ is given as

$$f\left(\mathbf{X}\right) = \left(2\pi\right)^{-\frac{mn}{2}} \left|\boldsymbol{\Xi}^{(1)}\right|^{-\frac{n}{2}} \left|\boldsymbol{\Xi}^{(2)}\right|^{-\frac{m}{2}} \exp\left\{-\frac{1}{2}\,tr\left[\left(\boldsymbol{\Xi}^{(1)}\right)^{-1}\left(\mathbf{X}-\mathbf{M}\right)\left(\boldsymbol{\Xi}^{(2)}\right)^{-1}\left(\mathbf{X}-\mathbf{M}\right)^{T}\right]\right\} \tag{14}$$

(Arnold, 1981). Also, if we stack the matrix $\mathbf{X}$ into the random vector $vec\left(\mathbf{X}\right)$, then

$$vec\left(\mathbf{X}\right) \sim N_{mn}\left(vec\left(\mathbf{M}\right),\,\boldsymbol{\Xi}^{(2)}\otimes\boldsymbol{\Xi}^{(1)}\right). \tag{15}$$

This characterization is similar to the MGMRF discussed in Section 2.3.

#### **3.2.3 Separable covariance structure**

We model the associated noise process $\mathbf{X}_{\mathbf{s}}$ as a matrix Gaussian distribution, i.e., its matrix form $\mathbf{X}_{\mathbf{s}}^{\#}\in\mathbb{R}^{N_1\times N_2}$ satisfies

$$\mathbf{X}_{\mathbf{s}}^{\#}\,\big|\,L_{\mathbf{s}}=m \;\sim\; N_{N_1,N_2}\left(\mathbf{0}_{N_1\times N_2},\,\boldsymbol{\Sigma}^{(1)}\left(m\right),\,\boldsymbol{\Sigma}^{(2)}\left(m\right)\right)$$

for $1 \le m \le M$, where $\boldsymbol{\Sigma}^{(1)}\left(m\right) = \left[\sigma_{kl}^{(1)}\left(m\right)\right]\in\mathbb{R}^{N_1\times N_1}$ is the covariance matrix across the bands and $\boldsymbol{\Sigma}^{(2)}\left(m\right) = \left[\sigma_{kl}^{(2)}\left(m\right)\right]\in\mathbb{R}^{N_2\times N_2}$ is the covariance matrix across time. The resulting separable covariance matrix (Fuentes, 2006) has the form

$$\boldsymbol{\Sigma}\left(m\right) = \boldsymbol{\Sigma}^{(2)}\left(m\right) \otimes \boldsymbol{\Sigma}^{(1)}\left(m\right).$$

Under this structure, the variances of the individual elements of $\mathbf{X}_{\mathbf{s}}$ and $\mathbf{Y}_{\mathbf{s}}$ are given as
$$\text{cov}\left(X_{\mathbf{s},(k,l)}, X_{\mathbf{s},(k,l)} \middle| \mathbf{L}; \Theta\right) = \sigma_{kk}^{(1)}\left(L_{\mathbf{s}}\right)\sigma_{ll}^{(2)}\left(L_{\mathbf{s}}\right) \tag{19}$$

$$\text{cov}\left(Y\_{\mathbf{s},(k,l)}, Y\_{\mathbf{s},(k,l)} \Big| \mathbf{L}; \Theta\right) = \sigma\_{kk}^{(1)}\left(L\_{\mathbf{s}}\right)\sigma\_{ll}^{(2)}\left(L\_{\mathbf{s}}\right) \,. \tag{20}$$

This corresponds to the product of the variance associated with the reflectance at the $k$th spectral band, $\sigma_{kk}^{(1)}\left(L_{\mathbf{s}}\right)$, and the variance associated with the reflectance at the $l$th temporal slot, $\sigma_{ll}^{(2)}\left(L_{\mathbf{s}}\right)$. Likewise, the cross-covariance is given as (Arnold, 1981):

$$\text{cov}\left(X\_{\mathbf{s},(i,j)}, X\_{\mathbf{s},(k,l)} \middle| \mathbf{L}; \Theta\right) = \sigma\_{ik}^{(1)}\left(L\_{\mathbf{s}}\right)\sigma\_{jl}^{(2)}\left(L\_{\mathbf{s}}\right) \tag{21}$$

$$\text{cov}\left(Y\_{\mathbf{s},(i,j)}, Y\_{\mathbf{s},(k,l)} \middle| \mathbf{L}; \Theta\right) = \sigma\_{ik}^{(1)}\left(L\_{\mathbf{s}}\right)\sigma\_{jl}^{(2)}\left(L\_{\mathbf{s}}\right) \,. \tag{22}$$

This corresponds to the product of the covariance associated with the reflectance at the $i$th and the $k$th spectral bands, $\sigma_{ik}^{(1)}\left(L_{\mathbf{s}}\right)$, and the covariance associated with the reflectance at the $j$th and the $l$th temporal slots, $\sigma_{jl}^{(2)}\left(L_{\mathbf{s}}\right)$.

The number of parameters in the unpatterned covariance matrix is $N_1N_2\left(N_1N_2+1\right)/2$. On the other hand, the number of parameters for a separable covariance matrix is $N_1\left(N_1+1\right)/2 + N_2\left(N_2+1\right)/2$, which is fewer than its non-separable counterpart.
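As a quick sanity check on these counts, the sketch below compares the two, assuming $N_1 = 6$ spectral bands and $N_2 = 4$ temporal slots (the sizes of the 'Butuan' data set described in Section 7). The formulas simply follow the counts quoted in the text.

```python
import numpy as np

# Parameter counts for the two covariance structures, assuming
# N1 = 6 spectral bands and N2 = 4 temporal slots.
N1, N2 = 6, 4
N = N1 * N2

unpatterned = N * (N + 1) // 2                        # full symmetric N x N covariance
separable = N1 * (N1 + 1) // 2 + N2 * (N2 + 1) // 2   # two smaller symmetric factors

print(f"unpatterned covariance parameters: {unpatterned}")  # 300
print(f"separable covariance parameters:   {separable}")    # 31
```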

#### **3.2.4 Separable interaction matrix structure**

We can also model the interaction matrix coefficients with a separable structure, for all $\mathbf{r}\in\mathcal{N}$ and $1 \le m \le M$, of the form

$$\boldsymbol{\Theta}\_{\mathbf{r}}(m) = \boldsymbol{\Theta}\_{\mathbf{r}}^{(2)}(m) \otimes \boldsymbol{\Theta}\_{\mathbf{r}}^{(1)}(m) \tag{23}$$

where $\boldsymbol{\theta}_{\mathbf{r}}^{(1)}\left(m\right)\in\mathbb{R}^{N_1\times N_1}$ is the interaction matrix across the bands and $\boldsymbol{\theta}_{\mathbf{r}}^{(2)}\left(m\right)\in\mathbb{R}^{N_2\times N_2}$ is the interaction matrix across time. In the next section, it is shown that the interaction matrix coefficient $\boldsymbol{\theta}_{\mathbf{r}}\left(m\right)$ can be made separable for $\mathbf{r}\in\mathcal{N}$ and $1 \le m \le M$ provided that $\boldsymbol{\Sigma}\left(m\right)$ is separable. Furthermore, if $\boldsymbol{\Sigma}\left(m\right)$ is separable, then the following is the resulting statistical characterization of $\mathbf{X}_{\mathbf{s}}$:

$$E\left[\mathbf{X}\_{\sf s} \, \middle| \, \mathbf{L} ; \Theta \right] = \mathbf{0}\_{\sf N \times 1} \tag{24}$$

$$\operatorname{cov}\left(\mathbf{X}_{\mathbf{s}},\mathbf{X}_{\mathbf{s}-\mathbf{r}}\,\middle|\,\mathbf{L};\Theta\right) = \begin{cases} \boldsymbol{\Sigma}^{(2)}\left(L_{\mathbf{s}}\right)\otimes\boldsymbol{\Sigma}^{(1)}\left(L_{\mathbf{s}}\right) & \mathbf{r} = \mathbf{0}_{p\times 1} \\ -\left(-\boldsymbol{\theta}_{\mathbf{r}}^{(2)}\left(L_{\mathbf{s}}\right)\boldsymbol{\Sigma}^{(2)}\left(L_{\mathbf{s}}\right)\cdot\mathbf{1}_{\{L_{\mathbf{s}}=L_{\mathbf{s}-\mathbf{r}}\}}\right)\otimes\left(-\boldsymbol{\theta}_{\mathbf{r}}^{(1)}\left(L_{\mathbf{s}}\right)\boldsymbol{\Sigma}^{(1)}\left(L_{\mathbf{s}}\right)\cdot\mathbf{1}_{\{L_{\mathbf{s}}=L_{\mathbf{s}-\mathbf{r}}\}}\right) & \mathbf{r}\in\mathcal{N} \end{cases} \tag{25}$$



$$\operatorname{cov}\left(\mathbf{X}_{\mathbf{s}}, \mathbf{Y}_{\mathbf{s}-\mathbf{r}} \,\middle|\, \mathbf{L}; \Theta\right) = \boldsymbol{\Sigma}^{(2)}\left(L_{\mathbf{s}}\right) \cdot \mathbf{1}_{\left\{L_{\mathbf{s}} = L_{\mathbf{s}-\mathbf{r}}\right\}} \otimes \boldsymbol{\Sigma}^{(1)}\left(L_{\mathbf{s}}\right) \cdot \mathbf{1}_{\left\{L_{\mathbf{s}} = L_{\mathbf{s}-\mathbf{r}}\right\}}. \tag{26}$$

The covariance matrix $\operatorname{cov}\left(\mathbf{X}_{\mathbf{s}},\mathbf{X}_{\mathbf{s}-\mathbf{r}}\,\middle|\,\mathbf{L};\Theta\right)$ in the above equations has a separable structure between the spectral and temporal dimensions. It has a form analogous to that shown in (4) through (6), which is intuitively appealing.
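The separable form of these covariances ultimately rests on the mixed-product property of the Kronecker product, $\left(\mathbf{A}\otimes\mathbf{B}\right)\left(\mathbf{C}\otimes\mathbf{D}\right) = \left(\mathbf{AC}\right)\otimes\left(\mathbf{BD}\right)$. A small NumPy check of that property, with purely illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Temporal (N2 x N2) and spectral (N1 x N1) factors, purely illustrative.
N1, N2 = 3, 2
theta2, theta1 = rng.normal(size=(N2, N2)), rng.normal(size=(N1, N1))
S2 = np.cov(rng.normal(size=(N2, 50)))   # SPD temporal covariance factor
S1 = np.cov(rng.normal(size=(N1, 50)))   # SPD spectral covariance factor

# Mixed-product property: (theta2 (x) theta1)(S2 (x) S1) = (theta2 S2) (x) (theta1 S1)
lhs = np.kron(theta2, theta1) @ np.kron(S2, S1)
rhs = np.kron(theta2 @ S2, theta1 @ S1)
assert np.allclose(lhs, rhs)
print("mixed-product property verified:", np.allclose(lhs, rhs))
```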

The number of parameters in the unpatterned interaction matrix coefficient is $N^2 = N_1^2N_2^2$. On the other hand, the number of parameters for the separable interaction matrix coefficient is $N_1^2 + N_2^2$, which is fewer than its non-separable counterpart.

#### **3.2.5 Separable mean structure**

Likewise, we can also model the mean with a separable structure of the form

$$\boldsymbol{\mu}\left(m\right) = \boldsymbol{\mu}^{(2)}\left(m\right) \otimes \boldsymbol{\mu}^{(1)}\left(m\right) \tag{27}$$

for $1 \le m \le M$, where $\boldsymbol{\mu}^{(1)}\left(m\right)\in\mathbb{R}^{N_1\times 1}$ is the mean across the bands and $\boldsymbol{\mu}^{(2)}\left(m\right)\in\mathbb{R}^{N_2\times 1}$ is the mean across time. The number of parameters in the unpatterned mean vector is $N = N_1N_2$. On the other hand, the number of parameters for the separable mean vector is $N_1 + N_2$, which is fewer than its non-separable counterpart.

#### **3.2.6 Hybrid separable structure**

Finally, we can model the GMRF parameters as having a hybrid separability structure, that is, some of its parameters are separable while the rest are not. Hence, there are eight combinations to consider. As shown in Section 5.2, it is impossible to model a separable interaction matrix with a non-separable covariance matrix. This leaves us six cases to consider in this study, as enumerated in the sketch below.
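To make the case count explicit, the following sketch enumerates the eight separability combinations of the mean, interaction matrix, and covariance, and removes the two that pair a separable interaction matrix with a non-separable covariance matrix:

```python
from itertools import product

# Each flag is 1 if the corresponding parameter is modelled as separable:
# (S_mu, S_theta, S_Sigma) for the mean, interaction matrix, and covariance.
all_cases = list(product((0, 1), repeat=3))

# A separable interaction matrix with a non-separable covariance matrix is
# impossible (Section 5.2), so those combinations are dropped.
feasible = [(s_mu, s_theta, s_sigma) for s_mu, s_theta, s_sigma in all_cases
            if not (s_theta == 1 and s_sigma == 0)]

print(len(all_cases), "combinations,", len(feasible), "feasible:")
for case in feasible:
    print("  S_mu=%d, S_theta=%d, S_Sigma=%d" % case)
```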

#### **4. Estimation of thematic map parameters**

The MPLE of $\boldsymbol{\varphi}$ is obtained by taking the derivative of $\log PL\left(\boldsymbol{\varphi}\right)$ with respect to $a_m$, $1 \le m \le M$, and $b_{\mathbf{r}}$, $\mathbf{r}\in\mathcal{N}$, and then equating to zero (Li, 1995). Accordingly, the estimators are obtained numerically by solving the following set of simultaneous nonlinear equations:

$$\sum\_{\mathbf{s}\in\mathcal{S}} \frac{\exp\left(a\_{m} + \sum\_{\mathbf{r}\in\mathcal{N}} b\_{\mathbf{r}} \cdot V\left(L\_{\mathbf{s}} = m, L\_{\mathbf{s}-\mathbf{r}}\right)\right)}{\sum\_{l=1}^{M} \exp\left(a\_{l} + \sum\_{\mathbf{r}\in\mathcal{N}} b\_{\mathbf{r}} \cdot V\left(L\_{\mathbf{s}} = l, L\_{\mathbf{s}-\mathbf{r}}\right)\right)} = \sum\_{\mathbf{s}\in\mathcal{S}} \mathbf{1}\_{\left\{L\_{\mathbf{s}} = m\right\}} \,\forall a\_{m}, \; 1 \le m \le M\tag{28}$$

$$\sum_{\mathbf{s}\in\mathcal{S}} \frac{\sum_{l=1}^{M} \exp\left(a_{l} + \sum_{\mathbf{t}\in\mathcal{N}} b_{\mathbf{t}} \cdot V\left(L_{\mathbf{s}} = l, L_{\mathbf{s}-\mathbf{t}}\right)\right) \cdot V\left(L_{\mathbf{s}} = l, L_{\mathbf{s}-\mathbf{r}}\right)}{\sum_{l=1}^{M} \exp\left(a_{l} + \sum_{\mathbf{t}\in\mathcal{N}} b_{\mathbf{t}} \cdot V\left(L_{\mathbf{s}} = l, L_{\mathbf{s}-\mathbf{t}}\right)\right)} = \sum_{\mathbf{s}\in\mathcal{S}} V\left(L_{\mathbf{s}}, L_{\mathbf{s}-\mathbf{r}}\right) \quad \forall b_{\mathbf{r}},\ \mathbf{r}\in\mathcal{N}. \tag{29}$$
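A minimal sketch of how (28) and (29) can be attacked numerically, assuming a first-order neighbourhood and taking $V\left(L_{\mathbf{s}}, L_{\mathbf{s}-\mathbf{r}}\right) = \mathbf{1}_{\{L_{\mathbf{s}}=L_{\mathbf{s}-\mathbf{r}}\}}$ as the clique potential (an assumption; this excerpt does not fix $V$). It performs plain gradient ascent on the log pseudo-likelihood, whose stationary points are exactly (28) and (29):

```python
import numpy as np

def fit_label_field_mple(L, M, offsets=((0, 1), (1, 0)), iters=500, lr=1e-4):
    """Gradient ascent on the label-field log pseudo-likelihood.

    L: (H, W) integer label map with values in {0, ..., M-1}.
    offsets: symmetric half of the first-order neighbourhood (toroidal boundary).
    Returns (a, b): class potentials a_m and interaction weights b_r.
    """
    a = np.zeros(M)
    b = np.zeros(len(offsets))

    # V(l, L_{s-r}) = 1{l = L_{s-r}} for each +/- offset (assumed potential).
    neigh = []
    for dy, dx in offsets:
        for sy, sx in ((dy, dx), (-dy, -dx)):
            neigh.append(np.roll(np.roll(L, sy, axis=0), sx, axis=1))
    V = np.stack([(np.arange(M)[:, None, None] == n).astype(float) for n in neigh])
    onehot = (np.arange(M)[:, None, None] == L).astype(float)

    for _ in range(iters):
        b_full = np.repeat(b, 2)                       # same weight for +r and -r
        energy = a[:, None, None] + np.tensordot(b_full, V, axes=1)
        p = np.exp(energy - energy.max(axis=0))
        p /= p.sum(axis=0)                             # conditional class probabilities
        resid = onehot - p
        a += lr * resid.sum(axis=(1, 2))               # zero exactly when (28) holds
        b += lr * np.array([(resid * (V[2*i] + V[2*i+1])).sum()
                            for i in range(len(offsets))])  # zero when (29) holds
    return a, b
```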

## **5. Important MGMRF specifications**



This section provides important characterizations that enable us to derive the estimators of the GMRF parameters in the next section. We present a simple, yet powerful, method to derive the MPL estimators of the mean and the interaction matrix. Finally, new problems that arise in estimating the multivariate observation GMRFs, and which were not encountered in the univariate case, are discussed.

#### **5.1 MPL-based technique for deriving the mean and interaction matrix estimators**

In this section, a method of deriving the MPL estimators for the mean and the vectorized interaction coefficients is presented regardless of separability. The MPL estimator of the interaction matrix coefficients can be derived by taking the matrix derivative of the log of the pseudo-likelihood function with respect to the interaction matrix coefficient or with respect to its vectorized version from the equivalence relation (Neudecker, 1969)

$$\frac{\partial f}{\partial \mathbf{X}} = \mathbf{P} \Leftrightarrow \frac{\partial f}{\partial \text{vec}(\mathbf{X})} = \text{vec}(\mathbf{P}) \tag{30}$$

where $f$ is a scalar function of $\mathbf{X}$, and $\mathbf{X},\mathbf{P}\in\mathbb{R}^{m\times n}$. The latter expression is preferred, since it is easier to evaluate. The following proposition provides a simple way of deriving the MPL estimators, where the estimator is either the mean or the vectorized interaction matrix coefficient (Navarro et al., 2009).

*Proposition 1* Let $\boldsymbol{\Phi}\left(m\right)\in\mathbb{R}^{q\times 1}$, $1 \le m \le M$, be a vector of parameters which is either the mean or the vectorized interaction matrix coefficient. Suppose that $\mathbf{X}_{\mathbf{s}}$ can be expressed in the form

$$\mathbf{X}\_{\mathbf{s}} = \mathbf{P}\_{\mathbf{s}} - \mathbf{Q}\_{\mathbf{s}} \boldsymbol{\Phi} \left( L\_{\mathbf{s}} \right) \tag{31}$$

where $\mathbf{P}_{\mathbf{s}} = \mathbf{P}_{\mathbf{s}}\left(\Theta,\mathbf{L}\right)\in\mathbb{R}^{N\times 1}$ and $\mathbf{Q}_{\mathbf{s}} = \mathbf{Q}_{\mathbf{s}}\left(\Theta,\mathbf{L}\right)\in\mathbb{R}^{N\times q}$ are independent of $\boldsymbol{\Phi}\left(L_{\mathbf{s}}\right)$, and the covariance matrix $\boldsymbol{\Sigma}\left(m\right)$, $1 \le m \le M$, is known. Then the MPL estimator for $\boldsymbol{\Phi}\left(m\right)$, $1 \le m \le M$, is obtained by solving the equation

$$\sum\_{\mathbf{s}\in\mathfrak{S}(m)} \mathbf{Q}\_{\mathbf{s}}^{T} \Sigma^{-1}(m) \mathbf{X}\_{\mathbf{s}} = \mathbf{0}\_{q \times 1} \,. \tag{32}$$

*Proof* From (7) and (9), the log pseudo-likelihood of the image random field conditional to the thematic map is given as

$$\log \text{PL}\left(\Theta \middle| \mathbf{L}\right) = -\frac{1}{2} \sum\_{m=1}^{M} \sum\_{\mathbf{s} \in \mathcal{S}(m)} \left[ N \log 2\pi + \log \left| \Sigma(L\_{\mathbf{s}}) \right| + \mathbf{X}\_{\mathbf{s}}^{T} \Sigma^{-1}(L\_{\mathbf{s}}) \mathbf{X}\_{\mathbf{s}} \right]. \tag{33}$$

Taking the gradient of the log pseudo-likelihood function in (33) with respect to $\boldsymbol{\Phi}\left(m\right)$ for $1 \le m \le M$, and equating to $\mathbf{0}_{q\times 1}$ yields



$$\mathbf{0}_{q\times 1} = \frac{\partial}{\partial \boldsymbol{\Phi}\left(m\right)} \log PL\left(\Theta \,\middle|\, \mathbf{L}\right) = -\frac{1}{2} \sum_{l=1}^{M} \sum_{\mathbf{s}\in\mathcal{S}(l)} \frac{\partial}{\partial \boldsymbol{\Phi}\left(m\right)} \mathbf{X}_{\mathbf{s}}^{T} \boldsymbol{\Sigma}^{-1}\left(L_{\mathbf{s}}\right) \mathbf{X}_{\mathbf{s}}. \tag{34}$$

Since

$$\begin{split} \mathbf{X}\_{\sf s}^{T} \boldsymbol{\Sigma}^{-1} \left( \mathbf{L}\_{\sf s} \right) \mathbf{X}\_{\sf s} &= \left( \mathbf{P}\_{\sf s} - \mathbf{Q}\_{\sf s} \boldsymbol{\Phi} \left( \mathbf{L}\_{\sf s} \right) \right)^{\sf T} \boldsymbol{\Sigma}^{-1} \left( \mathbf{L}\_{\sf s} \right) \left( \mathbf{P}\_{\sf s} - \mathbf{Q}\_{\sf s} \boldsymbol{\Phi} \left( \mathbf{L}\_{\sf s} \right) \right) \\ &= \mathbf{P}\_{\sf s}^{T} \boldsymbol{\Sigma}^{-1} \left( \mathbf{L}\_{\sf s} \right) \mathbf{P}\_{\sf s} - 2 \mathbf{P}\_{\sf s}^{T} \boldsymbol{\Sigma}^{-1} \left( \mathbf{L}\_{\sf s} \right) \mathbf{Q}\_{\sf s} \boldsymbol{\Phi} \left( \mathbf{L}\_{\sf s} \right) + \boldsymbol{\Phi}^{T} \left( \mathbf{L}\_{\sf s} \right) \mathbf{Q}\_{\sf s}^{T} \boldsymbol{\Sigma}^{-1} \left( \mathbf{L}\_{\sf s} \right) \mathbf{Q}\_{\sf s} \boldsymbol{\Phi} \left( \mathbf{L}\_{\sf s} \right) \end{split} \tag{35}$$

then taking the gradient in (34) with respect to **Φ** yields

$$\begin{split} \frac{\partial}{\partial \boldsymbol{\Phi}\left(m\right)} \mathbf{X}_{\mathbf{s}}^{T} \boldsymbol{\Sigma}^{-1}\left(L_{\mathbf{s}}\right) \mathbf{X}_{\mathbf{s}} &= 2 \mathbf{Q}_{\mathbf{s}}^{T} \boldsymbol{\Sigma}^{-1}\left(L_{\mathbf{s}}\right) \mathbf{P}_{\mathbf{s}} \mathbf{1}_{\{L_{\mathbf{s}}=m\}} - 2 \mathbf{Q}_{\mathbf{s}}^{T} \boldsymbol{\Sigma}^{-1}\left(L_{\mathbf{s}}\right) \mathbf{Q}_{\mathbf{s}} \boldsymbol{\Phi}\left(L_{\mathbf{s}}\right) \mathbf{1}_{\{L_{\mathbf{s}}=m\}} \\ &= 2 \mathbf{Q}_{\mathbf{s}}^{T} \boldsymbol{\Sigma}^{-1}\left(L_{\mathbf{s}}\right) \left( \mathbf{P}_{\mathbf{s}} - \mathbf{Q}_{\mathbf{s}} \boldsymbol{\Phi}\left(L_{\mathbf{s}}\right) \right) \mathbf{1}_{\{L_{\mathbf{s}}=m\}} \\ &= 2 \mathbf{Q}_{\mathbf{s}}^{T} \boldsymbol{\Sigma}^{-1}\left(L_{\mathbf{s}}\right) \mathbf{X}_{\mathbf{s}} \mathbf{1}_{\{L_{\mathbf{s}}=m\}} \end{split} \tag{36}$$

Finally, substituting the result of (36) into (34) gives us the identity

$$\mathbf{0}_{q\times 1} = \sum_{l=1}^{M} \sum_{\mathbf{s}\in\mathcal{S}(l)} \mathbf{Q}_{\mathbf{s}}^{T} \boldsymbol{\Sigma}^{-1}\left(L_{\mathbf{s}}\right) \mathbf{X}_{\mathbf{s}} \mathbf{1}_{\{L_{\mathbf{s}}=m\}} = \sum_{\mathbf{s}\in\mathcal{S}(m)} \mathbf{Q}_{\mathbf{s}}^{T} \boldsymbol{\Sigma}^{-1}\left(m\right) \mathbf{X}_{\mathbf{s}}. \tag{37}$$
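When $\mathbf{X}_{\mathbf{s}} = \mathbf{P}_{\mathbf{s}} - \mathbf{Q}_{\mathbf{s}}\boldsymbol{\Phi}\left(m\right)$, equation (37) is linear in $\boldsymbol{\Phi}\left(m\right)$ and reduces to a generalized least-squares solve. A minimal sketch of that solve, assuming the $\mathbf{P}_{\mathbf{s}}$, $\mathbf{Q}_{\mathbf{s}}$ pairs for the class-$m$ sites and $\boldsymbol{\Sigma}\left(m\right)$ have already been assembled (names are illustrative):

```python
import numpy as np

def mpl_estimate(P_list, Q_list, Sigma):
    """Solve sum_s Q_s^T Sigma^{-1} (P_s - Q_s Phi) = 0 for Phi (Proposition 1).

    P_list: list of (N,) vectors P_s for the sites s in S(m).
    Q_list: list of (N, q) matrices Q_s for the same sites.
    Sigma:  (N, N) known covariance matrix Sigma(m).
    """
    Sigma_inv = np.linalg.inv(Sigma)
    q = Q_list[0].shape[1]
    A = np.zeros((q, q))
    b = np.zeros(q)
    for P_s, Q_s in zip(P_list, Q_list):
        A += Q_s.T @ Sigma_inv @ Q_s
        b += Q_s.T @ Sigma_inv @ P_s
    return np.linalg.solve(A, b)   # the MPL estimate of Phi(m)
```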

#### **5.2 Interaction matrix identities**

From the covariance identity

$$\operatorname{cov}\left(\mathbf{X}\_{\text{s}},\mathbf{X}\_{\text{s}-\text{r}} \middle| \mathbf{L};\Theta\right) = \operatorname{cov}^{T}\left(\mathbf{X}\_{\text{s}-\text{r}},\mathbf{X}\_{\text{s}} \middle| \mathbf{L};\Theta\right) \tag{38}$$

(Ravishanker and Dey, 2002), from (5), we obtain the following relationship:

$$
\boldsymbol{\Theta}\_{-\mathbf{r}}\left(\boldsymbol{L}\_{\mathbf{s}}\right) = \boldsymbol{\Sigma}\left(\boldsymbol{L}\_{\mathbf{s}}\right)\boldsymbol{\Theta}\_{\mathbf{r}}^{T}\left(\boldsymbol{L}\_{\mathbf{s}}\right)\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{L}\_{\mathbf{s}}\right).\tag{39}
$$

One consequence of this result is that **Xs** can be written as follows:

$$\mathbf{X}_{\mathbf{s}} = \left(\mathbf{Y}_{\mathbf{s}} - \boldsymbol{\mu}\left(L_{\mathbf{s}}\right)\right) - \sum_{\mathbf{r}\in\mathcal{N}_S} \left[\boldsymbol{\theta}_{\mathbf{r}}\left(L_{\mathbf{s}}\right)\mathbf{1}_{\{L_{\mathbf{s}}=L_{\mathbf{s}-\mathbf{r}}\}}\left(\mathbf{Y}_{\mathbf{s}-\mathbf{r}} - \boldsymbol{\mu}\left(L_{\mathbf{s}}\right)\right) + \boldsymbol{\Sigma}\left(L_{\mathbf{s}}\right)\boldsymbol{\theta}_{\mathbf{r}}^{T}\left(L_{\mathbf{s}}\right)\boldsymbol{\Sigma}^{-1}\left(L_{\mathbf{s}}\right)\mathbf{1}_{\{L_{\mathbf{s}}=L_{\mathbf{s}+\mathbf{r}}\}}\left(\mathbf{Y}_{\mathbf{s}+\mathbf{r}} - \boldsymbol{\mu}\left(L_{\mathbf{s}}\right)\right)\right] \tag{40}$$

where $\mathcal{N}_S$, a subset of $\mathcal{N}$ which represents the symmetric neighborhood set (Kashyap and Chellappa, 1983), is defined such that $\mathbf{r}\in\mathcal{N}_S$ implies $-\mathbf{r}\notin\mathcal{N}_S$ and $\mathcal{N}_S\cup\left(-\mathcal{N}_S\right) = \mathcal{N}$.

Another consequence of (39) is the specification of the interaction matrices in the separable case. If the interaction matrices are modeled as separable, then by (39), we obtain

$$\boldsymbol{\Theta}\_{-\mathbf{r}}\left(m\right) = \boldsymbol{\Theta}\_{-\mathbf{r}}^{(2)}\left(m\right) \otimes \boldsymbol{\Theta}\_{-\mathbf{r}}^{(1)}\left(m\right) = \boldsymbol{\Sigma}\left(m\right)\left(\boldsymbol{\Theta}\_{\mathbf{r}}^{(2)}\left(m\right) \otimes \boldsymbol{\Theta}\_{\mathbf{r}}^{(1)}\left(m\right)\right)^{T}\boldsymbol{\Sigma}^{-1}\left(m\right) = \boldsymbol{\Sigma}\left(m\right)\boldsymbol{\Theta}\_{\mathbf{r}}^{T}\left(m\right)\boldsymbol{\Sigma}^{-1}\left(m\right) \tag{41}$$

for $1 \le m \le M$. The RHS of (40) can be made separable if $\boldsymbol{\Sigma}\left(m\right)$ is also separable. Hence,

$$\begin{split} \boldsymbol{\theta}_{-\mathbf{r}}^{(2)}\left(m\right) \otimes \boldsymbol{\theta}_{-\mathbf{r}}^{(1)}\left(m\right) &= \left(\boldsymbol{\Sigma}^{(2)}\left(m\right) \otimes \boldsymbol{\Sigma}^{(1)}\left(m\right)\right) \left(\boldsymbol{\theta}_{\mathbf{r}}^{(2)}\left(m\right) \otimes \boldsymbol{\theta}_{\mathbf{r}}^{(1)}\left(m\right)\right)^{T} \left(\boldsymbol{\Sigma}^{(2)}\left(m\right) \otimes \boldsymbol{\Sigma}^{(1)}\left(m\right)\right)^{-1} \\ &= \left(\boldsymbol{\Sigma}^{(2)}\left(m\right) \otimes \boldsymbol{\Sigma}^{(1)}\left(m\right)\right) \left(\boldsymbol{\theta}_{\mathbf{r}}^{(2)T}\left(m\right) \otimes \boldsymbol{\theta}_{\mathbf{r}}^{(1)T}\left(m\right)\right) \left(\left(\boldsymbol{\Sigma}^{(2)}\left(m\right)\right)^{-1} \otimes \left(\boldsymbol{\Sigma}^{(1)}\left(m\right)\right)^{-1}\right) \\ &= \boldsymbol{\Sigma}^{(2)}\left(m\right) \boldsymbol{\theta}_{\mathbf{r}}^{(2)T}\left(m\right) \left(\boldsymbol{\Sigma}^{(2)}\left(m\right)\right)^{-1} \otimes \boldsymbol{\Sigma}^{(1)}\left(m\right) \boldsymbol{\theta}_{\mathbf{r}}^{(1)T}\left(m\right) \left(\boldsymbol{\Sigma}^{(1)}\left(m\right)\right)^{-1}. \end{split} \tag{42}$$

The identification of $\boldsymbol{\theta}_{-\mathbf{r}}\left(m\right)$ is completely specified from (39) if we take

$$\boldsymbol{\Theta}\_{-\mathbf{r}}^{(1)}(m) = \boldsymbol{\Sigma}^{(1)}(m)\boldsymbol{\Theta}\_{\mathbf{r}}^{(1)T}(m)\left(\boldsymbol{\Sigma}^{(1)}(m)\right)^{-1} \tag{43}$$

$$
\Theta\_{-\mathbf{r}}^{(2)}(m) = \boldsymbol{\Sigma}^{(2)}(m)\boldsymbol{\Theta}\_{\mathbf{r}}^{(2)\text{Tr}}\left(m\right)\left(\boldsymbol{\Sigma}^{(2)}(m)\right)^{-1}\tag{44}
$$

which is analogous to the relation in (39).



By considering the hybrid separability cases which involve a separable interaction matrix and a non-separable covariance matrix, the expression $\boldsymbol{\Sigma}\left(m\right)\boldsymbol{\theta}_{\mathbf{r}}^{T}\left(m\right)\boldsymbol{\Sigma}^{-1}\left(m\right)$ is not separable in general. This implies that $\boldsymbol{\theta}_{-\mathbf{r}}\left(m\right)$ cannot be expressed in the form $\boldsymbol{\theta}_{-\mathbf{r}}\left(m\right) = \boldsymbol{\theta}_{-\mathbf{r}}^{(2)}\left(m\right)\otimes\boldsymbol{\theta}_{-\mathbf{r}}^{(1)}\left(m\right)$ for $\mathbf{r}\in\mathcal{N}_S$, $1 \le m \le M$, and thus these cases are not possible.

#### **6. GMRF parameter estimation**

This section proposes an estimation procedure for the GMRF parameters for both separable and non-separable cases based on the MPL.

#### **6.1 Mean parameter estimation**

**Proposition 2** Assume that the interaction matrix coefficients $\boldsymbol{\theta}_{\mathbf{r}}\left(m\right)$ for $\mathbf{r}\in\mathcal{N}$, $1 \le m \le M$, and the covariance matrices $\boldsymbol{\Sigma}\left(m\right)$ for $1 \le m \le M$ are known. Then the mean parameters are estimated as follows:

a. Non-Separable Case:

$$\hat{\boldsymbol{\mu}}\left(m\right) = \left[\sum_{\mathbf{s}\in\mathcal{S}(m)} \left(\mathbf{I}_{N} - \sum_{\mathbf{r}\in\mathcal{N}} \boldsymbol{\theta}_{\mathbf{r}}\left(m\right)\mathbf{1}_{\{L_{\mathbf{s}-\mathbf{r}}=m\}}\right)^{T} \boldsymbol{\Sigma}^{-1}\left(m\right) \left(\mathbf{I}_{N} - \sum_{\mathbf{r}\in\mathcal{N}} \boldsymbol{\theta}_{\mathbf{r}}\left(m\right)\mathbf{1}_{\{L_{\mathbf{s}-\mathbf{r}}=m\}}\right)\right]^{-1} \cdot \\ \left[\sum_{\mathbf{s}\in\mathcal{S}(m)} \left(\mathbf{I}_{N} - \sum_{\mathbf{r}\in\mathcal{N}} \boldsymbol{\theta}_{\mathbf{r}}\left(m\right)\mathbf{1}_{\{L_{\mathbf{s}-\mathbf{r}}=m\}}\right)^{T} \boldsymbol{\Sigma}^{-1}\left(m\right) \left(\mathbf{Y}_{\mathbf{s}} - \sum_{\mathbf{r}\in\mathcal{N}} \boldsymbol{\theta}_{\mathbf{r}}\left(m\right)\mathbf{1}_{\{L_{\mathbf{s}-\mathbf{r}}=m\}}\mathbf{Y}_{\mathbf{s}-\mathbf{r}}\right)\right] \tag{45}$$

for $1 \le m \le M$.
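A sketch of the non-separable mean estimator in (45), assuming the label map, the stacked observations $\mathbf{Y}_{\mathbf{s}}$, and the known $\boldsymbol{\theta}_{\mathbf{r}}\left(m\right)$ and $\boldsymbol{\Sigma}\left(m\right)$ are already available as NumPy arrays (array layout and helper names are illustrative):

```python
import numpy as np

def estimate_mean_nonseparable(Y, L, m, thetas, Sigma):
    """Mean estimator of (45) for class m.

    Y:      (H, W, N) array of stacked spectro-temporal observations Y_s.
    L:      (H, W) label map.
    m:      class index whose mean is estimated.
    thetas: dict {(dy, dx): (N, N) interaction matrix theta_r(m)} over the neighbourhood.
    Sigma:  (N, N) covariance matrix Sigma(m).
    """
    H, W, N = Y.shape
    Sigma_inv = np.linalg.inv(Sigma)
    A = np.zeros((N, N))
    b = np.zeros(N)
    for y in range(H):
        for x in range(W):
            if L[y, x] != m:
                continue
            B = np.eye(N)             # I_N - sum_r theta_r(m) 1{L_{s-r} = m}
            acc = Y[y, x].copy()      # Y_s - sum_r theta_r(m) 1{L_{s-r} = m} Y_{s-r}
            for (dy, dx), theta in thetas.items():
                yy, xx = y - dy, x - dx
                if 0 <= yy < H and 0 <= xx < W and L[yy, xx] == m:
                    B -= theta
                    acc -= theta @ Y[yy, xx]
            A += B.T @ Sigma_inv @ B
            b += B.T @ Sigma_inv @ acc
    return np.linalg.solve(A, b)      # the estimate mu_hat(m)
```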

b. Separable Case:

In addition, if we assume the following for $1 \le m \le M$:

1. $\hat{\boldsymbol{\mu}}^{(1)}\left(m\right)$ is estimated, given that $\boldsymbol{\mu}^{(2)}\left(m\right)$ is known;
2. $\hat{\boldsymbol{\mu}}^{(2)}\left(m\right)$ is estimated, given that $\boldsymbol{\mu}^{(1)}\left(m\right)$ is known,

then


 <sup>1</sup> **Φ μ** *m m* , 1 *m M*

 **sr s s r s s**

> <sup>1</sup> **Φ μ** *m m* , 1 *m M*

**Proposition 3** Assume that the mean vectors **μ** *m* for 1 *m M* and the covariance matrices **Σ** *m* for 1 *m M* are known, then interaction matrix parameters are estimated

1

1

<sup>ˆ</sup> , **Ψ θ** *m row vec m* **<sup>r</sup>** *<sup>S</sup>* **<sup>r</sup>**

 <sup>1</sup> , , *N NN <sup>N</sup>* . *m m L m mm m L m* **s-r s r A Y s r s-r μ s r 1 IK Σ Σ Y μ 1 I** (57)

From the invariance property of the MPL, the complete set of non-separable interaction

**s t s r**

**H A Σ Ar t**

**Γ A Σ Y μ t**

**s t <sup>s</sup>**

*m row mm m*

, , , , *<sup>T</sup>*

, , *<sup>S</sup>*

 **sr s s r <sup>s</sup> <sup>s</sup>**

**Q I θ 1 I μ**

**Q I θ 1 μ I**

 <sup>1</sup> <sup>2</sup> *N N* . *L L L L*

 <sup>2</sup> <sup>1</sup> *N N* . *L L L L*

**H***mm m* **Ψ Γ** (53)

*S S*

  (54)

(55)

(56)

(51)

(52)

1 

 

*m m row col mm m*

*m*

**s**

matrix estimators is estimated as follows:

**s**

**r**

by applying Preposition 1 and rearranging terms, we obtain (47).

by solving the simultaneous linear equations given as follows:

**r**

By applying Preposition 1 and rearranging terms, we obtain (46).

 2 

For this case, we recognize the following from (50):

For this case from (50), we recognize

a. Non-Separable Case:

where

and

**6.2 Interaction matrix parameter estimation** 

 <sup>1</sup> **<sup>μ</sup>** *<sup>m</sup>* is estimated, given that <sup>2</sup> **μ** *m* is known <sup>2</sup> **<sup>μ</sup>** *<sup>m</sup>* is estimated, given that <sup>1</sup> **μ** *m* is known. Thus

$$\hat{\boldsymbol{\mu}}^{(1)}\left(m\right) = \left[\sum_{\mathbf{s}\in\mathcal{S}(m)} \left(\boldsymbol{\mu}^{(2)}\left(m\right)\otimes\mathbf{I}_{N_1}\right)^{T}\left(\mathbf{I}_{N} - \sum_{\mathbf{r}\in\mathcal{N}}\boldsymbol{\theta}_{\mathbf{r}}\left(m\right)\mathbf{1}_{\{L_{\mathbf{s}-\mathbf{r}}=m\}}\right)^{T}\boldsymbol{\Sigma}^{-1}\left(m\right)\left(\mathbf{I}_{N} - \sum_{\mathbf{r}\in\mathcal{N}}\boldsymbol{\theta}_{\mathbf{r}}\left(m\right)\mathbf{1}_{\{L_{\mathbf{s}-\mathbf{r}}=m\}}\right)\left(\boldsymbol{\mu}^{(2)}\left(m\right)\otimes\mathbf{I}_{N_1}\right)\right]^{-1} \cdot \\ \left[\sum_{\mathbf{s}\in\mathcal{S}(m)} \left(\boldsymbol{\mu}^{(2)}\left(m\right)\otimes\mathbf{I}_{N_1}\right)^{T}\left(\mathbf{I}_{N} - \sum_{\mathbf{r}\in\mathcal{N}}\boldsymbol{\theta}_{\mathbf{r}}\left(m\right)\mathbf{1}_{\{L_{\mathbf{s}-\mathbf{r}}=m\}}\right)^{T}\boldsymbol{\Sigma}^{-1}\left(m\right)\left(\mathbf{Y}_{\mathbf{s}} - \sum_{\mathbf{r}\in\mathcal{N}}\boldsymbol{\theta}_{\mathbf{r}}\left(m\right)\mathbf{1}_{\{L_{\mathbf{s}-\mathbf{r}}=m\}}\mathbf{Y}_{\mathbf{s}-\mathbf{r}}\right)\right] \tag{46}$$

$$\hat{\boldsymbol{\mu}}^{(2)}\left(m\right) = \left[\sum_{\mathbf{s}\in\mathcal{S}(m)} \left(\mathbf{I}_{N_2}\otimes\boldsymbol{\mu}^{(1)}\left(m\right)\right)^{T}\left(\mathbf{I}_{N} - \sum_{\mathbf{r}\in\mathcal{N}}\boldsymbol{\theta}_{\mathbf{r}}\left(m\right)\mathbf{1}_{\{L_{\mathbf{s}-\mathbf{r}}=m\}}\right)^{T}\boldsymbol{\Sigma}^{-1}\left(m\right)\left(\mathbf{I}_{N} - \sum_{\mathbf{r}\in\mathcal{N}}\boldsymbol{\theta}_{\mathbf{r}}\left(m\right)\mathbf{1}_{\{L_{\mathbf{s}-\mathbf{r}}=m\}}\right)\left(\mathbf{I}_{N_2}\otimes\boldsymbol{\mu}^{(1)}\left(m\right)\right)\right]^{-1} \cdot \\ \left[\sum_{\mathbf{s}\in\mathcal{S}(m)} \left(\mathbf{I}_{N_2}\otimes\boldsymbol{\mu}^{(1)}\left(m\right)\right)^{T}\left(\mathbf{I}_{N} - \sum_{\mathbf{r}\in\mathcal{N}}\boldsymbol{\theta}_{\mathbf{r}}\left(m\right)\mathbf{1}_{\{L_{\mathbf{s}-\mathbf{r}}=m\}}\right)^{T}\boldsymbol{\Sigma}^{-1}\left(m\right)\left(\mathbf{Y}_{\mathbf{s}} - \sum_{\mathbf{r}\in\mathcal{N}}\boldsymbol{\theta}_{\mathbf{r}}\left(m\right)\mathbf{1}_{\{L_{\mathbf{s}-\mathbf{r}}=m\}}\mathbf{Y}_{\mathbf{s}-\mathbf{r}}\right)\right] \tag{47}$$

for $1 \le m \le M$.

*Proof*

a. The proof for the non-separable case is derived by applying Proposition 1 (Navarro et al., 2009).

b. From (3), $\mathbf{X}_{\mathbf{s}}$ can be written as follows:


$$\mathbf{X}\_{\mathbf{s}} = \left(\mathbf{Y}\_{\mathbf{s}} - \sum\_{\mathbf{r} \in \mathcal{N}} \boldsymbol{\Theta}\_{\mathbf{r}} \left(\boldsymbol{L}\_{\mathbf{s}}\right) \mathbf{1}\_{\{\boldsymbol{L}\_{\mathbf{s}\leftarrow\mathbf{r}} = \boldsymbol{L}\_{\mathbf{s}}\}} \mathbf{Y}\_{\mathbf{s}-\mathbf{r}}\right) - \left(\mathbf{I}\_{N} - \sum\_{\mathbf{r} \in \mathcal{N}} \boldsymbol{\Theta}\_{\mathbf{r}} \left(\boldsymbol{L}\_{\mathbf{s}}\right) \mathbf{1}\_{\{\boldsymbol{L}\_{\mathbf{s}\leftarrow\mathbf{r}} = \boldsymbol{L}\_{\mathbf{s}}\}}\right) \boldsymbol{\upmu}\left(\boldsymbol{L}\_{\mathbf{s}}\right). \tag{48}$$

For the separable case, the mean can be written as follows:

$$\begin{split} \mathfrak{p}\left(m\right) &= \mathfrak{p}^{\left(2\right)}\left(m\right) \otimes \mathfrak{p}^{\left(1\right)}\left(m\right) \\ &= \left(\mathfrak{p}^{\left(2\right)}\left(m\right) \otimes \mathbf{I}\_{N\_1}\right)\left(\mathbf{1} \otimes \mathfrak{p}^{\left(1\right)}\left(m\right)\right) = \left(\mathfrak{p}^{\left(2\right)}\left(m\right) \otimes \mathbf{I}\_{N\_1}\right)\mathfrak{p}^{\left(1\right)}\left(m\right) \\ &= \left(\mathbf{I}\_{N\_2} \otimes \mathfrak{p}^{\left(1\right)}\left(m\right)\right)\left(\mathfrak{p}^{\left(2\right)}\left(m\right) \otimes \mathbf{1}\right) = \left(\mathbf{I}\_{N\_2} \otimes \mathfrak{p}^{\left(1\right)}\left(m\right)\right)\mathfrak{p}^{\left(2\right)}\left(m\right). \end{split} \tag{49}$$

Plugging the results of (49) into (48) yields

$$\begin{split} \mathbf{X\_{s}} &= \left(\mathbf{Y\_{s}} - \sum\_{\mathbf{re},\boldsymbol{\mathcal{V}}} \boldsymbol{\Theta}\_{\mathbf{r}}(L\_{\mathbf{s}}) \mathbf{1}\_{\{l\_{\star,\boldsymbol{\bullet}}=l\_{\star}\}} \mathbf{Y\_{s-\mathbf{r}}}\right) - \left(\mathbf{I\_{N}} - \sum\_{\mathbf{re},\boldsymbol{\mathcal{V}}} \boldsymbol{\Theta}\_{\mathbf{r}}(L\_{\mathbf{s}}) \mathbf{1}\_{\{l\_{\star,\boldsymbol{\bullet}}=l\_{\star}\}}\right) \left(\mathbf{\mu}^{(2)}(L\_{\mathbf{s}}) \otimes \mathbf{I}\_{N\_{\mathbf{r}}}\right) \mathbf{\mu}^{(1)}(L\_{\mathbf{s}}) \\ &= \left(\mathbf{Y\_{s}} - \sum\_{\mathbf{re},\boldsymbol{\mathcal{V}}} \boldsymbol{\Theta}\_{\mathbf{r}}(L\_{\mathbf{s}}) \mathbf{1}\_{\{l\_{\star,\boldsymbol{\bullet}}=l\_{\star}\}} \mathbf{Y\_{s-\mathbf{r}}}\right) - \left(\mathbf{I\_{N}} - \sum\_{\mathbf{re},\boldsymbol{\mathcal{V}}} \boldsymbol{\Theta}\_{\mathbf{r}}(L\_{\mathbf{s}}) \mathbf{1}\_{\{l\_{\star,\boldsymbol{\bullet}}=l\_{\star}\}}\right) \left(\mathbf{I\_{N\_{\mathbf{s}}}} \otimes \mathbf{\mu}^{(1)}(L\_{\mathbf{s}})\right) \mathbf{\mu}^{(2)}(L\_{\mathbf{s}}). \end{split} \tag{50}$$

1. $\boldsymbol{\Phi}\left(m\right) = \boldsymbol{\mu}^{(1)}\left(m\right)$, $1 \le m \le M$

For this case, we recognize the following from (50):

$$\mathbf{Q}_{\mathbf{s}} = \left(\mathbf{I}_{N} - \sum_{\mathbf{r}\in\mathcal{N}}\boldsymbol{\theta}_{\mathbf{r}}\left(L_{\mathbf{s}}\right)\mathbf{1}_{\{L_{\mathbf{s}-\mathbf{r}}=L_{\mathbf{s}}\}}\right)\left(\boldsymbol{\mu}^{(2)}\left(L_{\mathbf{s}}\right)\otimes\mathbf{I}_{N_1}\right). \tag{51}$$

By applying Proposition 1 and rearranging terms, we obtain (46).

2. $\boldsymbol{\Phi}\left(m\right) = \boldsymbol{\mu}^{(2)}\left(m\right)$, $1 \le m \le M$

For this case from (50), we recognize

$$\mathbf{Q}_{\mathbf{s}} = \left(\mathbf{I}_{N} - \sum_{\mathbf{r}\in\mathcal{N}}\boldsymbol{\theta}_{\mathbf{r}}\left(L_{\mathbf{s}}\right)\mathbf{1}_{\{L_{\mathbf{s}-\mathbf{r}}=L_{\mathbf{s}}\}}\right)\left(\mathbf{I}_{N_2}\otimes\boldsymbol{\mu}^{(1)}\left(L_{\mathbf{s}}\right)\right). \tag{52}$$

By applying Proposition 1 and rearranging terms, we obtain (47).

#### **6.2 Interaction matrix parameter estimation**

**Proposition 3** Assume that the mean vectors $\boldsymbol{\mu}\left(m\right)$ for $1 \le m \le M$ and the covariance matrices $\boldsymbol{\Sigma}\left(m\right)$ for $1 \le m \le M$ are known. Then the interaction matrix parameters are estimated by solving the following simultaneous linear equations:

a. Non-Separable Case:

$$\mathbf{H}(m)\,\mathbf{\varPsi}(m) = \mathbf{\varGamma}(m)\tag{53}$$

where



$$\mathbf{H}\left(m\right) = row\left\{ col\left( \sum_{\mathbf{s}\in\mathcal{S}(m)} \mathbf{A}_{\mathbf{s},\mathbf{t}}\left(m\right)\,\boldsymbol{\Sigma}^{-1}\left(m\right)\,\mathbf{A}_{\mathbf{s},\mathbf{r}}^{T}\left(m\right),\ \mathbf{r}\in\mathcal{N}_S \right),\ \mathbf{t}\in\mathcal{N}_S \right\} \tag{54}$$

$$\Gamma(m) = row \left( \sum\_{s \in \mathcal{S}(m)} \mathbf{A}\_{\mathbf{s},\mathbf{t}}(m) \boldsymbol{\Sigma}^{-1}(m) \big( \mathbf{Y}\_{\mathbf{s}} - \boldsymbol{\mathfrak{u}}(m) \big), \ t \in \mathcal{N}\_S \right) \tag{55}$$

$$\boldsymbol{\Psi}\left(m\right) = row\left(vec\left(\hat{\boldsymbol{\theta}}_{\mathbf{r}}\left(m\right)\right),\ \mathbf{r}\in\mathcal{N}_S\right) \tag{56}$$

and

$$\mathbf{A}_{\mathbf{s},\mathbf{r}}\left(m\right) = \left( \left( \mathbf{Y}_{\mathbf{s}-\mathbf{r}} - \boldsymbol{\mu}\left(m\right) \right) \mathbf{1}_{\{L_{\mathbf{s}-\mathbf{r}}=m\}} \otimes \mathbf{I}_{N} \right) + \mathbf{K}_{N,N} \left( \boldsymbol{\Sigma}^{-1}\left(m\right) \otimes \boldsymbol{\Sigma}\left(m\right) \right) \left( \left( \mathbf{Y}_{\mathbf{s}+\mathbf{r}} - \boldsymbol{\mu}\left(m\right) \right) \mathbf{1}_{\{L_{\mathbf{s}+\mathbf{r}}=m\}} \otimes \mathbf{I}_{N} \right). \tag{57}$$

From the invariance property of the MPL, the complete set of non-separable interaction matrix estimators is estimated as follows:



$$\hat{\boldsymbol{\Theta}}\_{\mathbf{r}}(m) = \text{reshape}\left(\text{vec}\left(\hat{\boldsymbol{\Theta}}\_{\mathbf{r}}(m)\right), \boldsymbol{N}, \boldsymbol{N}\right) \tag{58}$$

$$
\hat{\boldsymbol{\Theta}}\_{-\mathbf{r}}(m) = \boldsymbol{\Sigma}(m)\hat{\boldsymbol{\Theta}}\_{\mathbf{r}}^T(m)\left(\boldsymbol{\Sigma}(m)\right)^{-1} \tag{59}
$$

for $\mathbf{r}\in\mathcal{N}_S$, $1 \le m \le M$.
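A sketch of how the non-separable block system (53)-(57) can be assembled and solved, assuming the class-$m$ mean, covariance, and label map are known and using a column-major `vec` convention throughout (helper names and array layout are illustrative, not part of the original text):

```python
import numpy as np

def commutation_matrix(m, n):
    """K_{m,n} with K_{m,n} vec(A) = vec(A^T) for A of shape (m, n), column-major vec."""
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            K[i * n + j, j * m + i] = 1.0
    return K

def estimate_thetas_nonseparable(Y, L, m, mu, Sigma, offsets):
    """Solve H(m) Psi(m) = Gamma(m) of (53)-(57) for the vec'd interaction matrices.

    Y: (H, W, N) observations, L: (H, W) labels, mu/Sigma: class-m mean and covariance,
    offsets: list of symmetric-half neighbourhood offsets r in N_S.
    Returns a dict {r: theta_r(m)}, reshaped from the solution as in (58).
    """
    Hh, Ww, N = Y.shape
    Sinv = np.linalg.inv(Sigma)
    M2 = commutation_matrix(N, N) @ np.kron(Sinv, Sigma)   # K_{N,N}(Sigma^{-1} (x) Sigma)

    def A_sr(y, x, dy, dx):                                # A_{s,r}(m) of (57), shape (N*N, N)
        out = np.zeros((N * N, N))
        ym, xm, yp, xp = y - dy, x - dx, y + dy, x + dx
        if 0 <= ym < Hh and 0 <= xm < Ww and L[ym, xm] == m:
            out += np.kron((Y[ym, xm] - mu)[:, None], np.eye(N))
        if 0 <= yp < Hh and 0 <= xp < Ww and L[yp, xp] == m:
            out += M2 @ np.kron((Y[yp, xp] - mu)[:, None], np.eye(N))
        return out

    R = len(offsets)
    Hmat = np.zeros((R * N * N, R * N * N))
    Gamma = np.zeros(R * N * N)
    sites = [(y, x) for y in range(Hh) for x in range(Ww) if L[y, x] == m]
    for y, x in sites:
        A = [A_sr(y, x, dy, dx) for dy, dx in offsets]     # one A_{s,r}(m) per offset
        for t in range(R):
            Gamma[t*N*N:(t+1)*N*N] += A[t] @ Sinv @ (Y[y, x] - mu)
            for r in range(R):
                Hmat[t*N*N:(t+1)*N*N, r*N*N:(r+1)*N*N] += A[t] @ Sinv @ A[r].T
    psi = np.linalg.solve(Hmat, Gamma)
    return {off: psi[i*N*N:(i+1)*N*N].reshape(N, N, order="F")
            for i, off in enumerate(offsets)}
```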

#### b. Separable Case:

In addition, if we assume the following for $\mathbf{r}\in\mathcal{N}_S$ and $1 \le m \le M$:

1. $\hat{\boldsymbol{\theta}}_{\mathbf{r}}^{(1)}\left(m\right)$ is estimated, given that $\boldsymbol{\theta}_{\mathbf{r}}^{(2)}\left(m\right)$ is known;
2. $\hat{\boldsymbol{\theta}}_{\mathbf{r}}^{(2)}\left(m\right)$ is estimated, given that $\boldsymbol{\theta}_{\mathbf{r}}^{(1)}\left(m\right)$ is known,

then

$$\mathbf{H}^{(k)}(m)\mathbf{W}^{(k)}(m) = \mathbf{I}^{(k)}(m) \tag{60}$$

where

$$\mathbf{H}^{(k)}\left(m\right) = row\left\{ col\left( \sum_{\mathbf{s}\in\mathcal{S}(m)} \mathbf{A}_{\mathbf{s},\mathbf{t}}^{(k)}\left(m\right)\,\boldsymbol{\Sigma}^{-1}\left(m\right)\,\mathbf{A}_{\mathbf{s},\mathbf{r}}^{(k)T}\left(m\right),\ \mathbf{r}\in\mathcal{N}_S \right),\ \mathbf{t}\in\mathcal{N}_S \right\} \tag{61}$$

$$\boldsymbol{\Gamma}^{(k)}\left(m\right) = row\left( \sum_{\mathbf{s}\in\mathcal{S}(m)} \mathbf{A}_{\mathbf{s},\mathbf{t}}^{(k)}\left(m\right)\,\boldsymbol{\Sigma}^{-1}\left(m\right)\left(\mathbf{Y}_{\mathbf{s}} - \boldsymbol{\mu}\left(m\right)\right),\ \mathbf{t}\in\mathcal{N}_S \right) \tag{62}$$

$$\boldsymbol{\Psi}^{(k)}\left(m\right) = row\left(vec\left(\hat{\boldsymbol{\theta}}_{\mathbf{r}}^{(k)}\left(m\right)\right),\ \mathbf{r}\in\mathcal{N}_S\right) \tag{63}$$

for $k = 1, 2$ and

$$\mathbf{A}_{\mathbf{s},\mathbf{r}}^{(1)}\left(m\right) = \left(vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(2)}\left(m\right)\right)\otimes\mathbf{I}_{N_1^2}\right)^{T}\left(\mathbf{I}_{N_2}\otimes\mathbf{K}_{N_1,N_2}\otimes\mathbf{I}_{N_1}\right)^{T}\mathbf{A}_{\mathbf{s},\mathbf{r}}\left(m\right) \tag{64}$$

$$\mathbf{A}_{\mathbf{s},\mathbf{r}}^{(2)}\left(m\right) = \left(\mathbf{I}_{N_2^2}\otimes vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(1)}\left(m\right)\right)\right)^{T}\left(\mathbf{I}_{N_2}\otimes\mathbf{K}_{N_1,N_2}\otimes\mathbf{I}_{N_1}\right)^{T}\mathbf{A}_{\mathbf{s},\mathbf{r}}\left(m\right). \tag{65}$$

From the invariance property of the MPL, the complete set of separable interaction matrix estimators is estimated as follows for $\mathbf{r}\in\mathcal{N}_S$, $1 \le m \le M$, $k = 1, 2$:

$$\hat{\boldsymbol{\Theta}}\_{\mathbf{r}}^{(k)}(m) = reshape\left(vec\left(\hat{\boldsymbol{\Theta}}\_{\mathbf{r}}^{(k)}\left(m\right)\right), \boldsymbol{N}\_{k'} \boldsymbol{N}\_{k}\right) \tag{66}$$

$$\hat{\boldsymbol{\theta}}_{-\mathbf{r}}^{(k)}\left(m\right) = \boldsymbol{\Sigma}^{(k)}\left(m\right)\hat{\boldsymbol{\theta}}_{\mathbf{r}}^{(k)T}\left(m\right)\left(\boldsymbol{\Sigma}^{(k)}\left(m\right)\right)^{-1} \tag{67}$$

and also

$$
\hat{\boldsymbol{\Theta}}\_{\mathbf{r}}(m) = \hat{\boldsymbol{\Theta}}\_{\mathbf{r}}^{(2)}(m) \otimes \hat{\boldsymbol{\Theta}}\_{\mathbf{r}}^{(1)}(m) \tag{68}
$$

for $\mathbf{r}\in\mathcal{N}_S$ and $1 \le m \le M$.

*Proof*

a. The proof for the non-separable case is derived by applying Proposition 1 (Navarro et al., 2009).

b. From (3), $\mathbf{X}_{\mathbf{s}}$ can be written as



$$\mathbf{X}\_{\mathbf{s}} = \text{vec}\left(\mathbf{X}\_{\mathbf{s}}\right) = \left(\mathbf{Y}\_{\mathbf{s}} - \boldsymbol{\mu}\left(L\_{\mathbf{s}}\right)\right) - \sum\_{\mathbf{r} \in \mathcal{N}\_{S}} \mathbf{A}\_{\mathbf{s},\mathbf{r}}^{T}\left(L\_{\mathbf{s}}\right) \text{vec}\left(\boldsymbol{\Theta}\_{\mathbf{r}}\left(L\_{\mathbf{s}}\right)\right). \tag{69}$$

The above expression can also be written using the following matrix identities (Magnus and Neudecker, 1999)

$$vec{c}\left(\mathbf{ABC}\right) = \left(\mathbf{C}^T \otimes \mathbf{A}\right)vec{c}\left(\mathbf{B}\right)\tag{70}$$

where $\mathbf{A}\in\mathbb{R}^{m\times n}$, $\mathbf{B}\in\mathbb{R}^{n\times p}$, and $\mathbf{C}\in\mathbb{R}^{p\times q}$.

$$\left(\mathbf{A}\otimes\mathbf{B}\right)^{T}=\mathbf{A}^{T}\otimes\mathbf{B}^{T}\tag{71}$$

$$vec\left(\mathbf{A}^{T}\right) = \mathbf{K}_{m,n}\,vec\left(\mathbf{A}\right) \tag{72}$$

where $\mathbf{A}\in\mathbb{R}^{m\times n}$. In addition, from the identity (Magnus and Neudecker, 1999)

$$vec{ec}\left(\mathbf{A}\otimes\mathbf{B}\right) = \left(\mathbf{I}\_n\otimes\mathbf{K}\_{q,m}\otimes\mathbf{I}\_p\right)\cdot\left(vec\left(\mathbf{A}\right)\otimes vec\left(\mathbf{B}\right)\right)\tag{73}$$

where $\mathbf{A}\in\mathbb{R}^{m\times n}$ and $\mathbf{B}\in\mathbb{R}^{p\times q}$, it follows that

$$\begin{split} vec\left(\boldsymbol{\theta}_{\mathbf{r}}\left(m\right)\right) &= vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(2)}\left(m\right)\otimes\boldsymbol{\theta}_{\mathbf{r}}^{(1)}\left(m\right)\right) \\ &= \left(\mathbf{I}_{N_2}\otimes\mathbf{K}_{N_1,N_2}\otimes\mathbf{I}_{N_1}\right)\cdot\left(vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(2)}\left(m\right)\right)\otimes vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(1)}\left(m\right)\right)\right). \end{split} \tag{74}$$

Furthermore, since

$$\begin{aligned} &\left(vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(2)}\left(m\right)\right)\otimes vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(1)}\left(m\right)\right)\right) \\ &= \left(vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(2)}\left(m\right)\right)\otimes\mathbf{I}_{N_1^2}\right)\left(1\otimes vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(1)}\left(m\right)\right)\right) = \left(vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(2)}\left(m\right)\right)\otimes\mathbf{I}_{N_1^2}\right)vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(1)}\left(m\right)\right) \\ &= \left(\mathbf{I}_{N_2^2}\otimes vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(1)}\left(m\right)\right)\right)\left(vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(2)}\left(m\right)\right)\otimes 1\right) = \left(\mathbf{I}_{N_2^2}\otimes vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(1)}\left(m\right)\right)\right)vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(2)}\left(m\right)\right) \end{aligned} \tag{75}$$

then,

$$\begin{split} vec\left(\boldsymbol{\theta}_{\mathbf{r}}\left(m\right)\right) &= \left(\mathbf{I}_{N_2}\otimes\mathbf{K}_{N_1,N_2}\otimes\mathbf{I}_{N_1}\right)\cdot\left(vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(2)}\left(m\right)\right)\otimes\mathbf{I}_{N_1^2}\right)vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(1)}\left(m\right)\right) \\ &= \left(\mathbf{I}_{N_2}\otimes\mathbf{K}_{N_1,N_2}\otimes\mathbf{I}_{N_1}\right)\cdot\left(\mathbf{I}_{N_2^2}\otimes vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(1)}\left(m\right)\right)\right)vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(2)}\left(m\right)\right). \end{split} \tag{76}$$
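Before plugging these results into (69), the commutation-matrix identities used above are easy to check numerically. A small sketch verifying (72) and (73) with a column-major `vec` and illustrative sizes:

```python
import numpy as np

def commutation_matrix(m, n):
    """K_{m,n} with K_{m,n} vec(A) = vec(A^T), column-major vec, A of shape (m, n)."""
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            K[i * n + j, j * m + i] = 1.0
    return K

def vec(A):                      # column-major (mathematical) vectorization
    return A.reshape(-1, order="F")

rng = np.random.default_rng(1)
m, n, p, q = 2, 3, 4, 2
A, B = rng.normal(size=(m, n)), rng.normal(size=(p, q))

# (72): vec(A^T) = K_{m,n} vec(A)
assert np.allclose(vec(A.T), commutation_matrix(m, n) @ vec(A))

# (73): vec(A (x) B) = (I_n (x) K_{q,m} (x) I_p)(vec(A) (x) vec(B))
lhs = vec(np.kron(A, B))
rhs = np.kron(np.eye(n), np.kron(commutation_matrix(q, m), np.eye(p))) @ np.kron(vec(A), vec(B))
assert np.allclose(lhs, rhs)
print("identities (72) and (73) verified")
```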



Plugging the results of (76) into (69) yields

$$\mathbf{X}_{\mathbf{s}} = \left(\mathbf{Y}_{\mathbf{s}} - \boldsymbol{\mu}\left(L_{\mathbf{s}}\right)\right) - \sum_{\mathbf{r}\in\mathcal{N}_S} \mathbf{A}_{\mathbf{s},\mathbf{r}}^{(1)T}\left(L_{\mathbf{s}}\right)vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(1)}\left(L_{\mathbf{s}}\right)\right) = \left(\mathbf{Y}_{\mathbf{s}} - \boldsymbol{\mu}\left(L_{\mathbf{s}}\right)\right) - \sum_{\mathbf{r}\in\mathcal{N}_S} \mathbf{A}_{\mathbf{s},\mathbf{r}}^{(2)T}\left(L_{\mathbf{s}}\right)vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(2)}\left(L_{\mathbf{s}}\right)\right) \tag{77}$$

1. $\boldsymbol{\Phi}\left(m\right) = vec\left(\boldsymbol{\theta}_{\mathbf{t}}^{(1)}\left(m\right)\right)$, $\mathbf{t}\in\mathcal{N}_S$, $1 \le m \le M$

For this case, we recognize from (77),

$$\mathbf{Q}\_{\mathbf{s}} = \mathbf{A}\_{\mathbf{s},\mathbf{t}}^{(1)T}(m) \,. \tag{78}$$

By applying Proposition 1 and rearranging terms, we obtain the following expression

$$\sum_{\mathbf{r}\in\mathcal{N}_S}\sum_{\mathbf{s}\in\mathcal{S}(m)}\mathbf{A}_{\mathbf{s},\mathbf{t}}^{(1)}\left(m\right)\boldsymbol{\Sigma}^{-1}\left(m\right)\mathbf{A}_{\mathbf{s},\mathbf{r}}^{(1)T}\left(m\right)vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(1)}\left(m\right)\right) = \sum_{\mathbf{s}\in\mathcal{S}(m)}\mathbf{A}_{\mathbf{s},\mathbf{t}}^{(1)}\left(m\right)\boldsymbol{\Sigma}^{-1}\left(m\right)\left(\mathbf{Y}_{\mathbf{s}} - \boldsymbol{\mu}\left(m\right)\right). \tag{79}$$

By aggregating the equations in (79) for $\mathbf{t}\in\mathcal{N}_S$, the interaction matrix coefficients are estimated by solving the simultaneous linear equations in (60) for $k = 1$.

2. $\boldsymbol{\Phi}\left(m\right) = vec\left(\boldsymbol{\theta}_{\mathbf{t}}^{(2)}\left(m\right)\right)$, $\mathbf{t}\in\mathcal{N}_S$, $1 \le m \le M$

For this case, we recognize from (77)

$$\mathbf{Q}\_{\mathbf{s}} = \mathbf{A}\_{\mathbf{s},\mathbf{t}}^{\{2\}^{\rm T}}(m). \tag{80}$$

By applying Proposition 1 and rearranging terms, we obtain the following expression

$$\sum_{\mathbf{r}\in\mathcal{N}_S}\sum_{\mathbf{s}\in\mathcal{S}(m)}\mathbf{A}_{\mathbf{s},\mathbf{t}}^{(2)}\left(m\right)\boldsymbol{\Sigma}^{-1}\left(m\right)\mathbf{A}_{\mathbf{s},\mathbf{r}}^{(2)T}\left(m\right)vec\left(\boldsymbol{\theta}_{\mathbf{r}}^{(2)}\left(m\right)\right) = \sum_{\mathbf{s}\in\mathcal{S}(m)}\mathbf{A}_{\mathbf{s},\mathbf{t}}^{(2)}\left(m\right)\boldsymbol{\Sigma}^{-1}\left(m\right)\left(\mathbf{Y}_{\mathbf{s}} - \boldsymbol{\mu}\left(m\right)\right). \tag{81}$$

By aggregating the equations in (81) for $\mathbf{t}\in\mathcal{N}_S$, the interaction matrix coefficients are estimated by solving the simultaneous linear equations in (60) for $k = 2$.

#### **6.3 Covariance matrix parameter estimation**

Since $\mathbf{X}_{\mathbf{s}}$ depends on the covariance matrix, the MPL estimator of $\boldsymbol{\Sigma}\left(m\right)$ for all $1 \le m \le M$ is cumbersome to derive. As an alternative, we estimate the covariance matrix by the sample covariance matrix. Given that the mean vectors $\boldsymbol{\mu}\left(m\right)$ for $1 \le m \le M$ and the interaction matrix coefficients $\boldsymbol{\theta}_{\mathbf{r}}\left(m\right)$ for $\mathbf{r}\in\mathcal{N}$, $1 \le m \le M$, are known, the covariance matrix parameters are estimated as follows:

a. Non-Separable Case:

$$
\hat{\Sigma}(m) = \frac{1}{r(m)} \sum\_{\mathbf{s} \in \mathcal{S}(m)} \mathbf{X}\_{\mathbf{s}} \mathbf{X}\_{\mathbf{s}}^T \tag{82}
$$

#### b. Separable Case:

In addition, if we assume the following for $1 \le m \le M$:

1. $\hat{\boldsymbol{\Sigma}}^{(1)}\left(m\right)$ is estimated, given that $\boldsymbol{\Sigma}^{(2)}\left(m\right)$ is known;
2. $\hat{\boldsymbol{\Sigma}}^{(2)}\left(m\right)$ is estimated, given that $\boldsymbol{\Sigma}^{(1)}\left(m\right)$ is known,

then



$$
\hat{\boldsymbol{\Sigma}}^{(1)}(m) = \frac{1}{r(m)N\_{2}} \sum\_{\mathbf{s} \in \mathcal{S}(m)} \mathbf{X}\_{\mathbf{s}}^{\#} \left(\hat{\boldsymbol{\Sigma}}^{(2)}(m)\right)^{-1} \mathbf{X}\_{\mathbf{s}}^{\#T} \tag{83}
$$

$$
\hat{\boldsymbol{\Sigma}}^{(2)}(m) = \frac{1}{r\left(m\right)N\_{1}} \sum\_{\mathbf{s} \in \mathcal{S}\left(m\right)} \mathbf{X}\_{\mathbf{s}}^{\#\mathcal{T}} \left(\boldsymbol{\Sigma}^{(1)}(m)\right)^{-1} \mathbf{X}\_{\mathbf{s}}^{\#}.\tag{84}
$$

The above estimators are not in closed form. They can be solved iteratively using the flip-flop algorithm (Dutilleul, 1999).
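A sketch of the flip-flop iteration for (83)-(84), assuming the class-$m$ residuals are already available in their $N_1 \times N_2$ matrix form $\mathbf{X}_{\mathbf{s}}^{\#}$ (array names illustrative). Note that the two factors are only identified up to a reciprocal scale, which the iteration leaves unresolved:

```python
import numpy as np

def flip_flop(X_list, N1, N2, iters=50, tol=1e-8):
    """Alternately update Sigma1 (N1 x N1) and Sigma2 (N2 x N2) per (83)-(84).

    X_list: list of class-m residual matrices X_s^# of shape (N1, N2).
    Returns the separable factors, with Sigma(m) approximated by kron(Sigma2, Sigma1).
    """
    r = len(X_list)
    Sigma1, Sigma2 = np.eye(N1), np.eye(N2)
    for _ in range(iters):
        S2_inv = np.linalg.inv(Sigma2)
        Sigma1_new = sum(X @ S2_inv @ X.T for X in X_list) / (r * N2)   # (83)
        S1_inv = np.linalg.inv(Sigma1_new)
        Sigma2_new = sum(X.T @ S1_inv @ X for X in X_list) / (r * N1)   # (84)
        converged = (np.linalg.norm(Sigma1_new - Sigma1) < tol and
                     np.linalg.norm(Sigma2_new - Sigma2) < tol)
        Sigma1, Sigma2 = Sigma1_new, Sigma2_new
        if converged:
            break
    return Sigma1, Sigma2
```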

## **7. Data preparation**

The multispectral and multitemporal satellite image under consideration is the 'Butuan' image acquired from the LANDSAT TM. The image shows the scenery of Butuan City and its surroundings in Northeastern Mindanao, Philippines. It consists of six spectral bands and four temporal slots with a dynamic range of 8 bits. The images were captured chronologically on the following dates: August 1, 1992, August 7, 2000, May 22, 2001, and December 3, 2002. The images were radiometrically corrected, geometrically co-registered with each other, and have been resized to 600 x 800 pixels. The image in Fig. 1 is a grayscaled RGB realization captured on May 22, 2001.

Fig. 1. RGB image of 'Butuan' captured on May 22, 2001.
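For concreteness, the spectro-temporal observation at each site can be viewed either as a $6 \times 4$ matrix (bands by dates) or as its $24 \times 1$ stacked vector. A small bookkeeping sketch, with a synthetic array standing in for the actual co-registered Landsat TM cube:

```python
import numpy as np

N1, N2 = 6, 4            # spectral bands, temporal slots
H, W = 600, 800          # image size after resizing

# Placeholder for the co-registered 8-bit image cube: (dates, bands, rows, cols).
cube = np.zeros((N2, N1, H, W), dtype=np.uint8)

# Matrix form Y_s^# (N1 x N2) and stacked form y_s = vec(Y_s^#) at one site s.
y, x = 100, 200
Y_mat = cube[:, :, y, x].T                 # shape (6, 4): bands x dates
y_vec = Y_mat.reshape(-1, order="F")       # column-major vec, shape (24,)
assert y_vec.size == N1 * N2
```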



The thematic classes were established by employing the k-means algorithm (Richards and Jia, 2006). The thematic classes were identified, and their mean reflectance vectors from the training data are shown in Table 1.


| M | Thematic Class | Band 1 | Band 2 | Band 3 | Band 4 | Band 5 | Band 7 |
|---|---|---|---|---|---|---|---|
| 1 | Thick Vegetation | 62 | 48 | 33 | 91 | 69 | 29 |
| 2 | Sparse Vegetation | 70 | 58 | 43 | 99 | 83 | 37 |
| 3 | Built Up Areas | 77 | 63 | 54 | 75 | 78 | 41 |
| 4 | Body of Water | 72 | 41 | 29 | 12 | 13 | 11 |
| 5 | Thin Clouds | 104 | 84 | 76 | 88 | 85 | 53 |
| 6 | Thick Clouds | 197 | 190 | 190 | 144 | 167 | 115 |

Table 1. Average reflectances from the training data (Landsat TM band numbers).
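As an illustration only, the class-establishment step can be sketched as below, where the training reflectance vectors are assumed to be stacked row-wise in a NumPy array and scikit-learn's k-means is used; the file name and feature layout are hypothetical, not part of the original processing chain.

```python
import numpy as np
from sklearn.cluster import KMeans

# pixels: (num_training_sites, num_features) array of reflectance vectors;
# for the 'Butuan' data, num_features would be 6 bands x 4 temporal slots.
pixels = np.load("training_reflectances.npy")   # hypothetical input file

kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_                # thematic class index for each training site
class_means = kmeans.cluster_centers_  # per-class mean reflectance vectors
```

The fitted cluster centers play the role of the per-class average reflectances of the kind reported in Table 1.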

Training and verification sites were obtained from a random sample of 1200 sites. A first-order neighborhood system was used in the MRF modeling of both the thematic map and the image.

## **8. Discussion**

## **8.1 Non-separable case**

The classification performance of our model with non-separable MGMRF parameters, as compared to the GSC, Hazel's, and Rellier's models, is presented in Table 2.


| Model | Accuracy |
|---|---|
| GSC | 55.3% |
| Hazel's GMRF | 45.6% |
| Rellier's GMRF | 83.1% |
| Our Model | 84.3% |

Table 2. Classification Accuracy of Different MGMRF models.

The GSC model has a low accuracy compared to the remaining MGMRF models. This substantiates that Markov dependence yields better thematic map classification accuracy than the site-independence model.

It is noticeable that Hazel's GMRF presents a relatively poor classification accuracy, which is attributed to the bilateral symmetry imposed on the interaction matrices, that is,

$$\boldsymbol{\Theta}\_{\mathbf{r}}\left(\boldsymbol{L}\_{\mathbf{s}}\right) = \boldsymbol{\Theta}\_{-\mathbf{r}}\left(\boldsymbol{L}\_{\mathbf{s}}\right) \tag{85}$$

(Hazel, 2000), which, in general, does not hold in the multivariate case. This relation, however, holds in the univariate case (Kashyap and Chellappa, 1983) as well as in Rellier's GMRF.

On the other hand, anisotropic models, such as Rellier's GMRF and our model, exhibited substantially better classification performance than the GSC. Since the covariance matrix estimators used a sub-optimal alternative, a slight performance degradation resulted.

## **8.2 Hybrid separable case**


Denote *S***<sup>μ</sup>**, *S***<sup>θ</sup>**, and *S***<sup>Σ</sup>** to be the separability indicators for the mean, interaction matrix, and covariance matrix, respectively.

## **8.2.1 Hybrid separable GSC model**

Since the GSC model is a degenerate form of our MGMRF with zero interaction matrices, the separability structure of the mean and covariance matrices is examined. The results, presented in Table 3, show no improvement in the classification performance, regardless of the separability of the parameters.


| *S***<sup>Σ</sup>** | *S***<sup>μ</sup>** | Accuracy |
|---|---|---|
| 0 | 0 | 55.3% |
| 0 | 1 | 54.2% |
| 1 | 0 | 54.3% |
| 1 | 1 | 54.1% |

Table 3. Classification Accuracy of Hybrid Separable GSC models.

## **8.2.2 Hybrid separable anisotropic GMRF model**

For the hybrid separable anisotropic MGMRF, separability of the covariance matrix yields a slight improvement in performance over the non-separable spectro-temporal model. As discussed in Section 5.2, the hybrid separable models with a separable interaction matrix but a non-separable covariance matrix were excluded from the comparison, as these modes are not possible. The classification accuracies are presented in Table 4.


| *S***<sup>Σ</sup>** | *S***<sup>θ</sup>** | *S***<sup>μ</sup>** | Accuracy |
|---|---|---|---|
| 0 | 0 | 0 | 84.3% |
| 0 | 0 | 1 | 84.6% |
| 1 | 0 | 0 | 84.5% |
| 1 | 0 | 1 | 86.6% |
| 1 | 1 | 0 | 83.8% |
| 1 | 1 | 1 | 86.2% |

Table 4. Classification Accuracy of Hybrid Separable Anisotropic MGMRF models.

## **8.3 Thematic maps**

Some of the thematic map labels are presented in Figs. 2 to 4, based on the May 22, 2001 satellite image. For clarity of visual presentation, thematic map labels were based on the gray-scaled average RGB reflectance of the training data.

#### Fig. 2. Thematic Map – Hazel's MGMRF


Fig. 3. Thematic Map – GSC with separable mean and covariance matrix


Fig. 4. Thematic Map – Anisotropic MGMRF with separable mean, interaction matrices, and covariance matrices

## **9. Summary, conclusions, and recommendations**

This study presents a parameter estimation procedure based on the MPL for an anisotropic MGMRF with hybrid-separable parameters. Although the MGMRF is a natural extension of its univariate counterpart, the interaction matrix relationship is, in general, dependent on the covariance matrix. In an effort to make the estimation and classification procedure more tractable to compute, some sub-optimal approximations were incorporated. This resulted in a slight degradation in the classification performance. The classification performance based on our model compared well against the GSC model and Hazel's MGMRF, and is comparable to that of Rellier's MGMRF. Moreover, for spectro-temporal observations, the separability of the interaction matrix as well as the covariance matrix improved the classification performance. As computational capabilities continue to advance in the near future, the numerical estimation and classification procedures can be improved further.


## **10. Acknowledgment**

We acknowledge the invaluable support extended by the Statistical Training and Research Center of the Philippine Statistical System.

## **11. References**


Aarts, E. and Korst, J. (1987). *Simulated Annealing and Boltzmann Machines*, Wiley, ISBN 978-047-1921-46-2, New York

Arnold, S. F. (1981). *The Theory of Linear Models and Multivariate Analysis*, Wiley, ISBN 978-047-1050-65-0, New York

Besag, J. (1986). On the Statistical Analysis of Dirty Pictures (with discussions). *Journal of the Royal Statistical Society B*, Vol. 48, No. 3, pp. 259-302, ISSN 0035-9246

Campbell, N. A. and Kiiveri, H. T. (1988). Spectral-Temporal Indices for Discrimination. *Applied Statistics*, Vol. 37, No. 1, pp. 51-62, ISSN 0035-9254

Casella, G. & Berger, R. L. (2002). *Statistical Inference, 2nd ed.*, Wadsworth Group, ISBN 978-053-4243-12-8, Pacific Grove, CA

Dutilleul, P. (1999). The MLE Algorithm for the Matrix Normal Distribution. *Journal of Statistical Computation and Simulation*, Vol. 64, No. 2, ISSN 0094-9655

Fuentes, M. (2006). Testing for Separability of Spatial-Temporal Covariance Functions. *Journal of Statistical Planning and Inference*, Vol. 136, pp. 447-466, ISSN 0378-3758

Geman, S. & Graffigne, C. (1987). Markov Random Field Models and Their Applications to Computer Vision, *Proceedings of the International Congress of Mathematicians*, ISBN 978-082-1801-10-9, Berkeley, CA, August, 1986

Hazel, G. G. (2000). Multivariate Gaussian MRF for Multispectral Scene Segmentation and Anomaly Detection. *IEEE Transactions on Geoscience and Remote Sensing*, Vol. 38, No. 3, (May 2000), pp. 1199-1211, ISSN 0196-2892

Huizenga, H., Munck, J., Waldorp, R., Grasman, R. (2002). Spatiotemporal EEG/MEG Source Analysis Based on a Parametric Noise Covariance Model. *IEEE Transactions on Biomedical Engineering*, Vol. 49, No. 6, (June 2002), pp. 533-539, ISSN 0018-9294

Jeng, F. and Woods, J. (1991). Compound Gauss-Markov Random Fields for Image Estimation. *IEEE Transactions on Signal Processing*, Vol. 39, No. 3, (March 1991), pp. 683-697, ISSN 1053-587X

Kashyap, R. and Chellappa, R. (1983). Estimation and Choice of Neighbors in Spatial-Interaction Models of Images. *IEEE Transactions on Information Theory*, Vol. 29, No. 1, (January 1983), pp. 60-72, ISSN 0018-9448

Kreyszig, E. (2005). *Advanced Engineering Mathematics, 8th ed.*, Wiley, ISBN 978-047-1488-85-9, New York

Kyriakidis, P. C. & Journel, A. G. (1999). Geostatistical Space-Time Models: A Review. *Mathematical Geology*, Vol. 31, No. 6, (August 1999), pp. 651-684, ISSN 0882-8121

Li, S. Z. (1995). *Markov Random Field Modeling in Computer Vision*, Springer-Verlag, ISBN 978-4431701453, New York

Lu, N. & Zimmerman, D. (2005). The Likelihood Ratio Test for a Separable Covariance Matrix. *Statistics and Probability Letters*, Vol. 73, No. 4, (July 2005), pp. 449-457, ISSN 0167-7152

Magnus, J. R. & Neudecker, H. (1999). *Matrix Differential Calculus with Applications in Statistics and Econometrics, 2nd ed.*, Wiley, ISBN 978-047-1986-33-1, Chichester

Moura, J. M. F. & Balram, N. (1993). Chapter 15: Statistical Algorithms for Noncausal Markov Random Fields, In: *Handbook of Statistics Volume 10*, Bose, N. K. & Rao, C. R., pp. 623-691, North Holland, ISBN 978-044-4892-05-8, Amsterdam

Naik, D. N. & Rao, S. S. (2001). Analysis of Multivariate Repeated Measures Data with a Kronecker Product Structured Covariance Matrix. *Journal of Applied Statistics*, Vol. 28, No. 1, (January 2001), pp. 91-105, ISSN 0013-1644

Navarro, R. D. Jr., Magadia, J. C., & Paringit, E. C. (2009). Estimating the Gauss-Markov Random Field Parameters for Remote Sensing Image Textures, *Proceedings of TENCON 2009 - 2009 IEEE Region 10 Conference*, ISBN 978-142-4445-46-2, Singapore, November, 2009

Neudecker, H. (1969). Some Theorems on Matrix Differentiation with Special Reference to Kronecker Matrix Products. *Journal of the American Statistical Association*, Vol. 64, No. 327, (September 1969), pp. 953-963, ISSN 0162-1459

Ravishanker, N. & Dey, D. K. (2002). *A First Course in Linear Model Theory*, CRC Press LLC, ISBN 978-158-4882-47-3, Boca Raton, FL

Rellier, G., Descombes, X., Falzon, F., & Zerubia, J. (2004). Texture Feature Analysis Using a Gauss-Markov Model in Hyperspectral Image Segmentation. *IEEE Transactions on Geoscience and Remote Sensing*, Vol. 42, No. 7, (July 2004), pp. 1543-1551, ISSN 0196-2892

Richards, J. A. & Jia, X. (1999). *Remote Sensing Image Analysis: An Introduction, 4th ed.*, Springer-Verlag, ISBN 978-354-0251-28-6, Berlin

Winkler, G. (2006). *Image Analysis, Random Fields and Dynamic Monte Carlo Methods: A Mathematical Introduction, 2nd ed.*, Springer-Verlag, ISBN 978-354-0442-13-4, Berlin




## **Low Rate High Frequency Data Transmission from Very Remote Sensors**

Pau Bergada, Rosa Ma Alsina-Pages, Carles Vilella and Joan Ramon Regué *La Salle - Universitat Ramon Llull Spain*

#### **1. Introduction**


This chapter deals with the difficulties of transmitting data gathered from sensors placed in very remote areas where energy supplies are scarce. The data link is established by means of the ionosphere, a layer of the upper atmosphere that is ionized by solar radiation. Communications through the ionosphere have persisted, although the use of artificial repeaters, such as satellites, has provided more reliable communication. In spite of being random, noisy and susceptible to interference, ionospheric transmission still has favorable characteristics (e.g. low cost equipment, worldwide coverage, invulnerability, etc.) that appeal to current communications engineering.

The Research Group in Electromagnetism and Communications (GRECO) from La Salle - Universitat Ramon Llull (Spain) is investigating techniques for the improvement of remote sensing and skywave digital communications. The GRECO has focused its attention on the link between Antarctica and Spain. The main objectives of this study are: to implement a long-haul oblique ionospheric sounder and to transmit data from sensors located at the Spanish Antarctic Station (SAS) Juan Carlos I to Spain.

The SAS is located on Livingston Island (62.7 ◦S, 299.6 ◦E; geomagnetic latitude 52.6 ◦S) in the South Shetlands archipelago. Spanish research is focused on the study of the biological and geological environment, and also the physical geography. Many of the research activities undertaken at the SAS collect data on temperature, position, magnetic field, height, etc. which is temporarily stored in data loggers on-site. Part of this data is then transmitted to research laboratories in Spain. Even though the SAS is only manned during the austral summer, data collection never stops. While the station is left unmanned, the sets of data are stored in memory devices, and are not downloaded until the next Antarctic season. The information that has to be analyzed in almost real-time is transmitted to Spain through a satellite link. The skywave digital communication system, presented here, is intended to transmit the information from the Antarctic sensors as a backup, or even as an alternative to the satellite, without depending on other entities for support or funding.

Antarctica is a continent of great scientific interest in terms of remote sensing experiments related to physics and geology. Due to the peculiarities of Antarctica, some of these experiments cannot be conducted anywhere else on the Earth and this fact might oblige the researchers to transmit gathered data to laboratories placed on other continents for intensive study. Because of the remoteness of the transmitter placed at the SAS, the system suffers from power restrictions mainly during austral winter. Therefore, maintaining the radio link, even at a reduced throughput, is a challenge. One possible solution to increase data rate, with minimal power, is to improve the spectral efficiency of the physical layer of the radio link while maintaining acceptable performance. The outcomes and conclusions of this research work may be extrapolated to other environments where communication is scarcely possible due to economic or coverage problems. Therefore, the solutions presented in this study may be adopted in other situations, such as communications in developing countries or in any other remote area.

#### **1.1 Remote sensors at the SAS**

In this section we describe the main sensors located at the SAS, including a geomagnetic sensor, a vertical incidence ionosonde, an oblique incidence ionosonde and a Global Navigation Satellite System (GNSS) receiver. They have all been deployed on the premises of the SAS by engineers of the GRECO and scientists of the Observatori de l'Ebre. The geomagnetic sensor, the vertical incidence ionosonde and the GNSS receiver are commercial solutions. The oblique incidence ionosonde, used to sound the ionospheric channel between Antarctica and Spain, was developed by the GRECO in the framework of this research work.

#### **1.1.1 Geomagnetic sensor**

Ground-based geomagnetic observatories provide a time series of accurate measurements of the natural magnetic field vector in a particular location on the Earth's surface. This data is used for several scientific and practical purposes, including the synthesis and updates of global magnetic field models, the study of the solar-terrestrial relationships and the Earth's space environment, and support for other types of geophysical studies.

Once the raw observatory data is processed, it is sent to the World Data Centers, where the worldwide scientific community can access them. International Real-time Magnetic Observatory Network (INTERMAGNET) provides means to access the data by an almost real-time satellite link. The data is packed, sent to the geostationary satellites, and collected by Geomagnetic Information Nodes (GINs), where the information can be accessed freely. However, experience has shown that the satellite link is not 100% reliable, and it is preferable to have alternative means to retrieve the geomagnetic data.

There are three main reasons for designing a transmission backup system by skywave. Firstly, visibility problems appear when trying to reach geostationary satellites from polar latitudes. Secondly, end-to-end reliability can be increased by transmitting each frame repeatedly throughout the day. And finally, the ionospheric channel is freely accessed anywhere, whereas satellite communications have operational costs.

#### **1.1.2 Ionosonde: vertical incidence soundings of the ionosphere**

A vertical incidence ionospheric sounder (VIS) (Zuccheretti et al., 2003) was installed in order to have a sensor providing ionospheric monitoring in this remote region. This ionosonde is also being used to provide information for the High Frequency (HF) radio link employed for data transmission from the SAS to Spain. Data provided by the VIS is used to conduct ionospheric research, mainly to characterize the climatology of the ionospheric characteristics and to investigate the ionospheric effects caused during geomagnetically disturbed periods (see (Solé et al., 2006) and (Vilella et al., 2009)).

## **1.1.3 Oblique ionosonde**


The oblique ionosonde monitors various parameters to model the HF radiolink between the SAS and Spain (Vilella et al., 2008). These parameters include link availability, power delay profile and frequency dispersion of the channel. The sounder includes a transmitter, placed on the premises of the SAS and a receiver deployed in Spain. The main drawback of the oblique sounder is the difficulty in establishing the ionospheric link. Firstly, the long distance of the link (12700 km) requires four hops to reach the receiver. And secondly, the transmitted signal has to cross the equator and four different time zones.

## **1.1.4 Global Navigation Satellite Systems**

The study of the ionosphere can be approached from several points of view. Vertical incidence soundings provide accurate information about electron density profiles below the peak electron density. However, when using this technique the electron profile must be extrapolated from the peak point to the upper limit of the ionosphere. Moreover, the low density of vertical ionosondes, especially in oceans and remote areas, is a serious impairment.

GNSS receivers constitute a high temporal and spatial resolution sounding network which, despite gaps over oceans and remote regions, can be used to study fast perturbations affecting local regions, such as Travelling Ionospheric Disturbances and scintillations, or wider regions, such as solar flares. Data gathered from GNSS receivers can provide information about the Total Electron Content (TEC) between receivers and satellites by means of proper tomographic modeling approaches. Spatial and temporal variations of the main ionospheric events can be monitored by means of GNSS receivers, especially those placed in the Antarctic Region, which is considered the entrance point of many ionospheric disturbances coming from Solar events. Furthermore, TEC reaches its highest variability peaks in the Antarctica area.

## **1.2 Data transmission**

This chapter will study, analyze and experimentally verify a possible candidate for the physical layer of a long-haul ionospheric data link, focusing on the case SAS-Spain. Preliminary studies of data transmission feasibility over this link were already performed in (Deumal et al., 2006) and (Bergada et al., 2009), with encouraging results.

The first application of this link is the transmission of data generated by a geomagnetic sensor installed at the SAS. Future applications may include sending information of another nature such as temperature, glacier movements, seismic activity, etc.

The minimum requirements regarding the geomagnetic sensor data transmissions from the SAS to Spain are:

• The system should support a data throughput of 5120 bits per hour.

• The maximum delivery delay of the data should not exceed 24 hours.



#### **1.2.1 Constraints**

The extreme conditions prevailing at the SAS impose a number of restrictions that affect the transmission system. We highlight the following ones:

• The transmission power should be minimal. It is noted that the SAS is inhabited only during the austral summer, approximately from November to March. During this period there is no limitation regarding the maximum power consumption. However, the transmission system is designed to continue operating during the austral winter, when energy is obtained entirely from batteries powered by wind generators and solar panels. Hence the power amplifier is set to a maximum of only 250 watts.

• Environmental regulations applicable at the site advise against the installation of large structures that would be needed to install certain types of directive antennas.


#### **1.2.2 Approach**

This section justifies the need for a new data communication system adapted to the requirements of the project and presents the main ideas of this proposal. Firstly, we review the mechanisms that exist worldwide regarding the regulation of occupation of the radio spectrum. Then we review the features of current standards of HF data communications and discuss the non-suitability of these to the requirements of the project.

The International Telecommunication Union (ITU) is responsible for regulating the use of radio spectrum. From the point of view of frequency allocation, it has divided the world into three regions. Broadly speaking, region 1 comprises Europe and Africa, the Americas constitute region 2, and Asia and Oceania region 3.

In each region, the ITU recommends the allocation of each frequency band to one or several services. When multiple services are attributed to the same frequency band in the same region, these fall into two categories: primary or secondary. The ones that are classified as secondary services can not cause interference with the primary services and can not claim protection from interference from the primary services; however, they can demand protection from interference from other secondary services attributed afterwards.

Given these considerations, we propose a transmission system with the following guidelines:

• It can not cause harmful interference to any other service stations (primary or secondary).

• It can not claim protection from interference from other services.


To meet these requirements, we propose a system with the following characteristics:

• Reduced transmission power (in accordance with the consumption constraints).

• Low power spectral density.

• Robustness to interference.

• Sporadic communications.

• Burst transmissions (few seconds).


Moreover, given the ionospheric channel measurements described in (Vilella et al., 2008), the following additional features are required:

• Robustness against time and frequency dispersive channels.

• Robustness against noise (possibility of working with negative signal to noise ratio).


## **1.2.3 HF communication standards**


In this section we briefly review the current communication standards for HF and we justify their non-suitability for the purposes of this project.

Due to the proliferation of modems in the field of HF communications, interoperability between equipment from different manufacturers became a problem (NTIA, 1998). Hence the need to standardize communication protocols. Worldwide, there are three organizations proposing standards regarding HF communications: (*i*) the U.S. Department of Defense proposes the Military Standards (MIL-STD-188-110A, 1991; MIL-STD-188-110B, 2000; MIL-STD-188-141A, 1991), (*ii*) the Institute for Telecommunications Science (ITS), which depends on the U.S. Department of Commerce, writes the Federal Standard (FED-STD) and (*iii*) NATO proposes the Standardization Agreements (STANAG-4406, 1999; STANAG-5066, 2000).

Regarding the interests of this work it is noted that:

• The standard modes are designed for primary or secondary services. Therefore:

	- **–** The bandwidth of the channels is standardized (3 kHz or multiples). Interference reduction with other transmitting systems, i.e. minimizing the output power spectral density, is not considered.
	- **–** No modes are considered based on short sporadic burst transfers to reduce interference with other users.
	- **–** There are anti-jamming techniques (see MIL-STD-188-148) for additional application on an appropriate communication standard, but the proposals are not based on modulations that are intrinsically robust to interference.

• Robust configurations require a minimum signal to noise ratio (SNR) of 0 dB at 3 kHz bandwidth, which is not common in this link under the specified conditions of transmitted power and antennas (Vilella et al., 2008).

We conclude that the configurations proposed by current standards do not meet the desirable characteristics for the type of communication that is required in this work, and consequently, a new proposal should be suggested. In this chapter, we study a number of alternatives based on the use of Direct Sequence Spread Spectrum (DS-SS) techniques in order to cope with the impairments of the channel, the environment and other services.

## **2. Data transmission with Direct Sequence Spread Spectrum techniques**

Spread Spectrum (SS) techniques are described by (Pickholtz et al., 1982) as a kind of transmission in which the signal occupies a greater bandwidth than the minimum necessary to send the information; the bandwidth spreading is achieved by means of a code that is independent of the data, and a synchronized replica of the code at the receiver is used to despread and retrieve the data.

SS began to be developed especially for military purposes in the mid twentieth century and has continued at the forefront of research to the present day; it is nowadays a key element of 3G mobile cellular systems (Third Generation Partnership Project, 1999) and of wireless systems transmitting in free bands (IEEE802.11, 2007).


In the field of HF communications, new techniques have always been introduced slowly due to a widespread sense that reliable communications were not feasible in this frequency band and that improvements to its implementation would be irrelevant. However, SS techniques have been suggested several times as suitable for the lower bands of frequencies (i.e. LF, MF and, by extension, HF) (see (Enge & Sarwate, 1987)), given the intrinsic ability of SS systems to cope with multipath and interference (typical ionospheric channel characteristics).

There are three types of spread spectrum systems (Peterson et al., 1995): Direct Sequence, Frequency Hopping and hybrid systems composed of a mixture of both. In this study we will focus on Direct Sequence schemes.

DS-SS systems spread the spectrum by multiplying the information data by a spreading sequence. Consider the following model (Proakis, 1995):

$$
s\_{\rm ss}(t) = \sum\_{i=0}^{N\_s - 1} d\_i \, c(t - iT\_s), \qquad c(t) = \sum\_{l=0}^{L-1} c\_l \, p(t - lT\_c), \tag{1}
$$

where *di* denotes the *ith* symbol, of length *Ts*, of a modulated signal:

$$d = \{d\_0, d\_1, \dots, d\_{N\_s - 1}\} \tag{2}$$

and *cl* are the chips<sup>1</sup>, of length *Tc*, of a spreading sequence of length *L*:

$$
c = \{c\_{0}, c\_{1}, \dots, c\_{L-1}\} \tag{3}
$$

and *p*(*t*) is a pulse shape defined as

$$
p(t) = \begin{cases} 1, & t \in [0, T\_c) \\ 0, & \text{otherwise} \end{cases} \tag{4}
$$

In addition, it holds that *LTc* = *Ts*; thus, if the base band signal formed by the symbols *di* occupies a bandwidth of 1/*Ts*, the spread spectrum signal *sss*(*t*) occupies a bandwidth of 1/*Tc* = *L*/*Ts*.
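As a rough illustration of Equation 1, the sketch below spreads a few BPSK symbols with a ±1 chip sequence and rectangular pulses; the particular sequence, symbol values and oversampling factor are arbitrary choices made for the example.

```python
import numpy as np

def spread(symbols, chips, samples_per_chip=4):
    """Build baseband samples of s_ss(t) as in Equation 1: every symbol d_i is
    multiplied by the whole chip waveform c(t), expanding the bandwidth by L."""
    chip_waveform = np.repeat(chips, samples_per_chip)   # rectangular p(t)
    return np.concatenate([d * chip_waveform for d in symbols])

d = np.array([1, -1, 1])                    # BPSK symbols d_i
c = np.array([1, 1, 1, -1, 1, -1, -1])      # length L = 7 spreading sequence
s_ss = spread(d, c)                         # 3 symbols x 7 chips x 4 samples each
```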

The spreading sequence *c* should have good properties of autocorrelation and cross-correlation in order to ease the detection and synchronization at the receiver side.
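Maximal-length sequences are a classical choice with such properties. As an illustration (the generator below and its taps are our own example, not taken from the chapter), a 5-stage Fibonacci LFSR with feedback from stages 5 and 3 produces a period-31 m-sequence whose circular autocorrelation is two-valued:

```python
import numpy as np

def lfsr_msequence(length=31):
    """Period-31 maximal-length sequence from a 5-stage Fibonacci LFSR
    (feedback = stage 5 XOR stage 3), mapped to +/-1 chips."""
    state = [1, 0, 0, 0, 0]                    # any non-zero initial state
    bits = []
    for _ in range(length):
        bits.append(state[-1])                 # output the oldest stage
        feedback = state[4] ^ state[2]         # stages 5 and 3
        state = [feedback] + state[:-1]        # shift the register
    return 1 - 2 * np.array(bits)              # {0,1} -> {+1,-1}

c = lfsr_msequence()
rho = np.array([np.dot(c, np.roll(c, m)) for m in range(len(c))])
# rho[0] == 31 while rho[m] == -1 for every m != 0, which is what eases code
# acquisition and synchronization at the receiver.
```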

Some of the main advantages of a system based on DS-SS are: (*i*) jamming and interference robustness, (*ii*) privacy, (*iii*) ability to use Code Division Multiple Access (CDMA) and (*iv*) robustness against multipath and time variant channels. On the other hand, the drawbacks of this technique are: (*i*) bandwidth inefficiency and (*ii*) receiver complexity: chip-level synchronization, symbol despreading (DS-SS signaling) and channel estimation and detection (RAKE receiver) (Viterbi, 1995).

Throughout the following sections we will discuss the most important considerations that justify the choice of DS-SS, as well as the technical basis to design the data modem for the ionospheric link between the SAS and Spain.

<sup>1</sup> The bits of a spreading sequence are called chips

#### **2.1 Robustness against interference**


Ionospheric communications have global coverage range. Consequently, any system operating in a given area might potentially interfere with other remote systems operating at the same frequency band. Hence the transmission system proposed in this work might be interfered with primary or secondary services that are assigned the same frequency band. For these reasons it is appropriate to review the characteristics of DS-SS regarding robustness against interference.

Consider a DS-SS based system that transmits *Rb* bits per second in a bandwidth *Bss* (*Bss* ≫ *Rb*) in the presence of additive white Gaussian noise *z*(*t*) with power spectral density *No* [*W*/*Hz*] and narrowband interference *i*(*t*) with power *Pi*. At the receiver side:

$$r\_{\rm ss}(t) = s\_{\rm ss}(t) + i(t) + z(t). \tag{5}$$

Then (Pickholtz et al., 1982),

$$\left(\frac{E\_b}{N\_0}\right)\_{z(t),i(t)} = \frac{P}{P\_n}\frac{B\_{ss}}{R\_b}\frac{P\_n}{P\_n + P\_i} = \frac{P}{P\_n}\frac{B\_{ss}}{R\_b}\frac{N\_0}{N\_0 + \frac{P\_i}{B\_{ss}}}\tag{6}$$

where *Pn* = *BssNo* is the noise power within the transmission bandwidth and *P* = *EbRb* is the signal power. We can deduce from Equation 6 that we can reduce the effect of interfering signals by increasing *Bss*. In other words, as *Bss* = *L* · *Rb*, the larger the spreading factor the lower the degradation due to interfering signals. The quotient *Bss*/*Rb* is called the processing gain *Gp* and is a measure of the robustness of a spread spectrum system against interference. In DS-SS systems the processing gain coincides with the spreading sequence length (*L*).

It is noted that when *Bss* increases, (*Eb*/*No*)<sub>*z*(*t*)</sub> does not change because *Pn* = *NoBss* increases in the same proportion, whereas an increase of *Bss* implies an equivalent improvement of (*Eb*/*No*)<sub>*i*(*t*)</sub>, as *Pi* is unchanged. To summarize, the use of spread spectrum provides an improvement regarding narrowband interfering signals whereas no improvement over noise is achieved.
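As a toy numeric illustration of Equation 6 (all powers and rates below are invented values, not measurements from this link), increasing the spreading factor recovers most of the Eb/No lost to a fixed narrowband interferer:

```python
import numpy as np

Rb = 10.0      # bit rate [bit/s] (illustrative)
No = 1e-6      # noise power spectral density [W/Hz] (illustrative)
P  = 1e-3      # received signal power [W] (illustrative)
Pi = 1e-2      # narrowband interferer power [W] (illustrative)

for L in (1, 31, 255):                 # spreading factor Gp = Bss / Rb
    Bss = L * Rb                       # spread bandwidth
    Pn = No * Bss                      # noise power in the spread bandwidth
    ebno = (P / Pn) * (Bss / Rb) * Pn / (Pn + Pi)   # Equation 6
    print(f"L = {L:3d}:  Eb/No with interference = {10 * np.log10(ebno):5.1f} dB")
# Without the interferer, Eb/No = P / (Rb * No) = 20 dB for these numbers.
```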

Feasibility studies of DS-SS systems with different types of interference can be found in the literature. See, for instance, (Schilling et al., 1980) when the interference is a narrowband signal and (Milstein, 1988) for multiple interfering signals.

#### **2.2 Robustness against multipath channels**

According to the analysis described in (Vilella et al., 2008), the ionospheric channel established between the SAS and Spain shows a maximum multipath delay spread (*τmax*) that varies, depending on time and frequency, between 0.5 ms and 2.5 ms. Therefore, the coherence bandwidth of the channel, which can be considered as approximately the inverse of the maximum multipath delay spread (Proakis, 1995), can be narrower than 400 Hz. In case of transmitting with a wider bandwidth the channel would be frequency selective and distortion due to multipath would arise. Below, the properties of DS-SS against multipath are discussed.
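The 400 Hz figure follows directly from the measured delay spreads, for instance:

```python
# Coherence bandwidth approximated as the inverse of the maximum multipath
# delay spread reported for the SAS-Spain link (0.5 ms to 2.5 ms).
for tau_max in (0.5e-3, 2.5e-3):
    print(f"tau_max = {tau_max * 1e3:.1f} ms  ->  Wc ~ {1 / tau_max:.0f} Hz")
# 0.5 ms -> 2000 Hz, 2.5 ms -> 400 Hz; the worst case gives the quoted 400 Hz.
```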

Let a DS-SS based system with bandwidth *Bss* in a channel with coherence bandwidth *Wc* <sup>∼</sup> <sup>1</sup> *<sup>τ</sup>max* � *Bss*. If symbol time *Ts* � *τmax* intersymbol interference due to multipath can be

neglected. Moreover, if *Ts* ≪ 1/*υmax* (where *υmax* denotes the maximum Doppler spread) the channel is almost invariant during a symbol time. Under these conditions it can be shown that (Proakis, 1995):

$$r\_{\rm ss}^{(k)}(t) = \sum\_{n=1}^{N} h\left(\frac{n}{B\_{\rm ss}}\right) s\_{\rm ss}^{(k)}\left(t - \frac{n}{B\_{\rm ss}}\right) + z(t),\tag{7}$$

where *h*(*n*/*Bss*) denotes a coefficient of the equivalent low-pass channel impulse response, *sss*(*t*) corresponds to the baseband spread signal defined in Equation 1, (*k*) denotes the contribution due to symbol *k*, *N* = *τmaxBss* is the number of nonzero channel taps (since *sss*(*t*) has a limited bandwidth of *Bss*) and *z*(*t*) is additive white Gaussian noise. In consequence, the signal reception is formed by delayed replicas of the transmitted signal. Then, we substitute Equation 1 in Equation 7 and apply an array of correlators to correlate the received signal with *N* copies of the spreading sequence *c* (each of them delayed a chip time). Let *c* be a sequence with good properties of circular autocorrelation:

$$\rho(m) = \sum\_{l=1}^{L} c\_l\, c\_{l+m} \simeq \begin{cases} 1 & m = 0 \\ 0 & \text{otherwise} \end{cases} \tag{8}$$
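
Equation 8 is easy to check numerically. The sketch below generates a length-31 maximal-length sequence with a 5-stage LFSR (the feedback recurrence is an assumption chosen for illustration; the sequences actually used later in the chapter are Gold sequences) and evaluates its circular autocorrelation.

```python
import numpy as np

# Circular autocorrelation (Equation 8) of a length-31 m-sequence.
# LFSR recurrence a[n] = a[n-2] XOR a[n-5] (primitive polynomial x^5 + x^3 + 1).
def lfsr_sequence(length=31, nbits=5):
    state = [1] * nbits              # any non-zero seed works
    out = []
    for _ in range(length):
        out.append(state[-1])
        new_bit = state[1] ^ state[4]
        state = [new_bit] + state[:-1]
    return np.array(out)

c = 1.0 - 2.0 * lfsr_sequence()                       # map {0,1} chips to {+1,-1}
L = c.size
rho = np.array([np.dot(c, np.roll(c, m)) for m in range(L)]) / L
print(rho[0])          # 1.0 at zero shift
print(rho[1:].max())   # -1/31 for every non-zero shift (close to 0)
```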

The output *Um* of each correlator can be expressed as:

$$U\_m = d\_k\, h\!\left(\frac{m}{B\_{ss}}\right) + \int\_0^{T\_s} c(t)\, z\!\left(t + \frac{m}{B\_{ss}}\right) dt, \; m \in [0, N-1]. \tag{9}$$

Therefore, at the output of each correlator we obtain each transmitted symbol (*dk*) multiplied by a channel coefficient *h*(*m*/*Bss*) plus a noise term. Hence, the use of DS-SS can take advantage of different replicas of the signal if correctly combined. The most general linear combination is the criterion of Maximal Ratio Combining, which chooses the coefficients that maximize the instantaneous SNR (Peterson et al., 1995). To properly apply this method it is mandatory to know the coefficients of the channel. Alternatively, the outputs of the correlators can be equally weighted (Equal Gain Combining), thus simplifying the receiver at the expense of worse performance.
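
The two combining rules can be illustrated on synthetic correlator outputs of the form of Equation 9; the channel taps, phases and noise level below are arbitrary assumptions.

```python
import numpy as np

# Maximal Ratio Combining vs Equal Gain Combining on toy correlator outputs
# U_m = d * h_m + noise (one output per resolved path). Illustrative values only.
rng = np.random.default_rng(0)
d = 1.0                                                        # transmitted BPSK symbol
h = np.array([1.0, 0.6, 0.3]) * np.exp(1j * rng.uniform(0, 2 * np.pi, 3))
noise = 0.2 * (rng.normal(size=3) + 1j * rng.normal(size=3))
U = d * h + noise

mrc = np.sum(np.conj(h) * U)                 # MRC: weight by the conjugate channel (needs channel knowledge)
egc = np.sum(np.exp(-1j * np.angle(h)) * U)  # EGC: co-phase only, equal weights
print(np.sign(mrc.real), np.sign(egc.real))  # both recover d = +1 here
```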

#### **2.3 Transmission with low spectral density power**

One of the requirements of the proposed transmission system consists in causing minimal interference with primary and secondary services. For this purpose we propose alternatives to minimize power spectral density. We should note that as process gain increases, DS-SS techniques enable transmission with arbitrarily low power spectral density. Suppose the transmission of a data stream *d* using a bandwidth *Bd* and power *P*. Then the average power spectral density is *D* = *P*/*Bd* [*W*/*Hz*]. Under the same conditions of power, consider the transmission of the same data stream with DS-SS (*sss*(*t*)) by means of a spreading sequence *c*(*t*) of length *L*. Then, the spectral occupancy of *sss*(*t*) will be at least *L* · *Bd* and the average power spectral density will be *Dss* = *P*/(*L* · *Bd*) [*W*/*Hz*].

Therefore, the use of DS-SS involves an average reduction of power spectral density by a factor equal to the process gain *Gp* = *L*. Then, the spectral occupancy proportionally increases; however, it is not an inconvenience in this case since there is no limitation in this regard.
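
A minimal numerical check of this reduction, with hypothetical values of *P* and *Bd*, is given below.

```python
# Average power spectral density without spreading (D) and with DS-SS (Dss = D / L).
# P and Bd are hypothetical values chosen only for illustration.
P = 10.0     # transmitted power [W]
Bd = 50.0    # bandwidth of the unspread data signal [Hz]
for L in (31, 63, 127):
    D = P / Bd
    Dss = P / (L * Bd)
    print(f"L = {L:3d}: D = {D:.3f} W/Hz  ->  Dss = {Dss:.5f} W/Hz")
```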

#### **2.4 Flexibility regarding spectral efficiency**


The signal model expressed by Equation 1 is able to transmit *k* = log2 *K* bits (*b*<sub>0</sub><sup>(0)</sup> ... *b*<sub>*k*−1</sub><sup>(0)</sup>) modulated in *di* during a period *Ts* (*K* is the number of possible modulation symbols in *d* and *k* is the corresponding number of bits per symbol).

The spectral efficiency (*Css* = *k*/(*Ts* · *Bss*)), expressed in [*bits*/*s*/*Hz*] and defined as the ratio between bit-rate and transmission bandwidth, is *Gp* times lower than the non spreading system.

There are several alternatives to increase spectral efficiency without decreasing process gain (and hence, robustness to interference) at the expense of increasing computational cost of the receiver. In the following sections we describe two of them: DS-SS M-ary signaling and quadriphase spreading. We briefly present the signal model, a study of the probability of error and we note the spectral efficiency of each of them.

#### **2.4.1 DS-SS M-ary signaling**

Consider a set of *M* spreading sequences *Q* = {*c*(1), *c*(2), ..., *c*(*M*)} that satisfy a certain correlation relationship (orthogonal or nearly orthogonal according to Equation 8). Suppose that a certain sequence *v* from the previous set (*v* ∈ [1, *M*]) is transmitted depending on the value of *m* = log2(*M*) bits of information. Then

$$s\_{\rm ss}(t) = \sum\_{i=0}^{N\_s - 1} d\_i \sum\_{l=0}^{L-1} c\_l^{(v)}\, p(t - iT\_s - lT\_C) = \sum\_{i=0}^{N\_s - 1} d\_i\, c^{(v)}(t). \tag{10}$$

This technique is called DS-SS M-ary signaling (see, for example, (Enge & Sarwate, 1987) for orthogonal sequences). On the receiver side, the optimum demodulator correlates the received signal with a replica of each of the *M* possible sequences belonging to the set *Q*. A noncoherent detector will make a decision based on the computation of the maximum likelihood of the *M* envelopes at the output of each correlator. The probability *Ps* of detecting an incorrect sequence in the presence of only additive white noise is given by (Proakis, 1995):

$$P\_s = \sum\_{p=1}^{M-1} (-1)^{p+1} \binom{M-1}{p} \frac{1}{p+1}\, e^{-\frac{p}{p+1}(m+k)\frac{E\_b}{N\_o}}. \tag{11}$$

The probability *P*<sup>1</sup> of making an error in the demodulation of coded bits transmitted in a certain sequence can be computed from the following expression (Proakis, 1995):

$$P\_1 = \frac{2^{m-1}}{2^m - 1} P\_s.\tag{12}$$

Once the sequence is detected we proceed to compute the probability *P*<sup>2</sup> of making an error in the demodulation of the coded bits in *d*:

$$P\_2 = \frac{1}{2}P\_s + (1 - P\_s)\, Q\!\left(\sqrt{\frac{2}{k}\left(\frac{E\_b}{N\_o}\right)'}\right). \tag{13}$$


where (*Eb*/*No*)′ = (*Eb*/*No*)(*m* + *k*). Finally, the joint probability *Pb* of bit error considering the contribution of both mechanisms is:

$$P\_b = \frac{m \cdot P\_1 + k \cdot P\_2}{m + k}.\tag{14}$$

Fig. 1. Probability of error versus SNR per bit using DS-SS M-ary signaling for various values of *M* (32, 64, 128) and *k* (*k* = 0 : no modulation, *k* = 1: BPSK, *k* = 2: QPSK). Probability is analytically (A) computed and derived from algorithm simulations (S)

Figure 1 shows that the higher the *M*, the lower the SNR per bit required to obtain a certain BER. This can be explained by the fact that *L* increases with *M* (in a DS-SS system) and so does the process gain. It can be shown that the minimum SNR per bit required to obtain an arbitrarily small BER when *M* → ∞ is -1.6 dB. Figure 1 also shows that the larger the *k*, the smaller the SNR per bit required to achieve a given BER. This apparent contradiction can be explained by the following two arguments: (*i*) for a given bit-rate, a high value of *k* enables the reduction of the transmission bandwidth (and thus of the noise) and hence improves the probability of finding the transmitted sequence (Equation 13). (*ii*) The probability *Pb* of total error (Equation 14) is a balance between *P*1 and *P*2. The second term in *P*2 (Equation 13) derives from the probability of error in demodulating the bits in *d* once the sequence is successfully detected. So, if this term is lower than both the first term in Equation 13 and *P*1 (Equation 12), the use of any kind of modulation will not result in significant degradation in *Pb*.
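
The analytical curves in Figure 1 follow directly from Equations 11 to 14. The Python sketch below is a plain transcription of those expressions for the AWGN-only case; the chosen values of *M*, *k* and *Eb*/*No* are illustrative and do not correspond to measured results.

```python
from math import comb, erfc, exp, log2, sqrt

def Q(x):
    # Gaussian tail function
    return 0.5 * erfc(x / sqrt(2.0))

def pb_mary(ebno_db, M, k):
    """Bit error probability of DS-SS M-ary signaling (Equations 11-14), AWGN only.
    k = 0 means no modulation on d."""
    ebno = 10.0 ** (ebno_db / 10.0)
    m = int(log2(M))
    # Equation 11: probability of detecting an incorrect sequence
    Ps = sum((-1) ** (p + 1) * comb(M - 1, p) / (p + 1)
             * exp(-p / (p + 1) * (m + k) * ebno) for p in range(1, M))
    # Equation 12: error probability on the bits coded in the sequence choice
    P1 = 2 ** (m - 1) / (2 ** m - 1) * Ps
    if k == 0:
        return P1
    # Equation 13, with (Eb/No)' = (m + k)(Eb/No)
    P2 = 0.5 * Ps + (1 - Ps) * Q(sqrt(2.0 / k * (m + k) * ebno))
    # Equation 14: joint bit error probability
    return (m * P1 + k * P2) / (m + k)

for M, k in ((32, 0), (64, 0), (32, 2)):
    print(f"M = {M:3d}, k = {k}: Pb(6 dB) = {pb_mary(6.0, M, k):.2e}")
```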

In a symbol time *Ts* we send *k* + *m* bits (*b*<sub>0</sub><sup>(1)</sup> ··· *b*<sub>*k*−1</sub><sup>(1)</sup> *b*<sub>0</sub><sup>(1)</sup> ··· *b*<sub>*m*−1</sub><sup>(1)</sup>), where *k* corresponds to the bits modulated with *d* and *m* corresponds to the bits used in the choice of the spreading sequence. Consequently, the spectral efficiency is:

$$\mathcal{C} = \frac{k+m}{\mathcal{G}\_p} = \mathcal{C}\_{ss} + \frac{m}{\mathcal{G}\_p} = \mathcal{C}\_{ss} + \frac{\log\_2 M}{\mathcal{G}\_p}.\tag{15}$$

Therefore, the larger the *M* the lower the BER for a given SNR per bit and the greater the spectral efficiency at the expense of greater computational cost of the receiver. In addition, the larger the number of bits per symbol the better the BER for a given SNR per bit and the lower the computational cost of the receiver.

#### **2.4.2 DS-SS M-ary signaling + Quadriphase**


Another alternative is to divide the set of *M* sequences into two subsets: *Qr* = {*c*(1) , ··· , *<sup>c</sup>*(*M*/2) } on one side and *Qi* <sup>=</sup> {*c*(*M*/2+1) , ··· , *<sup>c</sup>*(*M*) } on the other side. Then, apply DS-SS M-ary signaling on both the real and the imaginary part of *d*. Thus,

$$s\_{\rm ss}(t) = \sum\_{i=0}^{N\_s - 1} \left( \Re \left\{ d\_i \right\} c^{(v\_1)}(t) + j \cdot \Im \left\{ d\_i \right\} c^{(v\_2)}(t) \right), \ v\_1 \in [1, M/2], \ v\_2 \in [M/2 + 1, M]. \tag{16}$$

This variant is called quadriphase chip spreading and permits us to send *m* = 2 log2 (*M*/2) bits per symbol by choosing a sequence from each of the two sets (plus *k* additional bits per symbol encoded in the modulation of *di*).

At the receiver side, the demodulator correlates the received signal with a replica of each of the *M* possible sequences. The detector will decide on the envelopes computed at the output of correlators corresponding to the sequences of the subset *Qr* and a similar decision on the subset *Qi*. The probability of incorrectly detecting a sequence from both the set *Qr* and *Qi* in the presence of only additive white Gaussian noise is:

$$P\_S = \sum\_{p=1}^{M/2-1} (-1)^{p+1} \binom{M/2-1}{p} \frac{1}{p+1} e^{-\frac{p}{p+1} \left(\frac{m+k}{2}\right) \frac{E\_b}{N\_0}}.\tag{17}$$

It is worth noting that the factor 1/2 multiplying *Eb*/*No* comes from considering that the symbol energy is equally distributed between real and imaginary parts (see Equation 16). The probability *P*1 of incorrectly demodulating the bits coded in a sequence that belongs to the subset *Qr* or *Qi* can be obtained by applying an equation analogous to Equation 12:

$$P\_1 = \frac{2^{\left(m/2\right)-1}}{2^{\left(m/2\right)}-1} P\_s. \tag{18}$$

We then discuss the probability *P*2 of error on the bits modulated in *d* in the case of BPSK and QPSK. For BPSK, the decision is based on the sign of the sum of the two outputs of the correlators for the two sequences detected in the previous step. The case of QPSK is equivalent to two BPSK streams with half the SNR per bit, each independently demodulated from the detection of the two sequences (corresponding to subsets *Qr* and *Qi*, respectively). Then, the probability *P*2 for BPSK and QPSK, respectively, is:

$$P\_2 = \frac{1}{2} P\_s P\_s + 2P\_s (1 - P\_s)\, Q\_{bpsk}' + (1 - P\_s)(1 - P\_s)\, Q\_{bpsk}, \tag{19}$$


$$P\_2 = \frac{1}{2} P\_\text{s} P\_\text{s} + P\_\text{s} (1 - P\_\text{s}) \left( Q\_{bpsk}^{'} + 0.5 \right) + (1 - P\_\text{s}) (1 - P\_\text{s}) Q\_{bpsk}^{'} \tag{20}$$

where

$$Q\_{bpsk} = Q\left(\sqrt{2\left(\frac{Eb}{N\_o}\right)'}\right) \text{ and } \mathcal{Q}\_{bpsk}' = Q\left(\sqrt{\left(\frac{E\_b}{N\_o}\right)'}\right). \tag{21}$$

Fig. 2. Probability of error as a function of SNR per bit combining both techniques DS-SS M-ary signaling and quadriphase chip spreading for various values of *M* (32, 64, 128) and *k* (*k* = 0 : no modulation, *k* = 1: BPSK, *k* = 2: QPSK). Probability is analytically (A) computed and derived from algorithm simulations (S)

Finally, the probability *Pb* of bit error is equal to Equation 14. If we compare Figure 2 with Figure 1 it is shown that, for a given bit-rate, in terms of BER versus SNR per bit (for *k* = 0 with only additive white Gaussian noise) applying DS-SS M-ary signaling using M sequences is almost equivalent to using DS-SS M-ary signaling plus quadriphase chip spreading using 2*M* sequences. In this latter case, however, the process gain is doubled.

When we introduce modulation (i.e. *k* �= 0) Figure 2 and Figure 1 show that the equivalence noted in the previous paragraph is no longer true: the use of quadriphase chip spreading with sequences of length 2*M* in combination with modulation produces inefficiency in terms of BER with respect to a system that does not use quadriphase chip spreading with sequences of length *M*. This is intuitively explained by noticing that when doubling the length of the sequence, keeping the same bandwidth, the number of transmitted sequences is halved as is the number of encoded bits in the modulation.

In a symbol time *Ts*, *k* + *m* bits are sent (*b*<sub>0</sub><sup>(1)</sup> ··· *b*<sub>*k*−1</sub><sup>(1)</sup> *b*<sub>0</sub><sup>(1)</sup> ··· *b*<sub>*m*−1</sub><sup>(1)</sup>), *k* bits due to the modulation of *d* and *m* bits due to the choice of the spreading sequence. Therefore, the spectral efficiency is:

$$\mathcal{C} = \frac{k+m}{G\_p} = \mathcal{C}\_{\rm ss} + \frac{m}{G\_p} = \mathcal{C}\_{\rm ss} + \frac{2\log\_2\left(M/2\right)}{G\_p} \tag{22}$$

Comparing Equation 22 with Equation 15 at equal bit rate shows that quadriphase and biphase chip spreading have approximately the same spectral efficiency (assuming *Gp* = *L* ≈ *M*).
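
For completeness, the same kind of transcription can be written for the quadriphase case using Equations 17 to 21 together with Equation 14. The sketch below assumes, as in the biphase case, that (*Eb*/*No*)′ = (*m* + *k*)(*Eb*/*No*); the numbers are again only illustrative.

```python
from math import comb, erfc, exp, log2, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

def pb_quadriphase(ebno_db, M, k):
    """Bit error probability with quadriphase chip spreading (Equations 17-21 and 14).
    AWGN only, k in {0, 1, 2}; (Eb/No)' = (m + k)(Eb/No) is assumed as in the biphase case."""
    ebno = 10.0 ** (ebno_db / 10.0)
    m = 2 * int(log2(M // 2))
    # Equation 17: probability of detecting an incorrect sequence in Qr (or Qi)
    Ps = sum((-1) ** (p + 1) * comb(M // 2 - 1, p) / (p + 1)
             * exp(-p / (p + 1) * (m + k) / 2.0 * ebno) for p in range(1, M // 2))
    # Equation 18
    P1 = 2 ** (m // 2 - 1) / (2 ** (m // 2) - 1) * Ps
    if k == 0:
        return P1
    # Equation 21
    Q_bpsk = Q(sqrt(2.0 * (m + k) * ebno))
    Q_bpsk_p = Q(sqrt((m + k) * ebno))
    # Equations 19 (BPSK, k = 1) and 20 (QPSK, k = 2)
    if k == 1:
        P2 = 0.5 * Ps * Ps + 2 * Ps * (1 - Ps) * Q_bpsk_p + (1 - Ps) ** 2 * Q_bpsk
    else:
        P2 = 0.5 * Ps * Ps + Ps * (1 - Ps) * (Q_bpsk_p + 0.5) + (1 - Ps) ** 2 * Q_bpsk_p
    # Equation 14
    return (m * P1 + k * P2) / (m + k)

for M, k in ((64, 0), (64, 2)):
    print(f"M = {M}, k = {k}: Pb(6 dB) = {pb_quadriphase(6.0, M, k):.2e}")
```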

#### **3. The experiments**

This section describes the outcomes of various experiments based on DS-SS over the link established between the SAS and Spain. Firstly, we define the objectives of the study and point out some methodological criteria that were taken into account. Then, the testbench and the algorithms used to carry out the tests are described. Finally, the experiments are explained and the outcomes derived from them carefully discussed.

#### **3.1 Goals**


The aim of this work is to experimentally evaluate various alternatives, based on DS-SS, concerning the maximum achievable performance in terms of bit error rate and spectral efficiency at the expense of greater complexity at the receiver side. The final goal is to come up with a proposal for the data transmission link between the SAS and Spain. Therefore, the alternatives that we suggest may combine the following aspects:

• General features: (*i*) chip frequency, (*ii*) modulation.

• Related to DS-SS signaling: (*i*) process gain (determined by *L*), (*ii*) number of bits per sequence (expressed in terms of *M*), (*iii*) spreading: biphase or quadriphase.

However, there are a number of aspects, which are beyond the scope of this study, that must be defined and implemented. They are, specifically: (*i*) frame format, (*ii*) frequency and time synchronization (chip and frame), (*iii*) coding and interleaving and (*iv*) channel estimation and multipath diversity use.

It is noteworthy that it is not the aim of these experiments to measure the percentage of satisfactory receptions among the total number of receptions, since this magnitude is strongly related to the robustness of the synchronization method, which is beyond the scope of this study. Consequently, we will only evaluate expected performance from satisfactory receptions by means of a testbench explained below (see Section 3.3).

Figures 3 and 4 depict a block diagram of the transmitter and receiver, respectively. On one hand, common modules are shown in green. Specifically:

• At transmitter side: (*i*) a binary random source (320 bits), (*ii*) a turbo encoder (*rate* = 1/3) which operates combined with an interleaver (972 coded bits at the output), (*iii*) a frame compiler, designed according to the measured characteristics of multipath and Doppler spread, which builds a frame that consists of: (*iii*.*a*) an initial field for synchronization and channel estimation, (*iii*.*b*) a field that is periodically repeated to track channel estimation and (*iii*.*c*) data (see Section 3.3.1 and Figure 5).


• At receiver side: (*i*) frequency synchronization by means of an unmodulated tone previously emitted, (*ii*) frame synchronization, (*iii*) channel estimation, (*iv*) decoding and deinterleaving and (*v*) a SNR estimation module.

Fig. 3. Transmitter block diagram. Common modules to all experiments (testbench) are shown in green and modules with specific characteristics are shown in blue

Fig. 4. Receiver block diagram. Common modules to all experiments (testbench) are shown in green and modules with specific characteristics are shown in blue

On the other hand, modules with specific parameters for the experiments are shown in blue in both the transmitter and the receiver. These parameters are: (*i*) chip frequency (*fchip*), which determines the signal bandwidth (2500, 3125 and 6250 chips per second), (*ii*) modulation, a choice between no modulation, BPSK or QPSK, (*iii*) the process gain (spreading sequence of length 31, 63 or 127 chips), (*iv*) biphase or quadriphase spreading and (*v*) the number of bits per sequence log2 (*M*) (always *L* = *M* − 1).

#### **3.2 Methodology**

In this section we explain the approach followed prior to obtaining the outcomes from these experiments. We emphasize the following points:


• According to the explanations of the previous section, all experiments use a common testbench. Consequently, the test algorithms equally affect all experiments.

• Each experiment consists of a signal composed of 320 bits of data (972 coded bits), which are modulated, spread, filtered, and finally appropriate headers are appended to them. This signal has the appearance of a burst with a duration that depends on specific characteristics of the experiment (number of bits per symbol, sequence length, etc.). Experiments are transmitted during a sounding period that has a maximum time length of 20 seconds, which is repeated every minute except for 18 minutes assigned to maintenance and other functions.

• In each sounding period several signals are transmitted within a frame. Each frame is repeated at least twice within a sounding period (more repetitions will be possible in case of short frames).

• Each sounding period is associated with a carrier frequency. Seven different carrier frequencies have been chosen based on availability outcomes presented in (Vilella et al., 2008). These carrier frequencies are: {8078, 8916, 10668, 11411, 12785, 14642, 16130} [*kHz*].

• Each day consists of 18 available hours (from 18 UTC to 11 UTC, both included). Then, each frequency is tested 6 times per hour.

• Each frame is transmitted a minimum of two days. Under these assumptions, each experiment was performed at a certain time and frequency, at least 24 times (2 days, 6 times per hour, 2 frames per sounding period).

• There are a number of days with frames containing a common experiment. This fact allows the assessment of interday variability.


#### **3.3 Testbench**


The testbench consists of a frame and a set of algorithms shared between all experiments, which are all described below.

#### **3.3.1 Frame compilation**

The testbench is based on a frame which is shown in Figure 5, where:

• **C** is a header based on two identical sequences *s* of length *L*(*s*) chips, as follows:

$$\mathbf{C} = \left\{ s\_{L^{(s)}-l+1} \cdots s\_{L^{(s)}} \;\, s \;\, s \;\, s\_1 \cdots s\_l \right\}. \tag{23}$$

Therefore, **C** has a length of 2*L*(*s*) + 2*l* chips, where *l* is the number of chips circularly added before the first and after the second sequence. This header is used to achieve frame, chip and sample synchronization as well as initial channel estimation. The value of *l* can be computed by means of the maximum multipath spread of the channel (*τmax*) as:

$$l = \lceil \tau\_{\max} f\_{chip} \rceil, \tag{24}$$

where ⌈·⌉ denotes the smallest integer greater than or equal to its argument. Therefore, *l* is the number of guard chips before and after the block formed by the two sequences *s*. This guard ensures both circular correlation during synchronization and channel estimation free from intersymbol interference. A numerical sketch combining Equations 24, 26 and 27 is given after this list.

• **S** is a signaling field based on sequence *s*, with the following form:

$$\mathbf{S} = \left\{ s\_{L^{(s)}-l+1} \cdots s\_{L^{(s)}} \;\, s \;\, s\_1 \cdots s\_l \right\}. \tag{25}$$

Therefore, **S** is of length *L*(*s*) + 2*l*. The value of *l* is calculated using Equation 24. This field provides channel estimation tracking. The period of repetition of **S** (denoted by *TS*) is computed by means of the maximum Doppler spread of the channel (*υmax*) as:

$$T\_S \approx \frac{1}{10 \text{ } \upsilon\_{\text{max}}} \text{ } \tag{26}$$


where the channel is considered to be flat over a tenth of the inverse of *υmax*.

Fig. 5. Testbench frame format

• **D** is a data symbol based on a Gold sequence of length *L* chips. Between the header **C** and the field **S**, or between two consecutive **S** fields there are *B* symbols that build a block. The number of symbols per block is given by the following equation:

$$B = round\left(\frac{T\_S \cdot f\_{chip}}{L^{(s)}}\right). \tag{27}$$
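
As a worked numerical example of Equations 24, 26 and 27, the sketch below uses the upper bound of the multipath delay spread quoted in Section 2.2; the Doppler spread and the header sequence length are assumptions chosen only for illustration.

```python
from math import ceil

tau_max = 2.5e-3     # maximum multipath delay spread [s] (upper bound from Section 2.2)
upsilon_max = 1.0    # maximum Doppler spread [Hz] (illustrative assumption)
f_chip = 2500.0      # chip rate [chips/s] (one of the tested values)
L_s = 127            # header sequence length [chips] (illustrative assumption)

l = ceil(tau_max * f_chip)          # Equation 24: guard chips around the header sequences
T_S = 1.0 / (10.0 * upsilon_max)    # Equation 26: repetition period of the S field [s]
B = round(T_S * f_chip / L_s)       # Equation 27: data symbols per block

print(f"l = {l} chips, T_S = {T_S * 1e3:.0f} ms, B = {B} symbols per block")
```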

#### **3.3.2 Algorithms description**

This section explains reception algorithms used by all experiments (in green in block diagram of Figure 4).

Let *r*[*n*]′ be the signal at the output of a downsampling filter during the sounding period. *r*[*n*]′ is Δ*t* seconds long with Δ*t* · *fm* samples, where *fm* is the sampling frequency at the receiver side (*fm* = 50 *ksps*).

Estimation of the frequency synchronization error (*δ f*) between transmitter and receiver is obtained by applying the algorithms explained in (Vilella et al., 2008) to a non-modulated signal which is transmitted immediately before the data signal. Then, the signal *r*[*n*]′ is downconverted to baseband by a complex exponential signal with frequency −*δ f* :

$$r[n] = r[n]' \cdot e^{-j 2\pi \frac{\delta f}{f\_m} n} \tag{28}$$

The next point to be considered is the frame, chip and sample synchronization, which is obtained from the header **C** (Alsina et al., 2009). Firstly, emitter and receiver are time synchronized by means of a GPS receiver at each side, with a time resolution of one second. Hence, the receiver knows the second *ta* in which an experiment is transmitted. Consider a synchronization window around *ta*: [*ta* − *δa*/2, *ta* + *δa*/2]. Then the frame, chip and sample synchronization point *ts* is:

$$t\_s = \frac{\operatorname\*{argmax}\_{m} \left( \|\mathbf{S}\_1\| + \|\mathbf{S}\_2\| \right)}{f\_m}, m \in [t\_a - \delta\_a/2, t\_a + \delta\_a/2] f\_m. \tag{29}$$

where

$$\mathcal{S}\_1 = \sum\_{k=0}^{L^{(s)}-1} r[m+k]\overline{s}[k] \text{ and } \mathcal{S}\_2 = \sum\_{k=0}^{L^{(s)}-1} r[m+L^{(s)}\frac{f\_m}{f\_{chip}} + k]\overline{s}[k],\tag{30}$$

where *s* is the sequence of length *L*(*s*) , interpolated by a root raised cosine filter, that forms header **C**.

It is noted that *S*<sup>1</sup> and *S*<sup>2</sup> are the correlation of the signal *r* with a replica of the header sequence *s* and with the same header sequence delayed *L*(*s*) chips, respectively. Therefore, synchronization probability is maximum for that value of *m* such that the sequences in *S*<sup>1</sup> and *S*<sup>2</sup> match in phase with header **C**.
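
A toy illustration of the synchronization metric of Equations 29 and 30 follows; one sample per chip is assumed to keep the sketch short (the real receiver works at *fm* = 50 ksps with root raised cosine interpolation), and the header sequence is a random stand-in.

```python
import numpy as np

# Locate the doubled header sequence by maximising |S1| + |S2| over a search window.
rng = np.random.default_rng(1)
L_s = 63
s = rng.choice([-1.0, 1.0], size=L_s)        # stand-in for the real header sequence
delay = 37                                   # true synchronization point (samples)
frame = np.concatenate([np.zeros(delay), s, s, np.zeros(200)])   # header C = two copies of s
r = frame + 0.3 * rng.normal(size=frame.size)                    # received samples plus noise

def metric(m):
    S1 = np.dot(r[m:m + L_s], s)             # correlation with the first copy (Equation 30)
    S2 = np.dot(r[m + L_s:m + 2 * L_s], s)   # correlation with the second, delayed copy
    return abs(S1) + abs(S2)

t_hat = max(range(100), key=metric)          # Equation 29: argmax over the search window
print(t_hat)                                 # expected: 37
```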


It can be easily deduced from Equations 29 and 30 that the greater the length of the sequence *s* (in chips and samples), the greater the likelihood of synchronization; however, the greater the length of the header. So, there is a trade-off between synchronization performance and spectral efficiency.

Once frame, chip and sample synchronization (*ts*) is achieved, a matched filter is applied, followed by a decimation process to adjust the signal to one sample per chip (see Figure 4):

$$r\_d[k] = \sum\_{l=0}^{N\_p - 1} r[\frac{t\_s}{f\_m} + k\frac{f\_m}{f\_{chip}} + l] \, p[l] \, \tag{31}$$

where *p* is a pulse of length *Np* samples, namely a root raised cosine with roll-off factor *α* = 0.65.

Channel estimation is initially obtained from the second sequence on header **C** and tracked by the field **S** as:

$$h\_l = \sum\_{k=0}^{L^{(s)}-1} r\_d[\frac{\delta t}{f\_{chip}} + k + l]\, s[k], \ l \in [-\tau\_{\max} f\_{chip}, \, \tau\_{\max} f\_{chip}], \ l \in \mathbb{Z}, \tag{32}$$

where *δt* denotes the time offset of the sequence (**C** or **S**) from which we obtain channel estimation.

The despreading of each symbol is achieved by a bank of correlators using each of the sequences *<sup>c</sup>*(*m*) that belongs to the family denoted by *<sup>Q</sup>* <sup>=</sup> {*c*(1) , *c*(2) , ··· , *<sup>c</sup>*(*M*) }. The correlation is calculated for all those *l* values such that the channel estimation exceeds a certain threshold *γ*:

$$\mathcal{U}^{(m)(l)} = \sum\_{k=0}^{L-1} r\_d[\frac{t\_d}{f\_m} + l + k] \mathcal{c}^{(m)}[k], \ m \in [1, M], \ \forall l \mid \|h\_l\| \gg \gamma,\tag{33}$$

where *td* indicates the starting point of the symbol under consideration and *L* is the length of the sequences used to spread data.

When using quadriphase spreading, the set of sequences *Q* is divided into two subsets *Qr* = {*c*(1) , ··· , *<sup>c</sup>*(*M*/2) } on one side and *Qi* <sup>=</sup> {*c*(*M*/2+1) , ··· , *<sup>c</sup>*(*M*) } on the other side. Then we compute both decision variables similarly to Equation 33.

The decision of which sequence has been transmitted is performed based on a criterion of maximum absolute value at the output of the correlators. It is only evaluated over the set or subset of appropriate sequences and for the shift *l* such that the channel estimation is maximum. We denote by *p* (*p* ∈ [1, *M*]) the index for the sequence with maximum correlator output, when not using quadriphase spreading, and *pr* (*pr* ∈ [1, *M*/2]) and *pi* (*pi* ∈ [*M*/2 + 1, *M*]) when using quadriphase spreading.
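
The despreading and sequence-decision steps just described can be sketched as follows; the sequence family, channel taps and noise level are illustrative stand-ins (random ±1 sequences rather than the Gold sequences actually used), and only the biphase case is shown.

```python
import numpy as np

# Bank of correlators (Equation 33) evaluated at the strongest channel tap,
# followed by the maximum-absolute-value decision on the sequence index.
rng = np.random.default_rng(4)
M, L = 16, 63
Qset = rng.choice([-1.0, 1.0], size=(M, L))   # stand-in for the family of spreading sequences
v = 9                                         # index of the transmitted sequence
h = np.array([0.2, 1.0, 0.4])                 # channel taps; tap 1 is the strongest
rd = np.convolve(Qset[v], h)
rd = rd + 0.3 * rng.normal(size=rd.size)      # chip-rate signal after the matched filter

l_best = int(np.argmax(np.abs(h)))            # shift where the channel estimate is maximum
U = np.array([np.dot(rd[l_best:l_best + L], Qset[m]) for m in range(M)])
p_hat = int(np.argmax(np.abs(U)))             # decided sequence index
print(p_hat)                                  # expected: 9
```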

The demodulation of bits contained in *d* (see Equation 1) is achieved using a RAKE architecture. If not using quadriphase spreading, the decision is based on the decision variable *U* computed as follows:

$$U = \sum\_{l} h\_{l}^{\*} \left( U^{(p)(l)} - \sum\_{k<l} U^{(p)(k)}\, \rho^{(p)}(l-k) \right), \ \forall l \mid \|h\_l\| > \gamma. \tag{34}$$


where *ρ*(*p*) is the circular autocorrelation of sequence *p*. If applying quadriphase spreading two decision variables (*Ur* and *Ui*) will be needed, one per each branch.

The value of *p* (or *pr* and *pi*) determines the bits used by the technique of spread spectrum, and the decision on *U* (or *Ur* and *Ui*) determines the bits used by modulation of *d*.

If not using quadriphase spreading, each bit mapped to a symbol is linked to a soft-bit *Sb* that is computed according to the following expression:

$$Sb = \frac{\left\| U^{(p)(l)} \right\|^2}{\frac{1}{M-1} \sum\_{m=1, m \neq p}^{M} \left( \left\| U^{(m)(l)} \right\| - \overline{U^{(l)}} \right)^2}, \ l \mid \forall k \neq l, \ \|h\_l\| > \|h\_k\|, \tag{35}$$

where:

$$\overline{U^{(l)}} = \frac{1}{M-1} \sum\_{m=1, m \neq p}^{M} \left\| U^{(m)(l)} \right\|. \tag{36}$$

It is noted that the term in the numerator of Equation 35 is a measure of the power of the signal after despreading, while the denominator is an estimation of the noise power, computed at the output of the correlators for those sequences which are not sent. Therefore, the soft-bit is an estimation of the signal to noise ratio after despreading. When using quadriphase spreading, soft-bits are calculated similarly to the biphase spreading option, for both detected sequences (*pr* and *pi*) and the corresponding subsets of sequences (*Qr* and *Qi*).

The noise variance is also computed at the output of the correlators except for those corresponding to the transmitted sequences. Once despreading and demodulation processes have finished (with the corresponding soft-bits) a deinterleaving and a Turbo decoding (Berrou & Glavieux, 1996) are applied. These two modules operate on a set of 972 coded bits and generate a set of 320 decoded bits. The Turbo code has a constraint length of 4 and runs 8 iterations.

If not using quadriphase spreading, the SNR estimation is obtained by averaging the soft-bit values over the symbols of the burst. Specifically:

$$SNR = \frac{1}{N\_{symbols}} \sum\_{n=0}^{N\_{symbols}-1} \frac{Sb^{(n)}}{L}. \tag{37}$$
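
The soft-bit and SNR estimates of Equations 35 to 37 can be illustrated on synthetic correlator outputs. All magnitudes below are arbitrary assumptions, and only the strongest-path outputs of a single symbol are considered.

```python
import numpy as np

# Soft-bit (Equations 35-36) and per-symbol SNR estimate (Equation 37) from the
# correlator outputs: one large output for the transmitted sequence, noise elsewhere.
rng = np.random.default_rng(2)
M, L = 64, 63
U = rng.normal(scale=1.0, size=M)     # correlator outputs for the M candidate sequences
p = 10
U[p] = 40.0                           # the transmitted sequence stands out after despreading

others = np.delete(np.abs(U), p)
U_bar = others.mean()                                      # Equation 36
Sb = np.abs(U[p]) ** 2 / np.mean((others - U_bar) ** 2)    # Equation 35
snr = Sb / L                                               # Equation 37, single-symbol case
print(f"soft-bit = {Sb:.1f}, estimated SNR = {10 * np.log10(snr):.1f} dB")
```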

#### **3.4 Outcomes**

As a summary of the characteristics of most of the experiments carried out during the Antarctic season 2006/07 we have compiled Table 1. For each configuration we give the bandwidth (*fchip*), the length of the sequence (*L*), the number of sequences (*M*), the use of quadriphase (QS), the type of modulation, the achieved bit rate, the spectral efficiency (*C*) (in parenthesis) and finally, the number of days each experiment was transmitted.

In order to summarize the outcomes obtained from the experiments carried out on the link between the SAS and Spain the plots shown in Figures 6 and 7 contain information from tens of thousands of bursts and are compared to the maximum achievable performance discussed in Section 2.4.



Table 1. Configurations of the experiments carried out on the ionospheric link between the SAS and Spain during the 2006/07 Antarctic season

The basic plot that is used to show the most important outcomes is a scatterplot (see, for instance, the two top pictures in Figure 6) showing $\widehat{BER}$ performance versus SNR estimation. The estimation of SNR at the receiver side is computed immediately after despreading by means of Equation 37. Regarding this estimation it should be noted that (*i*) the signal strength is measured by means of only the most powerful path and hence, the signal at the receiver input is actually higher in case of a multipath channel, and (*ii*) when the detector at the output of the correlators commits an error the subsequent SNR estimation is incorrect (see, for instance, Figure 6 (top), which shows that the detector systematically fails, producing $\widehat{BER}$ close to 0.5 when the SNR is approximately -8 dB).

$\widehat{BER}$ refers to the bit error rate measured on the bits contained in a burst of *Nbits* (320 uncoded bits). Therefore, each point (SNR, $\widehat{BER}$) of the scatterplot corresponds to the demodulation of a burst of *Nbits*. The thick line shown on each scatterplot is obtained by calculating the median of the $\widehat{BER}$ points in consecutive subintervals of width 0.02.
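The thick median line can be reproduced, for example, with the following sketch; the variable names and the handling of empty subintervals are our own choices.

```python
import numpy as np

def median_curve(snr, ber, width=0.02):
    """Median of the measured BER values in consecutive SNR subintervals of
    the given width (the thick line drawn on the scatterplots)."""
    snr, ber = np.asarray(snr), np.asarray(ber)
    edges = np.arange(snr.min(), snr.max() + width, width)
    centers, medians = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (snr >= lo) & (snr < hi)
        if in_bin.any():                      # skip empty subintervals
            centers.append(0.5 * (lo + hi))
            medians.append(np.median(ber[in_bin]))
    return np.array(centers), np.array(medians)
```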

The relationship between BER and SNR can be obtained by simulation, or analytically, according to the explanations in Section 2.4. Then, the probability *P* that a burst of *Nbits* contains *k* erroneous bits is:

$$P\left(\widehat{BER} = \frac{k}{N_{bits}}\right) = \binom{N_{bits}}{k}\, BER^{k} \left(1 - BER\right)^{N_{bits}-k}. \tag{38}$$

We define the interval $\left[\widehat{BER}_l, \widehat{BER}_h\right]$ which, given a BER, contains a measured $\widehat{BER}$ with a probability of 90 %. Specifically:

$$P\left(\widehat{BER} < \widehat{BER}_{l}\right) = 0.05 \quad \text{and} \quad P\left(\widehat{BER} > \widehat{BER}_{h}\right) = 0.05. \tag{39}$$
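Under the binomial model of Equation 38, the interval of Equation 39 follows directly from the binomial quantiles; a minimal sketch using SciPy (not the authors' code):

```python
from scipy.stats import binom

def ber_interval(ber, n_bits=320, tail=0.05):
    """90 % interval [BER_l, BER_h] for the measured BER of an n_bits burst,
    given the underlying channel BER (Equations 38 and 39)."""
    k_low = binom.ppf(tail, n_bits, ber)        # 5 % quantile of erroneous bits
    k_high = binom.ppf(1.0 - tail, n_bits, ber) # 95 % quantile
    return k_low / n_bits, k_high / n_bits
```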


These scatterplots include $\widehat{BER}_l = f(SNR)$ and $\widehat{BER}_h = f(SNR)$ curves for the analogous configuration. All the points should be found in 90 % of cases in the space between these curves if the tests were performed in a laboratory in the presence of only additive white Gaussian noise. However, as shown in Figures 6 and 7, it should be noted that in all scatterplots points are located outside the space bounded by the curves $\widehat{BER}_l$ and $\widehat{BER}_h$ and shifted about 2 dB to higher SNRs. This shift is due to several causes: (*i*) interference and non-Gaussian noise, (*ii*) channel effects such as multipath, Doppler and fading, and (*iii*) other impairments. The optimization of testbench algorithms could mitigate this loss of performance, but in any case we must take this shift into account when performing the design from a theoretical point of view.

Each scatterplot is accompanied by two histograms which derive from it. The first of these histograms shows, for each SNR, the percentage of receptions with $\widehat{BER} = 0$ out of the total number of receptions with $\widehat{BER} = 0$. It is noted that the higher the SNR the higher the probability of demodulating with $\widehat{BER} = 0$, but simultaneously that SNR is less likely. This first histogram shows, therefore, the values of SNR at which the experiment is more successful. The second histogram shows, for each SNR, the percentage of receptions with $\widehat{BER} = 0$ out of the total number of receptions at that SNR. This figure allows us to evaluate at which SNR the probability of receiving a burst without errors is above a given value.
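Both histograms can be derived from the scatterplot data as in the sketch below; the 1 dB SNR bin width is an illustrative choice, not a value taken from the chapter.

```python
import numpy as np

def error_free_histograms(snr, ber, bin_width=1.0):
    """Per-SNR-bin percentages of error-free bursts:
    h1 - relative to all error-free bursts (first histogram)
    h2 - relative to all bursts received in that bin (second histogram)."""
    snr, ber = np.asarray(snr), np.asarray(ber)
    edges = np.arange(snr.min(), snr.max() + bin_width, bin_width)
    error_free = ber == 0
    total_ok = error_free.sum()
    h1, h2 = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (snr >= lo) & (snr < hi)
        ok = (in_bin & error_free).sum()
        h1.append(100.0 * ok / total_ok if total_ok else 0.0)
        h2.append(100.0 * ok / in_bin.sum() if in_bin.sum() else 0.0)
    return edges, np.array(h1), np.array(h2)
```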

The results are discussed in terms of comparison with expected theoretical values. Specifically, in Figure 6 a scatterplot shows the effect of the variation in bandwidth and in Figure 7 the use of modulation is studied. Furthermore, in Figure 8 the frequencies with the best percentage of receptions of bursts with $\widehat{BER} = 0$ per hour are shown, and in Figure 9 the hours with the best percentage of receptions of bursts with $\widehat{BER} = 0$ at each frequency are also shown.

#### **3.4.1 Bandwidth**

Figure 6 compares the use of configuration (L, M, QS, Mod): (63, 64, yes, QPSK) with coded bits using a bandwidth of 3125 Hz (left column) and the same configuration using a bandwidth of 6250 Hz (right column). It is observed that the benefits obtained are slightly better for the high bandwidth: for instance, for SNR = -6 dB about 25 % of the receptions have $\widehat{BER} = 0$ when *fchip* = 3125 Hz, whereas this amount is over 40 % when *fchip* = 6250 Hz (the percentages are also better in the second case for higher SNRs: -5 dB, -4 dB, -3 dB, etc.). This fact may be partly explained by a better performance of the RAKE receiver when working with higher multipath resolution.

#### **3.4.2 Modulation**

Figure 7 compares the application of QPSK modulation with a configuration with no modulation based on a system with (L, M, QS): (63, 64, yes) with coded bits and a bandwidth of 3125 Hz. The curves $\widehat{BER}_l$ and $\widehat{BER}_h$ indicate that the theoretical maximum benefits are almost identical (slightly better when not using any modulation). The histograms confirm this estimation, where small deviations of about 5 % to 10 % in favour of the no-modulation option are observed.

It is worth noting that when using modulation the channel must be estimated and the use of a RAKE module is advised. Therefore, computational complexity is slightly increased while spectral efficiency improves without additional energy cost. In this context, we highlight that the degradation of about 2 dB observed between theory and experimental outcomes appears in both cases, modulation and no modulation. Therefore, this malfunction can be attributed to the detection algorithms rather than to the channel estimator and combiner algorithms.

#### **3.4.3 Best frequencies**


Figure 8 shows the frequencies with the best $\widehat{BER} = 0$ percentage, based on configuration (L, M, QS, Mod): (63, 64, yes, QPSK) with coded bits and a bandwidth of 3125 Hz. It should be noted that this configuration experimentally obtained $\widehat{BER} = 0$ for SNR above -6 dB with probability greater than 80 % (see Figure 7). If we compare this figure with the frequency availability results presented in (Vilella et al., 2008), which indicate the frequency with highest availability at a given SNR in a 3 kHz bandwidth, we can highlight that: (*a*) The distribution of frequencies with best availability rates is very similar in both studies: above 15 MHz between 18 and 22 UTC, from 9 MHz to 11 MHz between 23 and 6 UTC, and again about 15 MHz between 7 and 11 UTC. Therefore, there is a very good correspondence between channel sounding results and the analysis of data transmissions. (*b*) If we focus on specific values of percentages, we observe that (*b*.*i*) there are a set of hours, mostly belonging to the evening and morning (20, 21, 23, 2, 5, 6, 7, 8, 10 UTC), when the probability of overcoming -3 dB (measured by channel sounding) coincides, with high accuracy, with the probability of obtaining $\widehat{BER} = 0$ (measured by data analysis). (*b*.*ii*) There are a number of hours at night (0, 1, 3, 4 UTC) when the probability of obtaining $\widehat{BER} = 0$ is approximately 45 % below the prediction made by narrow-band sounding. (*b*.*iii*) Finally, a set of hours show mixed results in both measures (18, 22, 9 UTC). 18 and 9 UTC are noteworthy because the channel study shows very low availability (less than 5 %), whereas the data analysis gets $\widehat{BER} = 0$ with rates around 20 %.

One possible explanation for these results could be derived from the following two arguments: (*i*) SNR measurements conducted by channel sounding consider as noise everything that is not the transmitted signal (Gaussian noise and interference). During the evening (18 to 22 UTC) and morning (07 to 11 UTC) the weight of the interference power with respect to the total noise power is lower than during full night (23 to 06 UTC). It is precisely in the evening and morning when the two measurements (channel and data) are more similar. From this statement we can conclude that, rather than Gaussian noise, interference is the main factor in signal degradation. (*ii*) At full night and low frequencies (6 MHz to 10 MHz) channel time dispersion is greater than during the evening and morning at high frequencies (14 MHz to 16 MHz) and, therefore, it is more difficult to obtain good performance for the same SNR (Vilella et al., 2008).

#### **3.4.4 Best hours**

Figure 9 shows the hours with the highest percentage of $\widehat{BER} = 0$ at each frequency, based on the configuration (L, M, QS, Mod): (63, 64, yes, QPSK) with coded bits and a bandwidth of 3125 Hz. This plot is especially useful when trying to use a directive antenna tuned to a particular frequency. It is found that the best results are achieved at high frequencies (around 16 MHz) in the early hours of the night (21 UTC).


Fig. 6. Comparison of bandwidths (3125 Hz and 6250 Hz): (*i*) scatterplot of $\widehat{BER}$ versus SNR estimation before despreading (top row); (*ii*) histogram of the percentage of receptions with $\widehat{BER} = 0$ relative to the total receptions with $\widehat{BER} = 0$ (middle row); (*iii*) histogram of the percentage of receptions with $\widehat{BER} = 0$ relative to the total receptions at that SNR (bottom row). The curves $\widehat{BER}_l$ and $\widehat{BER}_h$ are included on the scatterplots.


Fig. 7. Comparison of modulation (none and QPSK): (*i*) scatterplot of $\widehat{BER}$ versus SNR estimation before despreading (top row); (*ii*) histogram of the percentage of receptions with $\widehat{BER} = 0$ relative to the total receptions with $\widehat{BER} = 0$ (middle row); (*iii*) histogram of the percentage of receptions with $\widehat{BER} = 0$ relative to the total measurements at that SNR (bottom row). The curves $\widehat{BER}_l$ and $\widehat{BER}_h$ are included on the scatterplots.


Fig. 8. Frequencies [MHz] with the highest percentage of measurements with $\widehat{BER} = 0$ per hour. The plot is based on the following configuration (L, M, QS, Mod): (63, 64, yes, QPSK) with channel coding.

Fig. 9. Hours [UTC] with the highest percentage of measurements with $\widehat{BER} = 0$ at each carrier frequency. The plot is based on the following configuration (L, M, QS, Mod): (63, 64, yes, QPSK) with channel coding.

#### **4. Conclusions**


Throughout this chapter we have studied, both theoretically and experimentally, the feasibility of low rate data transmission over a very long ionospheric link. The ionosphere may be used as a communications channel available from anywhere on the Earth. Hence it can be adopted as a solution to cope with deficient or non-existent satellite coverage range. We have focused our research work on the link between the Spanish Antarctic Base Juan Carlos I and Spain. It has a length of approximately 12700 km along the surface of the Earth and passes over 4 continents in a straight line. The system is currently applied to the transmission of data of a geomagnetic sensor that generates a maximum of 5120 bits per day. The special conditions found in Antarctica have impaired several aspects of the transmission. To conserve energy, maximum transmit power is set at 250 watts. In addition, to prevent further environmental impact, a non directive antenna (a monopole) requiring minimal infrastructure and installation was chosen to be placed at the SAS.

We have reviewed current HF communication standards and noted that none of them are intended for links with negative SNR. Thus we propose a novel system to be used on the physical layer of an ionospheric link based on a Direct Sequence Spread Spectrum technique. The determining factors for the use of this technique were its robustness to multipath and narrowband interference, its ability to transmit with low power spectral density, and its flexibility in terms of spectral efficiency in scenarios with negative SNR.

We propose a mode of transmission outside of current ITU standards, designed to cause minimal interference to primary and secondary services defined by the official agencies, able to operate in the presence of high values of noise power and interference, and robust to time and frequency channel dispersion. Hence, we suggest a transmission system based on sporadic short bursts of low power spectral density, focusing on increasing spectral efficiency and energy savings at the expense of a higher complexity receiver.

Several variants of DS-SS have been evaluated: signaling waveform, quadrature spreading and the impact of the modulation (BPSK and QPSK), all of them from the point of view of BER versus SNR per bit and spectral efficiency. We then conclude that:


• The DS-SS M-ary signaling technique allows an increase in spectral efficiency. The higher the number of sequences (*M*) the lower the SNR per bit required to achieve a given BER. In practice, if we use Gold spreading sequences, the maximum value of *M* is limited by the length of the spreading sequences (*M* ∼ *L*). However, for a given bit-rate, if we increase *M*, the computational complexity at the receiver side increases.

• The combined use of modulation (BPSK and QPSK) and DS-SS M-ary signaling reduces the minimum required SNR per bit to achieve a certain BER. A greater reduction can be achieved with QPSK than with BPSK. However, modulation techniques require channel estimation (except for differential modulation) and, optionally, a RAKE combiner.

• If we add quadriphase spreading to DS-SS M-ary signaling (without modulation), gain can be doubled while maintaining BER and spectral efficiency performance. When using modulation, the use of quadriphase spreading results in energy inefficiency.

We assessed the suitability of studying a channel code based on the use of a Turbo code (rate = 1/3), with inner interleaver, that converts a burst of 320 bits into 972 coded bits. Simulations (not shown here for reasons of brevity) demonstrate that coding gain is only achieved for BER values below 10<sup>−4</sup>. The reasons for using coding techniques will therefore depend, among other factors, on the size of the burst of bits and on the desired probability of an error-free burst.

We have defined a testbench to experimentally evaluate various configurations and to compare experiment outcomes with theoretical predictions. The testbench includes: (*i*) the definition of a header adapted to time and frequency channel dispersion to perform synchronization and channel estimation, (*ii*) the definition of a data frame, (*iii*) the design of a set of algorithms: encoding/decoding, synchronization, spreading/despreading, RAKE combiner, demodulator and SNR estimation.

The outcomes gathered from this testbench have shown that, for instance, with a SNR of -5 dB, this ionospheric data transmitter is able to transmit data (6 kHz and 320 bits burst size) with a rate of 397 bits per second (error free) with a successful probability of approximately 95 % (see Table 1 and Figure 6). It is noted that this rate would suffice to send the amount of data required by the application (5120 bits per hour), with sporadic frequency transmissions.

Experimental tests have been performed for different configurations and at different bandwidths in a frequency range between 8 MHz and 16 MHz and a time interval between 18 and 12 UTC. From the experimental results and comparison with theoretical predictions in terms of BER versus SNR, the following conclusions can be drawn:

• There is a loss of about 2 dB of SNR between the theoretical and experimental BER. This loss may be attributable to several factors: non-Gaussian noise, interference, channel dispersion, and so on.

• For a given SNR, the probability of receiving a burst without error is slightly higher for higher bandwidths. This improvement may be due to better performance of the RAKE combiner due to higher multipath resolution (this result should be confirmed in later experiments).

• Experimental results confirm that for a given SNR at the receiver, the use of modulation added to signaling techniques (thus increasing the bitrate without increasing the transmitted power) does not affect the BER performance.

• Regarding the frequencies that are more likely to transmit error-free bursts, we observe that they correspond with great accuracy to those with highest availability, measured by channel studies (Vilella et al., 2008): above 15 MHz in the evening (18 to 22 UTC) and morning (7 to 11 UTC), and below 11 MHz in the early morning (23 to 6 UTC). Regarding specific percentages of bursts without errors, it appears that they are very similar to the equivalent measurements done by channel studies during the evening and morning, but are worse at night and early morning. This is mainly attributed to the increased amount of interference at night.
According to experimental results we make the following recommendations: (*i*) integrate the loss of 2 dB of SNR into theoretical calculations, (*ii*) prioritize larger bandwidths, use modulation (QPSK rather than BPSK) and use coding techniques, (*iii*) use modulation plus M-ary signaling without quadriphase spreading, (*iv*) optimally attempt to establish the data link at 21 UTC (at 16 MHz), or from 23 to 6 UTC (within the range 9-11 MHz).

#### **5. Acknowledgments**

This work has been funded by the Spanish Government under the projects REN2003-08376-C02-02, CGL2006-12437-C02-01/ANT, CTM2008-03236-E/ANT, CTM2009-13843-C02-02 and CTM2010-21312-C03-03. La Salle thanks the *Comissionat per a Universitats i Recerca del DIUE de la Generalitat de Catalunya* for their support under the grant 2009SGR459. We must also acknowledge the support of the scientists of the Observatory de l'Ebre throughout the research work.

#### **6. References**

Alsina, R. M., Bergada, P., Socoró, J. C. & Deumal, M. (2009). Multiresolutive Acquisition Technique for DS-SS Long-Haul HF Data Link, *Proceedings of the 11th Conference on Ionospheric Radio Systems and Techniques*, IET, Edinburgh, United Kingdom.

Bergada, P., Deumal, M., Vilella, C., Regué, J. R., Altadill, D. & Marsal, S. (2009). Remote Sensing and Skywave Digital Communication from Antarctica, *Sensors* 9(12): 10136–10157.

Berrou, C. & Glavieux, A. (1996). Near optimum error correcting coding and decoding: Turbo-codes, *IEEE Transactions on Communications* 44(10): 1261–1271.

Deumal, M., Vilella, C., Socoró, J. C., Alsina, R. M. & Pijoan, J. L. (2006). A DS-SS Signaling Based System Proposal for Low SNR HF Digital Communications, *Proceedings of the 10th Conference on Ionospheric Radio Systems and Techniques*, IET, London, United Kingdom.

Enge, P. K. & Sarwate, D. V. (1987). Spread-spectrum multiple-access performance of orthogonal codes: Linear receivers, *IEEE Transactions on Communications* 35(12): 1309–1319.

IEEE802.11 (2007). *Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications (2007 Revision)*, doi:10.1109/IEEESTD.2007.373646.

MIL-STD-188-110A (1991). *Interoperability and Performance Standards for Data Modems*, U.S. Department of Defense.

MIL-STD-188-110B (2000). *Interoperability and Performance Standards for Data Modems*, U.S. Department of Defense.

MIL-STD-188-141A (1991). *Interoperability and Performance Standards for Medium and High Frequency Radio Equipment*, U.S. Department of Defense.

Milstein, L. B. (1988). Interference rejection techniques in spread spectrum communications, *IEEE Transactions on Communications* 76(6): 657–671.

NTIA (1998). High frequency radio automatic link establishment (ALE) application handbook, *NTIA handbook*.

Peterson, R. L., Ziemer, R. E. & Borth, D. E. (1995). *Introduction to Spread Spectrum Communications*, Prentice Hall.

Pickholtz, R. L., Schilling, D. L. & Milstein, L. B. (1982). Theory of spread-spectrum communications - a tutorial, *IEEE Transactions on Communications* 30(5): 855–884.

Proakis, J. G. (1995). *Digital Communications*, McGraw-Hill.

Schilling, D. L., Milstein, L. B., Pickholtz, R. L. & Brown, R. W. (1980). Optimization of the processing gain of an M-ary direct sequence spread spectrum communication system, *IEEE Transactions on Communications* 28(8): 1389–1398.

Solé, J. G., Alberca, L. F. & Altadill, D. (2006). Ionospheric Station at the Spanish Antarctic Base: Preliminary Results (in Spanish), *Proceedings of the 5th Asamblea Hispano-Portuguesa de Geodesia y Geofísica*, Sevilla, Spain.

STANAG-4406 (1999). *Military Message Handling System (MMHS)*, North Atlantic Treaty Organization.

STANAG-5066 (2000). *Profile for High Frequency (HF) Radio Data Communications*, North Atlantic Treaty Organization.

Third Generation Partnership Project (1999). *Physical layer - General description Release'99*, 3GPP TS 25.201, Technical Specification Group Radio Access Network.

Vilella, C., Miralles, D., Altadill, D., Costa, F., Solé, J. G., Torta, J. M. & Pijoan, J. L. (2009). Vertical and Oblique Ionospheric Soundings over a Very Long Multihop HF Radio Link from Polar to Midlatitudes: Results and Relationships, *Radio Sci.* 44, doi:10.1029/2008RS004001.

Vilella, C., Miralles, D. & Pijoan, J. L. (2008). An Antarctica-to-Spain HF ionospheric radio link: Sounding results, *Radio Sci.* 43, doi:10.1029/2007RS003812.

Viterbi, A. J. (1995). *CDMA: Principles of Spread Spectrum Communication*, Prentice Hall PTR.

Zuccheretti, E., Tutone, G., Sciacca, U., Bianchi, C. & Arokiasamy, B. (2003). Vertical and oblique ionospheric soundings over a very long multihop HF radio link from polar to midlatitudes: Results and relationships, *Ann. Geophys.* 46: 647–659.



## **A Contribution to the Reduction of Radiometric Miscalibration of Pushbroom Sensors**

Christian Rogaß\* et al.\*\*

*Helmholtz Centre Potsdam, GFZ German Research Centre for Geosciences, Germany* 

## **1. Introduction**


Imaging spectroscopy is used for a variety of applications such as the identification of surface cover materials and their spatiotemporal monitoring. Contrary to multispectral instruments, more spectral information can be incorporated in the differentiation of materials. New generations of sensors are based on the pushbroom technology, where a linear array of sensors perpendicular to the flight direction scans the full width of the collected data in parallel as the platform moves. Contrary to whiskbroom scanners that collect data one pixel at a time, pushbroom systems can simply gather more light as they sense a particular area for a longer time. This leads to a better Signal-to-Noise Ratio (SNR). In addition, the two dimensional photo detector array in pushbroom systems may enable different readout configuration settings, such as spatial and/or spectral binning, allowing a better control of the SNR. It follows from this that low reflective materials can potentially be sensed as well as high reflective materials without saturating the detector elements. However, the use of detector arrays requires a precise radiometric calibration as different detectors might have different physical characteristics. Any miscalibration results in visually perceptible striping, and uncertainties increase in subsequent analyses such as classification and segmentation (Datt et al., 2003). There are various reasons for miscalibration, for instance temporal fluctuations of the sensor temperature, deprecated calibration coefficients or uncertainties in the modelling of the calibration coefficients. In addition, ageing and environmental stresses highly affect the mechanical and optical components of a sensor system; its reliability therefore cannot be assumed to grant unchanged calibration accuracies for the entire mission life span.

Radiometric calibration and the estimation of the calibration coefficients can be considered as the assignment of known incident at-sensor radiance to measured digital numbers (DN). For this, physically known, different reflective targets are artificially illuminated by electromagnetic radiation of a specific spectrum and the reflected radiation is then recorded by the sensor that consists of a number of detectors. Then, the response of each detector is modelled with respect to the incident radiation, the reflective target and the defined illumination of the target. The mathematical modelling is often performed by applying a linear least squares regression. At the same time, differences between detectors are balanced.

\*\* Daniel Spengler1, Mathias Bochow1, Karl Segl1, Angela Lausch2, Daniel Doktor2, Sigrid Roessner1, Robert Behling1, Hans-Ulrich Wetzel1, Katia Urata1, Andreas Hueni3 and Hermann Kaufmann1

<sup>\*</sup> Corresponding Author

*<sup>1</sup>Helmholtz Centre Potsdam, GFZ German Research Centre for Geosciences, Germany*

*<sup>2</sup>Helmholtz Centre for Environmental Research, UFZ Germany*

*<sup>3</sup>Remote Sensing Laboratories, University of Zurich, Switzerland*


Consequently, calibration coefficients are obtained – shortly named as offset and slope. Offsets incorporate the unwanted detector-dependent dark current that is caused by thermally generated electrons (Oppelt and Mauser, 2007). In turn, slopes directly relate radiance to DN. Offsets are often measured before any image acquisition, but may change due to instabilities in the cooling system. Mechanical stress or uncertainties in foregoing laboratory calibration can cause changes in the physical characteristics of detectors as well. In order to support laboratory calibration, in-flight calibrations complement the calibration procedure, verifying the results obtained in the laboratory and, in addition, allowing the measurement of parameters that are only obtainable during flight (i.e. stability measurements, solar calibration, etc).

For this, physically known targets have to be sensed and incident illumination should be measured during the overflight. Uncertainties in the measurement of hemispheric incident solar radiation and in the incorporation of the illumination-, sensing- and wavelength-dependent response of the imaged calibration targets to incident light then aggravate this type of calibration and may also lead to miscalibrations or visually perceptible image stripes. Hence, any striping reduction or retrieval of calibration coefficients should reduce stripes while at the same time preserving the spectral characteristics of the imaged surface materials.
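As a small illustration of the least-squares modelling mentioned above, a per-detector linear fit of recorded DN against known at-sensor radiance might look as follows; the array layout and names are assumptions made for this sketch and do not refer to any particular instrument's calibration software.

```python
import numpy as np

def fit_calibration(radiance, dn):
    """Fit DN ~= offset + slope * L independently for every detector.

    radiance : (K,) known at-sensor radiances of K calibration measurements
    dn       : (K, n_detectors) digital numbers recorded for those measurements
    Returns (offset, slope), each of shape (n_detectors,).
    """
    A = np.column_stack([np.ones_like(radiance), radiance])  # design matrix
    coeffs, *_ = np.linalg.lstsq(A, dn, rcond=None)           # (2, n_detectors)
    return coeffs[0], coeffs[1]
```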

In the literature, specific approaches for destriping of slope stripes, offset stripes or both exist, and these are primarily based on methods such as interpolation (Oliveira and Gomes, 2010; Tsai and Chen, 2008), local or global image moments (Datt et al., 2003; Cavalli et al., 2008; Le Maire et al., 2008; Liu et al., 2009), filtering (Garcia and Moreno, 2004; Shen et al., 2008; Simpson et al., 1995, Simpson et al., 1998) or complex image statistics of log transformed slopes (Bouali and Ladjal, 2010; Carfantan and Idier, 2010; Gomez-Chova et al., 2008). Most methods replace original, miscalibrated radiances. This should be only applied if information is completely missing or erroneous.

In the following, a framework that efficiently reduces linear as well as nonlinear miscalibration is reviewed concurrently preserving the spectral characteristics of sensed surface cover materials. This framework, originally proposed by Rogass et al. (2011) and named as Reduction of Miscalibration Effects (ROME), consists of a linear and a nonlinear slope reduction and an offset reduction that are consecutively performed and does not require a priori information or scene and sensor specific parameterisation.

Before any radiometric miscalibration reduction is applied, image gradients that are not orthogonal to the image are excluded if they do not represent the image content. Here, Minkowski metrics, gradient operators and edge extraction algorithms are combined to exclude discontinuities if they do not dominate the image content (Canny, 1986; Haralick et al., 1987; Rogass et al., 2009). The linear and the nonlinear slope reduction of ROME are performed for each detector element and band without any information from other detector elements. The offset reduction of ROME considers adjacent image columns and refers to a predefined image column (first column per default) that is assumed to be the reference. Specific image quality metrics, such as the change in SNR (Gao, 1993; Atkinson et al., 2005), were used to evaluate the necessity of such preceding reduction.
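To make the idea of an offset reduction that compares adjacent image columns more concrete, the following toy sketch shifts every column of a band so that its median difference to the left neighbour vanishes, keeping the first column as reference. It is only a much-simplified analogue and not the ROME offset reduction itself.

```python
import numpy as np

def toy_offset_destripe(band):
    """Toy column-wise offset correction for one band (reference: first column)."""
    out = band.astype(float).copy()
    for c in range(1, out.shape[1]):
        # Robust estimate of the residual offset between adjacent columns
        delta = np.median(out[:, c] - out[:, c - 1])
        out[:, c] -= delta
    return out
```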

After these preceding reductions the image is radiometrically band wise rescaled to recover the radiometric scale. This is necessary since uncertainties in the estimation of parameters (e.g., detector resolution in the linear slope reduction) and in the incorporation of miscalibrated reference areas (e.g., potential miscalibration of the first image column as reference for the offset reduction) remain. The rescaling of ROME assumes that image columns that were less corrected than others can be used as reference for the whole image. After all reductions a detrending is performed reducing across track brightness gradients caused by reduction related frequency undershoots of low SNR bands. In this work an extension of ROME's detrend approach is presented evidencing an effective reduction of undershoots when compared to the original approach.
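The across-track detrending can likewise be caricatured by removing a low-order polynomial trend from the column means of a band while keeping the overall band mean; ROME's actual detrend differs, so this is only a rough analogue under our own assumptions.

```python
import numpy as np

def toy_detrend_across_track(band, degree=2):
    """Remove a low-order across-track brightness trend from one band."""
    cols = np.arange(band.shape[1])
    col_means = band.mean(axis=0)
    trend = np.polyval(np.polyfit(cols, col_means, degree), cols)
    return band - trend[np.newaxis, :] + col_means.mean()
```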

In order to test the robustness of the algorithm due to different types of miscalibration, four grey valued images as well as 12 multispectral and hyperspectral scenes were considered. The grey valued images were randomly striped by linearly varying slope and/or offset. One HyMAP scene was three times differently and artificially striped by offset stripes. The simulated EnMAP scene was not corrected for nonlinear effects and, hence, the nonlinear correction facilities were tested. Miscalibrated scenes acquired by AISA DUAL (3 scenes), Hyperion (2 scenes), ASTER (1 scene), CHRIS/Proba (1 scene) and APEX (1 scene) were additionally processed.
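The artificial striping used in such robustness tests can be emulated along the following lines, applying column-wise multiplicative (slope) and additive (offset) Gaussian white noise; the noise levels are illustrative and not the values used in Rogass et al. (2011).

```python
import numpy as np

def add_random_stripes(image, slope_std=0.05, offset_std=5.0, seed=None):
    """Degrade an image with random per-column slope and offset errors."""
    rng = np.random.default_rng(seed)
    n_cols = image.shape[1]
    slopes = 1.0 + slope_std * rng.standard_normal(n_cols)
    offsets = offset_std * rng.standard_normal(n_cols)
    return image * slopes[np.newaxis, :] + offsets[np.newaxis, :]
```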

## **2. Materials**


In Rogass et al. (2011) four grey valued images (Fig. 1) from the image database of the Signal and Image Processing Institute (SIPI) of the University of California (Weber, 1997), 512 × 512 pixels in size, and six hyperspectral scenes (3 AISA DUAL, 2 Hyperion and 1 EnMAP) were selected to test and to evaluate the performance of the proposed ROME framework. The grey valued samples as well as the EnMAP scene were considered as noise free. However, the 'Lenna' image (Fig. 1a) and the 'Mandrill' image (Fig. 1b) are excluded from further considerations due to their unique spectral and spatial properties, as described in detail in Rogass et al. (2011).

Fig. 1. Grey scaled image samples from the USC SIPI image data base considered in the following as a) 'Lenna', b) 'Mandrill', c) 'Aerial' and d) 'Sailboat on lake'

To simulate different types of miscalibrations and to evaluate their impact on the proposed work, the two grey valued images (Fig. 1 c and d) and the EnMAP scene were artificially degraded. The grey valued images were randomly degraded by applying 800 different sets of multiplicative (slope) and/or additive (offset) Gaussian white noise (Box and Muller, 1958). These 800 noise matrices were transformed to always provide a mean equal to zero and standard deviations ranging from 0.0001 to 10000 for the multiplicative parts and from -10000 to 10000 for the additive parts. Such high noise levels were chosen to also simulate low-SNR scenarios that are noise dominated. More details on the noise matrices and the hyperspectral scenes are given in Rogass et al. (2011).

In this work additional scenes from APEX, ASTER and CHRIS/Proba were inspected, destriped and evaluated. At the same time, one HyMAP scene was selected and artificially and additively degraded by Gaussian white noise in three different ways to extend the testing of correction facilities for airborne sensors. After degrading, three mean SNR levels of 7.6, 76 and 760 were simulated.
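
The kind of additive striping used here can be reproduced with a few lines of code. The sketch below is only an illustration of such a degradation, not the procedure of Rogass et al. (2011): the function name, the use of one Gaussian offset per detector column and the SNR definition (band mean divided by the standard deviation of the added offsets) are assumptions made for this example.

```python
import numpy as np

def add_offset_stripes(band, target_snr, rng=None):
    """Additively stripe a 2-D band (rows = along track, columns = detectors)
    with one random, column-constant offset per detector element.

    The offsets are zero-mean Gaussian and are scaled so that
    mean(band) / std(offsets) equals the requested SNR.
    """
    rng = np.random.default_rng(rng)
    n_rows, n_cols = band.shape
    offsets = rng.standard_normal(n_cols)      # one offset per image column
    offsets -= offsets.mean()                  # enforce zero mean
    offsets *= (band.mean() / target_snr) / offsets.std()
    return band + offsets[np.newaxis, :], offsets

# Example: degrade a flat synthetic band to a mean SNR of 7.6
clean = np.full((100, 80), 500.0)
striped, true_offsets = add_offset_stripes(clean, target_snr=7.6, rng=0)
print(round(clean.mean() / striped.std(axis=1).mean(), 1))
```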

The HyMAP sensor is a hyperspectral whiskbroom airborne sensor that consists of one detector column and, hence, offset miscalibrations cannot be perceived as image stripes since each image column has the same offset. Therefore, HyMAP image acquisitions can be used to test correction approaches for pushbroom sensors.

In the following, an image column or across track is considered as x and an image row or along track is considered as y.

## **3. Methods**

## **3.1 Calibration basics**

Radiometric calibrations are often performed in laboratory and basically assign known incident at-sensor radiance to measured digital number (DN). The association is usually realised by a linear least squares regression that minimises the difference between modelled at-sensor radiance and known at-sensor radiance. The regression coefficients are also used in the reverse process to assign measured DN to at-sensor radiance that is considered as radiometric scaling (Chander et al., 2009).

However, uncertainties in the laboratory measurements, in the mathematical modelling and in the incorporation of temporal changes of the detector characteristics lead to miscalibrations and, hence, to visually perceptible image stripes in y-direction. In the following it will be exemplarily shown how to suppress miscalibrations in accordance with the ROME framework. This framework consists of multiple steps that are consecutively processed (Fig. 2).

Pushbroom sensors have detector arrays. Each detector pixel of the array has different physical characteristics. It follows from this that an uncalibrated hyperspectral image is striped. The radiometric calibration and the reverse process - radiometric scaling - aim at the assignment of incident radiance to DN and vice versa. Usually, radiometric calibration can be performed in-flight, vicariously (Biggar et al., 2003; Bruegge et al., 2007), over a flat field (Bindschadler and Choi, 2003) or in laboratory.


Fig. 2. Workflow of ROME destriping per band


In the process of calibration each detector of the detector array must be considered individually. Known incident radiation reaches a detector pixel and, once the incident photons have sufficient energy to excite electrons into a certain energy level, electron-hole pairs are generated – a phenomenon that is known as the photoelectric effect. These free charges are then transferred and read out through the sensor electronics. Dispersive optics placed in front of the sensor disperse the incident radiation into different wavelengths that are then projected onto the rows of the detector array. The physical response, considered as signal S in electrons, of one detector element of a pushbroom sensor to incident radiation L can be approximated by a nonlinear relation (Dell`Endice, 2008; Dell`Endice et al., 2009):

$$S(e^-) \propto \frac{F \cdot L \cdot A \cdot \tan^2\left(\frac{FOV}{2}\right) \cdot \tau \cdot T \cdot \lambda \cdot \eta \cdot SSI}{h \cdot c \cdot n\_{e^-}^2} \tag{1}$$

where L is the at-sensor radiance, A is the optical aperture of the sensing instrument, FOV is the field of view, T is the integration time, SSI is the Spectral Sampling Interval with respect to the Full Width at Half Maximum, h is the Planck constant, c is the speed of light, ne− is the number of collected electrons, τ is the optical transmission, λ is the centre wavelength, η is the quantum efficiency and F is the filter efficiency. This can then be related to the recorded digital number DN as follows:

$$\text{DN} = \frac{\text{(S+N)} \cdot \text{DN}\_{\text{max}}}{\text{FWC}} + \text{DN}\_0 \quad \land \quad \text{S} \leq \text{FWC} \tag{2}$$

where N is a noise term incorporating shot noise, read-out noise and dark noise, DNmax is the radiometric resolution, FWC is the Full Well Capacity that defines the detector saturation and DN0 is the dark current. To enable a mathematical modelling relating incident radiation and measured DN, either the illumination is changed in a defined way, the integration time is changed, or targets of different reflective properties are sensed. The association of at-sensor radiance L to DN is broadly considered as radiometric calibration or, reversely, as radiometric scaling (Chander et al., 2009). To reduce the influence of noise, a specific number of measurements is required. Then, the association can be realised, e.g., by a least squares polynomial fit that minimises the differences between modelled and measured at-sensor radiance (Barducci et al., 2004; Xiong and Barnes, 2006). The minimisation of the merit function then gives the transformation coefficients for the association. This can be achieved by applying the following model:

$$\chi^2 = \sum\_{j=1}^{N\_{targets}} \left[ L\_j - \left( c\_0 + \sum\_{i=1}^{M} c\_i \cdot DN\_j^i \right) \right]^2 \quad \land \quad M \ge 1;\ N\_{targets} \ge 2 \tag{3}$$

where Ntargets denotes the number of calibration targets, c0 is the offset regarding the dark current, and M is the polynomial degree. The more the detector response differs from a linear response, the more necessary it is to use a polynomial degree higher than one. Mostly, detector responses can be mathematically modelled. Potential changes in the characteristics of detectors would, however, require frequent calibrations that are not practicable.
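
Since Eq. 3 is an ordinary polynomial least squares problem, the coefficients c0..cM can be estimated with any standard fitting routine. The following minimal sketch is only meant to illustrate the fit; the function name and the example numbers are invented for this illustration and are not taken from the chapter.

```python
import numpy as np

def fit_calibration(dn, radiance, degree=1):
    """Least squares fit of radiance = c0 + c1*DN + ... + cM*DN**M (cf. Eq. 3).

    dn, radiance : 1-D arrays with the measured digital numbers and the known
                   at-sensor radiances of the calibration targets.
    Returns the coefficients ordered as (c0, c1, ..., cM).
    """
    # np.polyfit returns the highest power first, so reverse the order.
    return np.polyfit(dn, radiance, deg=degree)[::-1]

# Hypothetical example with three calibration targets and a linear model (M = 1)
dn = np.array([120.0, 480.0, 950.0])
radiance = np.array([10.0, 40.0, 79.0])
c = fit_calibration(dn, radiance)
print(c)                  # [c0, c1]
print(c[0] + c[1] * dn)   # modelled radiances for the three targets
```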


However, if along track stripes are then perceptible in radiometrically scaled images, miscalibration is indicated. In that case, it is necessary to determine the type of miscalibration – multiplicative or additive, linear or nonlinear. In ROME this is performed by comparing the output SNR to the input SNR of the specific processing step (Brunn et al., 2003; Gao, 1993). If the SNR is increased, a successful operation is indicated and the step is finally applied. In the following the stripe types are distinguished with respect to equation 3 – additive c0 and multiplicative c1..M miscalibration and reduction. In any case the reduction of miscalibration should be applied before rectification.

## **3.2 Edge exclusion**

Discontinuities such as impulse noise, edges or translucent objects like tree vegetation should be excluded from further processing unless they dominate the spatial distribution of the image. This is relevant for approaches that aim at the reduction of miscalibration by relying on statistical analyses of spatial and spectral differences in homogeneous regions. Edges can generally be excluded if they do not coincide with the along track or across track direction. Since uncertainties in the impact of edges on the reduction process remain, edges should be excluded if they do not dominate the image content (compare Fig. 1b). In ROME this is performed by a combination of edge detection algorithms with morphological dilation with respect to Minkowski metrics. Potential edge detection algorithms for single banded images must then be adapted to incorporate only along track gradients, because gradients of radiometric miscalibration might superimpose across track gradients. In Rogass et al. (2011) the Canny algorithm (Canny, 1986) is used for single banded images and the Hyperspectral Edge Detection Algorithm (HEDA) is used for multi banded images (Rogass et al., 2010). After obtaining binary edge maps, morphological dilations (Haralick et al., 1987; Rogass et al., 2009) are additionally applied to minimise edge adjacency effects caused by Point Spread Function (PSF) related blooming of edges into adjacent regions. The reversed edge map then gives the mask. In the case of tree vegetation, vegetation indices are computed and thresholded pixel-wise by the highest two likelihood quartiles of containing vegetation. This binary vegetation map is reversed and multiplied with the reversed binary edge map. Hence, edges and translucent vegetation are excluded. Related equations are given in Rogass et al. (2010) and Rogass et al. (2011). The application of the reversed edge map then gives an edge-filtered image.
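
A minimal sketch of such an exclusion mask is given below. It replaces the Canny/HEDA detectors and the Minkowski-metric dilation of ROME with a plain along-track gradient threshold and a standard binary dilation, so it only illustrates the masking idea; the function name, the threshold and the number of dilation iterations are assumptions.

```python
import numpy as np
from scipy import ndimage

def edge_exclusion_mask(band, grad_threshold, dilation_iter=2):
    """Build a boolean mask that is True for usable (edge-free) pixels.

    band : 2-D array, rows = along track (y), columns = across track (x).
    Only along-track gradients are thresholded, because across-track gradients
    may be caused by the miscalibration itself and must not be masked out.
    """
    grad_y = np.abs(np.gradient(band.astype(float), axis=0))
    edges = grad_y > grad_threshold
    # Dilate the edge map to suppress PSF-related blooming of edges into
    # neighbouring pixels, then invert it to obtain the keep-mask.
    edges = ndimage.binary_dilation(edges, iterations=dilation_iter)
    return ~edges

# Example: a sharp horizontal boundary is excluded, the homogeneous parts remain
band = np.vstack([np.full((50, 60), 100.0), np.full((50, 60), 400.0)])
mask = edge_exclusion_mask(band, grad_threshold=50.0)
print(mask.sum(), "of", mask.size, "pixels kept")
```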

## **3.3 Linear c1 slope reduction**

In case of linear miscalibration each pixel of one detector (one column) of the same channel is scaled by the same c1 slope (the term 'gain' is often misleadingly used here; it corresponds to the maximisation of the radiometric resolution; Chander et al., 2009). A simple differential operation between two pixels from the same column leads to the mathematical elimination of the c0 offset. This difference is then equivalent to the difference of radiance levels. This corresponds to the c1 slope of this detector times the spectral difference of surface cover materials constrained by the detector resolution. Hence, a reduction of c1 miscalibration must recover both the c1 slope and the spectral characteristics of the surface cover material. In ROME this is performed per detector or column and band by applying a multistep approach. Here, the radiances are sorted in ascending order. Then, unique radiance values are extracted and sorted ascendingly. Next, all adjacent differences are extracted, i.e. the second unique value is subtracted from the first one, the third unique value from the second one and so on. Then, the probability distribution of these differences is estimated by a histogram. The first frequency category (first bin) contains the smallest differences of unique values. The smallest difference is given as the minimum of all differences of this bin and represents the slope times the smallest difference of unique values (SDUV) of a perfectly calibrated band. The SDUV can be considered equivalent to the spectral detector resolution of the considered band. To estimate the slope, it is now necessary to assess the SDUV. This can be straightforwardly performed by computing the median of all binned differences. After dividing the smallest difference by the SDUV, the slope for this band and detector is recovered. This is performed for each band and detector. After obtaining the slope coefficients, their applicability is validated. This is performed by considering adjacent detector columns. For this, the shapes of the histograms of adjacent columns are inspected. If the number of frequency categories and the positions of the maxima are not equal, then the slope reduction is applied for the considered column. This evaluation is based on the assumption that significantly different slopes of similar and adjacent detectors cause stretches (broadening) and shifts in the histogram, since the considered columns mostly cover the same regions and the related point spread functions (PSF) of each detector are stable during image acquisition and, hence, contribute to their neighbouring pixels the same fraction of their centre pixel. In the presence of c0 offset miscalibration these offsets are reduced concurrently to c0/c1. Subsequently, the SNR is computed to indicate whether the previous operation was necessary or not. Finally, a radiometric rescaling is applied to reduce uncertainties in the estimation of the SDUV (see section 3.5).
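
The following sketch illustrates one possible reading of this slope estimation: the per-column smallest difference of unique radiances is taken as c1 times the SDUV, and the SDUV itself is approximated by the median of these per-column differences under the assumption that most detectors are close to correctly calibrated. It is an illustration only, not the ROME implementation, and the histogram-based validation of adjacent columns is omitted.

```python
import numpy as np

def estimate_column_slopes(band):
    """Estimate one multiplicative c1 slope per detector column of a band.

    band : 2-D array (rows = along track, columns = detectors) of one channel.
    """
    # Per detector column: smallest difference between adjacent unique radiances.
    # For a perfectly calibrated column this equals the SDUV; for a column
    # scaled by c1 it equals c1 * SDUV.
    smallest_diffs = np.array([
        np.diff(np.unique(band[:, col])).min() for col in range(band.shape[1])
    ])
    # Assume most detectors are (nearly) correctly calibrated, so the median of
    # the per-column smallest differences approximates the SDUV of the band.
    sduv = np.median(smallest_diffs)
    return smallest_diffs / sduv

# Example: column 3 of a synthetic, quantised band is scaled by 1.5
rng = np.random.default_rng(0)
band = np.round(rng.uniform(100, 200, size=(500, 8)))
band[:, 3] *= 1.5
print(estimate_column_slopes(band))   # column 3 should stand out near 1.5
```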

## **3.4 Linear c0 reduction**


In the following it is assumed that the thermally induced offset is constant during one image acquisition and that homogeneous regions are spectrally homogeneous. It follows from this that the offset of one detector element and wavelength contributes the same fraction to all pixels of one detector column and wavelength. Hence, spectrally homogeneous regions that appear spectrally different indicate c0 miscalibration if linear c1 or nonlinear c2..M reductions were performed beforehand. To reduce c0 miscalibration, it is necessary to spectrally compare adjacent image columns and to relate the succeeding reduction to a predefined column (ROME uses the first column per default). In ROME the differences between adjacent columns are computed and binned in a histogram. Then, it is assumed that the bin (frequency category) with the highest frequency most likely contains the offset difference. To finally assess the offset difference, it is only necessary to average the differences of each bin by the median, to weight each bin according to its frequency and to sum all weighted and averaged differences. After the c0 reduction a radiometric rescaling should be applied, as in ROME, to avoid erroneous radiometric levelling due to the used reference column. However, after applying an offset reduction, it is necessary to check whether this operation was necessary or not. In ROME this is performed by considering the evolution of the SNR.
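
A compact sketch of the offset estimation between two adjacent columns is given below; it histograms the pixel-wise differences, takes the median of each bin and averages the bin medians weighted by the bin frequencies. The function name, the number of bins and the synthetic example are assumptions for this illustration and do not reproduce the ROME code.

```python
import numpy as np

def estimate_offset_difference(col_a, col_b, bins=32):
    """Estimate the additive offset of column b relative to column a."""
    diff = col_b - col_a
    counts, edges = np.histogram(diff, bins=bins)
    bin_idx = np.clip(np.digitize(diff, edges) - 1, 0, bins - 1)
    bin_medians = np.array([
        np.median(diff[bin_idx == b]) if counts[b] else 0.0 for b in range(bins)
    ])
    # Frequency-weighted average of the per-bin medians
    return float(np.sum(bin_medians * counts) / counts.sum())

# Example: column b carries an additive offset of +12 relative to column a
rng = np.random.default_rng(1)
scene = rng.normal(100.0, 20.0, size=1000)                 # shared surface signal
col_a = scene + rng.normal(0.0, 1.0, scene.size)           # detector a
col_b = scene + 12.0 + rng.normal(0.0, 1.0, scene.size)    # detector b, offset +12
print(estimate_offset_difference(col_a, col_b))            # close to 12
```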

#### **3.5 Radiometric rescaling**

The previously described approaches to correct data for miscalibration can change the mean radiance of a band, which is only acceptable if the new mean is closer to that of a perfectly calibrated band than the mean of the uncorrected band. This is not known in advance and, hence, it is necessary to recover the physical meaning of the corrected values. A simple rescaling to the old maximum and minimum cannot be applied since it can be assumed that the old maximum and minimum are biased or erroneous due to miscalibration. In order to preserve the spectral characteristics, a specific approach was proposed within the ROME framework as detection of lowest reduction zones. In this approach the correction vectors are inspected in a moving window. In each window the mean of the first and the last reduction is divided by the reduction at the window centre. After computing all windowed ratios, the ratio that is closest to one is selected as reference. Then, the middle column of the reference window is considered with regard to its maximum and minimum. The old maximum and minimum, i.e. before any reduction, are compared with the extrema of the reference. These are used to obtain linear transformation coefficients for the whole band that are subsequently applied.
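
The sketch below is one possible interpretation of this rescaling: the per-column correction amounts are scanned with a moving window, the window whose boundary-to-centre ratio is closest to one marks the reference column, and the extrema of that column before and after the correction define a linear transformation for the whole band. All names and the window size are assumptions; the actual ROME implementation may differ in detail.

```python
import numpy as np

def rescale_to_reference(band_before, band_after, reduction, window=5):
    """Linearly rescale a corrected band using the column with the lowest
    relative reduction ('lowest reduction zone') as reference.

    reduction : 1-D array with the per-column correction amount that was applied.
    """
    half = window // 2
    best_col, best_score = half, np.inf
    for c in range(half, reduction.size - half):
        centre = reduction[c]
        boundary = 0.5 * (reduction[c - half] + reduction[c + half])
        ratio = boundary / centre if centre != 0 else np.inf
        if abs(ratio - 1.0) < best_score:
            best_score, best_col = abs(ratio - 1.0), c
    # Extrema of the reference column before and after the correction
    old_min, old_max = band_before[:, best_col].min(), band_before[:, best_col].max()
    new_min, new_max = band_after[:, best_col].min(), band_after[:, best_col].max()
    gain = (old_max - old_min) / (new_max - new_min)
    return gain * (band_after - new_min) + old_min
```

The linear transformation is applied to the whole band, so the spectral shapes of all columns are preserved and only the overall radiometric level is restored.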

## **3.6 Extended detrending**

In Rogass et al. (2011) a detrending approach is proposed that aims at the reduction of across track brightness gradients that are caused by offset-reduction-related frequency undershoots or by material, illumination and viewing geometry dependent surface responses to incident light. These undershoots have, on average, a medium frequency in comparison to the spatial distribution of the image content.

In ROME the detrending is realised per band by computing the median of each column, by smoothing and mean normalising the resulting vector, and by applying this vector to the image by row-wise division.

However, lower frequencies are not considered in ROME, as they can be perceived as broad brightness gradients. In this work, the detrending approach is extended to capture lower frequency undershoots. For this, the column median per band of the uncorrected image and of the corrected image is computed. This gives one vector per band and image of the same length as the number of detectors. Each vector is then fitted by a second order polynomial with regard to least squares principles. Consequently, polynomial coefficients for each vector and image are obtained. The polynomial coefficients of the uncorrected image are subtracted from the coefficients of the corrected image. This gives differential coefficients for each band of the corrected image. After this, an index vector is created that contains the same number of elements as detectors and consists of detector numbers (i.e. 0, 1, 2, 3… etc.). This can be considered as an x-vector. The x-vector is used to obtain the functional values of the differential polynomials. This gives the differential low frequency trend of this band with respect to the corrected and the uncorrected image. In contrast to the detrending of ROME, this trend is applied by row-wise addition. Both the original detrending of ROME and this extension enable a correction for medium and low frequency undershoots. A comparison of this approach and the originally proposed approach of ROME is given in the results section.
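
The extended detrending can be sketched as follows. The sign convention is chosen such that the low-frequency trend introduced by the correction is removed again; apart from that the steps follow the description above. This is an illustrative implementation, not the original code.

```python
import numpy as np

def extended_detrend(uncorrected, corrected):
    """Remove the low-frequency across-track trend introduced by a correction.

    Fit 2nd-order polynomials to the column medians of the uncorrected and the
    corrected band, evaluate the difference of the fits over the detector index
    and add it back row-wise.
    """
    x = np.arange(corrected.shape[1], dtype=float)   # detector index (x-vector)
    med_unc = np.median(uncorrected, axis=0)         # column medians
    med_cor = np.median(corrected, axis=0)
    p_unc = np.polyfit(x, med_unc, deg=2)
    p_cor = np.polyfit(x, med_cor, deg=2)
    # Differential low-frequency trend caused by the correction; adding it
    # row-wise removes the trend from the corrected band again.
    trend = np.polyval(p_unc - p_cor, x)
    return corrected + trend[np.newaxis, :]
```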

#### **3.7 Image quality metrics**

In Rogass et al. (2011) several image quality metrics were combined to evaluate destriping results on the one hand and to avoid potential drawbacks associated with relying on a single type of evaluation on the other hand. In this work the same metrics are used. These are the global Peak Signal-to-Noise Ratio (PSNR) (Rogass et al., 2010; Wang and Bovik, 2009), the global Shannon Entropy (Rogass et al., 2010; Frank and Smith, 2010) and the local Modified Structural Similarity Index (MSSIM) (Tsai and Chen, 2008; Wang and Bovik, 2009; Wang et al., 2004). In case of available ground truth, as for the HyMAP scene, the metrics were applied to the result and to the ground truth. In case of missing ground truth the metrics were applied to both the input and the output, but can then only be considered in relative terms.
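
For reference, the two global metrics can be computed as in the following sketch (the local MSSIM is omitted for brevity). These are the standard definitions of PSNR and Shannon entropy; the exact parameterisation used in Rogass et al. (2010, 2011) may differ.

```python
import numpy as np

def psnr(reference, test):
    """Peak signal-to-noise ratio in dB between a reference and a test band."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    peak = float(reference.max())   # here: maximum of the reference band
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def shannon_entropy(band, bins=256):
    """Shannon entropy (in bits) of the grey value distribution of a band."""
    counts, _ = np.histogram(band, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```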

## **4. Results and discussion**


The ROME framework is the most recent approach to recover the radiometric calibration in the presence of miscalibration. In this work additional tests were included to show that ROME is able to reduce the miscalibration of broadly used sensors. The summarised results in Tab. 1 show how miscalibration was reduced; this is discussed in detail per newly considered sensor in the next sections. All newly tested sensors were miscalibrated due to varying dark current.


| Sensor | Scene | PSNR | Entropy | MSSIM | Average |
|---|---|---|---|---|---|
| APEX | 1 | 4 % | 19 % | 1 % | **8 %** |
| ASTER | 1 | 4 % | 19 % | 1 % | **8 %** |
| CHRIS | 1 | 1 % | 0 % | 3 % | **1 %** |
| HyMAP | SNR=7.6 | -5 % | 4 % | 6 % | **5 %** |
| HyMAP | SNR=76 | 0 % | 3 % | 5 % | **3 %** |
| HyMAP | SNR=760 | 0 % | 2 % | 5 % | **2 %** |
| AISA¹,² | 1 | -2 % | 9 % | 8 % | **5 %** |
| EnMAP¹,² | 1 | 2 % | 8 % | 7 % | **6 %** |
| Grey images¹,² | 3, 4 | 4 % | 4 % | 2 % | **3 %** |
| Hyperion¹,² | 1 | 2 % | 5 % | 7 % | **5 %** |

¹ compared to ground truth; ² from Rogass et al. (2011)

Table 1. Destriping results

## **4.1 Grey valued images**

The grey valued images that were selected for testing in Rogass et al. (2011) cover a broad range of spectral and spatial image properties. In this work 2 out of 4 of the test images were selected because their spatial and spectral distributions are similar to those of remote sensing scenes. The 'Aerial' image is characterised by a leptokurtic grey value distribution. The 'Sailboat on lake' image has a balanced grey value distribution and edge quantity. With regard to Rogass et al. (2011), ROME achieved a destriping accuracy of 97 % (compare Tab. 1 and Fig. 3) for the two grey test images. As perceptible in Fig. 3, all stripes were removed and the results differ from the ground truth (Fig. 1c and d) only by 3 % on average (Tab. 1).

## **4.2 Artificial striped HyMAP**

The HyMAP scene was artificially offset striped three times in different ways, destriped with ROME, and the results were evaluated based on the metrics of section 3.7. The offset stripes were generated as described in Rogass et al. (2011) and scaled to achieve an overall SNR of 7.6, 76 and 760. The offset stripe type was selected since this type is the most common for broadly used pushbroom sensors. About 97 % of a perfect calibration could be recovered (compare Tab. 1). Hence, the accuracy assumption of Rogass et al. (2011) that 97 % of a perfect calibration can be recovered by ROME is confirmed. With regard to the results visually presented in Fig. 4, the stripes were completely removed.

Fig. 3. Striped (left) and destriped images (right) for a)'Aerial' and b) 'Sailboat on lake'

Fig. 4. False coloured image subset of band 30 (874 nm) of a HyMAP scene (subset a and zoom d), striped representation with a SNR of 7.6 (subset b and zoom e) and the ROME result adaptively detrended (subset c and zoom f)

Uncertainties remain in the assessment of the true radiometric scale as well as in the correct trend. This is visualised in Fig. 5. Considering both the transect and the spectral profile of Fig. 5 leads to the perception that small differences between the ground truth and the destriping result persist. These differences amount to approximately 3 % according to Tab. 1. This underlines the robustness of the ROME approach and at the same time shows that miscalibration can be efficiently suppressed.

Fig. 5. Random arbitrary transect a) and spectral profile for a random point due to the subsets of Fig. 4 a), b) and c)

## **4.3 ASTER**


The ASTER sensor was selected for destriping since it has broader bands than a typical hyperspectral sensor and its potential miscalibration is often underestimated in the literature. The visible and near infrared bands were selected since these bands were the most perceptibly miscalibrated, as exemplarily shown in Fig. 6. With regard to the results of Tab. 1, the destriping of the ASTER scene improved the radiometric calibration by 8 % on average. That is significant in comparison to the CHRIS/Proba related destriping results. As perceptible in Fig. 6, all stripes were removed.

As shown in Fig. 6 and 7, the miscalibration is mostly perceptible visually rather than in arbitrary transects such as the one presented in Fig. 7 a). However, the ROME framework and the adaptive detrending reduced the miscalibration. In consequence, the spectral profile has changed as given in Fig. 7 b). Contrary to airborne sensors, miscalibrations of satellite sensors such as ASTER vary only slowly over time. It follows from this that correction sets obtained by the ROME framework can be reused for scenes that are temporally close.

#### **4.4 CHRIS/Proba**

As shown in Fig. 8 the test scene acquired by the CHRIS sensor is well calibrated. However, remaining miscalibration is visually perceptible as given in Fig. 8 c).

With regard to Tab. 1 ROME improved the radiometric calibration by 1 % on average. This shows on the one hand that the scene of this sensor was well calibrated and on the other hand that ROME is also able to detect and to reduce small variations of miscalibrations.


Fig. 6. False coloured image subset of band 3 (807 nm) of a striped ASTER scene (subset a and zoom c) and the ROME result adaptively detrended (subset b and zoom d)

Fig. 7. Random arbitrary transect a) and spectral profile for a random point due to the subsets of Fig. 6 a) and b)


Fig. 8. False coloured image subset of band 44 (803.8 nm) of a striped CHRIS/Proba scene (subset a and zoom c) and the ROME result adaptively detrended (subset b and zoom d)

Fig. 9. Random arbitrary transect a) and spectral profile for a random point due to the subsets of Fig. 8 a) and b)


The transect as well as the spectral profile given in Fig. 9 show that ROME preserved the spatial and spectral shapes. Contrary to ASTER, it appears that the ROME destriping of CHRIS/Proba scenes is only necessary if succeeding processing steps consider adjacent image columns. In relation to Rogass et al. (2011), 97 % of a perfect calibration can be recovered by ROME. It follows from this that the decision whether ROME is applied on CHRIS/Proba or not should be application driven.

## **4.5 APEX**

The APEX sensor belongs to the recently developed pushbroom sensors and offers a high SNR for a broad set of applications. However, as for most pushbroom sensors, APEX acquisitions also show perceptible variations in dark current as offset stripes, although the sensor is well calibrated like CHRIS/Proba. These stripes are difficult to detect due to the high SNR of APEX and the overall low contribution of miscalibration to the image spectra. To additionally test the new detrending approach, a subset of a scene (400 lines) was used. In consequence, the results of Tab. 1, which show an overall improvement of calibration of about 8 %, are not fully representative for the APEX sensor. In this case it is assumed that 97 % of a perfect calibration has been achieved. The respective results are exemplarily represented in Fig. 10 and 11. Comparing the along track transect of Fig. 11 a) and the spectral profile of Fig. 11 b) with the false coloured image representations of Fig. 10, it appears that changes of the spectra are mostly visually perceptible. This supports the assumption that APEX acquisitions are not dominated by dark current variations, contrary to Hyperion or AISA DUAL. The assumption that potential frequency undershoots caused by, e.g., offset reductions are minimised by the new detrending approach is also supported (compare also the next section).

Fig. 10. False coloured image subset of band 19 (557.3 nm) of a striped APEX scene (subset a and zoom c) and the ROME result adaptively detrended (subset b and zoom d)

Fig. 11. Random arbitrary transect a) and spectral profile for a random point due to the subsets of Fig. 10 a) and b)

## **4.6 Results for extended detrending**


The ROME framework as proposed in Rogass et al. (2011) has limited facilities for short scenes. In this work the impact of short scenes is inspected and an extension to its detrending is proposed. Since the effect varies from scene to scene and from sensor to sensor, it is not possible to quantify the impact in general. To qualify the impact of short scenes on ROME, one artificially offset striped HyMAP scene subset (SNR=7.6) was destriped. Then, the result was ROME detrended and detrended by the new approach. The respective results are given in Fig. 12 and 13. As perceptible in Fig. 12 b) and e) compared to Fig. 12 c) and f), reduction-related brightness gradients are significantly reduced by the new approach.


Fig. 12. False coloured, small image subset of band 30 (874 nm) of a HyMAP scene (subset a and zoom d) that was artificially offset striped (SNR=7.6), ROME result (subset b and zoom e) and the ROME result adaptively detrended (subset c and zoom f)

The across track transect as well as the spectral profile given in Fig. 13 clearly show the impact of the detrending on the spectral scale. Comparing the old detrending approach with the new detrending approach leads to the perception that the new detrending preserves the spectral profile in both directions: in the spatial domain (across track, the correction direction) and in the spectral domain (along the spectrum).

Fig. 13. Arbitrary across-track radiance transect a) and spectral profile for a random point b) for the subsets of Fig. 10 a) and b) (ground truth, ROME + detrend, and ROME + new detrend; curves offset for clarity)

It follows from this that relatively short scenes are more difficult to correct than long scenes. In Rogass et al. (2011) it was assumed that the ROME correction facilities are dependent on the along track dimension. This is supported and can be clearly demonstrated, e.g. by transects and spectral profiles of corrected short scenes as presented in Fig. 12 and 13. The subsets for detrending comparisons had a size of 400 lines.
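As an illustration of how such transects and spectral profiles can be extracted for comparison, the short sketch below assumes a hyperspectral cube ordered as (lines, columns, bands); the helper name and orderings are assumptions, not part of the ROME framework.

```python
import numpy as np

def transect_and_profile(cube, line, band, col):
    """Across-track transect of one band and the spectrum of one pixel.

    The cube is assumed to be ordered (lines, columns, bands).
    """
    transect = cube[line, :, band]      # radiance versus column number
    spectrum = cube[line, col, :]       # radiance versus band (wavelength)
    return transect, spectrum

# Compare the ground truth against a corrected version of the same subset
rng = np.random.default_rng(0)
truth = rng.random((400, 512, 126))
corrected = truth + 0.01 * rng.standard_normal(truth.shape)
t_ref, s_ref = transect_and_profile(truth, line=200, band=29, col=256)
t_cor, s_cor = transect_and_profile(corrected, line=200, band=29, col=256)
print(np.abs(t_ref - t_cor).mean(), np.abs(s_ref - s_cor).mean())
```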

## **5. Conclusions**


Pushbroom sensors must be carefully calibrated, and miscalibrations aggravate succeeding operations such as atmospheric correction (Richter, 1997) and classification and segmentation (Datt et al., 2003). Therefore, it is necessary to reduce them efficiently. The ROME framework and the extended detrending proposed in this work significantly reduce miscalibrations of any type. Like other methods, the approach also has limitations. These limitations mostly relate to the offset and nonlinear reductions, not to the linear slope reduction.

However, a calibration recovery rate of about 97 % still leaves uncertainties. High spatial densities of translucent objects such as trees degrade the offset reduction and should be excluded beforehand. Tests with different data sets also showed that dense haze or clouds may hinder the offset reduction. These effects can be minimised by destriping subsets and by applying the estimated correction coefficients to the whole image. In the case of clouds or dense haze, a haze- or cloud-free reference column for the offset reduction is suggested.
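A minimal sketch of the subset-based strategy follows, assuming a per-column linear (gain and offset) correction model; the fitting approach and names are illustrative and not the ROME implementation.

```python
import numpy as np

def estimate_column_coefficients(raw_subset, destriped_subset):
    """Least-squares per-column gain and offset from a destriped subset."""
    gains, offsets = [], []
    for c in range(raw_subset.shape[1]):
        g, o = np.polyfit(raw_subset[:, c], destriped_subset[:, c], 1)
        gains.append(g)
        offsets.append(o)
    return np.array(gains), np.array(offsets)

def apply_column_coefficients(raw_image, gains, offsets):
    """Apply subset-derived coefficients to every line of the full scene."""
    return raw_image * gains[np.newaxis, :] + offsets[np.newaxis, :]

rng = np.random.default_rng(0)
clean = 1000.0 + 50.0 * rng.random((2000, 512))       # stand-in for the truth
gain_err = 1.0 + 0.02 * rng.standard_normal(512)      # toy per-column miscalibration
offset_err = 5.0 * rng.standard_normal(512)
raw = clean * gain_err + offset_err

# Coefficients are estimated on a (cloud- and haze-free) 400-line subset only
# and then applied to the whole image.
gains, offsets = estimate_column_coefficients(raw[:400], clean[:400])
corrected = apply_column_coefficients(raw, gains, offsets)
print(np.abs(corrected - clean).max())                # ~0
```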

With regard to the tests of Rogass et al. (2011) and the tests performed for this work, it can be assumed that the ROME framework is capable of reducing miscalibrations for most pushbroom sensors. Given its high processing speed and freedom from parameters, it can be used operationally. The nonlinear correction has to be improved but, like the other methods implemented in ROME, represents the current state of the art. However, further research is necessary. This applies particularly to high-frequency undershoots that are currently not considered.

## **6. References**


Atkinson, P.M.; Sargent, I.M.; Foody, G.M.; Williams, J. Interpreting Image-Based Methods for Estimating the Signal-to-Noise Ratio. Int. J. Rem. Sens. 2005, 26, 5099–5115.

Barducci, A.; Castagnoli, F.; Guzzi, D.; Marcoionni, P.; Pippi, I.; Poggesi, M. Solar Spectral Irradiometer for Validation of Remotely Sensed Hyperspectral Data. Appl. Opt. 2004, 43, 183–195.

Barnsley, M.J.; Allison, D.; Lewis, P. On the Information Content of Multiple View Angle (MVA) Images. Int. J. Rem. Sens. 1997, 18, 1936–1960.

Biggar, S.; Thome, K.; Wisniewski, W. Vicarious Radiometric Calibration of EO-1 Sensors by Reference to High-Reflectance Ground Targets. IEEE Trans. Geosci. Rem. Sens. 2003, 41, 1174–1179.

Bindschadler, R.; Choi, H. Characterizing and Correcting Hyperion Detectors Using Ice-Sheet Images. IEEE Trans. Geosci. Rem. Sens. 2003, 41, 1189–1193.

Bouali, M.; Ladjal, S. A Variational Approach for the Destriping of Modis Data. In IGARSS 2010: Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Honolulu, Hawaii, 25–30 July, 2010; pp. 2194–2197.

Box, G.; Muller, M. A Note on the Generation of Random Normal Deviates. Ann. Math. Stat. 1958, 29, 610–611.

Bruegge, C.; Diner, D.; Kahn, R.; Chrien, N.; Helmlinger, M.; Gaitley, B.; Abdou, W. The Misr Radiometric Calibration Process. Rem. Sens. Environ. 2007, 107, 2–11.

Brunn, A.; Fischer, C.; Dittmann, C.; Richter, R. Quality Assessment, Atmospheric and Geometric Correction of Airborne Hyperspectral Hymap Data. In Proceedings of the 3rd EARSeL Workshop on Imaging Spectroscopy, Herrsching, Germany, 13–16 May 2003; pp. 72–81.

Canny, J. A Computational Approach to Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 8, 679–698.

Carfantan, H.; Idier, J. Statistical Linear Destriping of Satellite-Based Pushbroom-Type Images. IEEE Trans. Geosci. Rem. Sens. 2010, 48, 1860–1871.

Cavalli, R.; Fusilli, L.; Pascucci, S.; Pignatti, S.; Santini, F. Hyperspectral Sensor Data Capability for Retrieving Complex Urban Land Cover in Comparison with Multispectral Data: Venice City Case Study (Italy). Sensors 2008, 8, 3299–3320.

Chander, G.; Markham, B.; Helder, D. Summary of Current Radiometric Calibration Coefficients for Landsat MSS, TM, ETM+, and EO-1 ALI Sensors. Rem. Sens. Environ. 2009, 113, 893–903.

Cocks, T.; Jenssen, R.; Stewart, A.; Wilson, I.; Shields, T. The HyMap Airborne Hyperspectral Sensor: The System, Calibration and Performance. In Proceedings of the First EARSeL Workshop on Imaging Spectroscopy, Zurich, Switzerland, 6–8 October 1998; pp. 37–42.

Datt, B.; McVicar, T.R.; van Niel, T.G.; Jupp, D.L.B.; Pearlman, J.S. Preprocessing EO-1 Hyperion Hyperspectral Data to Support the Application of Agricultural Indexes. IEEE Trans. Geosci. Rem. Sens. 2003, 41, 1246–1259.

Dell'Endice, F. Improving the Performance of Hyperspectral Pushbroom Imaging Spectrometers for Specific Science Applications. In ISPRS 2008: Proceedings of the XXI Congress: Silk Road for Information from Imagery: The International Society for Photogrammetry and Remote Sensing, 3–11 July, Beijing, China, 2008; pp. 215–220.

Dell'Endice, F.; Nieke, J.; Koetz, B.; Schaepman, M.E.; Itten, K. Improving Radiometry of Imaging Spectrometers by Using Programmable Spectral Regions of Interest. ISPRS J. Photogramm. Rem. Sens. 2009, 64, 632–639.

Frank, S.; Smith, E. Measurement Invariance, Entropy, and Probability. Entropy 2010, 12, 289–303.

Gao, B.-C. An Operational Method for Estimating Signal to Noise Ratios from Data Acquired with Imaging Spectrometers. Rem. Sens. Environ. 1993, 43, 23–33.

García, J.; Moreno, J. Removal of Noises in CHRIS/Proba Images: Application to the SPARC Campaign Data. In Proceedings of the 2nd CHRIS/Proba Workshop, ESA/ERSIN, Frascati, Italy, 28–30 April, 2004; pp. 29–33.

Gómez-Chova, L.; Alonso, L.; Guanter, L.; Camps-Valls, G.; Calpe, J.; Moreno, J. Correction of Systematic Spatial Noise in Push-Broom Hyperspectral Sensors: Application to CHRIS/Proba Images. Appl. Opt. 2008, 47, F46–F60.

Guanter, L.; Segl, K.; Kaufmann, H. Simulation of Optical Remote-Sensing Scenes with Application to the EnMAP Hyperspectral Mission. IEEE Trans. Geosci. Rem. Sens. 2009, 47, 2340–2351.

Haralick, R.M.; Sternberg, S.R.; Zhuang, X. Image Analysis Using Mathematical Morphology. IEEE Trans. Pattern Anal. Mach. Intell. 1987, 9, 532–550.

Itten, K.I.; Dell'Endice, F.; Hueni, A.; Kneubühler, M.; Schläpfer, D.; Odermatt, D.; Seidel, F.; Huber, S.; Schopfer, J.; Kellenberger, T.; Bühler, Y.; D'Odorico, P.; Nieke, J.; Alberti, E.; Meuleman, K. APEX - the Hyperspectral ESA Airborne Prism Experiment. Sensors 2008, 8, 6235–6259.

Kaufmann, H.; Segl, K.; Guanter, L.; Förster, K.P.; Stuffler, T.; Müller, A.; Richter, R.; Bach, H.; Hostert, P.; Chlebek, C. Environmental Mapping and Analysis Program (EnMAP)—Recent Advances and Status. In IGARSS 2008: Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, 7–11 July, Boston, MA, USA, 2008; pp. IV-109–IV-112.

Le Maire, G.; François, C.; Soudani, K.; Berveiller, D.; Pontailler, J.-Y.; Bréda, N.; Genet, H.; Davi, H.; Dufrêne, E. Calibration and Validation of Hyperspectral Indices for the Estimation of Broadleaved Forest Leaf Chlorophyll Content, Leaf Mass Per Area, Leaf Area Index and Leaf Canopy Biomass. Rem. Sens. Environ. 2008, 112, 3846–3864.

Liu, B.; Zhang, L.; Zhang, X.; Zhang, B.; Tong, Q. Simulation of EO-1 Hyperion Data from ALI Multispectral Data Based on the Spectral Reconstruction Approach. Sensors 2009, 9, 3090–3108.

Oliveira, P.; Gomes, L. Interpolation of Signals with Missing Data Using Principal Component Analysis. Multidimens. Syst. Signal Process. 2010, 21, 25–43.

Oppelt, N.; Mauser, W. The Airborne Visible/Infrared Imaging Spectrometer Avis: Design, Characterization and Calibration. Sensors 2007, 7, 1934–1953.

Richter, R. Correction of Atmospheric and Topographic Effects for High Spatial Resolution Satellite Imagery. Int. J. Rem. Sens. 1997, 18, 1099–1111.

Rogass, C.; Itzerott, S.; Schneider, B.; Kaufmann, H.; Hüttl, R. Edge Segmentation by Alternating Vector Field Convolution Snakes. Int. J. Comput. Sci. Netw. Secur. 2009, 9, 123–131.

Rogass, C.; Itzerott, S.; Schneider, B.; Kaufmann, H.; Hüttl, R. Hyperspectral Boundary Detection Based on the Busyness Multiple Correlation Edge Detector and Alternating Vector Field Convolution Snakes. ISPRS J. Photogramm. Rem. Sens. 2010, 55, 468–478.

Rogass, C.; Spengler, D.; Bochow, M.; Segl, K.; Lausch, A.; Doktor, D.; Roessner, S.; Behling, R.; Wetzel, H.-U.; Kaufmann, H. Reduction of Radiometric Miscalibration—Applications to Pushbroom Sensors. Sensors 2011, 11, 6370–6395.

Segl, K.; Guanter, L.; Kaufmann, H.; Schubert, J.; Kaiser, S.; Sang, B.; Hofer, S. Simulation of Spatial Sensor Characteristics in the Context of the EnMAP Hyperspectral Mission. IEEE Trans. Geosci. Rem. Sens. 2010, 48, 3046–3054.

Shen, H.F.; Ai, T.H.; Li, P.X. Destriping and Inpainting of Remote Sensing Images Using Maximum a-Posteriori Method. In ISPRS 2008: Proceedings of the XXI Congress: Silk Road for Information from Imagery: The International Society for Photogrammetry and Remote Sensing, 3–11 July, Beijing, China, 2008; pp. 63–70.

Simpson, J.J.; Gobat, J.I.; Frouin, R. Improved Destriping of Goes Images Using Finite Impulse Response Filters. Rem. Sens. Environ. 1995, 52, 15–35.

Simpson, J.J.; Stitt, J.R.; Leath, D.M. Improved Finite Impulse Response Filters for Enhanced Destriping of Geostationary Satellite Data. Rem. Sens. Environ. 1998, 66, 235–249.

Spectral Imaging Ltd. Aisa Dual, 2nd Version. Available online: http://www.specim.fi/media/aisa-datasheets/dual\_datasheet\_ver2–10.pdf (accessed on 5 January 2011).

Tsai, F.; Chen, W. Striping Noise Detection and Correction of Remote Sensing Images. IEEE Trans. Geosci. Rem. Sens. 2008, 46, 4122–4131.

Ungar, S.G.; Pearlman, J.S.; Mendenhall, J.A.; Reuter, D. Overview of the Earth Observing One (EO-1) Mission. IEEE Trans. Geosci. Rem. Sens. 2003, 41, 1149–1159.

Wang, Z.; Bovik, A.C. Mean Squared Error: Love It or Leave It? A New Look at Signal Fidelity Measures. IEEE Signal Process. Mag. 2009, 26, 98–117.

Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612.

Weber, A. The USC-SIPI Image Database; Technical Report, University of Southern California, Signal and Image Processing Institute: Los Angeles, CA, USA, 1997.

Xiong, X.; Barnes, W. An Overview of Modis Radiometric Calibration and Characterization. Adv. Atmos. Sci. 2006, 23, 69–79.

Yamaguchi, Y.; Kahle, A.B.; Tsu, H.; Kawakami, T.; Pniel, M. Overview of Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). IEEE Trans. Geosci. Rem. Sens. 1998, 36, 1062–1071.



## **Differential Absorption Microwave Radar Measurements for Remote Sensing of Barometric Pressure**

Roland Lawrence<sup>1</sup>, Bin Lin<sup>2</sup>, Steve Harrah<sup>2</sup> and Qilong Min<sup>3</sup>
*<sup>1</sup>Old Dominion University, <sup>2</sup>NASA Langley Research Center, <sup>3</sup>SUNY at Albany, USA*

## **1. Introduction**

### **1.1 Overview**


As coastal regions around the world continue to grow and develop, the threat to these communities from tropical cyclones also increases. The predicted sea level rise over the next decades will certainly add to these risks. Developed low-lying coastal regions are already of major concern to emergency management professionals. While hurricane forecasting is available, improved predictions of storm intensity and track are needed to allow the time to prepare and evacuate larger cities. The predictions and forecasts of the intensity and track of tropical storms by regional numerical weather models can be improved with the addition of large spatial coverage and frequent sampling of sea surface barometry. These data are critically needed for use in models.

This chapter will present recent advances in the development of a microwave radar instrument technique to remotely sense barometric pressure over the ocean, which may provide the large-scale sea surface barometric pressure data needed to substantially improve tropical storm forecasts. The chapter will include a brief introduction, a discussion of the applications of remote sensing of sea surface barometric pressure, a discussion of the theoretical basis for the differential absorption radar concept, the results of laboratory and flight testing using a prototype radar, and a detailed discussion of the performance challenges and requirements of an operational instrument.

#### **1.2 Background**

Surface air pressure is one of the most important atmospheric parameters that are regularly measured at ground based surface meteorological stations. Over oceans, sea surface air barometric pressures are usually measured by limited numbers of in-situ observations conducted by buoy stations and oil platforms. The spatial coverage of the observations of this dynamically critical parameter for use by weather forecasters is very poor. For example, along the east coast of the United States and Gulf of Mexico, only about 40 buoys are available under the NOAA Ocean Observing System (NOOS) of the NOAA National Data Buoy Center (NDBC; http://www.ndbc.noaa.gov/). The tropical atmosphere ocean (TAO) program only has 10 sites from which the barometric pressure is measured. For severe weather conditions, such as tropical storms and hurricanes, these NOOS and TAO buoy systems usually cannot provide spatially desirable in-situ measurements due to either the lack of buoy stations along the actual track of the storm or malfunctions of buoys caused by the severe weather itself.

Under tropical cyclone conditions, including tropical depression, tropical storm, hurricane, and super-typhoon cases, the surface barometric pressure is one of the most important meteorological parameters in the prediction and forecast of the intensity and track of tropical storms and hurricanes. The central air pressure at sea level of tropical cyclones is the most commonly used indicator for hurricane intensity. The classification of tropical storms and hurricanes on the Saffir-Simpson Hurricane Scale (SSHS) is based on the maximum sustained surface wind speed that is a direct result of the interaction between the central air pressure and the pressure fields surrounding tropical storms. Because intensity predictions and landfall forecasts heavily rely upon them, measurements of the central pressure of tropical storms are extremely important. The only method currently available for use is a manned aircraft dropsonde technique. The problem with the dropsonde technique is that each dropsonde supplies only one spatial point measurement at one instant of interest during the passage of the storm. This limits data to the number of dropsondes used and their spatial distribution and thereby leaves most of the storm area unmeasured. Furthermore, dropsondes are difficult to precisely position and cannot be reused. Figure 1 shows the current capability for sea surface barometric measurements; all of them are in situ observations.

To improve predictions and forecasts of the intensity and track of tropical storms, large spatial coverage and frequent sampling of sea surface barometry are critically needed for use in numerical weather models. These needed measurements of sea surface barometric pressure cannot be realized by in-situ buoy and aircraft dropsonde techniques. One approach that may provide barometry in large spatial and temporal scales over oceans is the use of remote sensing techniques including those on board manned aircraft, unmanned aerial vehicles (UAVs), and satellite platforms.

Fig. 1. Drift Buoy (left), Moored Buoy (middle), and Dropsonde (right).

During the last two decades, the development of remote sensing methods, especially airborne and satellite techniques, for large and global scale sea surface pressure measurements significantly lagged methods for other important meteorological parameters, such as temperature and humidity. There have been suggestions for using satellite oxygen A-band methods, both passive and active, to measure pressure (Barton & Scott, 1986; Korb & Weng, 1982; Singer, 1968; Wu, 1985; and references therein). The active instruments rely on the operation of complicated, highly-stable laser systems on a space platform and are thus technically difficult. Passive methods are restricted to daytime measurements and areas of low cloud cover (Barton & Scott, 1986). Although substantial research efforts have been underway, there are no realizations of remote sensing measurements for atmospheric surface pressure presently available.

This chapter will describe the development of an active microwave radar working at moderate to strong O2 absorption bands in the frequency range of 50~56 GHz for surface barometric pressure remote sensing, especially over oceans. The sensor concept and flight testing of a proof-of-concept O2-band radar system for sea surface air pressure remote sensing will also be discussed. At these radar wavelengths, the reflection of radar echoes from water surfaces is strongly attenuated by atmospheric column O2 amounts. Because of the uniform mixture of O2 gases within the atmosphere, the atmospheric column O2 amounts are proportional to atmospheric path lengths and atmospheric column air amounts, thus, to surface barometric pressures. Historically, Flower and Peckham (1978) studied the possibility of a microwave pressure sounder using active microwave techniques. A total of six channels covering frequencies from ~25 GHz to ~75 GHz were considered. A major challenge in this approach is the wide spectral region and the significant additional dependence of radar signals on microwave absorption from liquid water (LW) clouds and atmospheric water vapor (WV) over this range of frequencies. Atmospheric and cloud water temperatures also have different effects on the absorptions at different wavelengths (Lin et al., 1998a, 1998b, 2001). The complexity in matching footprints and obtaining accurate surface reflectivities of the six different wavelength channels makes their system problematic (Barton & Scott, 1986). Recently, Lin and Hu (2005) considered a different technique that uses a dual-frequency, O2-band radar to overcome the technical obstacles. They outlined the characteristics of the novel radar system and simulated the system performance. The technique uses dual wavelength channels with similar water vapor and liquid water absorption characteristics, as well as similar footprints and sea surface reflectivities, because of the closely spaced spectra. The microwave absorption effects due to LW and WV and the influences of sea surface reflection should be effectively removed by use of the ratio of the reflected radar signals of the two channels. Simulated results (Lin & Hu, 2005) suggest that the accuracy of instantaneous surface air pressure estimations from the echo ratio could reach 4 – 7 millibars (mb). With multiple pressure measurements over sea surface spots of less than ~1 km² from the radar echoes, the uncertainty of the pressure estimates could be reduced to a few millibars, which is close to the accuracy of in situ measurements and very useful for tropical storm and large scale operational weather modeling and forecasting over oceans.
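The quoted improvement from combining many looks can be illustrated with a few lines of Python; the sketch assumes the single-look errors are independent and Gaussian, so that the error of the mean falls roughly as 1/√N.

```python
import numpy as np

# Single-look surface pressure errors of 4-7 mb (Lin & Hu, 2005); averaging N
# independent looks over a small (<~1 km^2) surface spot reduces the random
# error roughly as 1/sqrt(N), assuming the individual errors are uncorrelated.
rng = np.random.default_rng(0)
true_pressure = 1005.0          # mb
single_look_sigma = 6.0         # mb, within the quoted 4-7 mb range
for n_looks in (1, 4, 16, 64):
    samples = true_pressure + single_look_sigma * rng.standard_normal((10000, n_looks))
    print(n_looks, samples.mean(axis=1).std())   # ~6.0, 3.0, 1.5, 0.75 mb
```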

## **2. Sea surface barometric pressure measurements for hurricane forecasts**

One of the proposed applications of the Differential Absorption Barometric Radar, hereafter called DiBAR, is to improve weather forecasts and predictions, especially for tropical storms. To address the usefulness of sea surface barometric measurements from DiBAR, we use weather prediction models to simulate predicted hurricane intensities and tracks. Predicted results with sea surface air pressure data incorporated are compared with those without the pressure measurements. These surface pressures were obtained from later analysis of in-situ measurements and the assimilated data of the actual hurricane events. During these actual hurricane events, these sea surface pressure data were not available a priori for modeling and prediction. Quantitative potential improvements in the forecasts and predictions of studied hurricane cases are evaluated. We emphasize that the sea surface air pressure data injected into weather prediction models are not exactly the same as those from later analysis of in-situ measurements and the assimilated data of the actual hurricane events. Some uncertainties exist in the injected pressure data in our simulations to reflect potential DiBAR remote sensing errors, according to our current understanding of DiBAR systems and retrieval uncertainties. This section provides a brief description of the weather forecast model used to simulate the impact of pressure data consistent with our instrument concept, as well as the results of our study to simulate the improved track and intensity predictions that result from the inclusion of the simulated DiBAR pressure data.

## **2.1 Weather forecast model description**

The numerical weather forecast model used in this study is the Advanced Regional Prediction System (ARPS) developed by the Center for Analysis and Prediction of Storms (CAPS) of the University of Oklahoma and adopted by NASA Langley Research Center (Wang et al., 2001; Xue et al., 2003; Wang & Minnis, 2003). The forward prediction component of the ARPS is a three-dimensional, non-hydrostatic compressible model in a terrain-following coordinate system. The model includes a set of equations for momentum, continuity, potential temperature, water vapor, and turbulence kinetic energy (TKE). It also includes five conservation equations for hydrometeor species: cloud water (small cloud liquid droplets), cloud ice (small ice crystals), rain, snow, and hail (Tao & Simpson 1993). The cloud water and cloud ice move with the air, whereas the rain, snow, and hail fall with their terminal velocity. It has multiple-nested capability to cover the cloud-scale domain and mesoscale domain at the same time. The model employs advanced numerical techniques (e.g., a flux-corrected transport advection scheme, a positive definite advection scheme, and the split-time step). The most unique physical processes included in the model system are a scheme of Kessler-type warm-rain formation and 3-type ice (ice, snow, and hail) microphysics; a soil-vegetation land-surface model; a 1.5-order TKE-based non-local planetary boundary layer parameterization scheme; a cloud-radiation interaction atmospheric radiative transfer scheme; and some cumulus parameterization schemes used for coarse grid-size. Furthermore, a sophisticated long- and short-wave cloud-radiation interaction package (Chou, 1990, 1992; Chou & Suarez, 1994) has been applied to the ARPS model. The ARPS can provide more physically realistic 4D cloud information at very high resolution on spatial (cloud processes) and temporal (minutes) scales (Figure 2).

The ARPS model was run in a horizontal domain of 4800 km east-west and 4000 km south-north, and a vertical domain of 25 km. The horizontal grid spacing is 25 km, and the vertical grid spacing varies from 20 m at the surface to 980 m at the model top. These spatial resolutions are used because they are comparable to those of the models used in the Global Modeling and Assimilation Office, NASA Goddard Space Flight Center. The options for ice microphysics and atmospheric cloud-radiation interactive transfer parameterization were both used in the model. Because of the use of the relatively coarse grid size of 25 km, the new Kain & Fritsch cumulus parameterization scheme was used together with explicit ice microphysics.
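The quoted vertical resolution is consistent with a stretched grid of 50 layers whose thickness grows from 20 m at the surface to 980 m at the model top. The sketch below assumes a simple linear stretching only to illustrate those numbers, not the actual ARPS grid-stretching function.

```python
import numpy as np

# Fifty layers whose thickness grows linearly from 20 m at the surface to
# 980 m at the top sum to exactly 25 km, matching the quoted vertical domain.
# Linear stretching is assumed here purely for illustration.
n_layers = 50
dz = np.linspace(20.0, 980.0, n_layers)              # layer thickness [m]
z = np.concatenate(([0.0], np.cumsum(dz)))           # interface heights [m]
print(dz[0], dz[-1], z[-1])                          # 20.0 980.0 25000.0
```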


Fig. 2. ARPS: a regional cloud-scale modeling/assimilation system.

## **2.2 Forecast improvements with the addition of storm central pressure measurements**

The analyzed case here is hurricane Ivan (2004). Ivan was a classical, long-lived Cape Verde hurricane that reached Category 5 strength (SSHS) three times and caused considerable damage and loss of life as it passed through the Caribbean Sea. Ivan developed from a large tropical wave accompanied by a surface low-pressure system that moved off the west coast of Africa on 31 August 2004. The development of the system continued and became tropical storm Ivan at 0600 UTC 3 September and a hurricane at 0600 UTC 5 September. After passing Grenada and moving into the southeastern Caribbean Sea, the hurricane's intensity leveled off until 1800 UTC on 8 September when a brief period of rapid intensification ensued. Reconnaissance aircraft data indicated Ivan reached its second peak intensity -- 140 kt and category 5 strength (SSHS) -- just 12 hours later. This was the first of three occasions that Ivan reached the category 5 level.

We choose the forecast period from 0000 UTC 8 Sept. to 0000 UTC 11 Sept. 2004 to examine effects of the central sea surface air pressure on predicting the hurricane track. For the control run (referred to as CTL), the model started at 0000 UTC 8 Sep 2004 with the NOAA NCEP Global Forecast System (GFS) analysis fields as the model initial condition. For the central sea level air pressure experiment run (referred to as SLP), only the observed central pressure was added to the initialization, using the GFS analysis as the first guess. The lateral boundary conditions for both simulations came from the GFS 6-hour forecasts. The same model physics options were used for the two experiments.
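The sketch below illustrates, in a highly simplified form, the kind of adjustment made in the SLP initialization: the observed central pressure is blended into a first-guess sea level pressure field with a distance-weighted increment. The Gaussian weighting, its 300 km scale, and all names are illustrative assumptions and do not represent the actual ARPS/GFS assimilation procedure.

```python
import numpy as np

def blend_central_pressure(slp, lat, lon, obs_lat, obs_lon, obs_slp, radius_km=300.0):
    """Nudge a gridded sea level pressure field toward one observed value.

    The first guess (e.g., the GFS analysis) is corrected by the observation-
    minus-background increment at the storm centre, weighted by a Gaussian of
    distance; radius_km is an assumed influence scale, not an ARPS setting.
    """
    km_per_deg = 111.0
    dx = (lon - obs_lon) * km_per_deg * np.cos(np.radians(obs_lat))
    dy = (lat - obs_lat) * km_per_deg
    dist = np.hypot(dx, dy)
    i, j = np.unravel_index(np.argmin(dist), dist.shape)
    increment = obs_slp - slp[i, j]
    return slp + np.exp(-0.5 * (dist / radius_km) ** 2) * increment

# Toy first guess: a weak 998.7 hPa low that is deepened toward the observed
# 950 hPa central pressure of Ivan at 12.0 N, 62.6 W.
lat, lon = np.meshgrid(np.linspace(8.0, 16.0, 81), np.linspace(-67.0, -58.0, 91), indexing="ij")
first_guess = 1013.0 - 14.3 * np.exp(-((lat - 12.0) ** 2 + (lon + 62.6) ** 2) / 2.0)
analysis = blend_central_pressure(first_guess, lat, lon, 12.0, -62.6, 950.0)
print(round(first_guess.min(), 1), round(analysis.min(), 1))   # 998.7 -> 950.0
```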

As shown in Figure 3, from run CTL, the hurricane central pressure at the initial time of 0000 UTC 8 Sept 2004 is about 998.7 hPa (obtained from the NOAA/NCEP GFS global large-scale analysis), which is ~15 hPa lower than normal conditions. Although this simulated pressure drop is much smaller than the real hurricane center air pressure depression (see below) and relatively weak for a hurricane, it still could be well captured with our proposed O2-band radar systems. At 0000 UTC 8 Sept 2004, based on the report of the National Hurricane Center, hurricane Ivan was located at 12.0 N and 62.6 W, and the value of central sea level pressure of the hurricane is actually 950 hPa. This observation-based central pressure estimate was assimilated into the model analysis system. The assimilated initialization field shown in Figure 4 is used as the initial condition in run SLP. The value of the central pressure of the hurricane now is about 951.5 hPa, much closer to the observed 950 hPa and within the error bar of observations. Compared to Figure 3, the change in the initial hurricane center sea level pressure is about 47 mb, which significantly improves the predicted hurricane intensity.

Fig. 3. The sea level air pressure at the initial time of 0000 UTC 8 Sep 2004 for the control run CTL. It is directly interpolated from GFS analysis.

The model was integrated for 72 hours at a time step of 15 seconds and used to estimate the storm track. It is not surprising that both of the experiments capture the hurricane track much better than the operational GFS global forecasting (Figure 5). This is mainly because the regional numerical model is non-hydrostatic with explicit cloud/ice-physics parameterizations, cloud-radiation interaction, as well as advanced turbulence schemes and land-surface interaction. This kind of advanced regional model can better resolve multi-scale atmospheric processes, especially for organized convective cloud systems. A significant improvement in the predicted hurricane track resulted from the use of the observations of the central surface pressure in the initialization of SLP, as shown in Figure 5. The SLP experiment generated a more realistic hurricane track, especially for the first two forecasts. The results of our sensitivity tests suggest that it is possible to make better predictions of hurricane track by using surface pressure observations/measurements within the targeted tropical cyclone region.


Fig. 4. The sea level pressure at the initial time of 0000 UTC 8 Sep 2004 for the experimental run SLP. The observed central pressure was used for the initialization with GFS analysis as the background.

## **2.3 Forecast improvements when pressure fields are ingested into model**

The results of typical weather predictions for a tropical cyclone, using not only the center sea surface air pressure but also large area pressure fields, are shown in Fig. 6 for 1996 hurricane Fran, which occurred from 0000 UTC September 3 to 0600 UTC September 6, 1996 (Xiao et al. 2000). Due to the lack of data, the model standard run (control run; CTL curve) started with a location error of about 100 km and gradually deviated from the observed hurricane track (OBS curve) by up to about 350 km at the predicted landfall site. With pressure data and calculated wind fields as inputs, the assimilations with 54 km (A80 curve) and 18 km (B80 curve) spatial resolution significantly reduced the errors in the predicted storm tracks. Comparing the 3-day forecasts, the high-resolution model (18 km, B80) had a small starting location error of about 10 km that increased to about 100 km at the predicted landfall site, and the low-resolution model (54 km, A80) had a starting error of about 35 km and predicted landfall with a 170 km error. Such greatly improved predictions could make hurricane preparation and evacuation much easier, especially for the high resolution forecast (B80) case.


Fig. 5. The predicted hurricane tracks (Control, SLP, and the NOAA/NCEP best observed track) from 0000 UTC 8 Sep 2004 to 0000 UTC 11 Sep 2004.

Fig. 6. Predicted tracks of 1996 hurricane Fran by CTL, B80, and A80, along with observations, from 0000 UTC 3 Sep to 0600 UTC 6 Sep. Predicted landing times are also indicated in the figure.


Fig. 7. Radar reflectivity (dB*Z*) (a) predicted by B80 at 0000 UTC 6 Sep 1996 and (b) captured at Wilmington, NC, at 0028 UTC 6 Sep 1996.


Storm intensity predictions can also be improved with knowledge about the storm center pressure, pressure gradients, and derived wind fields. As expected, the intensity of the B80 prediction is very close to observations at the landfall site (Xiao et al., 2000). The hurricane eye, rain band, and precipitation intensity determined from radar reflectivity simulations (a) and radar observations (b) are very similar (Figure 7). The similarity between these predicted hurricane intensity fields, using pressure fields as one of the critical initial conditions, and fields based on observations is remarkable. Unfortunately, there have been no operational, or even experimental, surface air pressure measurements over open oceans from both in-situ and remote sensing instruments, and thus it remains difficult to predict the tracks and intensities of tropical storms with high accuracies (within 100 km of the landfall site for 3-day forecasts).

The results of the above simulations suggest that tropical storm forecasts of landfall and intensity at landfall may be improved by adding pressure field data consistent with the DiBAR measurement concept. With the pressure measurements of the center and whole field of tropical storms, our simulations using regional weather forecast models show that the prediction of hurricane tracks and intensities can be significantly improved. For the hurricane Fran case, model prediction reduces the landfall site errors from ~350km in the standard prediction to ~100km for 3 day forecasts, which could improve hurricane preparation and evacuation.

An operational airborne instrument could provide unprecedented barometric sampling in terms of spatial coverage and repeat rates. Assuming similar operational flights, a DiBAR instrument would be expected to provide data at the same pressure resolution but much higher spatial density. If UAVs are used, the cost of providing the needed barometric measurements could be significantly lower than that of current operations using in-situ techniques, with the accompanying increase in personnel safety. Future spaceborne systems may further improve the pressure field sampling, albeit with a coarser spatial resolution. Furthermore, the availability of these data could result in improved weather forecasts for catastrophic events and could significantly reduce human loss and property damage.

## **3. Measurement approach**

The DiBAR instrument is based on the retrieval of the differential absorption near the O2 line complex (frequencies: 50–56 GHz). This selection of frequencies provides large changes in absorption for the reflected radar signals as a function of the frequency of the radar due in part to the different atmospheric O2 attenuation coefficients. In the atmosphere, O2 is generally uniformly mixed with other gases. The O2 in the column observed by the radar is proportional to the column air mass, the column air mass is proportional to the surface air pressure, and the reflected power measured by the radar can be approximated as (*Lin and Hu* 2005)

$$P\_r(f) = \left(\frac{P\_T G\_t G\_r \lambda^2}{(4\pi)^3}\right) \frac{\sigma^o(f)}{r^2} \exp\left(-\frac{2\alpha\_o(f) M\_o P\_o}{g} - 2\alpha\_L L - 2\alpha\_v V\right) \tag{1}$$

where the first term in equation (1) includes the frequency-dependent characteristics of the radar, which must be determined by instrument calibration: PT is the transmitter power and Gt and Gr represent the transmitter and receiver antenna gains. The second term includes changes in the surface reflectivity, σ0(f), over the radar frequency, and the last term represents the atmospheric absorption, where M0 is the mixing ratio of O2 to total air and Po is the surface pressure. Thus, if the frequency response of the radar is well characterized from 50–56 GHz, and the absorption characteristics due to liquid water and water vapor, and the spatial resolution of the radar, are similar over this range of frequencies, then the ratio of the radar received powers at the two frequencies is,


$$\frac{P\_r(f\_1)}{P\_r(f\_2)} = \left(\frac{C(f\_1)}{C(f\_2)}\right) \exp\left(-\frac{2(\alpha\_o(f\_1) - \alpha\_o(f\_2))M\_o P\_o}{\text{g}}\right) \tag{2}$$

where C(f) represents the frequency-dependent radar characteristics. Further, if we define the differential absorption index, Ri(f1, f2), as the logarithm of the radar return ratio shown in equation (2), then the surface pressure can be written as,

$$\begin{aligned} P\_o &= \left(\frac{2\left(\alpha\_o\left(f\_1\right) - \alpha\_o\left(f\_2\right)\right)M\_o}{\mathfrak{g}}\right)^{-1} \ln\left(\left(\frac{C\left(f\_2\right)}{C\left(f\_1\right)}\right)\left(\frac{P\_r\left(f\_1\right)}{P\_r\left(f\_2\right)}\right)\right) \\\ P\_o &= \left(\frac{2\left(\alpha\_o\left(f\_1\right) - \alpha\_o\left(f\_2\right)\right)M\_o}{\mathfrak{g}}\right)^{-1} \left(Ci\left(f\_1, f\_2\right) + Ri\left(f\_1, f\_2\right)\right) \end{aligned} \tag{3}$$

or defining terms for a linear relationship between Ri and Po,

$$P\_o = C\_0(f\_1, f\_2) + C\_1(f\_1, f\_2) \text{Ri}(f\_1, f\_2) \tag{4}$$

The term C0(f1,f2) includes the instrument residual calibration error. The differential absorption index, Ri(f1,f2), is the logarithm of the ratio of the radar returns exclusive of the frequency response of the radar. From equation (4), it can be seen that a simple near-linear relationship between surface air pressure and the differential absorption index is expected from the O2 band radar data. The linear relationship between Ri and the surface pressure was first suggested by the results of modeled differential absorption for several frequencies in the range of interest here (*Lin and Hu* 2005). Further, *Lin and Hu* 2005 suggest that the accuracy of instantaneous surface air pressure estimations from the measured Ri could reach 4–7 mb. However, the O2 absorption increases at higher frequencies, and the receiver Signal to Noise Ratio (SNR) may limit the retrieval accuracy as this loss increases. For a fixed transmit power, the optimum frequencies for the surface pressure measurement will depend on the received power, which depends on the atmospheric loss and surface reflectivity. The flight testing of the DiBAR instrument discussed in Section 4 is intended to measure the atmospheric attenuation as a function of frequency and the differential absorption index Ri(f1,f2). These measurements can then be compared to predicted values to assess the measurement approach and the effect of receiver noise on the measurement of barometric pressure.
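To make the retrieval chain of equations (2)–(4) concrete, the following is a minimal Python sketch of the forward model and its inversion. All numerical values (the O2 absorption coefficients, the mixing ratio, the neglect of the liquid water and vapor terms) are illustrative assumptions, not values from this chapter; in practice the coefficients would come from a microwave absorption model such as MPM89.

```python
import numpy as np

g = 9.81                                    # gravitational acceleration (m s^-2)
M_o = 0.2314                                # approximate O2 mass mixing ratio, assumed uniform
alpha = {53.6e9: 1.0e-4, 54.9e9: 3.0e-4}    # hypothetical O2 mass absorption coefficients (m^2 kg^-1)

def received_power(f, Po, C=1.0, sigma0=1.0):
    """Forward model: equation (1) with the liquid water and vapor terms dropped (Po in Pa)."""
    return C * sigma0 * np.exp(-2.0 * alpha[f] * M_o * Po / g)

def retrieve_pressure(Pr_f1, Pr_f2, f1, f2, C_ratio=1.0):
    """Equation (3): invert the differential absorption index Ri for surface pressure."""
    Ri = np.log(C_ratio * Pr_f1 / Pr_f2)             # differential absorption index
    C1 = g / (2.0 * (alpha[f2] - alpha[f1]) * M_o)   # sign chosen so Po > 0 when f2 is the more absorbed channel
    return C1 * Ri

f1, f2 = 53.6e9, 54.9e9
Po_true = 101300.0                                   # 1013 mb expressed in Pa
Pr1, Pr2 = received_power(f1, Po_true), received_power(f2, Po_true)
print(retrieve_pressure(Pr1, Pr2, f1, f2))           # recovers ~101300 Pa
```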

In addition to the above analysis, a multiple layered atmospheric microwave radiative transfer model was also employed to simulate the atmospheric loss. The technique used to simulate the propagation of radar signals within the atmosphere is based on a plane-parallel, multiple layered atmospheric microwave radiative transfer (MWRT) model that has been used to determine cloud liquid/ice water path, column water vapor, precipitation, land surface emissivity and other parameters over land and oceans (Ho et al., 2003; Huang et al., 2005; Lin & Rossow, 1994, 1996, 1997; Lin et al., 1998a, 1998b; Lin & Minnis, 2000). To avoid complexities of microwave scattering by precipitating hydrometeors and surface backscattering, this study deals only with non-rain weather conditions and homogeneous backgrounds (such as the sea surface). Thus, transmission and absorption of radar signals within each atmospheric layer are the major radiative transfer processes considered in the model calculations. For the absorption process, this MWRT model carefully accounts for the temperature and pressure dependences of cloud water and atmospheric gas absorptions (Lin et al., 2001). At microwave wavelengths, temperature dependences of gas and water absorptions are significant, and produce some difficulties for MWRT modeling. The several models available to account for gas absorption differ mainly in their treatment of water vapor continuum absorption. The Liebe model, i.e. MPM89, was used here (Liebe, 1989). It yields results that differ negligibly from those of the Rosenkranz (1998) model at the O2 bands. Liquid water absorption coefficients were calculated from the empirical water refractive index formulae of Ray (1972), which agree well (relative differences < 5%) with those from Liebe et al. (1991) for T > 15 °C. For colder clouds, the uncertainties in the absorption coefficients could be larger by more than 15% (Lin et al., 2001) because of a lack of direct measurements of the refractive index.

The current MWRT model consists of 200 constant-thickness layers from the surface to 40 km. There is virtually no gas absorption above the modeled top-of-atmosphere (TOA) at our considered spectra. The atmospheric profiles of temperature, pressure, humidity and gas amount are obtained from NOAA 1988 (NOAA'88) global radiosonde measurements. This NOAA'88 data set is widely used in radiation simulations and satellite remote sensing (e.g., Seemann et al., 2003) and covers both land and oceans. The data set has more than 5000 profiles, and about 1/3 of them are for cloudy skies. In cloudy cases, the NOAA'88 profiles can have up to two layers of clouds. Thus, the simulated results represent both clear and cloudy conditions. Since the model TOA (40 km) height is much higher than that of the radiosonde measurements, whenever there are no radiosonde upper atmospheric observations, interpolated climatological values of the upper atmosphere (McClatchey et al., 1972) are used. The weighting functions for the interpolation are decided from the surface air temperatures and pressures to match the radiosonde-measured weather conditions. In order to have large variations in surface air pressure, for each NOAA'88 measured profile the surface pressure is randomly shifted by a Gaussian number with a standard deviation of 12 mb, and the ratio of the shifted surface air pressure to the measured surface pressure is calculated. The atmospheric pressures in the measured profile above the surface are then adjusted using the same ratio as that of the surface pressure.
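The surface-pressure perturbation applied to each NOAA'88 profile can be sketched as follows; the array names and example values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_profile(surface_pressure_mb, layer_pressures_mb, sigma_mb=12.0):
    """Shift the surface pressure by a Gaussian random amount (12 mb standard deviation) and
    rescale the pressures above the surface by the same ratio, as described in the text."""
    shifted_surface = surface_pressure_mb + rng.normal(0.0, sigma_mb)
    ratio = shifted_surface / surface_pressure_mb
    return shifted_surface, layer_pressures_mb * ratio

# Hypothetical five-layer profile (mb):
p_sfc, p_layers = perturb_profile(1013.0, np.array([1000.0, 850.0, 700.0, 500.0, 300.0]))
```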

For the analysis in this section, the radar system is assumed to fly on an aircraft at 15 km altitude with a velocity of 200 m/s, downward-looking and having a beamwidth of 3°, which produces a footprint of about 785 m. The NOAA hurricane reconnaissance aircraft generally fly above 10 km height through and/or over hurricanes. Since this study is the first step in the model simulations for the radar system to show the feasibility of radar remote sensing for sea surface barometry, the 15 km altitude simulations provide us sufficient theoretical and technical insight for the radar sea surface pressure measurements. For other altitudes, the radar retrievals should have similar accuracy to those simulated here. During our simulation, since all wavelengths used in the radar system are very close to each other, we assume the surface reflection (or σ0) to be the same (11 dB) for all frequency channels (Callahan et al., 1994). As we showed in the previous section, the absolute magnitude of the surface reflectivity is not very important for surface pressure estimation as long as the spectral dependence of σ0 within the O2 bands is negligible.
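As a quick check of the quoted footprint, the nadir footprint of a 3° beam from 15 km altitude can be computed as below; a flat-Earth, nadir-looking geometry is assumed.

```python
import numpy as np

def nadir_footprint(altitude_m, beamwidth_deg):
    """Footprint diameter of a nadir-looking beam: 2 * h * tan(beamwidth / 2)."""
    return 2.0 * altitude_m * np.tan(np.radians(beamwidth_deg) / 2.0)

print(nadir_footprint(15_000.0, 3.0))   # ~785 m, matching the footprint quoted in the text
```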

Simulated signals are analyzed in the form of relative received power (RRP), i.e., the ratio of the received and transmitted powers of the considered radar system. Since the system works at the O2 absorption bands, the relative received powers are generally weak. Certain signal coding techniques for carrier frequencies, correlators for signal receiving and long-time (0.2 s) averages of received powers are useful components for consideration for the radar system. Preliminary studies have disclosed advantages from a number of commonly employed radar techniques.

The radar-received signals reflected from sea surfaces, i.e. RRP values, used in this section are simulated through the complicated MWRT calculations discussed previously. With the RRP values, we calculate the radar differential absorption index, Ri, defined in equation 4. As shown above, the index and sea surface air pressure have a near-linear relationship, which points out the basic directions and sensitivities for surface air pressure remote sensing.

Atmospheric extinctions (or attenuations) vary dramatically at the O2 band radar frequencies between 50.3 and 55.5 GHz. At the lowest frequency (50.3 GHz), the atmospheric extinction optical depth is about 0.5, and at the highest frequency (55.5 GHz), the optical depth goes sharply up to about 9. These two frequency cases represent the two extreme ends of weak and strong, respectively, atmospheric O2 absorption for our considered active microwave remote sensing of sea surface barometric pressure. With a weak O2 absorption (i.e., small optical depth), radar signals would be significantly influenced by environmental factors, such as atmospheric water vapor, cloud water amount and the atmospheric temperature profile, but the transmitted powers used might be lower. When the atmospheric O2 absorption is too strong, most of the radar-transmitted power would be lost to attenuation, and small changes in surface air pressure (or column O2 amount) would not produce significant differences in the received powers. This might be offset somewhat by using higher transmitted power. Thus, at constant transmitter power levels, wavelengths with moderate to reasonably strong O2 absorption in the atmosphere are expected to serve our purpose best by giving a reasonable compromise between transmission and visibility.
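The optical depths quoted above translate directly into two-way path attenuation; a small helper, assuming a nadir path, gives roughly 4 dB at τ ≈ 0.5 and about 78 dB at τ ≈ 9, and about 52 dB at τ ≈ 6, consistent with the ~50 dB double-path loss quoted below for 54.9 GHz.

```python
import numpy as np

def two_way_attenuation_dB(optical_depth):
    """Two-way power attenuation in dB for a nadir path with one-way optical depth tau:
    10 * log10(exp(2 * tau)) ≈ 8.686 * tau."""
    return 2.0 * optical_depth * 10.0 / np.log(10.0)

print(two_way_attenuation_dB(0.5))   # ~4.3 dB  (50.3 GHz, tau ≈ 0.5)
print(two_way_attenuation_dB(6.0))   # ~52 dB   (54.9 GHz, tau ≈ 6)
print(two_way_attenuation_dB(9.0))   # ~78 dB   (55.5 GHz, tau ≈ 9)
```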

Figure 8 shows examples of atmospheric extinction optical depths counted from the TOA under clear conditions using the standard profiles (*McClatchey et al.* 1972). The three different color curves represent atmospheric surface temperatures of 280, 290 and 300 K, respectively. It can be seen that these curves are very close to each other, indicating that atmospheric temperature effects are minimal. For channel 2 (i.e. 52.8 GHz, left panel) cases, the optical depths for moist atmospheres (solid curves) with 40 mm column water vapor are about 1.25 and only 0.1 higher than those of dry atmospheres. At 54.9 GHz (right panel), the optical depths increase considerably to about 6, and different temperature and moisture conditions have little effect on the total extinctions. For this frequency, the atmospheric extinction of radar received signals due to the double atmospheric path length reaches about 50 dB. This may require enhancements to the radar signals to control end-to-end noise, as mentioned before.

Fig. 8. Atmospheric extinction optical depths for various atmospheric temperatures and moisture levels at 52.8 and 54.9 GHz.

For tropical meteorological cases, such as hurricane cases, the changes in temperature and moisture profiles are much smaller than those shown in the figure due to the limited temperature and humidity conditions for tropical storm development. To test the accuracy of surface pressure measurements, a 15 dB SNR (signal-to-noise ratio) for radar-received signals is assumed for this preliminary study.

Figure 9 shows the simulated relationship between the differential absorption index (the logarithm of the radar return ratio of relative received powers at frequencies 53.6 and 54.4 GHz) and sea surface air pressure. Each point in the figure represents one adjusted NOAA'88 profile. As discussed above, good linear correlations of the two variables are further established by these simulations. A linear regression gives a root mean square (rms) error in the sea surface air pressure estimates of about 7.5 mb, which may be suitable for many meteorological uses. For frequencies of 53.6 and 54.9 GHz (Figure 10), the simulated results (5.4 mb) are close to current theoretical O2 A-band results. The best results (in Figure 11) we found are those from the differential absorption index between 52.8 and 54.9 GHz. The rms error in this case is about 4.1 mb. The tight linear relation between the sea surface air pressure and the differential absorption index provides great potential for remote sensing of surface air pressure from airborne radar systems. Note that in Figs. 9-11, the dynamic range of sea surface barometric pressure is only from ~960 mb to ~1050 mb. The low end of the dynamic range of the sea surface pressure is significantly higher than some sea surface air pressures of hurricane centers. NOAA 1988 profiles were measured in generally average weather and meteorological environments, and were not taken from tropical storm cases. Thus, there were no extremely low sea surface air pressures in the NOAA data set. Actually, for tropical storm cases, the signal strength and SNR of the radar measurements at all O2 band channels would be higher than those in normal conditions due to the low atmospheric radar attenuation caused by low O2 amounts (or the low hurricane center pressures). Also, the hurricane centers are generally clear. So, the accuracy of radar retrievals of the sea surface barometric pressure for hurricane center cases would be higher than those shown in the figures. The key to reaching high accuracies of sea surface barometric pressure measurements is to have a high SNR of radar received powers reflected from sea surfaces.
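The regression described above amounts to fitting the linear model of equation (4) to the simulated (Ri, Po) pairs; a minimal sketch with synthetic placeholder data is shown below (the slope, offset and noise level are invented for illustration only).

```python
import numpy as np

def fit_pressure_from_ri(ri, po):
    """Least-squares fit of the linear model of equation (4), Po = C0 + C1 * Ri,
    returning the coefficients and the rms error of the fit."""
    C1, C0 = np.polyfit(ri, po, 1)
    rms = np.sqrt(np.mean((po - (C0 + C1 * ri)) ** 2))
    return C0, C1, rms

# Synthetic placeholder data (mb); in the study these pairs come from the MWRT simulations.
rng = np.random.default_rng(1)
ri = rng.uniform(5.0, 15.0, 5000)
po = 900.0 + 12.0 * ri + rng.normal(0.0, 4.0, ri.size)
print(fit_pressure_from_ri(ri, po))   # rms error ≈ 4 mb for this synthetic example
```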

Fig. 9. Simulated relationship between the differential absorption index, the logarithm of the radar spectrum ratio at frequencies 53.6 and 54.4 GHz, and surface air pressure.


Fig. 10. Similar to Fig. 9, except frequencies are changed to 53.6 and 54.9 GHz.

Fig. 11. Same as Fig. 10, except for 52.8 and 54.9 GHz.



This theoretical and modeling study establishes a remote sensing method for sea surface air pressure. Simulated results show that with an airborne radar working in the 53–55 GHz O2 absorption bands, the rms errors of the radar surface pressure estimations can be as small as 4–7 mb. The considered radar systems should have at least two frequency channels to obtain the relative received power ratios of the two wavelengths. For the best simulated combination of 52.8 and 54.9 GHz channels, the power loss of radar received signals due to dual atmospheric path length absorption could be as high as about 50 dB. High signal-to-noise ratios for radar reflected powers after these atmospheric absorptions will require modern radar technologies. In addition, careful radar design to ensure stable instrument gain will be required.

## **4. DiBAR demonstration instrument**

The goal in developing the demonstration instrument was to use commercial off-the-shelf hardware wherever possible to develop the capability to collect differential absorption data that would verify the simulated differential absorption results, and to allow various measurement approaches to be assessed. An important operational characteristic for the radar, and the determining factor in most design tradeoffs for the DiBAR system, is the SNR. The optimum channel to use in the O2 absorption band from 50 to 56 GHz is a function of the radar SNR, which depends on the surface reflectivity and the total atmospheric absorption. Thus, rather than selecting a set of frequencies based on the microwave atmospheric absorption model, the demonstration instrument will have the flexibility to vary the measurement frequencies, and even to measure the differential absorption from 50 to 56 GHz, allowing multiple processing and data analysis strategies to be evaluated for the same data set.

The basic instrument concept utilizes a Vector Network Analyzer (VNA) and a millimeter wave Up/Down Converter subsystem to enable operation from 50 to 56 GHz. The millimeter wave Up/Down Converter translates the VNA measurements to the O2 absorption band and provides very flexible signal processing options. As shown in Figure 12, the Up/Down Converter provides a millimeter-wave power amplifier for the transmitter and a Low Noise Amplifier (LNA) for the receiver. The transmit power is selectable but the maximum is limited by the Q-band output amplifier to +14 dBm. The maximum transmit power and the receiver noise figure, 5.3 dB, establish the SNR for our selected flight altitude. Our analysis indicates that for altitudes below 1000 m the SNR will be sufficient to verify the differential absorption across the O2 absorption band. The transmit power can also be reduced during the flight to assess the impact of SNR on various data analysis approaches. Finally, to maximize isolation and eliminate the need for a Q-band transmit/receive (T/R) switch, the demonstration instrument transmitter and receiver are each fitted with an antenna.
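As a rough illustration of how the transmit power and the 5.3 dB noise figure enter the link budget, the sketch below computes the receiver thermal noise floor and the SNR for an assumed received power; the detection bandwidth and received-power value are placeholders, not instrument specifications.

```python
import numpy as np

def noise_floor_dBm(noise_figure_dB, bandwidth_Hz):
    """Receiver noise floor referred to the input: -174 dBm/Hz + NF + 10 * log10(B)."""
    return -174.0 + noise_figure_dB + 10.0 * np.log10(bandwidth_Hz)

def snr_dB(received_power_dBm, noise_figure_dB=5.3, bandwidth_Hz=1.0e3):
    """SNR of a received signal against the thermal noise floor."""
    return received_power_dBm - noise_floor_dBm(noise_figure_dB, bandwidth_Hz)

# With an assumed 1 kHz detection bandwidth, a -100 dBm return would give roughly 39 dB of SNR.
print(snr_dB(-100.0))
```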

The DiBAR demonstration instrument is extremely versatile and can be operated in several modes to emulate a wide range of radar modes and processing concepts. Several modes of operation can be used to collect absorption band data to increase the probability of success and provide additional insight into the concept of differential absorption. The anticipated data sets will also provide insight into other phenomena at these frequencies, such as sea surface scattering. The instrument can be retrofitted with microwave switches to allow hardware gating, if required, to reduce any radar return other than the ocean surface. This option is not presently implemented.


Fig. 12. DIBAR demonstration instrument block diagram.

For the data discussed here, the DiBAR instrument was operated in a stepped Continuous Wave (CW) mode using Fourier transform and windowing to produce software gating in the time domain. This processing minimized the effect of radar returns other than from the sea surface, or leakage between the transmitter and receiver.
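A schematic sketch of this software gating is shown below, assuming the stepped CW sweep is available as a complex frequency response; the gate indices are purely illustrative.

```python
import numpy as np

def software_gate(freq_response, gate_start, gate_stop):
    """Transform a stepped-CW frequency sweep to the time (range) domain, keep only the
    samples inside the gate (suppressing leakage and other clutter), and transform back."""
    time_response = np.fft.ifft(freq_response)
    gate = np.zeros_like(time_response)
    gate[gate_start:gate_stop] = np.hanning(gate_stop - gate_start)   # tapered gate edges
    return np.fft.fft(time_response * gate)

# 16001-point sweep, as used by DiBAR; the gate placement here is hypothetical.
rng = np.random.default_rng(2)
sweep = np.exp(2j * np.pi * rng.random(16001))
gated_spectrum = software_gate(sweep, gate_start=50, gate_stop=120)
```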

## **4.1 Preliminary functional testing**

Laboratory functional testing of the system, such as characterization of system linearity, noise figure, antenna gain, and isolation between antennas, has been completed and reported elsewhere (Lawrence et al., 2007; Lin et al., 2006). Results of these tests were nominal with two minor exceptions. The frequency response of the Up/Down Converter, shown in figure 12, varied over the frequency range of 50 to 56 GHz by more than 12 dB. This change with frequency was larger than expected. However, it has been assumed that low altitude DiBAR data would be used to characterize the frequency response of the instrument during the flight tests. Therefore, as long as the frequency response is stable, this should not affect the DiBAR demonstration flight tests. The leakage from the transmitter to the receiver within the Up/Down Converter enclosure was larger than the mutual coupling between antennas. The impact of this leakage is minor. Our stepped CW measurement approach allowed software gating to suppress this term as long as the range to the target is more than about 10 to 15 m. Again, this had no impact on flight tests.

The assembled DiBAR demonstration radar is shown in figure 13 during a quick test using a water tower as a target to verify the operation of the radar. The DiBAR instrument collected 16001 stepped CW measurements for frequencies from 53 to 56 GHz. The Fourier transform of these data then results in a time domain representation of the radar return as a function of range. The resulting time domain data is shown in figure 14 and the large return from the water tower as well as the internal leakage term can clearly be seen in the figure.

The data in figure 14 may be helpful in illustrating the DiBAR measurement approach. The DiBAR instrument must provide precision measurements of the variation in the radar return as a function of frequency. Using a similar stepped CW measurement approach over the ocean, we can transform the data to the time domain, and then use windowing to minimize the effects of clutter. The windowed time domain data can then be transformed back to the frequency domain to measure the differential absorption index. An important assumption for our test flight planning is that the frequency response of the instrument will be characterized by comparing stepped CW data at various flight altitudes. This of course assumes stability of the instrument frequency response.

In order to verify the stability of the frequency response, the DiBAR instrument was moved into an anechoic chamber to measure the backscatter from a conductive sphere in a stable and controlled environment. Unfortunately, the available chamber was not designed for millimeter wave frequencies, so precision radar cross section measurements or absolute calibration of the DiBAR instrument were not possible. However, while clutter was apparent in the radar measurements, the facility did provide a stable environment and was useful for the primary objective of characterizing the stability of the instrument.


Fig. 13. DiBAR Demonstration Radar


Fig. 14. Radar return from water tower vs. range


The data was collected in the stepped CW mode using 16001 points from 50 to 56 GHz over several hours. The time domain result of a measurement of a 35.5 cm diameter sphere is shown in figure 15. The sphere can be seen at a range of approximately 22 m. The leakage term appears near zero range and the back wall of the facility is only a few meters further downrange than the sphere. Windowing was used to reduce the error due to these contaminating signals and the data is then transformed back to the frequency domain. Assuming the sphere is stationary, any change in the measured response can be attributed to variation in the end-to-end frequency response of the DiBAR demonstration instrument.

Fig. 15. Radar return from sphere vs. range

## **4.2 DIBAR flight test results**

The initial flight-testing to verify the differential loss was accomplished utilizing a helicopter that provided several test flights over water in varying atmospheric and sea conditions. Several modifications to the DiBAR instrument were required for these tests. The integration of the DiBAR instrument on board the helicopter required the high gain antennas to be replaced with smaller horn antennas. The reduction in antenna gain results in reduced system dynamic range, and limits the maximum altitude where sufficient signal to noise ratio is available for useful pressure measurements. To minimize the impact of the antenna modification, the frequency sweep was increased from 53-56 GHz to 50-60 GHz for these flights. While the spectral response of the DiBAR instrument decreases above 56 GHz, the increased O2 attenuation at these frequencies may be useful for the lower altitude operations. Analysis using an instrument model developed from laboratory testing and the microwave absorption model described in (Lin & Hu, 2005; Lawrence et al., 2007) suggests that this configuration of the instrument will provide an estimate of the differential O2 absorption for an altitude of approximately 3000 feet (ft). Note that within the US aviation industry aircraft altitude is reported in feet. Since this is the value recorded by the flight crew, altitude will be reported in feet in this description.

The demonstration DiBAR instrument was installed on a helicopter (Figure 16) for several test flights. Data was collected with in-situ estimated barometric pressure ranging from 1007 to 1028 mb. At each measurement site, the DiBAR instrument made three to five measurements of the radar return for frequencies from 50 to 60 GHz. These measurements were performed while the helicopter was in a hover, and each measurement set included altitudes from 500 to 5000 ft. These measurements were performed at each altitude with the helicopter at nominally the same location. The 500 ft altitude measurements for each measurement set were used to provide a correction for sea surface reflectivity variations and spectral calibration of the instrument.
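One simple way to use the 500 ft reference sweep, assuming the returns are stored per frequency in dB, is sketched below; the function and array names are hypothetical, and frequency-independent terms such as range spreading are taken to cancel in the differential index.

```python
import numpy as np

def calibrated_ri_dB(return_dB_alt, return_dB_ref, i_f1, i_f2):
    """Differential absorption index (dB) between two frequency bins after removing the
    instrument spectral response and surface reflectivity with the 500 ft reference sweep."""
    corrected = np.asarray(return_dB_alt) - np.asarray(return_dB_ref)
    return corrected[i_f1] - corrected[i_f2]

# Hypothetical sweeps (dB) on a common 50-60 GHz frequency grid:
freqs_GHz = np.linspace(50.0, 60.0, 101)
ref_500ft = -80.0 - 0.5 * (freqs_GHz - 50.0)
alt_3000ft = -95.0 - 2.0 * (freqs_GHz - 50.0)
print(calibrated_ri_dB(alt_3000ft, ref_500ft, i_f1=30, i_f2=80))   # bins 30 and 80 ≈ 53 and 58 GHz
```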

Fig. 16. DiBAR Instrument Installed in vehicle for initial flight tests.

The results for a data set collected on a day with an in-situ estimated barometric pressure at the measurement location of approximately 1018 mb are shown in figure 17. DiBAR data for 2000, 3000, and 5000 ft altitudes are shown, as well as the modeled radar return. Three DiBAR measurements were performed at each altitude, and are indicated by the three different symbols in Figure 17. The predicted radar return (solid curve) is estimated using the radar equation for an extended target (sea surface) and the microwave absorption model adapted from (Lawrence et al., 2007; Lin & Hu, 2005). The measured transfer function of the DiBAR instrument was then combined with these models to estimate the expected radar return, shown in Figure 17 as the solid curve. The DiBAR measurements for each altitude are very repeatable, suggesting that the DiBAR instrument and the sea surface scattering characteristics were sufficiently stable. The reduced radar return as the measurement frequency increases can clearly be seen in Figure 17. This reduction is partially due to the increased O2 attenuation discussed in Section 3.


Fig. 17. Comparison of DiBAR measured return and model predictions

The measured results agree very well with the model for the 2000 ft altitude measurements. The results for 3000 ft also agree well with the model for frequencies from 50 to 58 GHz. The difference between the measured and predicted values above 58 GHz is likely due to the noise floor of the modified DiBAR instrument. That is, due to the reduced antenna gain, the signal to noise ratio of the DiBAR is insufficient at frequencies above 58 GHz at 3000 ft altitude and above 56 GHz at 5000 ft altitude. It appears that the optimum trade-off between sufficient O2 absorption (path length) and the noise floor of the DiBAR instrument for these flights occurs at an altitude of approximately 3000 ft. Future flights with the high gain antennas will not have this limitation.

DiBAR data for 3000 ft from three different days are shown in Figure 18. Three measurements are indicated for each day (symbols) as well as the predicted values (solid line). The increase in attenuation with increasing frequency can be seen in the data for all three days. Further, the attenuation appears to increase with increasing barometric pressure, as would be expected. The difference between barometric pressures for each day is approximately 10 mb. While no statistical analysis was performed, the variation in the measured attenuation above 57 GHz appears to be on the order of the variation between each day. That is, the measurement-to-measurement variation was on the order of ±5 mb for the 3000 ft altitude data. The stability of these measurements over several minutes indicates that sea surface scattering can be assumed constant for these conditions. As discussed in Section 3, this increase in attenuation is expected to result in a linear change in the differential absorption, Ri(f1,f2), defined in equation (4).



Fig. 18. Measured radar return and model predictions at 3000 ft for three pressure days (1007, 1018 and 1028 mb).


The differential absorption index is also provided by the DiBAR measurements. The DiBAR demonstration instrument measures the radar return over the entire frequency band from 50 to 60 GHz. However, the differential absorption index can be extracted from the data wherever the radar signals are sufficiently above the noise floor. For example, the differential absorption for f1 = 53 GHz and f2 = 58 GHz, or Ri(53,58), can be found from Figure 18 by subtracting the radar return for 58 GHz from that for 53 GHz. Figure 19 shows Ri(53,58) measured at altitudes of 1000, 2000, 3000, and 4000 ft. The measured data for the three pressure days are shown in the figure, as well as the predicted Ri(53,58) using the instrument model and the microwave atmospheric attenuation model discussed above. The figure illustrates the effect of increasing altitude. As the altitude increases, the longer path length increases the proportionality constant between Ri and Po in equation (4). Thus, ignoring the receiver SNR, a less precise estimate of Ri is required for the same surface pressure precision at higher altitudes. Conversely, at 1000 ft larger changes in barometric pressure would be required to produce a detectable change in Ri. This demonstrates the impact of the reduction in antenna gain, which limits the useful measurement altitude to 3000 ft. However, the differential absorption index shown in Figure 19 agrees well with the predicted values for Ri through 3000 ft altitude.
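
As a concrete illustration of the extraction described above, the minimal sketch below forms Ri(f1, f2) as the difference of the calibrated radar returns expressed in dB; the function name and the numerical values are illustrative assumptions only and are not taken from the DiBAR data set.

```python
def differential_absorption_index(return_f1_db, return_f2_db):
    """Ri(f1, f2): difference of the calibrated radar returns (in dB) measured
    at two frequencies on the edge of the O2 absorption band."""
    return return_f1_db - return_f2_db

# Illustrative values only: the 58 GHz return is lower because O2 absorption
# is stronger there, so Ri grows with path length and surface pressure.
ri_53_58 = differential_absorption_index(return_f1_db=-12.0, return_f2_db=-27.5)
print(f"Ri(53, 58) = {ri_53_58:.1f} dB")
```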

Fig. 19. DiBAR derived and predicted differential absorption coefficients, Ri(53 GHz, 58 GHz), versus altitude.

## **5. Conclusions**


The goal of the initial flight testing was to demonstrate the differential radar measurement approach. The DiBAR measurements for the Chesapeake Bay at multiple altitudes demonstrated very good agreement between measured and predicted results for altitudes below approximately 3000 ft and for frequencies below 56 GHz. In addition, multiple measurements at these altitudes indicate little change over several minutes. This suggests that changes in the surface reflection coefficient over these time scales can be ignored for these surface conditions and this spatial resolution. As expected, above 3000 ft the reduced antenna gain resulted in insufficient signal-to-noise ratio. However, the measured differential absorption index was in general agreement with the modeled values. Further, although beyond the scope of these initial flight tests, variations in the DiBAR measurements at 3000 ft appear to be in the range ± 5 mb. These results are encouraging and consistent with our accuracy goal. Future flight testing should include an assessment of the barometric pressure measurement for high altitude and future satellite operations.

The initial flight testing described above successfully demonstrated the measurement approach. Fully demonstrating the measurement of surface-level pressure will likely require flight data at altitudes between 5 kft and 15 kft using the original high gain antennas. An onboard calibration system should also be developed to eliminate the need for low altitude data to correct for changes to the spectral response of the instrument. In addition, while the existing demonstration DiBAR instrument is suitable to demonstrate the concept, a radar processor should be developed specifically for the differential absorption measurement to eliminate the need for the PNA. This would substantially reduce the weight and size of the instrument. This modification should not only eliminate the PNA, but should also be designed to enhance the stability of the instrument and enable pulsed operation to eliminate one of the antennas. While funding will eventually be required to develop an operational DiBAR instrument capable of operation at altitudes of 40 kft, these improvements may lead to moderate altitude flight opportunities.

## **6. References**

Barton, I.J., and Scott, J.C. (1986). Remote measurement of surface pressure using absorption in the Oxygen A-band, *Appl. Opt.*, 25, 3502-3507.

Callahan, P.S., Morris, C.S., and Hsiao, S.V. (1994). Comparison of TOPEX/POSEIDON σ0 and significant wave height distributions to Geosat, *J. Geophys. Res.*, 99, 25015-25024.

Chou, M-D. (1990). Parameterization for the absorption of solar radiation by O2 and CO2 with application to climate studies. *J. Climate*, 3, 209-217.

Chou, M-D. (1992). A solar radiation model for climate studies. *J. Atmos. Sci.*, 49, 762-772.

Chou, M-D. and Suarez, M. J. (1994). *An efficient thermal infrared radiation parameterization for use in general circulation models*, NASA Tech Memo 104606.

Flower, D.A., and Peckham, G.E. (1978). *A microwave pressure sounder*, JPL Publication 78-68, CalTech, Pasadena, CA.

Ho, S.-P., Lin, B., Minnis, P., and Fan, T.-F. (2003). Estimation of cloud vertical structure and water amount over tropical oceans using VIRS and TMI data, *J. Geophys. Res.*, 108 (D14), 4419, doi:10.1029/2002JD003298.

Huang, J., Minnis, P., Lin, B., Yi, Y., Khaiyer, M.M., Arduini, R.F., Fan, A., and Mace, G.G. (2005). Advanced retrievals of multilayered cloud properties using multi-spectral measurements, *J. Geophys. Res.*, 110, D15S18, doi:10.1029/2004JD005101.

Korb, C.L., and Weng, C.Y. (1982). A theoretical study of a two-wavelength lidar technique for the measurement of atmospheric temperature profiles, *J. Appl. Meteorol.*, 21, 1346-1355.

Lawrence, R., Fralick, D., Harrah, S., Lin, B., Hu, Y., and Hunt, P. (2007). Differential absorption microwave radar measurements for remote sensing of atmospheric pressure, Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, July 2007.

Liebe, H. (1989). MPM--An atmospheric millimeter-wave propagation model. *Int. J. Infrared and Millimeter Waves*, 10, 631-650.

Liebe, H., Hufford, G., and Manabe, T. (1991). A model for complex permittivity of water at frequencies below 1 THz, *Int. J. Infrared Millimeter Waves*, 12, 659-675.

Lin, B., and Rossow, W.B. (1994). Observations of cloud liquid water path over oceans: Optical and microwave remote sensing methods, *J. Geophys. Res.*, 99, 20907-20927.

Lin, B., and Rossow, W. B. (1996). Seasonal variation of liquid and ice water path in non-precipitating clouds over oceans, *J. Clim.*, 9, 2890-2902.

Lin, B., and Rossow, W. B. (1997). Precipitation water path and rainfall rate estimates over oceans using Special Sensor Microwave Imager and International Satellite Cloud Climatology Project data, *J. Geophys. Res.*, 102, 9359-9374.

Lin, B., Wielicki, B., Minnis, P., and Rossow, W. (1998a). Estimation of water cloud properties from satellite microwave, infrared and visible measurements in oceanic environments, 1. Microwave brightness temperature simulations, *J. Geophys. Res.*, 103, 3873-3886.

Lin, B., Minnis, P., Wielicki, B., Doelling, D. R., Palikonda, R., Young, D. F., and Uttal, T. (1998b). Estimation of water cloud properties from satellite microwave, infrared and visible measurements in oceanic environment, 2. Results, *J. Geophys. Res.*, 103, 3887-3905.

Lin, B. and Minnis, P. (2000). Temporal variations of land surface microwave emissivities over the ARM southern great plains site, *J. App. Meteor.*, 39, 1103-1116.

Lin, B., Minnis, P., Fan, A., Curry, J., and Gerber, H. (2001). Comparison of cloud liquid water paths derived from in situ and microwave radiometer data taken during SHEBA/FIREACE, *Geophys. Res. Letter*, 28, 975-978.

Lin, B. and Hu, Y. (2005). Numerical Simulations of Radar Surface Air Pressure Measurements at O2 Bands, *IEEE Geosci. and Remote Sensing Letter*, 2, 324-328.

Lin, B., Harrah, S., Neece, R., Lawrence, R., and Fralick, D. (2006). *The Feasibility of Radar-Based Remote Sensing of Barometric Pressure, Final Report*, NASA Earth Science Technology Office, August 10, 2006.

McClatchey, R., Fenn, R., Selby, J., Voltz, E., and Garing, J. (1972). *Optical properties of the atmosphere*, Air Force Cambridge Research Laboratories Environmental Research Paper AFCRL-72-0497, No. 411, 108 pp.

Ray, P. (1972). Broadband complex refractive indices of ice and water, *Appl. Opt.*, 11, 1836-1844.

Rosenkranz, P. (1998). Water vapor microwave continuum absorption: A comparison of measurements and models, *Radio Sci.*, 33, 919-928.

Seemann, S. W., Li, J., Menzel, W. P., and Gumley, L. E. (2003). Operational retrieval of atmospheric temperature, moisture, and ozone from MODIS infrared radiances, *J. Appl. Meteorol.*, 42(8), 1072-1091.

Singer, S.F. (1968). Measurement of atmospheric surface pressure with a satellite-borne laser, *Appl. Opt.*, 7, 1125-1127.

Wang, D.-H., Droegemeier, K. K., Jahn, D., Xu, K.-M., Xue, M., and Zhang, J. (2001). NIDS-based intermittent diabatic assimilation and application to storm-scale numerical weather prediction. 14th Conf. on Numerical Weather Prediction and 18th Conf. on Weather and Forecasting, Amer. Meteor. Soc., Ft. Lauderdale, FL.

Wang, D.-H., and Minnis, P. (2003). *4D Data Reanalysis/Assimilation with Satellite, Radar and the Extensive Field Measurements*, CRYSTAL-FACE Science Team Meeting, Salt Lake City, UT, 24-28 Feb. 2003.

Wu, M.-L. (1985). Remote sensing of cloud top pressure using reflected Solar radiation in the Oxygen A-band, *J. Clim. Appl. Meteor.*, 24, 539-546.

Xiao, Q., Zou, X., and Wang, B. (2000). Initialization and simulation of a landfalling hurricane using a variational bogus data assimilation scheme, *Monthly Weather Review*, 128, 2252-2269.

Xue, M., Wang, D.-H., Gao, J.-D., Brewster, K., and Droegemeier, K. K. (2003). The Advanced Regional Prediction System (ARPS): storm-scale numerical weather prediction and assimilation. *Meteor. Atmos. Physics*, 82, 139-170.



## **Energy Efficient Data Acquisition in Wireless Sensor Network**

Ken C. K. Lee1, Mao Ye2 and Wang-Chien Lee2

*1Department of Computer and Information Science, University of Massachusetts Dartmouth, North Dartmouth, 2Department of Computer Science and Engineering, The Pennsylvania State University, University Park, USA* 

#### **1. Introduction**


Wireless sensor networks (or sensor networks, for brevity in the following) have come into practice thanks to recent technological advances in embedded systems, sensing devices and wireless communication. A typical sensor network is composed of a number of wirelessly connected sensor nodes distributed in a sensed area. In the network, sensor nodes sense their surroundings and record sensed readings. The sensed readings of individual sensor nodes are then collected to present the measurement of an entire sensed area. In many fields, including but not limited to military, science, remote sensing Vasilescu et al. (2005), industry, commerce, transportation Li et al. (2011), public security Faulkner et al. (2011) and healthcare, sensor networks are recognized as important sensing, monitoring and actuation instruments. In addition, many off-the-shelf sensor node products Zurich (n.d.) and supporting software such as TinyOS Group (n.d.) are available in the market, so sensor network application development is much facilitated. Many sensor networks are anticipated to be deployed soon.

Over the years, the computational capability and storage capacity of sensor nodes have improved considerably. Yet, the improvement in battery energy is relatively small. Since battery replacement for numerous deployed sensor nodes is extremely costly, and even impossible in hostile environments, battery energy conservation is a critical issue for sensor networks and their applications. Accordingly, how to effectively save battery energy is a challenge to researchers from academia, government agencies and industry. One common practice is to keep sensor nodes in sleep mode whenever they are not in use. During sleep mode, some hardware components of sensor nodes are turned off to minimize energy consumption. For instance, MICAz needs only 1 *μ*A when its wireless interface is off and less than 15 *μ*A when its processor is in sleep mode Musaloiu-Elefteri et al. (2008). Besides, wireless communication is very energy consuming. For instance, MICAz consumes 17.4 mA and 19.7 mA when sending and receiving data, respectively, whereas it needs only 8 mA for computation when its wireless interface and processor are on. Thus, reducing the amount of data transmitted between sensor nodes is another important means to save battery energy.
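
A rough, back-of-envelope sketch of what the current figures quoted above imply for node lifetime follows. The duty-cycle fractions and the battery capacity are illustrative assumptions, not measured values.

```python
# Average-current estimate for a duty-cycled MICAz-class node, using the
# current draws quoted above. Duty cycle and battery capacity are assumed.
I_SLEEP_MA  = 0.015 + 0.001   # processor in sleep (<15 uA) + radio off (~1 uA)
I_ACTIVE_MA = 8.0             # processor on, radio idle
I_TX_MA     = 17.4            # sending
I_RX_MA     = 19.7            # receiving

def average_current_ma(sleep=0.99, active=0.008, tx=0.001, rx=0.001):
    """Weighted average current for the given fractions of time in each state."""
    assert abs(sleep + active + tx + rx - 1.0) < 1e-9
    return sleep * I_SLEEP_MA + active * I_ACTIVE_MA + tx * I_TX_MA + rx * I_RX_MA

avg = average_current_ma()
battery_mah = 2000.0          # assumed capacity of a pair of AA cells
print(f"avg current = {avg:.3f} mA, lifetime = {battery_mah / avg / 24:.0f} days")
```

Even this crude estimate shows why radio activity, not computation, dominates the energy budget once the node is duty-cycled aggressively.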

In many sensor network applications, data acquisition that collects sensed readings from remote sensor nodes is an essential activity. A primitive approach for data acquisition can be collecting all raw sensed readings and maintaining them in a data repository for centralized processing. Alternatively, a large volume of raw sensed readings is streamed to a processing site where analysis and data processing are directly applied on the streamed sensor readings Madden & Franklin (2002). However, costly wireless communication can quickly use up sensor nodes' battery energy. In other words, such a centralized approach is not energy efficient and thus undesirable in practice. In the literature, a lot of original ideas and important research results have been developed for energy efficient data acquisition. Among those, many new techniques have been developed based on the idea of in-network query processing. Through in-network query processing, queries are delivered into sensor networks and sensor nodes evaluate the queries locally. By doing so, (partial) query results are transmitted instead of raw sensed readings. Since (partial) query results are smaller than raw sensed readings, energy cost can be effectively saved. Subject to the types of queries and potential optimization opportunities, various in-network query processing techniques have been developed and reported in the literature.


In this chapter, we review the main concepts and ideas of many representative research results on in-network query processing, which include some of our recent works such as itinerary-based data aggregation Xu et al. (2006), materialized in-network view Lee et al. (2007), contour mapping engine Xu et al. (2008) and in-network probabilistic minimum value search Ye, Lee, Lee, Liu & Chen (to appear). As briefly described, itinerary-based data aggregation is a new access method that navigates query messages among sensor nodes to collect/aggregate their sensed readings. Materialized in-network view is a novel data caching scheme that maintains (partial) query results in queried sensor nodes. Then, subsequent queries issued by different base stations can access cached results instead of traversing query regions from scratch to determine query results. Contour mapping engine derives fairly accurate contour line segments using data mining techniques. Besides, only the coefficients of the equations representing contour line segments, which are very compact, are transmitted. Finally, probabilistic minimum value search is one of our recent efforts in probabilistic sensed data aggregation. It finds the possible smallest sensed reading values in a sensor network.

The details of those works will be discussed in the following sections. First of all, we present a system model that our reviewed research results are based upon. Then, we discuss research results in in-network data aggregation and in-network data caching as well as in-network contour map computation. We further discuss recent results on in-network probabilistic data aggregation. Last but not least, we summarize this chapter and discuss some future research directions.

## **2. System model**

Without loss of generality, a sensor network is composed of a number of battery powered stationary sensor nodes deployed over a sensed area. The spatial deployment of sensor nodes in a target sensed area is one of the research problems in sensor networks; and many research works (e.g. Bojkovic & Bakmaz (2008)) were proposed to maximize the area coverage by a given quantity of sensor nodes while providing the required network connectivity among sensor nodes. The issue of sensor node deployment is usually considered to be independent of the others. As will be discussed in the following, research works on data acquisition mostly assume that sensor networks are already set up and all sensor nodes have identical hardware configurations.

In a typical sensor network, some sensor nodes are directly connected to computer terminals; they are called *base stations*. Through base stations, computer terminals can issue commands to administer sensor nodes and collect their sensed readings. Besides, all sensor nodes are wirelessly connected, e.g., MICAz uses a 2.4GHz IEEE 802.15.4 radio. That means messages are all sent through wireless broadcast. When a node delivers a message, other sensor nodes within its radio coverage range can receive the message. Messages can be conveyed transitively from a sender sensor node to a distant target receiver node Xu et al. (2007). On the other hand, because of shared radio frequencies, simultaneous messages from closely located sensor nodes may lead to signal interference. Moreover, due to ad hoc connectivity and sensor node failure, which is common in practice, connections among sensor nodes are mostly transient and unreliable. Thus, other than regular data messages, every sensor node periodically broadcasts a special message called a *beacon* to indicate its liveness to its neighboring sensor nodes. Also, data messages are sent through multiple paths from a sender sensor node towards a destination to deal with possible message loss Xu et al. (2007). As a result, those extra messages incur additional energy costs.
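
A minimal sketch of the beacon-based liveness bookkeeping described above is given below. The message handling, class name and timing constants are illustrative assumptions; a real node would run this logic on top of its radio stack.

```python
import time

BEACON_PERIOD_S = 5.0                     # assumed beacon interval
LIVENESS_TIMEOUT_S = 3 * BEACON_PERIOD_S  # neighbor considered dead after 3 missed beacons

class NeighborTable:
    """Tracks which neighbors are considered alive based on received beacons."""
    def __init__(self):
        self.last_heard = {}              # neighbor id -> timestamp of last beacon

    def on_beacon(self, neighbor_id):
        self.last_heard[neighbor_id] = time.time()

    def alive_neighbors(self):
        now = time.time()
        return [n for n, t in self.last_heard.items()
                if now - t <= LIVENESS_TIMEOUT_S]

table = NeighborTable()
table.on_beacon("node-17")
print(table.alive_neighbors())            # ['node-17'] until the timeout expires
```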

To save battery energy, sensor nodes stay in sleep mode for most of the time; and each of them periodically wakes up to sense its surroundings and record its measurements as sensed readings. For data acquisition, an entire sensor network (i.e., a set of sensor nodes *N*) presents a set of sensed reading values *V*, notationally, *V* = {*vn* | *n* ∈ *N*} where *vn* is a sensed reading value provided by a sensor node *n*. Based on *V*, data analysis is conducted to understand the entire sensed area. As already discussed, it is very costly to collect *V* from all sensor nodes. Accordingly, some research results were reported in the literature exploring techniques to collect a subset of sensed readings *V*′ (⊂ *V*) from a subset of sensor nodes *N*′ (⊂ *N*), while the collected readings may only provide approximate analytical results. The following are two sorts of techniques. Sampling is the first technique, in which sensed readings are only collected from some (randomly) selected sensor nodes Biswas et al. (2004); Doherty & Pister (2004); Huang et al. (2011). The unselected sensor nodes do not need to provide their sensed readings. The sampling rate is adjustable according to the energy budget. The second technique is based on a certain prediction model Silberstein et al. (2006): some sensed readings can be omitted from being sent as long as they can be (approximately) predicted from other sensed readings, which can come from some neighboring sensor nodes, or from the previous sensed reading values of the same sensor nodes. Meanwhile, another important research direction for energy efficient data acquisition based on in-network query processing Hellerstein et al. (2003) has been extensively studied; and we shall review some of the representative works in the coming four sections.
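
The sampling idea can be sketched in a few lines: query only a random subset *N*′ of the nodes and use its readings to approximate a network-wide statistic, with the sampling rate trading energy for accuracy. The node names and sensed values below are synthetic and purely illustrative.

```python
import random

def sample_readings(readings, rate):
    """Collect readings only from a random fraction `rate` of the nodes (N')."""
    nodes = list(readings)
    k = max(1, int(rate * len(nodes)))
    chosen = random.sample(nodes, k)
    return {n: readings[n] for n in chosen}

# Illustrative sensed values V = {v_n | n in N}; a real deployment would pull
# these from the selected nodes over the radio instead.
V = {f"n{i}": 20.0 + random.gauss(0, 1.5) for i in range(100)}
V_sub = sample_readings(V, rate=0.2)          # 20% sampling rate
approx_mean = sum(V_sub.values()) / len(V_sub)
print(f"approximate mean from {len(V_sub)} of {len(V)} nodes: {approx_mean:.2f}")
```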

#### **3. In-network data aggregation**

Data aggregation is often used to summarize a large dataset. With respect to all sensed readings *V* from all sensor nodes *N*, an aggregate function *f* is applied on *V* to obtain a single aggregated value, i.e., *f*(*V*). Some commonly used aggregate functions include SUM, COUNT, MEAN, VARIANCE, MAX and MIN, etc. Aggregated data can provide a very small summary of sensed readings (e.g., the highest, average and lowest temperature) in a sensed area. In many situations, it can be sufficient for scientists to know about a remote sensed area. Besides, aggregated data is usually small to transmit, and data aggregation is not very computationally expensive for sensor nodes to perform, so in-network data aggregation is very suitable to sensor networks. In the following, we discuss two major strategies, namely, *infrastructure-based approaches* and *itinerary-based approaches*, for in-network data aggregation.

#### **3.1 Infrastructure-based data aggregation**

As their name suggests, infrastructure-based approaches build certain routing structures among sensor nodes to perform in-network data aggregation. TAG Madden et al. (2002) and COUGAR Yao & Gehrke (2003) are two representative infrastructure-based approaches. They both form a routing tree to disseminate a query and to derive aggregated sensed readings in a divide-and-conquer fashion. The rationale behind these approaches rests on two ideas. First, some aggregate functions *f* are decomposable, so that *f*(*V*) can be transformed to *f*(*f*(*V*1), *f*(*V*2), ··· *f*(*Vx*)), where *V*1, *V*2, ··· *Vx* are sensed reading values from *x* disjoint subsets of sensor nodes whose union equals *V*, and *f* can be applied to readings from individual subsets of sensor nodes and to their aggregated readings. For example, SUM(*V*), where SUM adds all sensed reading values, can be performed as SUM(SUM(*V*1), SUM(*V*2), ··· SUM(*Vx*)). Second, the connections among sensor nodes can be organized as a tree topology, in which the root of any subtree that covers a disjoint subset of some sensor nodes can carry out local aggregation on data from its descendant nodes. In other words, in-network data aggregation incrementally computes aggregated values at different levels in a routing tree.
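
A small illustration of the decomposability property follows: partial aggregates computed over disjoint subsets can be merged into the network-wide value, and carrying a (sum, count) pair makes MEAN decomposable as well. All names and values here are illustrative, not part of TAG or COUGAR.

```python
def partial_aggregate(values):
    """Aggregate one disjoint subset V_i into a mergeable partial state."""
    return {"sum": sum(values), "count": len(values),
            "max": max(values), "min": min(values)}

def merge(p1, p2):
    """Apply f to partial results: SUM, COUNT, MAX and MIN all decompose."""
    return {"sum": p1["sum"] + p2["sum"], "count": p1["count"] + p2["count"],
            "max": max(p1["max"], p2["max"]), "min": min(p1["min"], p2["min"])}

V1, V2, V3 = [2, 4], [5, 3], [7, 1, 6]          # disjoint subsets of readings
total = merge(merge(partial_aggregate(V1), partial_aggregate(V2)),
              partial_aggregate(V3))
print(total["sum"], total["sum"] / total["count"])   # SUM(V) and MEAN(V)
```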

Fig. 1. Strategies for in-network data aggregation: (a) routing tree; (b) itinerary-based approach.

Figure 1(a) exemplifies a routing tree formed for data aggregation. In brief, upon receiving a SUM query for the total of sensed reading values from its connected computer terminal, a base station disseminates the query to sensor nodes within a specified queried region. The specified queried region can be a small area or an entire sensed area. Within the queried region, sensor nodes join the routing tree when they receive the query. A node becomes the parent node of its neighboring nodes in the routing tree if those nodes receive the query from it. In a routing tree, the first queried node within the region serves as the root. Meanwhile, every non-root tree node should have another sensor node as its parent node, and non-leaf nodes are connected to some other nodes as their child nodes.

After the tree is built, data aggregation starts from the leaf nodes. The leaf nodes send their sensed reading values to their parent nodes. Thereafter, every non-leaf node derives an aggregated value based on the (aggregated) sensed reading values received from its child nodes and its own sensed reading value. As shown in Figure 1(a), leaf nodes *n*1, *n*2, *n*3 first send their reading values of 2, 4 and 5, respectively, to their parent node *n*4. Then, *n*4 calculates the sum of their values and its own sensed reading value of 3, i.e., 14, and propagates it to its parent node *n*5. Eventually, the root derives the final sum among all sensor nodes in the region and reports it to the base station.
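
A tiny simulation of the bottom-up SUM along the routing tree of Figure 1(a) is sketched below. The topology and the readings mirror the example above; *n*5's own reading is not given in the text, so a placeholder value of 0 is assumed.

```python
# Bottom-up SUM over the routing tree of Figure 1(a): n1, n2, n3 report to n4,
# which adds its own reading and forwards the partial sum toward the root.
readings = {"n1": 2, "n2": 4, "n3": 5, "n4": 3, "n5": 0}   # n5's value assumed
children = {"n4": ["n1", "n2", "n3"], "n5": ["n4"]}

def aggregate(node):
    """Partial SUM at `node`: its own reading plus its children's partial sums."""
    return readings[node] + sum(aggregate(c) for c in children.get(node, []))

print(aggregate("n4"))   # 14, as in the example
print(aggregate("n5"))   # partial sum forwarded toward the base station
```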

#### **3.2 Itinerary-based data aggregation**

The infrastructure-based approaches rely on an infrastructure to perform in-network data aggregation, incurring two rounds of messages for query dissemination and data collection. However, in the presence of sensor node failure, queries and aggregated sensed readings can be lost, making these approaches not very robust and reliable. Some additional research works Manjhi et al. (2005) were proposed to improve the robustness and reliability of routing trees by replicating aggregated values and sending them through different paths towards the root. However, this incurs extra data communication cost. To reduce the number of messages, we have recently developed itinerary-based data aggregation Xu et al. (2006).

The basic idea of itinerary-based data aggregation is to navigate a query among sensor nodes in a queried region as illustrated in Figure 1(b). In every step, a query message that carries both a query specification and an intermediate query result is strategically sent from one sensor node to another along a designed space-filling path called an *itinerary*. The width of an itinerary is bounded by the maximum radio transmission range. Sensor nodes participating in forwarding a query message are called *Q*-nodes. After it receives a query message, a *Q*-node asks its neighboring nodes for their sensed readings. Then, the *Q*-node incorporates all received sensed readings and its own reading into the intermediate query result. Thereafter, it forwards the query message with the new intermediate query result to a succeeding *Q*-node. Here, the succeeding *Q*-node is chosen by the current *Q*-node. If a *Q*-node fails, its preceding *Q*-node can detect this and re-propagates the query message to another sensor node as a replacement *Q*-node. As such, the itinerary can be resumed from that new *Q*-node. The evaluation of a query completes when the specified region is completely traversed. Finally, the query result is returned to the base station.
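
One hop of this process can be sketched schematically as below: a *Q*-node folds its neighbors' readings and its own into the partial result carried by the query message, then hands the message to the next *Q*-node along the itinerary. Radio details, itinerary planning and failure handling are omitted, and all names are assumptions rather than the actual implementation from Xu et al. (2006).

```python
def q_node_step(query_msg, own_reading, neighbor_readings, next_q_node):
    """One hop of itinerary-based aggregation (SUM shown; other decomposable
    aggregates work the same way). `query_msg` carries the query spec and the
    intermediate result accumulated so far."""
    partial = query_msg["partial_result"]
    partial += own_reading + sum(neighbor_readings)
    updated_msg = dict(query_msg, partial_result=partial)
    return next_q_node, updated_msg        # forwarded along the itinerary

msg = {"query": "SUM over region R", "partial_result": 0}
hop, msg = q_node_step(msg, own_reading=3, neighbor_readings=[2, 4, 5],
                       next_q_node="q2")
print(hop, msg["partial_result"])          # q2 14
```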

Fig. 2. Parallel and hybrid itinerary: (a) parallel itinerary; (b) hybrid itinerary.

On the other hand, the length of an itinerary directly affects the query processing time. A single itinerary takes a very long processing time, especially in a large query region. Thus, as opposed to the single itinerary shown in Figure 1(b), a parallel itinerary has been developed to improve query processing time. As depicted in Figure 2(a), an itinerary is split into four threads scanning four rows in a region. Their intermediate query results are then aggregated at the end of the rows. However, wireless signals from two adjacent threads may lead to signal interference, message loss and finally data retransmission. As a result, longer time and more energy are consumed. To address this issue, a hybrid itinerary has been derived accordingly. Here, a query region is divided into several sections that contain multiple rows. Inside each section, a single itinerary scans all the rows. For instance, as in Figure 2(b), a query region is partitioned into two sections, each covering two rows. Within each section, a sequential itinerary is formed. Now, because of the wider separation, the impact of signal interference is minimized while a higher degree of parallelism is achieved compared with a single itinerary.
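
The partitioning behind the parallel and hybrid itineraries can be sketched as grouping the rows of the query region into sections of *k* adjacent rows, where each section is scanned by one sequential itinerary and sections run in parallel; *k* trades parallelism against interference between adjacent threads. This is a hypothetical illustration, not the actual itinerary planner.

```python
def hybrid_sections(num_rows, rows_per_section):
    """Group the rows of a query region into sections; each section is scanned
    by a single sequential itinerary, and sections run in parallel."""
    return [list(range(start, min(start + rows_per_section, num_rows)))
            for start in range(0, num_rows, rows_per_section)]

print(hybrid_sections(num_rows=4, rows_per_section=1))  # parallel: 4 one-row threads
print(hybrid_sections(num_rows=4, rows_per_section=2))  # hybrid: 2 sections of 2 rows
```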

Through simulation, our itinerary-based approach has been demonstrated to outperform infrastructure-based approaches Xu et al. (2006). Besides, the idea of itinerary-based in-network query processing has also been adopted for other types of queries and applications, such as tracking nearest neighbor objects Wu et al. (2007).

## **4. In-network data caching**

Data caching is widely used in distributed computer systems to shorten remote data access latency. In sensor networks, data caching has one more important benefit: saving communication energy cost. Many existing research works focused on strategies for replicating frequently accessed sensed readings in sensor nodes closer to base stations Ganesan et al. (2003); Liu et al. (2004); Ratnasamy et al. (2002); Sadagopan et al. (2003); Shakkottai (2004); Zhang et al. (2007). In the presence of multiple base stations, the research problem of finding sensor nodes for caching sensed readings is formulated as determining a Steiner tree in a sensor network Prabh & Abdelzaher (2005). In a graph, a Steiner tree is a subgraph connecting all specified vertices and providing the smallest sum of edge distances Ivanov & Tuzhilin (1994). By caching data in the sensor nodes that serve as internal vertices (those connecting more than one edge) of a Steiner tree, the communication costs between the sensor nodes providing sensed readings and the base stations are guaranteed to be minimized.

On the other hand, existing data caching schemes do not support data aggregation. Accordingly, we have devised a new data caching scheme called *materialized in-network view* (MINV) to support SUM, AVERAGE, COUNT, VARIANCE aggregate functions Lee et al. (2007). Specifically, MINV maintains partially computed aggregated readings in some queried sensor nodes. Then, subsequent queries, which are issued by different base stations and which cover queried sensor nodes, can be fully or partially answered by cached results.

Figure 3(a) shows a motivating example of MINV. In the figure, a SUM query *Q*1 adds up the sensed readings of all sensor nodes in a query region at time *t*1. At later times *t*2 and *t*3, two other SUM queries, *Q*2 and *Q*3, respectively, are issued to summarize readings from sensor nodes in two other queried regions overlapping *Q*1's. Without a cache, all queries are processed independently. Ideally, if *Q*1's answer can be maintained and made accessible, *Q*2 and *Q*3 can be answered from cached data to save the energy costs of the entire sensor network.

Fig. 3. Materialized in-network view: (a) three SUM queries *Q*1, *Q*2 and *Q*3; (b) grid network and partial sums; (c) processing *Q*2 on MINV; (d) processing *Q*3 on MINV.

On the other hand, two major issues are faced in the development of MINV. The first and most critical issue is the presentation and placement of queried results. This directly affects the usability of cached data for any subsequent query. Another issue is about how a query can be processed if its answer is partially or fully available from the cache.

In MINV, we consider a sensed area structured into a grid as shown in Figure 3(b), as opposed to building any ad hoc routing structure that favors queries issued by some base stations at query time. Within every grid cell, denoted by *cell*(*x*, *y*), sensor nodes form a cluster and one of the sensor nodes is elected as the cluster head. Upon receiving a query, the cluster head collects sensed readings from all cluster members. Based on this setting, we can treat a sensor network as a grid of cluster heads. To answer aggregation queries, we assume parallel itinerary-based data aggregation as discussed in the previous section. Here, cluster heads serve as *Q*-nodes, forwarding queries and computing intermediate results. In addition to query processing, cluster heads cache every intermediate query result they receive and send. For grid cell *cell*(*x*, *y*), we denote the received intermediate query result as *init*(*x*, *y*) and the sent intermediate query result as *final*(*x*, *y*). As shown in Figure 3(b), the intermediate results derived and maintained for a SUM query (called *partial sums*) are accumulated and cached in cluster heads within queried regions. In the figure, the cluster head at *cell*(3, 4) maintains an initial partial sum (i.e., *init*(3, 4)) and a final partial sum (i.e., *final*(3, 4)) of 7 and 10, respectively, while its local reading is 3. Based on cached partial sums, the sum of sensed readings in all cells between *cell*(*x*, *y*) and *cell*(*x*′, *y*) in the same row *y* can be determined as *final*(*x*′, *y*) − *init*(*x*, *y*). As in the figure, the sum of sensed readings of sensor nodes from *cell*(3, 4) through *cell*(7, 4) can be calculated as 31 − 7 = 24.
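
A minimal sketch of this row-wise lookup follows: each cluster head caches *init*(*x*, *y*) and *final*(*x*, *y*), and the SUM over a run of cells in a row is a single subtraction. The cache layout is hypothetical and the intermediate values are only partially recoverable from the figure, so treat them as illustrative.

```python
# Cached partial sums for one grid row: cache[(x, y)] = (init, final).
# The end points reproduce the worked example in the text for row y = 4.
cache = {(3, 4): (7, 10), (4, 4): (10, 12), (5, 4): (12, 18),
         (6, 4): (18, 26), (7, 4): (26, 31)}

def row_sum(x_from, x_to, y):
    """SUM of readings in cells (x_from, y) .. (x_to, y) from cached partial sums."""
    init, _ = cache[(x_from, y)]
    _, final = cache[(x_to, y)]
    return final - init

print(row_sum(3, 7, 4))   # 31 - 7 = 24, as in the example
```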

wind speed, etc., should continuously change over the area. Data aggregation cannot effectively represent such spatially varied measurements. Thus, some other presentations, e.g., histogram, contour map, etc., should be used instead. Among those, contour maps are often used to present the approximate spatial distributions of measurements. On a contour map as illustrated in Figure 4(a), an area is divided into regions by some curves called *contour lines* and every contour line is labeled with one value. Thus, on a contour map, all measurements on a contour line labeled with *v* are equal to *v*, whereas measurements at some points not on any contour lines can be determined through interpolation according to their

Energy Efficient Data Acquistion in Wireless Sensor Network 205

(c) Contour line segment (d) Convex hulls in SVM

Very recently, the research of contour map computation in sensor networks has started to receive attention Liu & Li (2007); Meng et al. (n.d.); Xue et al. (2006). An earlier work Xue et al. (2006) was proposed to construct a contour map as a grid, in which each grid cell carries an aggregated single value. This grid presentation can facilitate recognition and matching spatial patterns of measurements with respect to some predefined patterns for event detection and phenomenon tracking. However, the grid presentation cannot provide very precise contour maps and it may incur a large communication cost to convey individual grid cell values,

base station

cluster head

(b) Clustered sensor network

cluster

contour line

straight-line distances to adjacent contour lines.

30

(a) Contour map

especially when grids of very fine granularity are used.

10 20

10

Fig. 4. Contour map computation

20

in cluster heads within queried regions. In the figure, cluster head at *cell*(3, 4) maintains an initial partial sum (i.e., *init*(3, 4)) and a final partial sum (i.e., *final*(3, 4)) as 7 and 10, respectively, while its local reading is 3. Based on cached partial sums, the sum of sensed readings in all cell between *cell*(*x*, *y*) and *cell*(*x*� , *y*) in the same row *y* can be determined as *final*(*x*� , *y*) − *init*(*x*, *y*). As in the figure, the sum of sensed readings of sensor nodes from *cell*(3, 4) through *cell*(7, 4) can be calculated as 31 − 7 = 24.

To answer another SUM query *Q*<sup>2</sup> whose region is fully covered by *Q*1's, *Q*<sup>2</sup> can simply traverse the border of its query region to collect cached partial sums. In Figure 3(c), *Q*<sup>2</sup> sums up *init*(3, 3), *init*(4, 3), *init*(5, 3) and *init*(6, 3), i.e., 3 + 7 + 2 + 3 = 15, from the left side of its query region. Thereafter, it calculates the sum of *final*(6, 7), *final*(5, 3), *final*(4, 3) and *final*(3, 7), i.e., 19 + 18 + 31 + 17 = 85 from the right side of the region, and subtracts 15 from it. Now the final sum is 70. Notice that only cluster heads on the border of a query region are accessed for cached partial sums and participate in query passing. By using the cache, messages between cluster heads and their members are saved. Besides, some internal grid cells inside a given query region are not accessed at all, further reducing energy costs.
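
The cache arithmetic in the two examples above can be made concrete with a short sketch. The following Python fragment is only an illustration under assumed names and toy numbers (the `MINVCache` class, its methods and the values used for *cell*(7, 4) are not from the chapter); it shows how row sums and a border-only region sum could be derived from cached *init* and *final* values.

```python
# Minimal sketch (not the authors' implementation) of answering SUM queries from
# cached MINV partial sums.  Grid cells are keyed by (x, y); for each queried cell
# we cache init(x, y) (partial sum received) and final(x, y) (partial sum sent).

class MINVCache:
    def __init__(self):
        self.init = {}    # (x, y) -> partial sum before adding the cell's cluster
        self.final = {}   # (x, y) -> partial sum after adding the cell's cluster

    def record(self, x, y, received, cluster_sum):
        """Cache the intermediate results a cluster head sees for one query."""
        self.init[(x, y)] = received
        self.final[(x, y)] = received + cluster_sum

    def row_sum(self, x_from, x_to, y):
        """Sum over cells (x_from..x_to, y), i.e. final(x_to, y) - init(x_from, y)."""
        return self.final[(x_to, y)] - self.init[(x_from, y)]

    def region_sum(self, x_from, x_to, y_from, y_to):
        """Sum over a rectangular region fully covered by the cache, visiting only
        the left and right border cells of each row."""
        return sum(self.row_sum(x_from, x_to, y) for y in range(y_from, y_to + 1))

# Toy usage mirroring the example in the text: final(7, 4) - init(3, 4) = 31 - 7 = 24.
cache = MINVCache()
cache.record(3, 4, received=7, cluster_sum=3)    # init(3,4)=7, final(3,4)=10
cache.record(7, 4, received=26, cluster_sum=5)   # illustrative values giving final(7,4)=31
print(cache.row_sum(3, 7, 4))                    # 24
```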

Some queries may have their query regions only partially covered by previous queries. In these cases, a query needs to be decomposed into subqueries, where each subquery covers one disjoint subregion. The final query result is then computed by aggregating the subquery results. For instance, *Q*3's region is partially covered by *Q*1's. Thus, it is partitioned into three subqueries *Q*3*a*, *Q*3*b* and *Q*3*c* as illustrated in Figure 3(d). While *Q*3*a* is totally answered by the cached partial sums, *Q*3*b* and *Q*3*c* are performed as separate SUM queries. The answer of *Q*3 is then obtained by adding the sums from these subqueries.

Thus far, the above discussion has implicitly assumed that cache information is available to every base station. In fact, it is not energy efficient to make cache information available everywhere. In MINV, we consider that the cache information, i.e., the initial and final intermediate results, is only maintained in queried grid cells. In this setting, cache discovery becomes an issue. To determine whether a cache is available for a query, we introduced a probing stage into every query evaluation, as illustrated in Figure 3(d). The main idea of this probing stage is as follows. When a query reaches the (nearest) corner of its query region, it traverses to the diagonally opposite corner and checks whether a cache is present in the traversed cells on the diagonal line. If no cache is discovered, there are two possible implications: (i) no cache is available inside the query region, or (ii) a cache, if it exists, has such a small overlap with the query region that it is not useful to the query. If no cache is used, the query is executed directly from the farthest corner. Otherwise, the query is transformed into subqueries that access the cache and derive aggregated reading values in the remaining divided areas. Notice that this additional probing stage introduces a little extra communication cost compared to evaluating queries directly, which usually derives query results at the farthest corner of the query region and sends the results from there back to the base station. Besides, in some cases, such as a query region fully covered by a cache (e.g., *Q*2 as discussed above), the probing stage can be omitted.
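
As a rough illustration of the decomposition step described above, the sketch below splits a rectangular query region into the part answered by a cached region and the remaining disjoint subqueries. The rectangle representation and helper names are assumptions for illustration, not the chapter's actual procedure.

```python
# Illustrative decomposition of a query region partially covered by a cached region.
# Rectangles are (x1, y1, x2, y2) in grid-cell coordinates, inclusive.

def intersect(q, c):
    x1, y1 = max(q[0], c[0]), max(q[1], c[1])
    x2, y2 = min(q[2], c[2]), min(q[3], c[3])
    return (x1, y1, x2, y2) if x1 <= x2 and y1 <= y2 else None

def decompose(query, cached):
    """Return (cached_part, uncached_subqueries) for a rectangular query region."""
    hit = intersect(query, cached)
    if hit is None:
        return None, [query]                                    # no useful cache: run as-is
    qx1, qy1, qx2, qy2 = query
    hx1, hy1, hx2, hy2 = hit
    rest = []
    if qy1 < hy1: rest.append((qx1, qy1, qx2, hy1 - 1))         # strip below the cached part
    if hy2 < qy2: rest.append((qx1, hy2 + 1, qx2, qy2))         # strip above the cached part
    if qx1 < hx1: rest.append((qx1, hy1, hx1 - 1, hy2))         # strip left of the cached part
    if hx2 < qx2: rest.append((hx2 + 1, hy1, qx2, hy2))         # strip right of the cached part
    return hit, rest

cached_part, subqueries = decompose(query=(2, 2, 9, 6), cached=(4, 1, 7, 4))
print(cached_part)   # (4, 2, 7, 4): answered from cached partial sums
print(subqueries)    # disjoint remaining regions evaluated as separate SUM queries
```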

#### **5. In-network contour map computation**

As discussed in the previous two sections, data aggregation was used to compute a single aggregated value representing the measurements for an entire sensed area or a query region. For a large sensed area, certain measurements recorded by sensor nodes, e.g., temperature, wind speed, etc., change continuously over the area. Data aggregation cannot effectively represent such spatially varied measurements. Thus, some other representations, e.g., histograms or contour maps, should be used instead. Among those, contour maps are often used to present the approximate spatial distribution of measurements. On a contour map, as illustrated in Figure 4(a), an area is divided into regions by curves called *contour lines*, and every contour line is labeled with one value. Thus, on a contour map, all measurements on a contour line labeled with *v* are equal to *v*, whereas measurements at points not on any contour line can be determined through interpolation according to their straight-line distances to adjacent contour lines.

Very recently, the research on contour map computation in sensor networks has started to receive attention Liu & Li (2007); Meng et al. (n.d.); Xue et al. (2006). An earlier work Xue et al. (2006) constructs a contour map as a grid in which each grid cell carries a single aggregated value. This grid representation can facilitate recognizing and matching spatial patterns of measurements against predefined patterns for event detection and phenomenon tracking. However, the grid representation cannot provide very precise contour maps, and it may incur a large communication cost to convey individual grid cell values, especially when grids of very fine granularity are used.

Fig. 4. Contour map computation: (a) contour map; (b) clustered sensor network; (c) contour line segment; (d) convex hulls in SVM.


Motivated by the importance of contour maps in sensor networks, we have developed a Contour Map Engine (CME) to compute contour maps in sensor networks Xu et al. (2008). More precisely, CME computes contour lines, which can be represented by the coefficients of certain curve/line equations and thus are compact to transmit. In a sensor network, every small area is assumed to be monitored by a cluster of sensor nodes as shown in Figure 4(b). Periodically, a cluster head collects sensed readings from all sensor nodes. Based on their spatial locations and reported sensed readings, the cluster head determines a contour line segment for the area and sends it to a base station. Finally, the base station connects all received contour line segments and constructs a contour map.

Logically, a contour line with respect to a given *vc* divides a given area into subareas on its two sides, as in Figure 4(c). On one side, all sensor nodes provide reading values not greater than *vc*, whereas all sensor nodes on the other side have readings not smaller than *vc*. Some sensor nodes reporting sensed readings equal to *vc* may be distributed along the contour line. Further, given the reading values and locations of individual sensor nodes, partitioning an area by a contour line segment is essentially a binary classification problem. In light of this, the design of CME uses the support vector machine (SVM) Christianini & Shawe-Taylor (2000), a commonly used data mining technique, to determine contour line segments. In a cluster of sensor nodes *N*′, each sensor node *n* (∈ *N*′) provides its location **x***n* and its classified value *yn*, which can be either −1 or +1, according to its own sensed reading *vn* and the contour line value *vc*:

$$y_n = \begin{cases} +1 & v_n \ge v_c \\ -1 & v_n < v_c \end{cases}$$

Next, we define the classification boundary (i.e., the contour line segment) as a hyperplane given by a pair of coefficients (*w*, *b*) such that *wT***x** + *b* = 0. Based on this, we can estimate an expected *y*ˆ for any location **x**, which may not host any sensor node, as

$$\hat{y} = \text{sgn}(w^T \mathbf{x} + b) = \begin{cases} +1 & w^T \mathbf{x} + b \ge 0 \\ -1 & w^T \mathbf{x} + b < 0 \end{cases}$$

Now, the classification boundary in SVM is derived to maximize the margin between the convex hulls of the two sets, so that the classification error for unknown locations is minimized, as depicted in Figure 4(d). The distance between any location **x** and the classification boundary is |*wT***x** + *b*| / ||*w*||. The optimal classification boundary is derived by maximizing the margin, which can be written with Lagrange multipliers *αn* as follows:

$$\max_{\alpha} W(\alpha) = \sum_{n \in N'} \alpha_n - \frac{1}{2} \sum_{n \in N'} \sum_{m \in N'} \alpha_n \alpha_m y_n y_m \mathbf{x}_n^T \mathbf{x}_m$$

subject to *αn* > 0 and ∑*n*∈*N*′ *αnyn* = 0. Finally, max*α* *W*(*α*) can be solved by traditional quadratic optimization.

Thus far, our discussion has assumed that a single linear contour line segment is formed. To handle non-linear classification, CME utilizes space transformation to divide sensor nodes into sub-clusters according to some sample training data. Then, contour line segments are derived from the individual sub-clusters. Interested readers can refer to the details in Xu et al. (2008). Some other recent works (e.g., Zhou et al. (2009)) have been presented in the literature to improve the precision of contour line segments by using more sophisticated techniques.
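
For readers who want to experiment with the SVM view of contour-line extraction, the following sketch fits a linear separating hyperplane to one cluster's readings. It is only an illustration under assumed inputs: the sensor locations, readings, the contour value *vc* and the use of scikit-learn's `SVC` (with a large `C` to approximate the hard-margin dual above) are assumptions, not part of CME itself.

```python
# Minimal sketch of deriving a linear contour-line segment from one cluster's readings
# with a linear SVM, in the spirit of CME.  scikit-learn is an assumed dependency.
import numpy as np
from sklearn.svm import SVC

v_c = 20.0                                    # contour value the segment should trace
locations = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.1],   # x_n: sensor positions
                      [0.1, 2.0], [1.1, 2.1], [2.2, 1.9]])
readings = np.array([12.0, 14.0, 16.0, 24.0, 26.0, 22.0])    # v_n: sensed values

labels = np.where(readings >= v_c, 1, -1)     # y_n = +1 if v_n >= v_c, else -1

clf = SVC(kernel="linear", C=1e6)             # large C approximates the hard-margin dual
clf.fit(locations, labels)

w, b = clf.coef_[0], clf.intercept_[0]        # hyperplane w^T x + b = 0 is the segment
print("w =", w, "b =", b)

# Classify an arbitrary location exactly as y_hat = sgn(w^T x + b)
x = np.array([1.0, 1.0])
print("side of contour:", np.sign(w @ x + b))
```

Only the pair (*w*, *b*) needs to be transmitted to the base station, which is what makes the representation compact.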

#### **6. In-network probabilistic data aggregation**


Sensor reading values are inherently noisy and somewhat uncertain because of possible inaccurate sensing, environmental noise, hardware defects, etc. Thus, data uncertainty is another important issue in sensor data analysis. In the literature, uncertain data management has been extensively studied and various models have been developed to provide the semantics of the underlying data and queries Faradjian et al. (2002); Prabhakar & Cheng (2009). However, these existing works adopt centralized approaches Faradjian et al. (2002); Prabhakar & Cheng (2009), which are energy inefficient as already discussed. In-network uncertain data aggregation appears to be a new research direction.

Very recently, we have started to investigate a variety of in-network data aggregation techniques for some common aggregation queries. In the following, we discuss one of our recent works on the probabilistic minimum value query (PMVQ) Ye, Lee, Lee, Liu & Chen (to appear). A probabilistic minimum value query searches for the possible minimum sensed reading value(s).

Fig. 5. Example sensor network and minimum value probability: (a) an example sensor network of four sensor nodes with readings *r*1: {(5, 0.5), (6, 0.5)}, *r*2: {(3, 0.4), (4, 0.6)}, *r*3: {(4, 0.4), (5, 0.2), (6, 0.4)}, *r*4: {(3, 0.1), (4, 0.1), (5, 0.6), (6, 0.2)}; (b) minimum value probabilities: *Pr*[*vmin* = 3] = 0.46, *Pr*[*vmin* = 4] = 0.54, *Pr*[*vmin* = 5] = *Pr*[*vmin* = 6] = 0.

Figure 5(a) shows an example sensor network of four sensor nodes. Each sensor node *ni* maintains a probabilistic sensed reading *ri*, i.e., a set of possible values {*vi*,1, ··· , *vi*,|*ri*|}. Each value *vi*,*k* is associated with a non-zero probability *pi*,*k* of being the real sensed reading value, and the sum of all *pi*,*k* (1 ≤ *k* ≤ |*ri*|) equals 1. The sensed reading *ri* of each example sensor node *ni* is shown next to the node. For *n*1, the actual sensed reading value may be either 5 with a probability of 0.5 or 6 with the same probability. Since every sensed reading has several possible values, it is not trivial to say that 3, the smallest possible value among all, is the minimum, since it may not actually exist. On the other hand, 4 can be the true minimum when 3 is not real. As such, more than one value can simultaneously be a candidate minimum value. Thus, the minimum value probability of *v* being the minimum *vmin* among all possible sensed reading values, denoted by *Pr*[*vmin* = *v*], is introduced and defined as below:

$$\Pr[v_{min} = v] = \prod_{n_i \in N} \Pr[r_i \ge v] - \prod_{n_i \in N} \Pr[r_i > v]$$

In our example, *Pr*[*vmin* = 3] is equal to (1 · 1 · 1 · 1) − (1 · 0.6 · 1 · 0.9) = 0.46, *Pr*[*vmin* = 4] is equal to (1 · 0.6 · 1 · 0.9) − (1 · 0 · 0.6 · 0.8) = 0.54, and both *Pr*[*vmin* = 5] and *Pr*[*vmin* = 6] are 0, as listed in Figure 5(b). Hence, the minimum value query result includes 3 and 4, whose minimum value probabilities are greater than 0.
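
The minimum value probability above is easy to reproduce directly from the definition. The sketch below (plain Python, with the readings of Figure 5(a) encoded as dictionaries; the helper names are illustrative) evaluates *Pr*[*vmin* = *v*] for every candidate value.

```python
# Minimal sketch of Pr[v_min = v] = prod_i Pr[r_i >= v] - prod_i Pr[r_i > v],
# using the probabilistic readings shown in Figure 5(a).
from math import prod

readings = {                       # r_i: {value: probability}, probabilities sum to 1
    "n1": {5: 0.5, 6: 0.5},
    "n2": {3: 0.4, 4: 0.6},
    "n3": {4: 0.4, 5: 0.2, 6: 0.4},
    "n4": {3: 0.1, 4: 0.1, 5: 0.6, 6: 0.2},
}

def pr_at_least(r, v):             # Pr[r_i >= v]
    return sum(p for val, p in r.items() if val >= v)

def pr_greater(r, v):              # Pr[r_i > v]
    return sum(p for val, p in r.items() if val > v)

def min_value_probability(v):
    return (prod(pr_at_least(r, v) for r in readings.values())
            - prod(pr_greater(r, v) for r in readings.values()))

candidates = sorted({v for r in readings.values() for v in r})
for v in candidates:
    print(v, round(min_value_probability(v), 2))
# Expected output: 3 -> 0.46, 4 -> 0.54, 5 -> 0.0, 6 -> 0.0
```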


To evaluate PMVQ in sensor networks, we have devised two algorithms, namely, *Minimum Value Screening (MVS) algorithm* and *Minimum Value Aggregation (MVA) algorithm*. Both of the algorithms evaluate PMVQs in sensor networks organized as routing trees. We describe them in the following.

**MVS Algorithm**. Suppose that there are two probabilistic sensed readings *ri* and *rj* from two sensor nodes *ni* and *nj*, where *ri* = {*vi*,1, ··· , *vi*,|*ri*|} and *rj* = {*vj*,1, ··· , *vj*,|*rj*|}. A value *vj* (∈ *rj*) is certainly not the minimum if *ri* has all its values smaller than it, i.e., ∀*vi* ∈ *ri*, *vi* < *vj*. Then, *vj* can be safely discarded. Based on this idea, we introduced a notion called MiniMax. Among the sensed readings from a subset of sensor nodes *N*′, the MiniMax, denoted by MiniMax(*N*′), represents the largest possible value of the minimum; formally,

$$\text{MiniMax}(N') = \min_{n_i \in N'} \max_{v_i \in r_i} \{v_i\}.$$

Fig. 6. MVS and MVA algorithms: (a) MVS algorithm; (b) MVA algorithm.

This MiniMax notion is used to screen out values that cannot be the minimum. We use Figure 6(a) to illustrate how MiniMax is determined and used by the MVS algorithm to eliminate some values and their probabilities from being propagated in a routing tree. First, *n*4 sends its sensed reading values to *n*3, which in turn deduces MiniMax({*n*3, *n*4}), i.e., 6. Thus, *n*3 propagates all its and *n*4's sensed reading values to *n*1. On the other hand, *n*2 submits its sensed reading values to *n*1. Now, *n*1, i.e., the base station, determines MiniMax({*n*1, *n*2, *n*3, *n*4}), which equals 4. Thus, only *n*2's {(3, 0.4), (4, 0.6)}, *n*3's {(4, 0.4)} and *n*4's {(3, 0.1), (4, 0.1)} are further propagated to the connected terminal. Later, it determines the final result values according to their minimum value probabilities.
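
A compact way to see the screening effect is to compute MiniMax and filter the readings accordingly. The following sketch (illustrative helper names; the readings are those of Figure 5(a)) reproduces the two MiniMax values and the screened value sets mentioned above.

```python
# Minimal sketch of the MVS screening step: compute MiniMax(N') and drop every
# candidate value (and its probability) larger than it before forwarding.

readings = {
    "n1": {5: 0.5, 6: 0.5},
    "n2": {3: 0.4, 4: 0.6},
    "n3": {4: 0.4, 5: 0.2, 6: 0.4},
    "n4": {3: 0.1, 4: 0.1, 5: 0.6, 6: 0.2},
}

def minimax(nodes):
    """MiniMax(N') = min over n_i in N' of (max over v_i in r_i of v_i)."""
    return min(max(readings[n]) for n in nodes)

def screen(nodes):
    """Keep only values that could still be the minimum among the given nodes."""
    bound = minimax(nodes)
    return {n: {v: p for v, p in readings[n].items() if v <= bound} for n in nodes}

print(minimax(["n3", "n4"]))                  # 6: n3 forwards everything to n1
print(minimax(["n1", "n2", "n3", "n4"]))      # 4: values above 4 are screened out
print(screen(["n1", "n2", "n3", "n4"]))
# n1 -> {}, n2 -> {3: 0.4, 4: 0.6}, n3 -> {4: 0.4}, n4 -> {3: 0.1, 4: 0.1}
```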

**MVA Algorithm**. The MVA algorithm computes *Pr*[*vmin* = *v*] for each candidate value *v* incrementally during data propagation, since the computation of *Pr*[*vmin* = *v*] is decomposable. Recall that *Pr*[*vmin* = *v*] is computed from two terms, i.e., ∏*ni*∈*N* *Pr*[*ri* ≥ *v*] and ∏*ni*∈*N* *Pr*[*ri* > *v*]. These two terms can be factorized when *N* is divided into *x* disjoint subsets *N*1, *N*2, ··· , *Nx* as follows:

$$\prod_{n_i \in N} \Pr[r_i \ge v] = \prod_{j \in [1, x]} \prod_{n_i \in N_j} \Pr[r_i \ge v], \qquad \prod_{n_i \in N} \Pr[r_i > v] = \prod_{j \in [1, x]} \prod_{n_i \in N_j} \Pr[r_i > v].$$

Based on this, in any subtree covering some sensor nodes *Ni*, the root can calculate ∏*ni*∈*Ni* *Pr*[*ri* ≥ *v*] and ∏*ni*∈*Ni* *Pr*[*ri* > *v*] for every value *v*. Then, only the value and these two terms are sent to its parent, instead of all individual sensed reading values as needed by the MVS algorithm.


Further, due to the fact that *Pr*[*vmin* = *v*] should be zero whenever ∏*ni*∈*Ni* *Pr*[*ri* ≥ *v*] = ∏*ni*∈*Ni* *Pr*[*ri* > *v*] for any non-empty *Ni*, it is safe to omit such a value *v* from being propagated. In addition, for integer sensed reading values, ∏*ni*∈*Ni* *Pr*[*ri* > *v*] is equal to ∏*ni*∈*Ni* *Pr*[*ri* ≥ *v* + 1]. Therefore, either ∏*ni*∈*Ni* *Pr*[*ri* > *v*] or ∏*ni*∈*Ni* *Pr*[*ri* ≥ *v* + 1] can be sent to a parent node, and the omitted probabilities can be deduced by the parent node.

Figure 6(b) illustrates the MVA algorithm. First, *n*4 sends each of its values *v* together with *Pr*[*vmin* ≥ *v*], i.e., (3, 1.0), (4, 0.9), (5, 0.8), (6, 0.2), to *n*3. Similarly, *n*2 sends (3, 1.0) and (4, 0.6) to *n*1. Then, *n*3 calculates *Pr*[*vmin* ≥ *v*] for all the values it knows, i.e., 3, 4, 5 and 6. Next, *n*3 forwards (3, 1.0), (4, 0.9), (5, 0.48) and (6, 0.08) to *n*1. Further, *n*1 computes *Pr*[*vmin* = *v*] in the same way. However, *Pr*[*vmin* = 5] and *Pr*[*vmin* = 6] are both 0, so 5 and 6 are filtered out. At last, *n*1's *Pr*[*vmin* = 3] and *Pr*[*vmin* = 4] are determined and are greater than zero, so both 3 and 4 are in the query result.
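
The incremental MVA computation can be sketched as a bottom-up pass over the routing tree. The code below is only an illustration: the tree layout, helper names and the assumption of consecutive integer candidate values mirror the example of Figure 6(b), not a general implementation.

```python
# Minimal sketch of MVA-style in-network aggregation: each node maintains, for every
# candidate value v, the partial product prod Pr[r_i >= v] over its subtree, merges the
# partial products of its children, and forwards only (v, Pr[v_min >= v]) pairs.
from math import prod

readings = {
    "n1": {5: 0.5, 6: 0.5},
    "n2": {3: 0.4, 4: 0.6},
    "n3": {4: 0.4, 5: 0.2, 6: 0.4},
    "n4": {3: 0.1, 4: 0.1, 5: 0.6, 6: 0.2},
}
children = {"n1": ["n2", "n3"], "n2": [], "n3": ["n4"], "n4": []}  # routing tree rooted at n1

def pr_at_least(r, v):
    return sum(p for val, p in r.items() if val >= v)

def subtree_products(node, candidates):
    """Return {v: prod over the subtree of Pr[r_i >= v]}, merging children multiplicatively."""
    own = {v: pr_at_least(readings[node], v) for v in candidates}
    for child in children[node]:
        child_prod = subtree_products(child, candidates)
        own = {v: own[v] * child_prod[v] for v in candidates}
    return own

candidates = sorted({v for r in readings.values() for v in r})   # consecutive integers here
ge = subtree_products("n1", candidates)              # prod Pr[r_i >= v] over the whole network
gt = {v: ge.get(v + 1, 0.0) for v in candidates}     # integer values: Pr[r_i > v] = Pr[r_i >= v+1]
result = {v: ge[v] - gt[v] for v in candidates if ge[v] - gt[v] > 0}
print({v: round(p, 2) for v, p in result.items()})   # {3: 0.46, 4: 0.54}
```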

Compared with the MVS algorithm, the MVA algorithm considerably saves communication cost and battery energy. Through detailed cost analysis and simulation experiments, as reported in Ye, Lee, Lee, Liu & Chen (to appear), the MVA algorithm incurs a cost linear in the number of sensor nodes, while MVS incurs significantly larger communication costs as the number of sensor nodes increases.

In addition to the probabilistic minimum value query, we have also investigated other probabilistic queries in sensor networks, e.g., the probabilistic minimum node query (PMNQ) Ye, Lee, Lee, Liu & Chen (to appear), which searches for the sensor nodes that provide probabilistic minimum values, and the probabilistic top-k value query, which searches for the *k* smallest (or largest) values Ye, Lee, Lee & Liu (to appear).

#### **7. Summary and future directions**

Wireless sensor networks are important tools for many fields and applications. In sensor networks, data acquisition, which collects data from individual sensor nodes for analysis, is one of the essential activities. However, because of scarce sensor node battery energy, energy efficiency becomes a critical issue for the length of a sensor network's operational life. Over the years, many research works have studied various forms of in-network query processing as one of the remedies for conserving precious sensor node energy. With in-network query processing, queries are disseminated to and processed by sensor nodes, and a small volume of (derived) data, rather than raw sensed readings, is collected and transmitted over costly wireless communication. Subject to the supported types of queries and potential optimizations, a variety of in-network query processing techniques have been investigated and reported in the literature.

This chapter reviewed representative works on in-network data aggregation, data caching, contour map computation and probabilistic data aggregation. With respect to those areas, we also discussed our recent research results, namely, itinerary-based data aggregation, the materialized in-network view, the contour mapping engine and probabilistic minimum value search. Itinerary-based data aggregation navigates a query among sensor nodes in a queried region to obtain an aggregated value. Compared with infrastructure-based approaches, it incurs fewer rounds of messages and can easily deal with sensor node failure in the course of query processing. To boost the performance of multiple queries issued from different base stations, materialized in-network views provide partial results of previous queries to subsequent aggregation queries. This is different from existing works, which cache sensed readings independently and cannot directly support data aggregation. The contour mapping engine adopts data mining techniques to determine contour line segments in sensor networks, whereas some other works rely on centralized processing or provide less accurate contour maps. Last but not least, probabilistic minimum value search is our initial research result on uncertain sensed data aggregation. As sensed reading values are mostly imprecise, handling and querying probabilistic sensor data is currently an important ongoing research direction.

In addition, recent research studies have shown uneven energy consumption among sensor nodes: sensor nodes in some hotspot regions consume more energy than others Perillo et al. (2005). Such hotspot problems are currently studied from the networking side. Besides, heterogeneous sensor nodes are going to be very common in sensor networks. Thus, we anticipate that future in-network query processing techniques should be able to handle uneven energy consumption and to make use of super sensor nodes, while many existing works mainly presume homogeneous sensor nodes and consider even energy consumption.

#### **8. References**

Biswas, R., Thrun, S. & Guibas, L. J. (2004). A Probabilistic Approach to Inference with Limited Information in Sensor Networks, *Proceedings of the Third International Symposium on Information Processing in Sensor Networks (IPSN), Berkeley, CA, Apr 26-27*, pp. 269–276.

Bojkovic, Z. & Bakmaz, B. (2008). A Survey on Wireless Sensor Networks Deployment, *WSEAS Transactions on Communications* 7(12).

Christianini, N. & Shawe-Taylor, J. (2000). *An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods*, Cambridge University Press.

Doherty, L. & Pister, K. S. J. (2004). Scattered Data Selection for Dense Sensor Networks, *Proceedings of the Third International Symposium on Information Processing in Sensor Networks (IPSN), Berkeley, CA, Apr 26-27*, pp. 369–378.

Faradjian, A., Gehrke, J. & Bonnet, P. (2002). GADT: A Probability Space ADT for Representing and Querying the Physical World, *Proceedings of the 18th IEEE International Conference on Data Engineering (ICDE), San Jose, CA, Feb 26 - Mar 1*, pp. 201–211.

Faulkner, M., Olson, M., Chandy, R., Krause, J., Chandy, K. M. & Krause, A. (2011). The Next Big One: Detecting Earthquakes and Other Rare Events from Community-Based Sensors, *Proceedings of the 10th International Conference on Information Processing in Sensor Networks (IPSN), Chicago, IL, Apr 12-14*, pp. 13–24.

Ganesan, D., Estrin, D. & Heidemann, J. S. (2003). Dimensions: Why Do We Need a New Data Handling Architecture for Sensor Networks?, *Computer Communication Review* 33(1): 143–148.

Group, T. W. (n.d.). TinyOS, http://www.tinyos.net/.

Hellerstein, J. M., Hong, W., Madden, S. & Stanek, K. (2003). Beyond Average: Toward Sophisticated Sensing with Queries, *Proceedings of Information Processing in Sensor Networks, Second International Workshop (IPSN), Palo Alto, CA, Apr 22-23*, pp. 63–79.

Huang, Z., Wang, L., Yi, K. & Liu, Y. (2011). Sampling Based Algorithms for Quantile Computation in Sensor Networks, *Proceedings of the ACM SIGMOD International Conference on Management of Data (SIGMOD), Athens, Greece, Jun 12-16*, pp. 745–756.

Invanov, A. O. & Tuzhilin, A. A. (1994). *Minimal Networks: The Steiner Problem and Its Generalizations*, CRC Press.

Lee, K. C. K., Zheng, B., Lee, W.-C. & Winter, J. (2007). Materialized In-Network View for Spatial Aggregation Queries in Wireless Sensor Network, *ISPRS Journal of Photogrammetry and Remote Sensing* 62: 382–402.

Li, Z., Zhu, Y., Zhu, H. & Li, M. (2011). Compressive Sensing Approach to Urban Traffic Sensing, *Proceedings of IEEE International Conference on Distributed Computing Systems (ICDCS), Minneapolis, MN, Jun 20-24*, pp. 889–898.

Liu, X., Huang, Q. & Zhang, Y. (2004). Combs, Needles, Haystacks: Balancing Push and Pull for Discovery in Large-Scale Sensor Networks, *Proceedings of the 2nd ACM International Conference on Embedded Networked Sensor Systems (SenSys), Baltimore, MD, Nov 3-5*, pp. 122–133.

Liu, Y. & Li, M. (2007). Iso-Map: Energy-Efficient Contour Mapping in Wireless Sensor Networks, *Proceedings of the 27th IEEE International Conference on Distributed Computing Systems (ICDCS), Toronto, Ontario, Canada, Jun 25-29*, p. 36.

Madden, S. & Franklin, M. J. (2002). Fjording the Stream: An Architecture for Queries Over Streaming Sensor Data, *Proceedings of the 18th IEEE International Conference on Data Engineering, San Jose, CA, Feb 26 - Mar 1*, pp. 555–566.

Madden, S., Franklin, M. J., Hellerstein, J. M. & Hong, W. (2002). TAG: A Tiny AGgregation Service for Ad-Hoc Sensor Networks, *Proceedings of The 5th USENIX Symposium on Operating System Design and Implementation (OSDI), Boston, MA, Dec 9-11*.

Manjhi, A., Nath, S. & Gibbons, P. B. (2005). Tributaries and Deltas: Efficient and Robust Aggregation in Sensor Network Streams, *Proceedings of the ACM SIGMOD International Conference on Management of Data (SIGMOD), Baltimore, MD, Jun 14-16*, pp. 287–298.

Meng, X., Nandagopal, T., Li, L. & Lu, S. (n.d.). Contour Maps: Monitoring and Diagnosis in Sensor Networks, 50(15): 2920–2838.

Musaloiu-Elefteri, R., Liang, C.-J. M. & Terzis, A. (2008). Koala: Ultra-Low Power Data Retrieval in Wireless Sensor Networks, *Proceedings of the 7th International Conference on Information Processing in Sensor Networks (IPSN), St. Louis, MO, Apr 22-24*, pp. 421–432.

Perillo, M. A., Cheng, Z. & Heinzelman, W. B. (2005). An Analysis of Strategies for Mitigating the Sensor Network Hot Spot Problem, *Proceedings of the 2nd Annual International Conference on Mobile and Ubiquitous Systems (MobiQuitous), San Diego, Jul 17-21*, pp. 474–478.

Prabh, S. & Abdelzaher, T. F. (2005). Energy-Conserving Data Cache Placement in Sensor Networks, *ACM Transactions on Sensor Networks* 1(2): 178–203.

Prabhakar, S. & Cheng, R. (2009). Data Uncertainty Management in Sensor Networks, *Encyclopedia of Database Systems*, pp. 647–651.

Ratnasamy, S., Karp, B., Yin, L., Yu, F., Estrin, D., Govindan, R. & Shenker, S. (2002). GHT: a Geographic Hash Table for Data-Centric Storage, *Proceedings of the First ACM International Workshop on Wireless Sensor Networks and Applications (WSNA), Atlanta, GA, Sept 28*, pp. 78–87.

Sadagopan, N., Krishnamachari, B. & Helmy, A. (2003). The ACQUIRE Mechanism for Efficient Querying in Sensor Networks, *IEEE International Workshop on Sensor Network Protocols and Applications (SNPA'03), held in conjunction with the IEEE International Conference on Communications (ICC), Anchorage, AL*.

Shakkottai, S. (2004). Asymptotics of Query Strategies over a Sensor Network, *Proceedings of The 23rd IEEE Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM), Hong Kong, China, Mar 7-11*.

Silberstein, A., Braynard, R., Ellis, C. S., Munagala, K. & Yang, J. (2006). A Sampling-Based Approach to Optimizing Top-k Queries in Sensor Networks, *Proceedings of the 22nd International Conference on Data Engineering (ICDE), Atlanta, GA, Apr 3-8*, p. 68.

Vasilescu, I., Kotay, K., Rus, D., Dunbabin, M. & Corke, P. I. (2005). Data Collection, Storage, and Retrieval with an Underwater Sensor Network, *Proceedings of the 3rd ACM International Conference on Embedded Networked Sensor Systems (SenSys), San Diego, CA, Nov 2-4*, pp. 154–165.

Wu, S.-H., Chuang, K.-T., Chen, C.-M. & Chen, M.-S. (2007). DIKNN: An Itinerary-based KNN Query Processing Algorithm for Mobile Sensor Networks, *Proceedings of the 23rd IEEE International Conference on Data Engineering (ICDE), Istanbul, Turkey, Apr 15-20*, pp. 456–465.

Xu, Y., Lee, W.-C. & Mitchell, G. (2008). CME: A Contour Mapping Engine in Wireless Sensor Networks, *The 28th International Conferences on Distributed Computing Systems (ICDCS), Beijing, China, Jun 17-20*, pp. 133–140.

Xu, Y., Lee, W.-C. & Xu, J. (2007). Analysis of A Loss-Resilient Proactive Data Transmission Protocol in Wireless Sensor Networks, *Proceedings of 26th IEEE International Conference on Computer Communications, Joint Conference of the IEEE Computer and Communications Societies (INFOCOM), Anchorage, AL, May 6-12*, pp. 1712–1720.

Xu, Y., Lee, W.-C., Xu, J. & Mitchell, G. (2006). Processing Window Queries in Wireless Sensor Networks, *Proceedings of the 22nd International Conference on Data Engineering (ICDE), Atlanta, GA, Apr 3-8*, p. 70.

Xue, W., Luo, Q., Chen, L. & Liu, Y. (2006). Contour Map Matching for Event Detection in Sensor Networks, *Proceedings of the ACM SIGMOD International Conference on Management of Data, Chicago, IL, Jun 27-29*, pp. 145–156.

Yao, Y. & Gehrke, J. (2003). Query Processing in Sensor Networks, *Online Proceedings of The First Biennial Conference on Innovative Data Systems Research (CIDR), Asilomar, CA, Jan 5-8*.

Ye, M., Lee, K. C. K., Lee, W.-C., Liu, X. & Chen, M. C. (to appear). Querying Uncertain Minimum in Wireless Sensor Networks, *IEEE Transactions on Knowledge and Data Engineering*.

Ye, M., Lee, W.-C., Lee, D. L. & Liu, X. (to appear). Distributed Processing of Probabilistic Top-k Queries in Wireless Sensor Networks, *IEEE Transactions on Knowledge and Data Engineering*.

Zhang, W., Cao, G. & Porta, T. L. (2007). Data Dissemination with Ring-Based Index for Wireless Sensor Networks, *IEEE Transactions on Mobile Computing* 6(7): 832–847.

Zhou, Y., Xiong, J., Lyu, M. R., Liu, J. & Ng, K.-W. (2009). Energy-Efficient On-Demand Active Contour Service for Sensor Networks, *Proceedings of IEEE 6th International Conference on Mobile Adhoc and Sensor Systems (MASS), Macau, China, Oct 12-15*, pp. 383–392.

Zurich, T. W. R. G. E. (n.d.). The Sensor Network Museum, http://www.snm.ethz.ch/Main/HomePage.





## **Three-Dimensional Lineament Visualization Using Fuzzy B-Spline Algorithm from Multispectral Satellite Data**

Maged Marghany

*Institute of Geospatial Science and Technology (INSTeG) Universiti Teknologi Malaysia, UTM, Skudai, Johor Bahru Malaysia* 

## **1. Introduction**



A lineament is a linear feature in a landscape which is an expression of an underlying geological structure such as a fault. Typically a lineament will comprise a fault-aligned valley, a series of fault or fold-aligned hills, a straight coastline or indeed a combination of these features. Fracture zones, shear zones and igneous intrusions such as dykes can also give rise to lineaments. Lineaments are often apparent in geological or topographic maps and can appear obvious on aerial or satellite photographs. The term 'megalineament' has been used to describe such features on a continental scale. The trace of the San Andreas Fault might be considered an example. The Trans Brazilian Lineament and the Trans-Saharan Belt, taken together, form perhaps the longest coherent shear zone on the Earth, extending for about 4,000 km. Lineaments have also been identified on other planets and their moons. Their origins may be radically different from those of terrestrial lineaments due to the differing tectonic processes involved (Mostafa and Bishta, 2005; Semere and Ghebreab, 2006).

Accurate geological feature mapping is a critical task for oil exploration, groundwater storage and understanding the mechanisms of environmental disasters, for instance, earthquakes, floods and landslides. The major task of geologists is the documentation of temporal and spatial variations in the distribution and abundance of geological features over a wide scale. In this context, the major challenge is that most conventional geological surveying techniques are not able to cover a wide region of the Earth's surface, such as a desert. Quite clearly, to understand the mechanisms generating geological features and their relationship with environmental disasters such as earthquakes, landslides and floods, geological researchers must be able to conduct simultaneous measurements over broad areas of the surface or subsurface of the Earth (Novak and Soulakellis 2000 and Marghany et al., 2009a).

This requires the collection of a set of reliable synoptic data that specify the variations of critical geological environmental parameters over a wide region at discrete moments. In fact, geological features such as lineaments and faults are key parameters that describe the Earth's generation or disaster mechanisms and are significant indicators for oil exploration and groundwater storage (Semere and Ghebreab, 2006). Fortunately, the application of remote-sensing technology from space is providing geologists with a means of acquiring these synoptic data sets.


Consequently, optical remote sensing techniques over more than three decades have shown a great promise for mapping geological feature variations over wide scale (Mostafa and Bishta, 2005; Semere and Ghebreab, 2006; Marghany et al., 2009a). In referring to Katsuaki et al., (1995); Walsh (2000) lineament information extractions in satellite images can be divided broadly into three categories: (i) lineament enhancement and lineament extraction for characterization of geologic structure;(ii) image classification to perform geologic mapping or to locate spectrally anomalous zones attributable to mineralization (Mostafa et al., 1995; Süzen and Toprak 1998); and (iii) superposition of satellite images and multiple data such as geological, geochemical, and geophysical data in a geographical information system (Novak and Soulakellis 2000; Semere and Ghebreab 2006). Furthermore, remote sensing data assimilation in real time could be a bulk tool for geological features extraction and mapping. In this context, several investigations currently underway on the assimilation of both passive and active remotely sensed data into automatic detection of significant geological features i.e., lineament, curvilinear and

Image processing tools have used for lineament feature detections are: (i) image enhancement techniques (Mah et al. 1995; Chang et al. 1998; Walsh 2000;Marghany et al., 2009b); and (ii) edge detection and segmentation (Wang et al. 1990; Vassilas et al. 2002; Mostafa and Bishta 2005). In practice, researchers have preferred to use the spatial domain filtering techniques in order to get ride of the artificial lineaments and to verify disjoint lineament pixels in satellite data (Süzen and Toprak 1998). Further, Leech et al., (2003) implemented the band-rationing, linear and Gaussian nonlinear stretching enhancement techniques to determine lineament populations. Won-In and Charusiri (2003) found that High Pass Filter enhancement technique provides accurate geological map. In fact, the High Pass filter selectively enhances the small scale features of an image (high frequency spatial components) while maintaining the larger-scale features (low frequency components) that

Majumdar and Bhattacharya (1998) and Vassilas et al. (2002), respectively have used Haar and Hough transforms as edge detection algorithms for lineament detection in Landsat-TM satellite data. Majumdar and Bhattacharya (1998) reported that Haar transform is proper in extraction of subtle features with finer details from satellite data. Vassilas et al. (2002), however, reported that Hough transform is appropriate for fault feature mapping. Consequently, Laplacian, Sobel, and Canny are the major algorithms for lineament feature detections in remotely sensed data (Mostafa and Bishta 2005; Semere and Ghebreab, 2006; Marghany 2005).Recently Marghany and Mazlan (2010) proposed a new approach for automatic detection of lineament features from RADARSAT-1 SAR data. This approach is based on modification of Lee adaptive algorithm using convolution of Gaussian

**1.2 Problems for geological features extraction from remote sensing data** 

Geological studies are requiring standard methods and procedures to acquire precisely information. However, traditional methods might be difficult to use due to highly earth complex topography. Regarding the previous prospective, the advantage of satellite remote sensing in its application to geology is the wide coverage over the area of interest, where much accurate and useful information such as structural patterns and spectral features can

groundwater storages (Semere and Ghebreab, 2006). Fortunately, the application of remotesensing technology from space is providing geologists with means of acquiring these synoptic data sets.

## **1.1 Satellite remote sensing and image processing for lineament features detection**

Lineaments are any linear features that can be picked out as lines (appearing as such or evident because of contrasts in terrain or ground cover on either side) in aerial or space imagery. If geological, these are usually faults, joints, or boundaries between stratigraphic formations. Other causes of lineaments include roads and railroads, contrast-emphasized contacts between natural or man-made geographic features (e.g., fence lines), or vague "false alarms" caused by unknown (unspecified) factors. The human eye tends to single out both genuine and spurious linear features, so that some thought to be geological may, in fact, be of other origins (Semere and Ghebreab, 2006).

In the early days of Landsat, perhaps the most commonly cited use of space imagery in Geology was to detect linear features (the terms "linear" or "photolinear" are also used instead of lineaments, but 'linear' is almost a slang word) that appeared as tonal discontinuities. Almost anything that showed as a roughly straight line in an image was suspected to be geological. Most of these lineaments were attributed either to faults or to fracture systems that were controlled by joints (fractures without relative offsets) (Wang et al. 1990; Vassilas et al. 2002; Robinson et al., 2007).

Lineaments are well-known phenomena in the Earth's crust. Rocks exposed as surfaces or in road cuts or stream outcrops typically show innumerable fractures in different orientations, commonly spaced fractions of a meter to a few meters apart. These lineaments tend to disappear locally as individual structures, but fracture trends persist. The orientations are often systematic, meaning that in a region, joint planes may lie in spatial positions having several limited directions relative to north and to horizontal (Mostafa and Bishta, 2005). Continuous subsurface fracture planes that extend over large distances and intersect the land surface produce linear traces (lineaments). A linear feature in general can show up in an aerial photo or a space image as a discontinuity that is either darker (or lighter in the image) in the middle and lighter (or darker in the image) on both sides, or is lighter on one side and darker on the other side. Obviously, some of these features are not geological. Instead, these could be fence lines between crop fields, roads, or variations in land use. Others may be geo-topographical, such as ridge crests, set off by shadowing. But those that are structural (joints and faults) are visible in several ways (Semere and Ghebreab, 2006; Zaineldeen 2011).

Lineaments commonly are opened up and enlarged by erosion. Some may even become small valleys. Being zones of weak structure, they may be scoured out by glacial action and then filled by water to become elongated lakes (the Great Lakes are a prime example). Ground water may invade and gouge the fragmented rock or seep into the joints, causing periodic dampness that can be detected optically, thermally, or by radar. Vegetation can then develop in this moisture-rich soil, so that at certain times of year linear features are enhanced. All of these conditions can be detected in aerial or space imagery (Majumdar and Bhattacharya 1998; Katsuaki et al., 1995; Walsh 2000; Mostafa and Bishta, 2005; Semere and Ghebreab, 2006).


Consequently, optical remote sensing techniques over more than three decades have shown great promise for mapping geological feature variations over wide scales (Mostafa and Bishta, 2005; Semere and Ghebreab, 2006; Marghany et al., 2009a). According to Katsuaki et al. (1995) and Walsh (2000), lineament information extraction from satellite images can be divided broadly into three categories: (i) lineament enhancement and extraction for characterization of geologic structure; (ii) image classification to perform geologic mapping or to locate spectrally anomalous zones attributable to mineralization (Mostafa et al., 1995; Süzen and Toprak 1998); and (iii) superposition of satellite images and multiple data such as geological, geochemical, and geophysical data in a geographical information system (Novak and Soulakellis 2000; Semere and Ghebreab 2006). Furthermore, remote sensing data assimilation in real time could be a powerful tool for geological feature extraction and mapping. In this context, several investigations are currently underway on the assimilation of both passive and active remotely sensed data into the automatic detection of significant geological features, i.e., lineaments, curvilinear features and faults.

The image processing tools that have been used for lineament feature detection are: (i) image enhancement techniques (Mah et al. 1995; Chang et al. 1998; Walsh 2000; Marghany et al., 2009b); and (ii) edge detection and segmentation (Wang et al. 1990; Vassilas et al. 2002; Mostafa and Bishta 2005). In practice, researchers have preferred to use spatial domain filtering techniques in order to get rid of artificial lineaments and to verify disjoint lineament pixels in satellite data (Süzen and Toprak 1998). Further, Leech et al. (2003) implemented band-ratioing, linear and Gaussian nonlinear stretching enhancement techniques to determine lineament populations. Won-In and Charusiri (2003) found that the High Pass Filter enhancement technique provides an accurate geological map. In fact, the High Pass filter selectively enhances the small-scale features of an image (high-frequency spatial components) while maintaining the larger-scale features (low-frequency components) that constitute most of the information in the image.

Majumdar and Bhattacharya (1998) and Vassilas et al. (2002), respectively, used the Haar and Hough transforms as edge detection algorithms for lineament detection in Landsat-TM satellite data. Majumdar and Bhattacharya (1998) reported that the Haar transform is suitable for extracting subtle features with finer detail from satellite data. Vassilas et al. (2002), however, reported that the Hough transform is appropriate for fault feature mapping. Consequently, Laplacian, Sobel, and Canny are the major algorithms for lineament feature detection in remotely sensed data (Mostafa and Bishta 2005; Semere and Ghebreab, 2006; Marghany 2005). Recently, Marghany and Mazlan (2010) proposed a new approach for automatic detection of lineament features from RADARSAT-1 SAR data. This approach is based on a modification of the Lee adaptive algorithm using convolution with a Gaussian algorithm.

#### **1.2 Problems for geological features extraction from remote sensing data**

Geological studies require standard methods and procedures to acquire precise information. However, traditional methods might be difficult to use due to highly complex Earth topography. In this perspective, the advantage of satellite remote sensing in its application to geology is the wide coverage of the area of interest, from which much accurate and useful information such as structural patterns and spectral features can be extracted from the imagery. Yet, the abundance of geological features is not fully understood. Lineaments are considered the bulk of geological features which are still unclear, in spite of being useful for geological analysis in oil exploration. In this sense, lineament extraction is very important for the application of remote sensing to geology. However, the real meaning of a lineament is still vague. Lineaments should be discriminated from other line features that are not due to geological structures. In this context, the lineament extraction should be carefully interpreted by geologists.


## **1.3 Hypotheses of the study**

In view of the above perspective, we address the question of the impact of uncertainties on modelling a Digital Elevation Model (DEM) for 3-D lineament visualization from multispectral satellite data, without needing to include digital elevation data. This is demonstrated with LANDSAT-ETM satellite data using the fuzzy B-spline algorithm (Marghany and Mazlan 2005 and Marghany et al., 2007). Three hypotheses are examined:

- the Canny algorithm can be used as a semiautomatic tool to discriminate between lineaments and surrounding geological features in optical remotely sensed satellite data;
- lineaments can be reconstructed in Three-Dimensional (3-D) visualization; and
- uncertainties of the DEM model can be solved using the fuzzy B-spline algorithm to map spatial lineament variations in 3-D.


## **2. Study area**

The study area is located in Sharjah Emirate, about 70 km from Sharjah city. It lies in the alluvium plain of the central area of the UAE and covers an area of 1,800 km² (60 km x 30 km) within the boundaries of latitudes 24°12′N to 24°23′N and longitudes 55°51′E to 55°59′E (Fig. 1). The northern part of the UAE is formed of the Oman mountains, and the marginal hills extend from the base of the mountains (alluvium plain) to the south-western sand dunes (Figs 2 and 3); such features can be seen clearly in Wadi Bani Awf, Western Hajar (Fig. 2). The land geomorphology consists of structural, fluvial, and Aeolian (sand dune) forms. According to Maged et al. (2009), the structural form comprises the broad Oman mountains and Jabal Fayah (Fig. 4), which are folded structures due to the collision of the oceanic crust with the Arabian plate (continental plate). Furthermore, the mountains rise higher than 400 m above sea level and exhibit parallel ridges and highly tilted beds. Many valleys cut down the mountains, forming narrow clefts, and there are also intermittent basins caused by differential erosion. In addition, small caves have formed at the valley bases. Stream channels have been diverted to the southwest, and they have deposited silt in tongue-shaped deposits which lie between the dunes. Further, Aeolian forms extend westwards from the Bahada plain, where linear dunes run towards the southwest in a parallel branching pattern (Fig. 3) with relative heights of 50 meters. Nevertheless, the heights decrease towards the southeast due to a decrease in sand supply and erosion caused by water occasionally flowing from the Oman mountains. Moreover, some of the linear dunes are quite complex due to the development of rows of star dunes along the top of their axes. Additionally, interdune areas are covered by fluvial material laid down in the playas formed at the margins of the Bahada plain near the coastline. The dunes change their forms to low flats of marine origin, and their components are also dominated by bioclastics and quartz sands (Marghany and Mazlan 2010).


Fig. 1. Location of Study area.

Fig. 2. Geologic fault feature along Oman mountain.


Fig. 3. Dune forms on Oman mountain base.

Fig. 4. Sand dune feature along Jabal Fayah.

## **3. Data sets**

In this study, two sorts of data have been used. The first is satellite data, consisting of a LANDSAT Enhanced Thematic Mapper (ETM) image with a pixel resolution of 30 m, acquired at 14:07 on 18 December 2004 (Fig. 5). It covers the area from 24°23′N, 55°52′E to 24°17′N, 55°59′E (Fig. 5). Landsat sensors have a moderate spatial resolution. The satellite is in a polar, sun-synchronous orbit, meaning it scans across the entire Earth's surface. With an altitude of 705 km +/- 5 km, it takes 232 orbits, or 16 days, to do so. The satellite weighs 1973 kg, is 4.04 m long, and is 2.74 m in diameter. Unlike its predecessors, Landsat 7 has a solid-state memory of 378 gigabits (roughly 100 images). The main instrument on board Landsat 7 is the Enhanced Thematic Mapper Plus (ETM+).


The main features of LANDSAT-7 (Robinson et al., 2007) are:

- Visible (reflected light) bands in the spectrum of blue, green, red, near-infrared (NIR), and mid-infrared (MIR) with 30 m (98 ft) spatial resolution (bands 1-5, 7).
- A panchromatic band with 15 m (49 ft) spatial resolution (band 8).
- A thermal infrared channel with 60 m spatial resolution (band 6).
- Full aperture, 5% absolute radiometric calibration.


Fig. 5. LANDSAT satellite data used in this study

The second is ancillary data, which comprise digital topographic and geological maps, well logs and, finally, groundwater data. Furthermore, ancillary data such as a topographic map at a scale of 1:122,293 were used to generate a Digital Elevation Model (DEM) of the selected area. Bands 1, 2, 3, 5 and 7 are selected to achieve the objective of this study. According to Marghany et al. (2009), these bands can provide accurate geological information. Finally, the Digital Elevation Model (DEM) is acquired from SRTM data (Fig. 6).

## **4. Model for 3-D lineament visualization**

The procedures used to extract lineaments and drainage patterns from the LANDSAT ETM satellite image involved image contrast enhancement, stretching and linear enhancement, which were applied to acquire an excellent visualization. In addition, the Canny automatic detection algorithm is performed to acquire an excellent accuracy of lineament extraction (Mostafa et al., 1995). Two procedures are involved in extracting lineaments from LANDSAT ETM data. The first is automatic detection using the Canny edge detection algorithm; prior to the implementation of automatic edge detection, the LANDSAT ETM data are enhanced and then geometrically corrected. The second is the implementation of the fuzzy B-spline, adopted from Marghany et al. (2010), to reconstruct a 3-D geologic mapping visualization from the LANDSAT ETM satellite data.


Fig. 6. Topographic map of the United Arab Emirates created with GMT from SRTM data.

#### **4.1 Histogram equalization**

Following Marghany et al. (2009), histogram equalization is applied to the LANDSAT TM image to obtain a high-quality image visualization. An image histogram is an analytic tool used to measure the amplitude distribution of pixels within an image. For example, a histogram can be used to provide a count of the number of pixels at amplitude 0, the number at amplitude 1, and so on. By analyzing the distribution of pixel amplitudes, one can gain some information about the visual appearance of an image. A high-contrast image contains a wide distribution of pixel counts covering the entire amplitude range. A low-contrast image has most of the pixel amplitudes congregated in a relatively narrow range (Süzen et al., 1998 and Gonzalez and Woods 1992).
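As an illustration of the idea, the following is a minimal NumPy sketch of amplitude-based histogram equalization of a single band; the band array, its 8-bit amplitude range and the synthetic low-contrast example are assumptions for demonstration, not the chapter's actual data.

```python
import numpy as np

def equalize_histogram(band, levels=256):
    """Histogram-equalize a single-band image (e.g., one LANDSAT ETM band).

    `band` is assumed to be a 2-D array of integer amplitudes in [0, levels-1].
    """
    # Count the number of pixels at each amplitude (the image histogram).
    hist, _ = np.histogram(band.ravel(), bins=levels, range=(0, levels))
    # Cumulative distribution, rescaled so the output covers the full amplitude range.
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
    # Map every pixel through the equalizing look-up table.
    return lut[band].astype(np.uint8)

# Synthetic low-contrast band: amplitudes crowded into a narrow range.
rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 140, size=(512, 512), dtype=np.uint8)
equalized = equalize_histogram(low_contrast)
print(low_contrast.min(), low_contrast.max(), "->", equalized.min(), equalized.max())
```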

#### **4.2 Canny algorithm**

According to Canny (1986), the Canny edge detector uses a filter based on the first derivative of a Gaussian; because the detector is susceptible to noise present in raw, unprocessed image data, the raw image is first convolved with a Gaussian filter. The result is a slightly blurred version of the original which is not affected by a single noisy pixel to any significant degree. According to Deriche (1987), an edge detection operator (Roberts, Prewitt or Sobel, for example) returns a value for the first derivative in the horizontal direction **(Gy)** and the vertical direction **(Gx)**. From these, the edge gradient and direction (θ) can be determined:

$$\left| \mathbf{G} \right| = \sqrt{\mathbf{G}\_x^{\; 2} + \mathbf{G}\_y^{\; 2}} \tag{1}$$

Equation 1 is used to estimate the gradient magnitude (edge strength) at each point by taking the gradient of the image. Typically, an approximate magnitude is computed using

$$\left| \mathbf{G} \right| = \left| \mathbf{G}\_x \right| + \left| \mathbf{G}\_y \right| \tag{2}$$

Equation 2 is faster to compute. The direction of the edge is then obtained from


$$\theta = \arctan\left(\frac{G\_y}{G\_x}\right) \tag{3}$$

The direction of the edge θ is computed using the gradients in the *Gx* and *Gy* directions. However, an error will be generated whenever *Gx* is equal to zero, so a restriction has to be set in the code whenever this takes place. Whenever the gradient in the *x* direction is equal to zero, the edge direction has to be set to 90 degrees or 0 degrees, depending on the value of the gradient in the *y* direction: if *Gy* has a value of zero, the edge direction is set to 0 degrees; otherwise the edge direction is set to 90 degrees (Deriche 1987).
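A minimal sketch of Equations 1–3 follows, assuming Sobel operators supply the derivatives and applying the zero-gradient rule above; the function name and the use of SciPy are illustrative assumptions, not part of the original method description.

```python
import numpy as np
from scipy.ndimage import sobel

def gradient_magnitude_direction(image):
    """Edge strength (Eqs. 1-2) and edge direction (Eq. 3) of a smoothed band."""
    image = np.asarray(image, dtype=float)
    gx = sobel(image, axis=1)                   # first derivative in the x (column) direction
    gy = sobel(image, axis=0)                   # first derivative in the y (row) direction

    magnitude = np.hypot(gx, gy)                # Eq. 1: sqrt(Gx^2 + Gy^2)
    magnitude_fast = np.abs(gx) + np.abs(gy)    # Eq. 2: cheaper approximation

    # Eq. 3 with the zero-denominator rule described in the text:
    # if Gx == 0, the direction is 0 deg when Gy == 0 and 90 deg otherwise.
    theta = np.degrees(np.arctan2(gy, gx)) % 180.0
    theta = np.where(gx == 0, np.where(gy == 0, 0.0, 90.0), theta)
    return magnitude, magnitude_fast, theta
```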

According to Gonzalez and Woods (1992), three criteria are used to improve edge detection. The first and most obvious is a low error rate: it is important that edges occurring in images are not missed and that there are no responses to non-edges. The second criterion is that the edge points be well localized; in other words, the distance between the edge pixels found by the detector and the actual edge is to be at a minimum. A third criterion is to have only one response to a single edge. This was implemented because the first two criteria were not sufficient to completely eliminate the possibility of multiple responses to an edge (Canny 1986).
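In off-the-shelf implementations, these criteria are usually traded off through the width of the Gaussian pre-smoothing and the two hysteresis thresholds. A hedged sketch using scikit-image's Canny detector is shown below; the band array and the particular sigma/threshold values are illustrative assumptions only.

```python
import numpy as np
from skimage import feature

# `band` is assumed to be a 2-D float array holding one enhanced,
# geometrically corrected LANDSAT ETM band scaled to [0, 1].
band = np.random.default_rng(1).random((256, 256))  # placeholder for a real band

# sigma controls the Gaussian smoothing (error rate vs. localization trade-off);
# the low/high thresholds drive the hysteresis step that enforces a single response.
edges = feature.canny(band, sigma=2.0, low_threshold=0.05, high_threshold=0.15)
print("edge pixels:", int(edges.sum()))
```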

## **4.3 The fuzzy B-splines algorithm**

Fuzzy B-splines (FBS) are introduced by allowing fuzzy numbers, instead of intervals, in the definition of the B-splines. Typically, in computer graphics, two objective quality definitions for fuzzy B-splines are used: triangle-based criteria and edge-based criteria (Marghany et al., 2009). A fuzzy number is defined using interval analysis. There are two basic notions that we combine together: the confidence interval and the presumption level. A confidence interval is a real-valued interval which provides the sharpest enclosing range for the current gradient values.

An assumption (presumption) level α is an estimated truth value in the [0, 1] interval describing our knowledge of the topography elevation gradients (Anile 1997). The value 0 corresponds to minimum knowledge of the topography elevation gradients, and 1 to the maximum. A fuzzy number is then arranged as a set of confidence intervals, each one related to an assumption level α in [0, 1]. Moreover, the following must hold for each pair of confidence intervals which define a number: if α′ ≥ α, then the confidence interval d′ is contained in d.
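A minimal sketch of this nesting of confidence intervals for a triangular fuzzy number follows; the particular support and vertex values (borrowed from the elevation range quoted later in the chapter) are purely illustrative assumptions.

```python
def confidence_interval(d_min, d_mode, d_max, alpha):
    """Confidence interval (alpha-cut) of a triangular fuzzy number.

    At alpha = 0 the interval is the full support [d_min, d_max];
    at alpha = 1 it collapses to the single value d_mode.
    Intervals at higher alpha are nested inside those at lower alpha.
    """
    low = d_min + alpha * (d_mode - d_min)
    high = d_max - alpha * (d_max - d_mode)
    return low, high

# Illustrative elevation fuzzy number: support 319-929 m, vertex 660 m (assumed values).
for a in (0.0, 0.5, 1.0):
    print(a, confidence_interval(319.0, 660.0, 929.0, a))
```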

Let us consider a function $f$ of $N$ fuzzy variables $d_1, d_2, \dots, d_n$, bounded by $d$ and $d'$, the global minimum and maximum values of the topography elevation gradients over the space. Based on the spatial variation of the topography elevation gradients, the fuzzy B-spline algorithm is used to compute the function $f$ (Marghany et al., 2010). Following Marghany et al. (2010), $d(i,j)$ is the topography elevation value at location $(i,j)$ in the region *D*, where *i* is the horizontal and *j* is the vertical coordinate of a grid of $m \times n$ rectangular cells. Let *N* be the set of eight neighbouring cells. The input variables of the fuzzy number are the amplitude differences of the elevation $d$, defined by (Anile et al. 1997):


$$\Delta d_N = d_i - d_0, \quad N = 1, \dots, 4 \tag{4}$$

where the $d_i$, $N = 1, \dots, 4$, values are the neighbouring cells of the currently processed cell $d_0$ along the horizontal coordinate *i*. To estimate the fuzzy number of the topography elevation $d_j$ located along the vertical coordinate *j*, we estimate the membership function values $\mu$ and $\mu'$ of the fuzzy variables $d_i$ and $d_j$, respectively, by the following equations described by Rövid et al. (2004):

$$\mu = \max \left\{ \min \left\{ m_{pl}(\Delta d_i) : d_i \in N_i \right\};\; N = 1, \dots, 4 \right\} \tag{5}$$

$$\mu' = \max \left\{ \min \left\{ m_{LM}(\Delta d_i) : d_i \in N_i \right\};\; N = 1, \dots, 4 \right\} \tag{6}$$

Equations 5 and 6 represent the topography elevation in 2-D. In order to reconstruct fuzzy values of the topography elevation in 3-D, the fuzzy number of the digital elevation in the *z* coordinate is estimated by the following equation, proposed by Russo (1998) and Marghany et al. (2010):

$$\Delta d_z = \Delta\mu \,\max \left\{ m_{LA}\left| d_{i-1,j} - d_{i,j} \right|,\; m_{LA}\left| d_{i,j-1} - d_{i,j} \right| \right\} \tag{7}$$

where $\Delta d_z$ is the fuzzy set of digital elevation values in the *z* coordinate, which is a function of the *i* and *j* coordinates, i.e. $\Delta d_z = F(d_i, d_j)$. The fuzzy number $F_O$ for the elevation in the *i*, *j* and *z* coordinates can then be given by

$$F_O = \left\{ \min(d_{z_0}, \dots, d_{z_\Omega}),\; \max(d_{z_0}, \dots, d_{z_\Omega}) \right\} \tag{8}$$

where $\Omega = 1, 2, 3, 4$.

The fuzzy number of the elevation $F_O$ is then defined by a B-spline in order to reconstruct the 3-D digital elevation. In doing so, the B-spline functions, including the knot positions, and the fuzzy set of control points are constructed. The requirements for a B-spline surface are a set of control points, a set of weights and three sets of knot vectors, which are parameterized in the *p* and *q* directions.
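The chapter does not spell out the surface-construction code, but a rough, non-fuzzy sketch of the idea is to fit one smoothing B-spline surface to the lower bounds of the per-cell fuzzy elevations and another to the upper bounds. SciPy's smoothing bivariate splines are used here purely as a stand-in, and all array names, sample values and grid sizes are assumptions.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Scattered elevation samples d(i, j) on the m x n grid (synthetic stand-ins here).
rng = np.random.default_rng(2)
i = rng.uniform(0, 60, 400)            # horizontal grid coordinate
j = rng.uniform(0, 30, 400)            # vertical grid coordinate
d_low = 400 + 100 * np.sin(i / 10) + rng.normal(0, 5, 400)   # lower fuzzy bound
d_high = d_low + rng.uniform(5, 20, 400)                      # upper fuzzy bound

# One smoothing B-spline surface per fuzzy bound; kx = ky = 3 gives bicubic patches.
lower_surface = SmoothBivariateSpline(i, j, d_low, kx=3, ky=3)
upper_surface = SmoothBivariateSpline(i, j, d_high, kx=3, ky=3)

# Evaluate both bounding surfaces on a regular p x q grid.
p, q = np.linspace(0, 60, 61), np.linspace(0, 30, 31)
z_low, z_high = lower_surface(p, q), upper_surface(p, q)
print(z_low.shape, float((z_high - z_low).mean()))
```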

Following Marghany et al. (2009b) and Marghany et al. (2010), a fuzzy number is defined whose range is given by the minimum and maximum values of the digital elevation within each kernel window. Furthermore, the identification of a fuzzy number is acquired to summarize the estimated digital elevation data in a cell, and it is characterized by a suitable membership function. The choice of the most appropriate membership is based on triangular numbers, which are identified by the minimum, maximum, and mean values of the estimated digital elevation. Furthermore, the membership support is the range of the digital elevation data in the cell, and its vertex is the median value of the digital elevation data (Anile et al. 1997).
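In the spirit of Equations 4–8, a per-cell triangular fuzzy number could be assembled from each cell's 3 x 3 neighbourhood as sketched below. The membership functions $m_{pl}$, $m_{LM}$ and $m_{LA}$ are not given explicitly in the text, so a simple symmetric triangular membership is assumed; the window size, support width and synthetic DEM are also illustrative assumptions.

```python
import numpy as np

def triangular_membership(x, support):
    """Symmetric triangular membership on [-support, support] (an assumption;
    the chapter's m_pl / m_LM / m_LA functions are not specified explicitly)."""
    return np.clip(1.0 - np.abs(x) / support, 0.0, 1.0)

def cell_fuzzy_numbers(dem, support=50.0):
    """Per-cell fuzzy elevation numbers (min, vertex, max) from 3x3 neighbourhoods,
    screened by the membership of the neighbour differences (Eqs. 4-8 in spirit)."""
    m, n = dem.shape
    lo, vertex, hi = dem.copy(), dem.copy(), dem.copy()
    for r in range(1, m - 1):
        for c in range(1, n - 1):
            window = dem[r - 1:r + 2, c - 1:c + 2]
            diffs = window - dem[r, c]                    # Eq. 4: amplitude differences
            mu = triangular_membership(diffs, support)    # Eqs. 5-6: membership values
            kept = window[mu > 0.0]                       # neighbours judged compatible
            lo[r, c], hi[r, c] = kept.min(), kept.max()   # Eq. 8: fuzzy interval bounds
            vertex[r, c] = np.median(kept)                # vertex of the triangular number
    return lo, vertex, hi

dem = np.random.default_rng(3).uniform(319, 929, size=(20, 20))
lo, vertex, hi = cell_fuzzy_numbers(dem)
print(float(lo.mean()), float(vertex.mean()), float(hi.mean()))
```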


## **5. Three-dimensional lineament visualization**

## **5.1 3-D lineament visualization using the classical method**

Fig. 7 shows the Digital Elevation Model derived from SRTM data, which covers an area of approximately 11 km². Clearly, the DEM varies between 319 and 929 m, and the maximum elevation value of 929 m is found in the northeast of the UAE. SRTM has been reported to produce a DEM with a root mean square error of 16 m (Nikolakopoulos et al., 2006). In addition, the Oman mountains are dominated by the highest DEM value of 929 m, which appears parallel to the coastal zone of the Arabian Gulf. The DEM is dominated by spatial variations of topographic features such as ridges, sand dunes and steep slopes; the steep slopes are clearly seen within a DEM of 400 m (Fig. 7). According to Zaineldeen (2011), the rocks are well-bedded massive limestones with some replacement chert bands and nodules. The limestone has been locally dolomitized.

Fig. 7. DEM for study area.

Fig. 8 shows the supervised classification map of the LANDSAT ETM satellite data. It is clear that the vegetation cover is located at the highest elevations, as compared with Fig. 7, while highlands are located at the lowest elevation, with a DEM value of 660 m. The supervised classification shows a great fault moving through a highland area. According to Robinson et al. (2007), TM bands 7 (2.08–2.35 µm), 4 (0.76–0.90 µm) and 2 (0.50–0.60 µm) are appropriate for geological feature detection because they have low correlation and produce high contrast. In this regard, band 2 is useful for rock discrimination, band 4 for land/water contrasts, and band 7 for discrimination of mineral and rock types. Further, TM band 7 is also able to image dune crests running parallel for tens of kilometres in length. This feature is clear in the northern part of Fig. 8 and is located in the highland with a DEM of 900 m. This finding confirms the study of Robinson et al. (2007).


Fig. 8. Supervised map results.


Fig. 9 shows the output result of mapping lineaments using a composite of bands 3, 4, 5 and 7 of the LANDSAT TM satellite data. The appearance of lineaments in the LANDSAT TM satellite image is clearly distinguished. In addition, the area adjacent to the mountains from Manamh (northward) to Flili village (southward) has a high density of lineaments, such as fractures, faults and the drainage pattern running in the buried fault plains (filled with weathered materials coming from the Oman mountains), due to the westward compressive force between the oceanic crust and the Arabian plate (Fig. 9). The lineaments are associated with fractures and faults located in the northern part of Fig. 9. In fact, the Canny algorithm first smooths the image to eliminate noise. It then finds the image gradient to highlight regions with high spatial derivatives. The algorithm then tracks along these regions and suppresses any pixel that is not at the maximum (non-maximum suppression). The gradient array is further reduced by hysteresis. According to Deriche (1987), hysteresis is used to track along the remaining pixels that have not been suppressed. Hysteresis uses two thresholds, and if the magnitude is below the first threshold, it is set to zero (made a non-edge).


Fig. 9. Lineament mapping using Canny algorithm.

Further, if the magnitude is above the high threshold, it is made an edge; and if the magnitude is between the two thresholds, it is set to zero unless there is a path from this pixel to a pixel with a gradient above the high threshold. In order to implement the Canny edge detector algorithm, a series of steps must be followed. The first step is to filter out any noise in the original image before trying to locate and detect any edges. Since the Gaussian filter can be computed using a simple mask, it is used exclusively in the Canny algorithm; once a suitable mask has been calculated, the Gaussian smoothing can be performed using standard convolution methods. According to Marghany et al. (2009), LANDSAT TM data can be used to map geological features such as lineaments and faults. This can be attributed to the fact that a composite of bands 3, 4, 5 and 7 of the LANDSAT TM satellite data is appropriate for mapping geologic structures (Katsuaki and Ohmi 1995; Novak and Soulakellis 2000; Marghany et al., 2009). Consequently, the ground resolution cell size of the LANDSAT TM data is about 30 m. This confirms the study of Robinson et al. (2007).
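A minimal sketch of the two-threshold hysteresis step described above follows, assuming a (non-maximum-suppressed) gradient magnitude array is already available; the connectivity-labelling shortcut via SciPy and the threshold values are assumptions, not the chapter's implementation.

```python
import numpy as np
from scipy import ndimage

def hysteresis(magnitude, low, high):
    """Two-threshold hysteresis on a (non-maximum-suppressed) gradient magnitude."""
    strong = magnitude >= high                  # definite edges
    candidate = magnitude >= low                # strong plus weak pixels
    # Label connected candidate regions; keep the regions that contain a strong pixel.
    labels, n = ndimage.label(candidate, structure=np.ones((3, 3)))
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True
    keep[0] = False                             # background label stays suppressed
    return keep[labels]

mag = np.random.default_rng(4).random((128, 128))
edges = hysteresis(mag, low=0.6, high=0.9)
print("edge pixels:", int(edges.sum()))
```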


Fig. 10 shows the lineament distribution with a 3-D map reconstruction using SRTM and LANDSAT TM bands 3, 4, 5 and 7. It is clear that the 3-D visualization discriminates between different geological features. The faults, lineaments and infrastructure can be noticed clearly (Figure 10b). This study agrees with Marghany et al. (2009). It can be confirmed that the lineaments are associated with faults, and it is also obvious that there is a heavy concentration of lineament occurrences within the Oman mountains. This type of lineament can be named mountain lineament.

Fig. 10. (a) 3D image reconstruction using SRTM data and (b) lineament distribution over 3D image.

According to Robinson et al. (2007) and Marghany et al. (2009), the mountains rise higher than 400 m above sea level and exhibit parallel ridges and highly tilted beds. Many valleys cut down the mountains, forming narrow clefts and small caves. The fluvial forms consist of stream channels which flow from the Oman mountains and spread out into several braided channels at the base of the mountains in the Bahada and Playa plains (Figure 11). Stream channels have been diverted to the southwest, and they have deposited silt in tongue-shaped deposits which lie between the dunes.

Further, Aeolian forms extend westwards from the Bahada plain, where linear dunes run towards the southwest in a parallel branching pattern (Fig. 11) with relative heights of 50 meters. Nevertheless, the heights decrease towards the southeast due to a decrease in sand supply and erosion caused by water occasionally flowing from the Oman mountains. Moreover, some of the linear dunes are quite complex due to the development of rows of star dunes along the top of their axes. Additionally, interdune areas are covered by fluvial material laid down in the playas formed at the margins of the Bahada plain near the coastline. The dunes change their forms to low flats of marine origin, and their components are also dominated by bioclastics and quartz sands (Marghany et al., 2009 and Zaineldeen 2011).


Fig. 11. 3D image and lineament distribution from Canny algorithm.


## **5.2 3-D lineament visualization using the fuzzy B-spline technique**

Fig. 12 shows the result acquired using the fuzzy B-spline algorithm. It is clear that the 3-D visualization discriminates between different geological features. The faults, lineaments and infrastructure can be noticed clearly (Fig. 12c). This is due to the fact that the fuzzy B-splines, considered here as deterministic algorithms, optimize a triangulation only locally between two different points (Fuchs et al., 1977; Anile et al., 1995; Anile, 1997; Marghany et al., 2010; Marghany and Mazlan 2011). This corresponds to the characteristic of deterministic strategies of usually finding only sub-optimal solutions. The visualization of the geological features is sharp in the LANDSAT TM satellite image due to the fact that each operation on a fuzzy number becomes a sequence of corresponding operations on the respective µ-levels, and the multiple occurrences of the same fuzzy parameters are evaluated as a result of the function on fuzzy variables (Keppel 1975; Anile et al., 1995; Marghany and Mazlan 2011).

It is very easy to distinguish between smooth and jagged features. Typically, in computer graphics, two objective quality definitions for fuzzy B-splines are used: triangle-based criteria and edge-based criteria. Triangle-based criteria follow the rule of maximization or minimization, respectively, of the angles of each triangle (Fuchs et al., 1977). The so-called max-min angle criterion prefers short triangles with obtuse angles. This finding confirms those of Keppel (1975) and Anile (1997). Table 1 confirms the accuracy of the fuzzy B-spline in eliminating uncertainties of the 3-D visualization. Consequently, the fuzzy B-spline, with a standard error of the mean of 0.12 and a bias of 0.23, shows higher performance than the SRTM technique. In fact, fuzzy B-splines provide both a continuous approximating model of the experimental data and a possibilistic description of the uncertainty in such a DEM. Approximation with FBS provides a fast way to obtain qualitatively reliable descriptions whenever the introduction of a precise probabilistic DEM is too costly or impossible. In this study, the fuzzy B-spline algorithm produced a 3-D lineament visualization without the need for a ground geological survey. In fact, the fuzzy B-spline algorithm is able to keep track of uncertainty and provides a tool for representing spatially clustered geological features. This advantage of the fuzzy B-spline is not provided by the Canny algorithm and the DEM produced by SRTM data.
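The µ-level remark can be made concrete with a toy example: if a fuzzy number is represented by a few interval µ-levels (α-cuts), an arithmetic operation on fuzzy numbers reduces to the corresponding interval operation applied level by level. A minimal sketch, with made-up fuzzy elevation values:

```python
# Toy fuzzy-number arithmetic on mu-levels (alpha-cuts): each level holds an
# interval [lo, hi], and an operation on fuzzy numbers is applied per level.
LEVELS = (0.0, 0.5, 1.0)  # illustrative mu-levels

def fuzzy_add(a, b):
    """Add two fuzzy numbers given as {mu_level: (lo, hi)} dictionaries."""
    return {mu: (a[mu][0] + b[mu][0], a[mu][1] + b[mu][1]) for mu in LEVELS}

# A triangular fuzzy elevation "about 50 m" plus "about 5 m" of deposit:
h = {0.0: (45.0, 55.0), 0.5: (47.5, 52.5), 1.0: (50.0, 50.0)}
d = {0.0: (3.0, 7.0), 0.5: (4.0, 6.0), 1.0: (5.0, 5.0)}
print(fuzzy_add(h, d))  # interval sum at each mu-level
```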


Fig. 12. (a): LANDSAT ETM satellite data; (b): 3D fuzzy B-spline visualization; (c): zoom area of lineaments and faults.



| 3-D Visualization | Fuzzy B-spline | SRTM |
|---|---|---|
| Bias | 0.23 | 0.63 |
| Standard error of the mean | 0.12 | 0.56 |

Table 1. Statistical Comparison of 3-D computer visualization using Fuzzy B-spline and SRTM.
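The two statistics reported in Table 1 are straightforward to compute from a set of check points; the sketch below is illustrative only, assuming hypothetical paired arrays of reconstructed and reference elevations:

```python
import numpy as np

def bias_and_sem(reconstructed, reference):
    """Bias and standard error of the mean of elevation residuals.

    `reconstructed` and `reference` are paired 1-D arrays of elevations (m)
    at the same check points (hypothetical inputs, for illustration).
    """
    residuals = np.asarray(reconstructed) - np.asarray(reference)
    bias = residuals.mean()                                 # systematic offset
    sem = residuals.std(ddof=1) / np.sqrt(residuals.size)   # SE of the mean
    return bias, sem

# Comparing a fuzzy B-spline surface and an SRTM DEM against the same
# reference points yields one (bias, sem) pair per method, as in Table 1.
```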

## **6. Conclusions**


This study has demonstrated a method to map lineament distributions in the United Arab Emirates (UAE) using LANDSAT-TM satellite data. In doing so, a 3D image reconstruction is produced using SRTM data. Then the Canny algorithm is implemented for automatic lineament detection from LANDSAT TM bands 3, 4, 5, and 7. The results show that the maximum DEM value of 929 m is found in the northeast of the UAE. Vegetation cover is the dominant feature at the highest DEM values, while highlands are located at the lowest elevation of 660 m. In addition, the Canny algorithm automatically detected lineament and fracture features, and the 3D visualization discriminates between lineament and fault features. The results show that the highest spatial distribution of lineaments appears in the Oman mountains; these are named mountain lineaments. The integration of the Digital Elevation Model (DEM) and the Canny algorithm can therefore be used as a geomatic tool for automatic lineament detection in 3D visualization. Further, a fuzzy B-spline algorithm is used to reconstruct a three-dimensional (3D) visualization of geologic feature spatial variations, with a standard error of the mean of 0.12 and a bias of 0.23. In conclusion, the combination of the Canny algorithm and a DEM generated by using the fuzzy B-spline could be used as an excellent tool for geologic mapping.

## **7. References**


Anile, A.M. (1997). *Report on the activity of the fuzzy soft computing group*. Technical Report of the Dept. of Mathematics, University of Catania, March 1997, 10 pages.

Anile, A.M., Deodato, S., Privitera, G. (1995). Implementing fuzzy arithmetic. *Fuzzy Sets and Systems*, 72, pp. 123-156.

Anile, A.M., Gallo, G., Perfilieva, I. (1997). *Determination of Membership Function for Cluster of Geographical Data*. Genova, Italy: Institute for Applied Mathematics, National Research Council, University of Catania, Italy, October 1997, 25 p., Technical Report No. 26/97.

Canny, J. (1986). A Computational Approach to Edge Detection. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, PAMI-8 (6), pp. 679-698.

Chang, Y., Song, G., Hsu, S. (1998). Automatic Extraction of Ridge and Valley Axes Using the Profile Recognition and Polygon-Breaking Algorithm. *Computers and Geosciences*, 24 (1), pp. 83-93.

Deriche, R. (1987). Using Canny's criteria to derive a recursively implemented optimal edge detector. *International Journal of Computer Vision*, 1 (2), pp. 167-187.

Forster, B.C. (1985). Mapping Potential of Future Spaceborne Remote Sensing Systems. Procs. of 27th Australia Survey Congress, Alice Springs, Institution of Surveyors, Australia, pp. 109-117.

Fuchs, H., Kedem, Z.M., and Uselton, S.P. (1977). Optimal Surface Reconstruction from Planar Contours. *Communications of the ACM*, 20, pp. 693-702.

Gonzalez, R., and Woods, R. (1992). Digital Image Processing, 3rd edition, Addison-Wesley Publishing Company, pp. 200-229.

Guenther, G.C., Cunningham, A.G., LaRocque, P.E., and Reid, D.J. (2000). Proceedings of EARSeL-SIG-Workshop LIDAR, Dresden/FRG, EARSeL, Strasbourg, France, June 16-17, 2000.

Katsuaki, K., Shuichi, N., and Ohmi, M. (1995). Lineament analysis of satellite images using a segment tracing algorithm (STA). *Computers and Geosciences*, 21 (9), pp. 1091-1104.

Keppel, E. (1975). Approximating complex surfaces by triangulation of contour lines. *IBM Journal of Research and Development*, 19, pp. 2-11.

Leech, D.P., Treloar, P.J., Lucas, N.S., Grocott, J. (2003). Landsat TM analysis of fracture patterns: a case study from the Coastal Cordillera of northern Chile. *International Journal of Remote Sensing*, 24 (19), pp. 3709-3726.

Mah, A., Taylor, G.R., Lennox, P. and Balia, L. (1995). Lineament Analysis of Landsat Thematic Mapper Images, Northern Territory, Australia. *Photogrammetric Engineering and Remote Sensing*, 61 (6), pp. 761-773.

Majumdar, T.J., Bhattacharya, B.B. (1988). Application of the Haar transform for extraction of linear and anomalous features over part of Cambay Basin, India. *International Journal of Remote Sensing*, 9 (12), pp. 1937-1942.

Marghany, M. (2005). Fuzzy B-spline and Volterra algorithms for modelling surface current and ocean bathymetry from polarised TOPSAR data. *Asian Journal of Information Technology*, 4, pp. 1-6.

Marghany, M., and Hashim, M. (2006). Three-dimensional reconstruction of bathymetry using C-band TOPSAR data. *Photogrammetrie Fernerkundung Geoinformation*, pp. 469-480.

Marghany, M., Hashim, M. and Cracknell, A. (2007). 3D Bathymetry Reconstruction from AIRBORNE TOPSAR Polarized Data. In: Gervasi, O. and Gavrilova, M. (Eds.): Lecture Notes in Computer Science. Computational Science and Its Applications – ICCSA 2007, LNCS 4705, Part I, Volume 4707/2007, Springer-Verlag Berlin Heidelberg, pp. 410-420.

Marghany, M., Mansor, S. and Hashim, M. (2009a). Geologic mapping of United Arab Emirates using multispectral remotely sensed data. *American J. of Engineering and Applied Sciences*, 2, pp. 476-480.

Marghany, M., Hashim, M. and Cracknell, A. (2009b). 3D Reconstruction of Coastal Bathymetry from AIRSAR/POLSAR data. *Chinese Journal of Oceanology and Limnology*, 27 (1), pp. 117-123.

Marghany, M. and Hashim, M. (2010). Lineament mapping using multispectral remote sensing satellite data. *International Journal of the Physical Sciences*, 5 (10), pp. 1501-1507.

Marghany, M., Hashim, M. and Cracknell, A. (2010). 3-D visualizations of coastal bathymetry by utilization of airborne TOPSAR polarized data. *International Journal of Digital Earth*, 3 (2), pp. 187-206.

Mostafa, M.E. and Qari, M.Y.H.T. (1995). An exact technique of counting lineaments. *Engineering Geology*, 39 (1-2), pp. 5-15.

Mostafa, M.E. and Bishta, A.Z. (2005). Significance of lineament patterns in rock unit classification and designation: a pilot study on the Gharib-Dara area, North Eastern Desert, Egypt. *International Journal of Remote Sensing*, 26 (7), pp. 1463-1475.

Nikolakopoulos, K.G., Kamaratakis, E.K., Chrysoulakis, N. (2006). SRTM vs ASTER elevation products. Comparison for two regions in Crete, Greece. *International Journal of Remote Sensing*, 27 (21), pp. 4819-4838.

Novak, I.D. and Soulakellis, N. (2000). Identifying geomorphic features using Landsat-5/TM data processing techniques on Lesvos, Greece. *Geomorphology*, 34, pp. 101-109.

Robinson, C.A., El-Baz, F., Kusky, T.M., Mainguet, M., Dumay, F., Al Suleimani, Z., Al Marjeby, A. (2007). Role of fluvial and structural processes in the formation of the Wahiba Sands, Oman: a remote sensing perspective. *Journal of Arid Environments*, 69, pp. 676-694.

Rövid, A., Várkonyi, A.R. and Várlaki, P. (2004). 3D Model estimation from multiple images. IEEE International Conference on Fuzzy Systems, FUZZ-IEEE'2004, July 25-29, 2004, Budapest, Hungary, pp. 1661-1666.

Russo, F. (1998). Recent advances in fuzzy techniques for image enhancement. *IEEE Transactions on Instrumentation and Measurement*, 47, pp. 1428-1434.

Semere, S. and Ghebreab, W. (2006). Lineament characterization and their tectonic significance using Landsat TM data and field studies in the central highlands of Eritrea. *Journal of African Earth Sciences*, 46 (4), pp. 371-378.

Süzen, M.L. and Toprak, V. (1998). Filtering of satellite images in geological lineament analyses: an application to a fault zone in central Turkey. *International Journal of Remote Sensing*, 19 (6), pp. 1101-1114.

Vassilas, N., Perantonis, S., Charou, E., Tsenoglou, T., Stefouli, M., Varoufakis, S. (2002). Delineation of Lineaments from Satellite Data Based on Efficient Neural Network and Pattern Recognition Techniques. *2nd Hellenic Conf. on AI, SETN-2002*, 11-12 April 2002, Thessaloniki, Greece, Proceedings, Companion Volume, pp. 355-366.

Walsh, G.J. and Clark Jr., S.F. (2000). Contrasting methods of fracture trend characterization in crystalline metamorphic and igneous rocks of the Windham quadrangle, New Hampshire. *Northeastern Geology and Environmental Sciences*, 22 (2), pp. 109-120.

Won-In, K., Charusiri, P. (2003). Enhancement of thematic mapper satellite images for geological mapping of the Cho Dien area, Northern Vietnam. *International Journal of Applied Earth Observation and Geoinformation*, 15, pp. 1-11.


Zaineldeen, U. (2011). Paleostress reconstructions of Jabal Hat structures, Southeast of Al-Ain City, United Arab Emirates (UAE). *Journal of African Earth Sciences*, 59, pp. 323-335.

**Section 2**

**Sensors and Platforms**

**11**


## **COMS, the New Eyes in the Sky for Geostationary Remote Sensing**

Han-Dol Kim et al.

*Korea Aerospace Research Institute (KARI)*

*Republic of Korea* 

## **1. Introduction**

With its successful launch on June 26, 2010, the Communication, Ocean, and Meteorological Satellite (COMS) is currently in the early stage of normal operation for the service to the end users, exhibiting exciting and fruitful performances including the image data from the two on-board optical sensors, Meteorological Imager (MI) and Geostationary Ocean Color Imager (GOCI), and the experimental Ka-band telecommunication. This chapter gives a comprehensive overview of COMS in terms of its key design characteristics, current status of in-orbit performances and its implied role in the geostationary remote sensing, and discusses its potential application and contribution to the world remote sensing community.

## **2. COMS: Description and overview**

COMS is a multi-purpose, multi-mission, geostationary satellite. It has been designed and developed by the joint effort of EADS Astrium and Korea Aerospace Research Institute (KARI), and launched by Ariane 5 ECA L552 V195 of Arianespace at 21:41 (UTC) on June 26, 2010. COMS is the first South Korean multi-mission geostationary satellite, and also the first 3-axis stabilized geostationary satellite ever built in Europe for optical remote sensing.

The In Orbit Testing (IOT) of COMS was completed in the early part of 2011, and since then the satellite has been successfully operated by KARI for the benefit of all three end users: the Korean Meteorological Administration (KMA), the Korea Ocean Research & Development Institute (KORDI) and the Electronics & Telecommunications Research Institute (ETRI).

## **2.1 COMS overview**

COMS is a single geostationary satellite fulfilling 3 rather conflicting missions as follows:

- meteorological observation with the Meteorological Imager (MI);
- ocean color monitoring with the Geostationary Ocean Color Imager (GOCI);
- experimental Ka-band telecommunications.


Gm-Sil Kang1, Do-Kyung Lee1, Kyoung-Wook Jin1, Seok-Bae Seo1, Hyun-Jong Oh2, Joo-Hyung Ryu3, Herve Lambert4, Ivan Laine4, Philippe Meyer4, Pierre Coste4 and Jean-Louis Duquesne4

*<sup>1</sup>Korea Aerospace Research Institute (KARI), Republic of Korea 2Korea Meteorological Administration (KMA), Republic of Korea* 

*<sup>3</sup>Korea Ocean Research & Development Institute (KORDI), Republic of Korea* 

*<sup>4</sup>EADS Astrium, France* 


MI is the common imager with flight heritage from the later series of GOES and MTSAT satellites, and GOCI is the world's first ocean color imager to be operated in geostationary orbit, newly developed for the COMS mission. The spacecraft launch mass is 2460 kg and the size is 2.6 m x 1.8 m x 2.8 m in stowed configuration. The orbital location is 128.2°E, the mission lifetime is 7.7 years and the design lifetime is 10 years.

Fig. 1 shows COMS both in stowed and deployed configurations, where the MI and GOCI optical instruments located on the earth looking satellite floor can be found with both MODCS (Meteorology and Ocean Data Communication System) antenna and the two small telecommunication Ka band reflectors, along with the COMS flight model during AIT.

Fig. 1. COMS, in stowed and deployed configurations and the flight model during the final stage of AIT (Assembly, Integration and Test) at KARI

The following subsections give a succinct description of the COMS system, in terms of its key design characteristics and its unique and salient platform features, with a little touch on its development history and certain emphasized details on GOCI, along with a brief description of the ground segment.

## **2.2 Description of COMS system**


The COMS system consists of the space segment, which is made up of a COMS spacecraft bus with the three payloads, and the various systems of the ground segment, as depicted in the Fig. 2.

Images captured by MI and GOCI are first interleaved on board and downloaded in L band. Data are separated on ground; MI data are processed (radiometrically calibrated and geometrically corrected) and uploaded again in S-band to the satellite in two formats, LRIT (Low Rate Information Transmission) and HRIT (High Rate Information Transmission). These two new streams of data are again interleaved with the raw data and downloaded in L-band to end users by the satellite which acts as a specific data relay.

Fig. 2. COMS system overview

## **2.2.1 COMS spacecraft bus**

The COMS spacecraft bus is based on EADS Astrium's Eurostar-3000 bus design. The satellite features a box-shaped structure, built around the two bi-propellant tanks. Imaging instruments and MODCS antennae are located on the Earth floor (Fig. 1). A single-winged solar array with 10.6 m2 of GaAs cells is implemented on the south side, so as to keep the north wall in full view of cold space for the MI radiant cooler. The deployable Ka-band antenna reflectors are accommodated on the east and west walls.

The COMS spacecraft is 3-axis stabilized. Attitude sensing in normal mode is based on a hybridized Earth sensors (IRESs; Infra-Red Earth Sensors) and gyros (FOGs; Fiber Optic Gyros) concept; in addition, sun sensors are being used during 3-axis transfer operations. 5 reaction wheels (RDRs) and 7 thrusters (10 N) serve as actuators. Thrusters are also used for


wheel off-loading and for orbit control. The apogee firing boosts are provided by a 440 N liquid apogee engine.

The key feature of the COMS AOCS (Attitude and Orbit Control Subsystem) is the addition of EADS Astrium's newly developed FOGs, the Astrix 120 HR. The FOG allowed the requested performance boost in terms of pointing knowledge and stability over the already excellent Eurostar-3000 AOCS design and its performances.

The EPS (Electric Power Subsystem) makes use of GaAs solar cells and Li-ion batteries. A regulated power bus (50 V) distributes power to the various onboard applications through the power shunt regulator. During orbital eclipses, energy is provided by a 154 Ah Li-ion battery. The power at EOL (End Of Life) shall be greater than 2.5 kW.

Fig. 3. Block diagram of COMS spacecraft functional architecture

The heart of the avionics architecture is implemented in hot redundant spacecraft computer units, based on 1750 standard processors with Ada object-oriented real-time software. A redundant MIL-STD-1553-B data bus serves as the main data path between the onboard units. Interface units are being used for the serial links, namely the actuator drive electronics with the bus units (including thermal control), the modular payload interface unit with the Ka-band communication payload, and the MI interface unit with the MI instrument.

A specific module (MODCS; Meteorology and Ocean Data Communication System) was developed for handling MI and GOCI images. It collects and transmits raw MI and GOCI data in L-band. HRIT/LRIT (High- and Low-Rate Information Transmission) formats are generated on the ground from the MI raw data, and uploaded to the satellite in S-Band and relayed in L-band to MI end users.

S-band is also used for satellite Telemetry and Telecommands.

## **2.2.2 MI**


MI is a two-axis scan imaging radiometer from ITT. It senses the radiant and solar-reflected energies from the Earth simultaneously and provides imagery and radiometric information of the Earth's surface and cloud cover. As a scanning radiometer, it features 1 visible (VIS) channel and 4 infra-red (IR) channels. Its design is derived from the GOES imager for the COMS program.


| No. | Channel | Wavelength (μm) | IFOV (μrad) | GSD (km) | Dynamic Range |
|---|---|---|---|---|---|
| 1 | VIS | 0.55~0.80 | 28 | 1 | 0~115% albedo |
| 2 | SWIR | 3.50~4.00 | 112 | 4 | 110K~350K |
| 3 | WV | 6.50~7.00 | 112 | 4 | 110K~330K |
| 4 | WIN1 | 10.3~11.3 | 112 | 4 | 110K~330K |
| 5 | WIN2 | 11.5~12.5 | 112 | 4 | 110K~330K |

Table 1. Spectral channel characteristics of MI as requirement
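The IFOV and GSD columns of Table 1 are mutually consistent: under a small-angle approximation, the footprint at the sub-satellite point is the IFOV times the range. A quick check, assuming the nominal geostationary altitude of about 35,786 km:

```python
GEO_ALTITUDE_KM = 35_786  # nominal geostationary altitude (assumption)

for channel, ifov_urad in [("VIS", 28), ("IR", 112)]:
    # small-angle approximation: footprint = IFOV (rad) * range (km)
    gsd_km = ifov_urad * 1e-6 * GEO_ALTITUDE_KM
    print(f"{channel}: {gsd_km:.2f} km")  # VIS: ~1.00 km, IR: ~4.01 km
```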

MI consists of three modules: a sensor module, an electronics module, and a power supply module. The sensor module contains a scan assembly, a telescope and detectors, and is mounted on the spacecraft with the shields, louver and cooler for thermal control. The electronics module, which has some redundant circuits, performs command, control, signal processing and telemetry conditioning functions. The power supply module contains power converters, fuses and power control for interfacing with the spacecraft power system with redundancy.

Fig. 4. COMS MI sensor module, in design and flight model configurations

The servo-driven, two-axis gimbaled scan mirror of the MI reflects scene energy reflected and emitted from the Earth into the telescope of the MI, as shown in Fig. 5. The mirror scans the Earth with a bi-directional raster scan, which sweeps an 8 km swath along the East-West (EW) direction and steps every 8 km along the North-South (NS) direction. The area of the observed scene depends on the 2-dimensional angular range of the scan mirror movement. The scene radiance, collected by the scan mirror and the telescope, is separated into each spectral channel by dichroic beam splitters, which allow the geometrically-corresponding detectors of each channel to look at the same position on the Earth. Each detector converts the scene radiance into an electrical signal. The five channel detectors of the MI are divided into two sides, which are electrically redundant to each other. Only one side operates at a time, by choosing side 1 or side 2 electronics. The visible silicon detector array contains eight detector elements which are active simultaneously in either side mode. Each visible detector element produces an instantaneous field of view (IFOV) of 28 μrad on a side, which corresponds to 1 km on the surface of the Earth at the spacecraft's suborbital point. Each IR channel has two detector elements which are active simultaneously in either side mode. The SWIR channel employs InSb detectors and the other IR channels use HgCdTe detectors. Each IR detector element produces an IFOV of 112 μrad on a side, which corresponds to 4 km on the surface of the Earth at the spacecraft's suborbital point. The 8 visible detector elements and 2 IR detector elements respectively produce the swath width (8 km) of one EW scan line.


Fig. 5. MI Scan Frame and Schematic design of Optics (BS:Beam Splitter, FM:Folding Mirror, FD: Full Disk, APNH:Asia and Pacific in Northern Hemisphere, ENH:Extended Northern Hemisphere, LSH:Limited Southern Hemisphere, LA: Local Area)


The passive radiant cooler with thermostatically controlled heater maintains the infrared detectors at one of the three, command-selectable, cryogenic temperatures. Visible light detectors are at the instrument ambient temperature. Preamplifiers convert low level outputs of all detectors into higher level, low impedance signals as the inputs to the electronics module. MI carries an on-board blackbody target inside of the sensor module for the in-orbit radiometric calibration of the IR channels. The blackbody target is located at the opposite direction to the nadir, so that the scan mirror is rotated 180 degrees in the NS direction from the imaging mode for the blackbody calibration. The full aperture blackbody calibration can be performed by the scan mirror's pointing at the on-board blackbody target via ground command or automatically. The albedo monitor is mounted in the sensor module to measure the in-orbit response change of the visible channel over the mission life. It uses sunlight through a small aperture as a source. In addition to the radiometric calibration, an electrical calibration is provided to check the stability and the linearity of the output data of the MI signal processing electronics by using an internal reference signal. MI has the star sensing capability in the visible channel, which can be used for image navigation and registration purposes.

MI has three observation modes: global, regional and local modes, which are specialized for the meteorological missions. The global mode is for taking images of the Full Disk (FD) of the Earth. The regional observation mode is for taking images of the Asia and Pacific in North Hemisphere (APNH), the Extended North Hemisphere (ENH), and Limited Southern Hemisphere (LSH). The image of Limited Full Disk (LFD) area can be obtained by the combination of the images of ENH and LSH. The local observation mode is activated for Local Area (LA) coverage in the FD. The user interest of the MI observation areas for FD, APNH, ENH, LSH, LFD, and LA is shown in the Fig. 5.

## **2.2.3 GOCI**


Geostationary Ocean Color Imager (GOCI), the first ocean colour imager to operate from geostationary orbit, is designed to provide multi-spectral data to detect, monitor, quantify, and predict short-term changes of the coastal ocean environment for marine science research and application purposes. GOCI has been developed to provide monitoring of ocean color around the Korean Peninsula from a geostationary platform, in a joint effort by Korea Aerospace Research Institute (KARI) and EADS Astrium under the contract of the Communication, Ocean, and Meteorological Satellite (COMS) of Korea.

#### **2.2.3.1 GOCI mission overview**

The main mission requirement for GOCI is to provide a multi-spectral ocean image of the area around South Korea eight times per day, as shown in Fig. 6. The imaging coverage area is 2500 x 2500 km2 and the ground pixel size is 500 x 500 m2 at the centre of field, defined at (130°E, 36°N). Such resolution is equivalent to a Ground Sampling Distance (GSD) of 360 m in the NADIR direction, on the equator. The GSD varies over the target area because of the imaging geometry, including the projection on Earth and the orbital position of the satellite. The GOCI spectral bands have been selected for their adequacy to ocean color observation, as shown in Table 2.

Fig. 6. Target observation coverage of the GOCI




| Band | Center | Band-width | Main Purpose and Expected Usage |
|---|---|---|---|
| 1 | 412 nm | 20 nm | Yellow substance and turbidity extraction |
| 2 | 443 nm | 20 nm | Chlorophyll absorption maximum |
| 3 | 490 nm | 20 nm | Chlorophyll and other pigments |
| 4 | 555 nm | 20 nm | Turbidity, suspended sediment |
| 5 | 660 nm | 20 nm | Fluorescence signal, chlorophyll, suspended sediment |
| 6 | 680 nm | 10 nm | Atmospheric correction and fluorescence signal |
| 7 | 745 nm | 20 nm | Atmospheric correction and baseline of fluorescence signal |
| 8 | 865 nm | 40 nm | Aerosol optical thickness, vegetation, water vapour reference over the ocean |

Table 2. GOCI spectral bands

### **2.2.3.2 GOCI design overview**

The GOCI consists of a Main Unit and an Electronic Unit. The total GOCI mass is below 78 kg. The power needed is about 40 W for the electronics plus about 60 W for Main Unit thermal control. A Payload Interface Plate (PIP) is part of the Main Unit. It supports a highly stable full-SiC telescope, mechanisms and proximity electronics. Fig. 7 shows the main unit, which is integrated on the Earth panel of the satellite through the PIP. The PIP is larger than the instrument to carry the satellite Infra-Red Earth Sensor (IRES).

The main unit includes an optical module, a two-dimensional Focal Plane Array (FPA) and a Front End Electronics (FEE). The optical module of GOCI consists of a pointing mirror, Three Mirror Anastigmat (TMA) mirrors, a folding mirror, and a filter wheel. The FEE is attached near the FPA in order to amplify the detector signal with low noise before digitization.

Fig. 7. Design configuration of GOCI main unit and its flight model configuration during integration phase (without MLI)


The shutter wheel is located in front of the pointing mirror, carrying four elements: a shutter which protects the optical cavity during non-imaging periods, an open part for ocean observation, a Solar Diffuser (SD) and a Diffuser Aging Monitoring Device (DAMD) for solar calibration. A Quasi Volumic Diffuser (QVD) has been chosen for the SD and the DAMD among several candidates because it is known to be insensitive to the radiation environment. The on-board calibration devices prepared for integration are shown in Fig. 8. The SD, covering the full aperture of GOCI, is used to perform in-orbit solar calibration on a daily basis. Degradation of the SD over the mission life is detected by the DAMD, covering the partial aperture of GOCI.

Fig. 8. On-board calibration devices SD, DAMD and pointing mirror mechanism POM

The pointing mirror is equipped with a 2-axis circular mechanism for scanning over the observation area. Fig. 8 shows the GOCI pointing mechanism (POM). The pointing mirror is controlled to achieve a Line of Sight (LOS) corresponding to the center of a predefined slot on the Earth. The principle of the pointing mechanism is an assembly of two rotating actuators mounted together with a cant angle of about 1°, the top actuator also carrying the Pointing Mirror (PM) with the same cant angle. When rotating the lower actuator the LOS is moved on a circle, and by rotating the second actuator, a second circle is drawn from the first one. It is thus possible to reach any LOS position inside the target area by choosing the appropriate angle position on each circle. The mechanism pointing law provides the relation between the rotation of both actuators and the LOS with a very high stability. This high-accuracy pointing assembly, used to select slot centers, is able to position the instrument LOS anywhere within a 4° cone, with a pointing accuracy better than 0.03° (500 μrad). Position knowledge is better than 10 μrad (of the order of the pixel size) thanks to the use of optical encoders. Incident light on the GOCI aperture is reflected by the pointing mirror and collected through the TMA telescope. Then the collected light goes to an optical filter through a folding mirror.
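The pointing law can be pictured with a small-angle epicycle model: each actuator moves the LOS on a circle of radius set by the cant angle, and the mirror reflection doubles the angular deviation. The sketch below is a toy model of this geometry under those assumptions, not the flight pointing law:

```python
import numpy as np

CANT_DEG = 1.0  # cant angle of each rotating actuator (from the text)

def los_offset_deg(theta1_deg, theta2_deg, cant_deg=CANT_DEG):
    """Small-angle epicycle model of the two-actuator pointing (toy model).

    Each actuator moves the mirror normal on a circle of radius `cant_deg`;
    reflection off the pointing mirror doubles the LOS deviation (assumption).
    Returns (deviation magnitude in deg, azimuth in deg).
    """
    t1, t2 = np.radians([theta1_deg, theta2_deg])
    # mirror-normal tilt as the sum of the two canted rotations (epicycle)
    tilt = cant_deg * np.exp(1j * t1) + cant_deg * np.exp(1j * (t1 + t2))
    los = 2.0 * tilt  # reflection doubles the deviation
    return abs(los), np.degrees(np.angle(los))

# Aligned actuators give the maximum deviation: 2 * 2 * cant = 4 deg cone
print(los_offset_deg(0.0, 0.0))    # -> (4.0, 0.0)
# Opposed actuators cancel the tilt and leave the LOS on boresight
print(los_offset_deg(0.0, 180.0))  # -> (~0.0, ...)
```

This reproduces the 4° cone quoted in the text from the ~1° cant angle, which is why the epicycle reading of the mechanism is plausible.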

The eight spectral channels are obtained by means of a filter wheel which includes a dark plate in order to measure the system offset. Fig. 9 shows the filter wheel integrated with the eight spectral filters, without a protective cover. The FPA for GOCI, which is shown in Fig. 9, is a custom-designed CMOS image sensor featuring a rectangular pixel size to compensate for the Earth projection over Korea, and electro-optical characteristics matched to the specified instrument operations. The CMOS FPA, having 1432 × 1415 pixels, is passively cooled and regulated around 10 °C. It is split into two modules which are electrically independent. The GOCI electronics unit, which is shown in Fig. 9, is mounted remotely on the satellite wall, about 1.5 m from the GOCI main unit. It provides control of the mechanisms (pointing mirror, shutter wheel, filter wheel), video data acquisition, digitization, mass memory and power.


Fig. 9. GOCI filter wheel without cover, CMOS detector package with temporary window and Electronics Unit

The imaging in GOCI is done in the step and stare fashion, passing along the 16 slots, as shown in the Fig. 10.

Fig. 10. GOCI imaging principle
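The slot geometry is consistent with the numbers quoted earlier (2500 × 2500 km2 coverage, 500 m pixels, a 1432 × 1415 pixel FPA); assuming the 16 slots form a square 4 × 4 grid (an inference, not stated explicitly here), a back-of-the-envelope check shows the FPA leaves margin for slot overlap:

```python
coverage_km, pixel_km, n_slots = 2500, 0.5, 16
fpa_pixels = (1432, 1415)                # GOCI CMOS FPA size (from the text)

slots_per_side = int(n_slots ** 0.5)     # assumed square 4 x 4 slot layout
slot_km = coverage_km / slots_per_side   # ~625 km footprint per slot
needed = slot_km / pixel_km              # ~1250 pixels to span one slot side

# The FPA is larger than strictly needed (1432 x 1415 vs ~1250 x 1250),
# leaving margin for slot overlap and stitching between the 16 stares.
print(slot_km, needed, fpa_pixels)
```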

## **2.2.4 COMS INR system**

## **2.2.4.1 Overview of COMS INR system**

Achieving and maintaining good geo-localization of the images on the ground is an essential part of a geostationary remote sensing satellite if the utilization of the remote sensing data is to be meaningful and fruitful. To this purpose, an Image Navigation and Registration (INR) system should be in place, and in COMS, a novel approach to INR was developed, allowing a-posteriori location of the images on the geoid based on automatic identification of landmarks and comparison with a reference database of specific terrestrial features such as small islands, capes, and lakes.

In this novel approach, INR is not directly dependent on the satellite and payload models and hence can avoid modeling and prediction errors in the process. The high reliance on landmarks and the acquisition of a sufficient number of good-quality landmarks, however, become the key part of the design in this approach, and such acquisition must be secured for this approach to be practically successful. In COMS INR,


an excellent landmark matching algorithm, the fine-tuning of configuration parameters during IOT, and the fine-tuning of a newly established landmark database with ample landmark sites at the final phase of IOT ensured the acquisition of a sufficient number of good landmarks.

Fig. 11 shows the overall architecture of COMS INR. All the processing is done on the ground except for the long term image motion compensation (LTIMC), and as can be seen here, the whole INR system is operated in close conjunction with the AOCS.

Fig. 11. COMS INR overall architecture

## **2.2.4.2 Description of COMS INR system and processing**

In this section, a description of each module and each processing step which comprises the whole COMS INR system, as shown in Fig. 11, is provided.

## *2.2.4.2.1 Space Segment INR*

## **Attitude determination**

The on-board attitude determination estimates the spacecraft attitude from attitude sensor measurements (IRESs, Sun Sensors and FOGs) through a filtering process. This process is performed at 10 Hz and sub-sampled at 1 Hz for insertion into the MI wideband telemetry, for use by the Navigation and Registration Filter Module on the ground.
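As a generic illustration of this kind of gyro/absolute-sensor blending (a toy one-axis complementary filter, not the actual COMS estimator), the structure looks like:

```python
def complementary_filter(att_prev, gyro_rate, earth_sensor_angle,
                         dt=0.1, gain=0.02):
    """One-axis toy attitude filter (illustrative only).

    dt = 0.1 s matches the 10 Hz processing rate mentioned in the text;
    `gain` is a hypothetical blending weight trading gyro smoothness
    against Earth-sensor correction of the gyro drift.
    """
    predicted = att_prev + gyro_rate * dt  # propagate with the gyro
    # pull the propagated estimate toward the absolute Earth-sensor reading
    return predicted + gain * (earth_sensor_angle - predicted)
```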

## **Attitude control**

The on-board attitude control loop actuates momentum wheels, the solar array, and thrusters. The control loop is designed to be robust to the effect of disturbances on the MI & GOCI fields of view. Disturbances include:

- diurnal attitude pointing perturbation due to thermo-elastic distortion and solar torques;
- thruster firing for station keeping and wheel off-loading.



## **Long term image motion compensation (LTIMC)**

The on-board LTIMC is used to compensate pointing bias and long term evolutions (seasonal, ageing) to keep the area to be observed within the MI & GOCI field of views.

## **Wideband data formatting**

Wideband data consists of MI & GOCI imagery/telemetry and AOCS auxiliary attitude data.

#### *2.2.4.2.2 Ground Segment INR*

## **The image observation data extraction module**

This module gathers all functions of data extraction from images: cloud cover detection, landmark detection from image/database matching, multi-temporal tie point detection from image/image matching, and multi-spectral tie point detection from band-to-band matching. A first "pre-rectification" of Level 1A images allows retrieving the 2D local image coherence.

## **The navigation and registration filter module**

This module gathers all functions of the geometric models: localization model (including focal plane and scan mirror models), navigation filter, and landmark or tie point position prediction. This module performs state vector estimation through a hybridization filter that combines landmark, thermo-elastic, orbit, and gyro data in a way that minimizes criteria on the landmark (for navigation) or tie point (for registration) residuals.
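For a rough feel of what such a filter does, the sketch below estimates small attitude bias angles from landmark residuals by plain linear least squares; the observation model, the three-element state vector and all symbols are illustrative assumptions, not the COMS hybridization filter itself.

```python
# A minimal sketch (not the COMS flight/ground code): estimate small attitude
# bias angles from landmark residuals by linear least squares.
import numpy as np

def estimate_attitude_bias(predicted, observed, jacobians):
    """predicted/observed: (N, 2) landmark image positions (radians);
    jacobians: (N, 2, 3) sensitivity of each position to (roll, pitch, yaw)."""
    r = (observed - predicted).reshape(-1)          # stacked residuals
    H = jacobians.reshape(-1, 3)                    # stacked design matrix
    state, *_ = np.linalg.lstsq(H, r, rcond=None)   # minimizes ||H x - r||^2
    return state                                    # (roll, pitch, yaw) bias

# toy usage with a random but self-consistent geometry
rng = np.random.default_rng(0)
H = rng.normal(size=(50, 2, 3))
truth = np.array([1e-5, -2e-5, 5e-6])               # radians
pred = rng.normal(size=(50, 2))
obs = pred + (H @ truth) + rng.normal(scale=1e-7, size=(50, 2))
print(estimate_attitude_bias(pred, obs, H))
```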

## **The image geometric correction module**

This module gathers all functions relative to image resampling and Modulation Transfer Function (MTF) compensation. For each pixel of an image, the state vector allows computing the shift between raw geometry and reference geometry. Each pixel of the Level 1B image is computed through radiometric interpolation with respect to the neighbouring pixels around its corresponding pixel in the Level 1A image.
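The sketch below illustrates the resampling idea with simple bilinear interpolation: for each Level 1B pixel, its fractional position in the Level 1A raw geometry is assumed to be already known from the state vector, and the radiometric value is interpolated from the neighbouring Level 1A pixels. The function name and the choice of bilinear weights are assumptions.

```python
# A minimal sketch of radiometric interpolation during geometric correction.
import numpy as np

def resample_bilinear(level1a, map_rows, map_cols):
    """level1a: 2-D array; map_rows/map_cols: for each Level 1B pixel, the
    corresponding (float) row/column in the Level 1A image."""
    r0 = np.clip(np.floor(map_rows).astype(int), 0, level1a.shape[0] - 2)
    c0 = np.clip(np.floor(map_cols).astype(int), 0, level1a.shape[1] - 2)
    dr, dc = map_rows - r0, map_cols - c0
    top = (1 - dc) * level1a[r0, c0] + dc * level1a[r0, c0 + 1]
    bot = (1 - dc) * level1a[r0 + 1, c0] + dc * level1a[r0 + 1, c0 + 1]
    return (1 - dr) * top + dr * bot
```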

## **The image quality control module**

Once the state filter estimation is performed, the ground pixels corresponding to landmarks are localized. The resulting difference with respect to the known landmark position is called the "residual". This can be done for the landmarks used for navigation, but also for "reference landmarks", which are used for the navigation accuracy control filter. All computed residuals are stored for further statistics. The statistics (average, standard deviation, max value) on the residuals within an image give the instantaneous INR performance. The statistics over a set of images during a certain period give the INR performance relative to that period. The statistics relative to a specific landmark over a certain period give information on the quality and reliability of that landmark. This result is used to periodically update the landmark database with a confidence rate that has to be taken into account for better accuracy of the navigation filter. All statistics are also computed with respect to context: date, time, cloud conditions.
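A minimal sketch of this residual bookkeeping is given below: per-image statistics give the instantaneous INR performance and per-landmark statistics feed a confidence figure for the landmark database. The data layout and the confidence formula are illustrative assumptions.

```python
# A minimal sketch of INR residual statistics for quality control.
import numpy as np
from collections import defaultdict

def residual_statistics(residuals):
    """residuals: list of (landmark_id, residual_microrad) for one image."""
    values = np.array([v for _, v in residuals], dtype=float)
    per_image = {"mean": values.mean(), "std": values.std(), "max": values.max()}
    per_landmark = defaultdict(list)
    for lid, v in residuals:
        per_landmark[lid].append(v)
    # a simple, assumed confidence rate: large per-landmark scatter -> low confidence
    confidence = {lid: 1.0 / (1.0 + np.std(v)) for lid, v in per_landmark.items()}
    return per_image, confidence

stats, conf = residual_statistics([("lm_001", 22.0), ("lm_002", 35.5), ("lm_001", 28.0)])
print(stats, conf)
```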

## **2.2.5 COMS ground segment**

The COMS GS (Ground Segment) consists of four GCs (Ground Centers); Satellite Control Center (SOC), National Meteorological Satellite Center (NMSC), Korea Ocean Satellite Center (KOSC), and Communication Test Earth Station (CTES) (KARI, 2006).


The SOC performs the primary satellite operation/monitoring and the secondary image data processing. The NMSC and KOSC have the role of primary image data processing for MI (in NMSC) and GOCI (in KOSC), respectively, and the NMSC is also the secondary ground center for satellite operation/monitoring. The CTES monitors RF (Radio Frequency) signals to check the status of the Ka-band communication system.

The SOC has two functions within the COMS GS: MI/GOCI image data processing (as the backup center) and satellite operation/monitoring (as the primary center). The first SOC function is implemented in the IDACS (Image Data Acquisition and Control System) for image data processing, with three subsystems: DATS (Data Acquisition and Transmission Subsystem), IMPS (IMage Pre-processing Subsystem), and LHGS (LRIT/HRIT Generation Subsystem) (Lim et al., 2011).

The other SOC function, satellite operation and monitoring, is implemented in the SGCS (Satellite Ground Control System) with five subsystems: MPS (Mission Planning Subsystem), TTC (Telemetry, Tracking, and Command), ROS (Real-time Operations Subsystem), FDS (Flight Dynamics Subsystem), and CSS (COMS Simulator Subsystem) (Lee et al., 2006).

Fig. 12 shows the essential architecture of the COMS ground segment with its key subsystems, and Table 3 describes the functions of the COMS ground segment subsystems: DATS, IMPS, and LHGS (IDACS); MPS, TTC, ROS, FDS, and CSS (SGCS).

Fig. 12. COMS ground segment architecture with key composing subsystems




| System | Sub-System | Functions |
|---|---|---|
| IDACS | DATS | Reception and error correction of CADU; processing and dissemination of LRIT/HRIT; control and monitoring of IDACS; interfaces among subsystems of the IDACS |
| | IMPS | CADU receiving and processing; radiometric correction (IRCM); geometric calibration (INRSM); payload status monitoring |
| | LHGS | LRIT/HRIT generation; compression and encryption for LRIT/HRIT generation |
| SGCS | MPS | Mission request gathering; mission scheduling; mission schedule reporting |
| | TTC | Telemetry reception; command transmission; tracking and ranging; control and monitoring |
| | ROS | Telemetry processing; telemetry analysis; command planning; telecommand processing |
| | FDS | Orbit determination and prediction; satellite event prediction; satellite fuel accounting; station-keeping and re-location planning |
| | CSS | Satellite dynamic static simulation; command verification; anomaly simulation |

Table 3. Functions of the COMS ground segment

## **3. COMS in-orbit performances**

## **3.1 COMS AOCS performances and platform stability**

The quality of images taken by the on-board optical instruments is strongly dependent on the quality of the platform stabilisation. Three (3) strong requirements have been put on the COMS platform, all necessary to obtain the specified image quality:

- pointing accuracy (pitch and roll): this specification is essential to know a priori where the instrument line of sight is aiming. This is important for Ka-band payload operations, for GOCI operation (due to the further stitching of small images to construct the large imaging area) and for MI, which can be commanded to frequently review some local areas.
- pointing knowledge (pitch and roll): the pointing knowledge is mainly driven by the INR, in order to start the landmark matching processing with a sufficient accuracy.
- pointing stability (pitch and roll): this specification is mainly driven by the GOCI instrument, requesting integration times as long as 8 seconds, with a jitter less than 10 µrad.

The first point is fulfilled by the heritage bus (E3000 platform), but the two last points have necessitated the implementation of a high-precision Fibre Optic Gyro (Astrium's FOG Astrix 120 HR); furthermore, the third point has been flowed down to micro-vibration dampers under the wheels, various AOCS tunings (solar array natural mode damping, optimised wheel zero-crossing management), optimized manoeuvres (reaction wheel off-loading, EW and NS manoeuvres, etc.), and a few operational constraints (stopping the solar array rotation during the GOCI imaging period, etc.).

The resulting performances are typified as the pointing knowledge of better than 0.003°, the pointing accuracy of better than 0.05°, and the pointing stability of better than 7µrad/8s, all in roll and pitch. Fig. 13 shows the typical example of the performance on the platform stability.

Fig. 13. COMS platform stability, as measured for a period of 3 months and computed on a 3-sigma basis

## **3.2 Radiometric performances of MI and GOCI**

## **3.2.1 MI radiometric performances**

## **3.2.1.1 MI in-orbit SNR**


From the MI visible dark noise analysis results, the COMS MI in-orbit SNR at 5% albedo has been computed for both side 1 and side 2 of MI, and the in-orbit SNR on both sides proved to be better than the on-ground measurement and significantly above the specification of SNR > 10 at 5% albedo. Table 4 shows both the on-ground and in-orbit SNR at 5% albedo for MI side 1.
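Assuming the simplest definition consistent with the text, the sketch below computes the SNR at a reference albedo as the mean scene signal above the space-look level divided by the standard deviation of the space-look (dark) counts; the actual MI dark noise analysis is more involved.

```python
# A minimal sketch of an SNR-at-reference-albedo estimate from dark noise.
import numpy as np

def snr_at_reference(scene_counts, space_look_counts):
    dark_noise = np.std(space_look_counts)                   # dark/readout noise
    signal = np.mean(scene_counts) - np.mean(space_look_counts)
    return signal / dark_noise
```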


| MI Side 1 | SNR at 5% albedo (On Ground) | SNR at 5% albedo (In Orbit) |
|---|---|---|
| Detector 1 | 24 | 27.18 |
| Detector 2 | 24 | 26.28 |
| Detector 3 | 23 | 26.20 |
| Detector 4 | 24 | 27.01 |
| Detector 5 | 23 | 25.24 |
| Detector 6 | 23 | 27.08 |
| Detector 7 | 23 | 26.00 |
| Detector 8 | 24 | 26.21 |

Table 4. MI On-Ground and In-Orbit SNR results, MI side 1


## **3.2.1.2 MI in-orbit radiometric calibration**

COMS IOT (In-Orbit Test) MI calibration activities were divided into two main parts: MI visible channel and infrared channel calibrations. The visible channel calibration was conducted from July 11, 2010, after the COMS launch (2010.6.26, 21:41 UTC). Calibration activity of the infrared channels, including the visible one, started from August 11, 2010, after the completion of the out-gassing (removal of remnant volatile contaminants by heating). The functional and performance tests were performed for both functional sides (SIDE1: primary, SIDE2: secondary) plus two patch temperatures (patch Low and Mid) of the MI payload. In addition to the images of the MI channels, albedo monitor and moon images were also acquired and analyzed. The final performance verification was checked officially at the phase 1 & phase 5 end meeting (Jan 26, 2011) after the intensive MI radiometric calibration processes conducted from July 2010 to January 2011. A summary of the verifications at the meeting is listed as follows.

1. Command and control tests for both sides (Side 1/Side 2) were successful.
2. Scan mechanism tests were successful.
3. Image monitoring and acquisition tests were successful.
4. The performance tests of the MI visible channel were successful.
5. The performance tests of the MI infrared channels based on the payload real-time operational configuration modes were successful.


### *3.2.1.2.1 MI visible channel calibration process*

As shown in Fig. 14, the MI visible channel calibration process was simply a verification of a linear visible calibration equation using the real data sets. After that, the necessity of normalization among the eight detectors was checked. Albedo monitor data analysis and moon image processing were used for monitoring the detectors' trends.
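A minimal sketch of such a verification is shown below, assuming the usual linear form radiance = gain × (count − offset); the gain and offset values are placeholders rather than MI coefficients.

```python
# A minimal sketch of verifying a linear visible calibration equation.
import numpy as np

def counts_to_radiance(counts, gain, offset):
    return gain * (np.asarray(counts, dtype=float) - offset)

def verify_linear_calibration(counts, reference_radiance, gain, offset):
    """Return the RMS difference between calibrated and reference radiance."""
    predicted = counts_to_radiance(counts, gain, offset)
    return float(np.sqrt(np.mean((predicted - reference_radiance) ** 2)))
```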

Fig. 14. MI visible channel radiometric calibration process flow chart

The pixel-to-pixel response non-uniformity (PRNU) was examined using both the space-look and image data (Fig. 15). The PRNU met the requirement specifications (denoted by red lines). As a result, the normalization algorithm was not implemented in the visible channel calibration process.
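The sketch below shows one simple way to express a PRNU figure from space-look data, as the spread of the per-detector mean levels relative to their overall mean; the exact PRNU definition and threshold used for MI are not reproduced here.

```python
# A minimal sketch of a PRNU check on space-look data for eight detectors.
import numpy as np

def prnu_percent(space_look):
    """space_look: array of shape (n_detectors, n_samples) of dark counts."""
    det_means = space_look.mean(axis=1)
    return 100.0 * (det_means.max() - det_means.min()) / det_means.mean()

rng = np.random.default_rng(1)
dark = 30.0 + rng.normal(scale=0.2, size=(8, 10000))
print(f"PRNU = {prnu_percent(dark):.2f} %")
```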

Fig. 15. PRNU Check (SIDE1): Space-Look data


### *3.2.1.2.2 MI infrared channel radiometric calibration*

Different from the visible channel, the MI infrared channel calibration process has more complex steps to obtain qualified data, as shown in Fig. 16. First, the coefficients of the basic (nominal) IR calibration equation were verified using the real data sets, and then four major steps were taken: 1) scan mirror emissivity compensation, 2) midnight effect correction, 3) slope averaging, and 4) 1/f noise compensation.

Fig. 16. MI infrared channel radiometric calibration process flow chart

#### **1. Scan mirror emissivity correction**

Based on the scan mirror emissivity (as a function of the scan angle), the effect of the radiance emitted by the coating material on the scan mirror was compensated. The computed scan mirror emissivities at different scan angles are shown in Fig. 17.
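The sketch below illustrates the compensation idea: if the mirror emissivity at the current scan angle is eps, the measured radiance mixes the scene term with the mirror's own emission, and the scene term is recovered by removing the mirror contribution. The emissivity-versus-angle law and the mirror radiance model are placeholders, not the MI coefficients.

```python
# A minimal sketch of scan mirror emissivity compensation (illustrative model).
import numpy as np

def mirror_radiance(mirror_temp_k):
    # placeholder for the band-integrated radiance emitted by the mirror coating
    return 1.0e-3 * mirror_temp_k

def compensate_scan_mirror(measured, scan_angle_rad, mirror_temp_k,
                           eps0=0.01, eps1=0.005):
    eps = eps0 + eps1 * np.abs(np.sin(scan_angle_rad))   # assumed angle dependence
    return (measured - eps * mirror_radiance(mirror_temp_k)) / (1.0 - eps)
```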


Fig. 17. Computation of the scan mirror emissivity for four different infrared channels (1 Dark Image, Side 1, Patch Low, Det A, 2010.8.16)

### **2. Midnight effect compensation**

Data within four hours before and after local midnight were corrected using a midnight compensation algorithm (see Fig. 18). The estimated slope (open circles and squares), based on the regression between the blackbody slope and the selected optics temperature, is used near midnight, and the original slope values (thick lines) are used during the rest of the time.
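A minimal sketch of this slope substitution, assuming a simple linear regression of the blackbody slope against the optics temperature and a ±4-hour midnight window, is given below.

```python
# A minimal sketch of the midnight-effect slope selection (illustrative).
import numpy as np

def select_slope(hours_from_midnight, measured_slope, optics_temp,
                 window_hours=4.0):
    coeff = np.polyfit(optics_temp, measured_slope, 1)    # regression fit
    estimated = np.polyval(coeff, optics_temp)
    near_midnight = np.abs(hours_from_midnight) <= window_hours
    return np.where(near_midnight, estimated, measured_slope)
```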

Fig. 18. IR Midnight Effect: the result of slope selection (SWIR; Side 1/Patch Low)

#### **3. Slope averaging**

Slope averaging is a smoothing process to remove the responsivity variation of the detectors due to the diurnal variation of the background radiation inside the sensor. The reference slope value is compared with that of the previous day, and the residual between the two is filtered by the slope averaging.
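The sketch below expresses slope averaging as a day-to-day smoothing filter in which the residual between the new reference slope and the previous day's slope is damped by a gain factor; the gain value is an assumption.

```python
# A minimal sketch of slope averaging as a first-order smoothing filter.
def average_slope(previous_slope, reference_slope, gain=0.2):
    residual = reference_slope - previous_slope
    return previous_slope + gain * residual
```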

#### **4. 1/f noise compensation**

The 1/f noise compensation, which is a filtering of random noise in the lower frequency components, was also conducted. After the 1/f noise compensation, the striping effects on the water vapor channel were largely removed.
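One simple way to picture the 1/f noise compensation is to remove a slowly varying baseline along the scan, as in the sketch below; the moving-average window is an assumption and the operational filter is certainly more refined.

```python
# A minimal sketch of low-frequency (1/f-like) noise suppression along a scan line.
import numpy as np

def remove_low_frequency(scan_line, window=201):
    kernel = np.ones(window) / window
    baseline = np.convolve(scan_line, kernel, mode="same")  # slow component
    return scan_line - baseline + baseline.mean()           # keep the mean level
```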


## *3.2.1.2.3 The result of the MI IOT radiometric calibration processes*

The PRNU values from the radiometric indices computed by the real-time MI data processing system of COMS (called IMPS) indicated that the relative bias between detectors of the infrared channels was minimal, and thus the normalization step was skipped for the infrared channels, as for the visible one. The complete COMS MI images resulting from the IOT (see Fig. 19) showed that the radiometric performance of the MI payload meets all the requirement specifications for the current operational configuration of MI (Side 1, Patch Low).

Fig. 19. Calibrated MI FD Level 1A images (before INR), (Side 1, Patch Low, 2010.12.23)

## **3.2.2 GOCI radiometric performances**

## **3.2.2.1 GOCI in-orbit SNR**

The GOCI was turned on for the first time in orbit on July 12, 2010 and captured its first image the day after. Both sides (primary and redundant) were successfully tested during about two weeks. After the successful functional tests, such as the mechanism movement, detector temperature control, and imaging chain validity, the radiometric performance tests and radiometric calibration tests were performed. The radiometric performance test is aimed at verifying the validity of the performance measured on ground. The in-orbit offset and dark signal show a quite good correlation with the ground measurements. Also, the radiometric gain matrix, which has been measured in orbit, is very similar to the ground gain. The SNR test results, which are provided in Table 5, show the performance exceeding the requirements in all 8 spectral bands by 25 to 40%. This is mainly due to the excellent quality of the CMOS matrix detector and the design margin considered for the worst case analysis.

| Band | Mean SNR at GOCI level | SNR specification |
|---|---|---|
| B1 | 1476 | 1077 |
| B2 | 1496 | 1199 |
| B3 | 1716 | 1316 |
| B4 | 1722 | 1223 |
| B5 | 1586 | 1192 |
| B6 | 1513 | 1093 |
| B7 | 1449 | 1107 |
| B8 | 1390 | 1009 |

Table 5. GOCI In-Orbit SNR test result

## **3.2.2.2 GOCI in-orbit radiometric calibration**

GOCI in-orbit radiometric calibration relies on a full-pupil Sun Diffuser (SD), made of fused silica, known to be insensitive to radiation. The instrument is designed to allow a calibration every day. In practice, during IOT, two calibrations per week were performed. After IOT, the frequency of calibration was reduced to one per week. The potential aging of the SD is monitored by a second diffuser (Diffuser Aging Monitoring Device: DAMD) used less frequently than the SD, typically once per month since the end of the IOT. When not in use, both SD and DAMD are well protected by the shutter wheel cover to minimise their exposure to the space environment.


Through the IOT period of about six months, the instrument calibration and the calibration stability were fully verified. The purpose of the radiometric calibration test is to verify the in-orbit calibration method, which is based on two-point measurements (Kang & Coste, 2010). The in-orbit radiometric gain matrix of GOCI is calculated by using two sun images, which are obtained through the SD with two different integration times. The imaging time for the sun has been specified according to the desired solar incident angle of 25 to 35 degrees. The actual solar incident angle of a measured sun image is calculated by using the On-Board Time (OBT), which is included in the secondary header of the raw data. During IOT, sun imaging for the eight spectral bands was performed over two days on a one-week cycle. For each calibration, six sets of sun images with short and long integration times were obtained for each spectral band over about 10 minutes. The variations of the gains calculated from the 6 sets are very small (0.1% to 0.3%) and are most probably due to processing noise (small errors in the ephemerides and in the calibration time) and also possibly to short-term variations of the sun irradiance. Fig. 20(a) shows the gain evolution over eight months. For the first three months, the gain shows a relatively rapid decrease; there is about 2% variation over eight months. Fig. 20(b) shows the aging factor of the SD over eight months. The trend provided in this figure shows a sinusoidal variation over 8 months with a maximum amplitude of about 1%. This is probably not the real variation of the SD. In addition, the longitudinal solar incident angle to the GOCI shows a similar variation over the year. The reason for this sinusoidal variation is now under examination. The variations lower than 1% over almost one year show the SD stability. All the variations observed in orbit up to now are within 1 to 2%, which is very low and very satisfactory. Some evolutions seem to be correlated with the longitudinal solar incident angle. This opens the way to further improvement of the calibration model if necessary.
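A minimal sketch of the two-point gain estimate is shown below: the gain matrix follows from the count difference between the long- and short-integration sun images and the corresponding expected signal difference. The function arguments, in particular the expected solar radiance term, are illustrative assumptions.

```python
# A minimal sketch of a two-point (two integration times) gain estimate.
import numpy as np

def two_point_gain(counts_short, counts_long, t_short, t_long,
                   solar_radiance_per_second):
    """Counts are per-pixel arrays; returns a per-pixel gain matrix."""
    delta_counts = np.asarray(counts_long, float) - np.asarray(counts_short, float)
    delta_signal = solar_radiance_per_second * (t_long - t_short)
    return delta_signal / delta_counts
```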

Fig. 20. In-orbit radiometric stability over 8 months: (a) gain stability, (b) SD stability

The major performances (Modulation Transfer Function – MTF and Signal to Noise Ratio – SNR) are presented in this chapter, all other performances being well within the requirements.

One of the major advantages of ocean observation with the GOCI is that continuous monitoring is possible with images provided every hour, which maximizes the chance of clear observation of the whole field even in the cloudy season. No sun glint occurs, thanks to the angular position of the field of view during daytime, whereas sun glint discards many observations from low orbit.

## **3.3 Spatial and geometric performances of MI and GOCI**

## **3.3.1 GSD and MTF**

### **3.3.1.1 MI**


During IOT, the MI Ground Sampling Distance (GSD) and the spatial performance (MTF) have been fully checked and verified. The GSD has been verified as follows. The landmark matching results by the INRSM were used and the angular steps in both E/W and N/S were measured by best fit between level 1A image coordinates and landmark GEOS positions. Those angular steps were used to determine a projection function for each image (or subimage). Then, the specified GSD at Nadir was verified using the relevant projection function.
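The sketch below illustrates the angular-step estimate: a least-squares (linear) fit maps image pixel indices to landmark viewing angles, and the fitted step converts to a ground sampling distance at nadir through the geostationary altitude. The constants and names are illustrative assumptions, not the INRSM projection function itself.

```python
# A minimal sketch of estimating the angular step and nadir GSD from landmarks.
import numpy as np

GEO_ALTITUDE_M = 35_786_000.0          # approximate geostationary altitude

def angular_step_and_gsd(pixel_index, landmark_angle_rad):
    step, offset = np.polyfit(pixel_index, landmark_angle_rad, 1)  # best fit
    gsd_at_nadir_m = step * GEO_ALTITUDE_M
    return step, gsd_at_nadir_m
```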

Table 6 shows the measured MI MTF results.


Table 6. Measured In-Orbit MI MTF

## **3.3.1.2 GOCI**

During IOT, the GOCI GSD has been verified by the same method as with MI, and the imaging coverage and the slot overlap have also been fully verified. The spatial performance (MTF) has also been checked. Before launch, the GOCI MTF performance was tested through a ground test at the payload level. The in-orbit MTF test allows the validation of the MTF at system level, including the satellite stability performance, but the measurement accuracy of the in-orbit test is much worse than that of the ground test, depending on the availability and the quality of the transition patterns between bright and dark in the image. The GOCI MTF is calculated by using an image having a radiometric transition (such as a coast line), which is equivalent to a Knife Edge Function (KEF) measurement. Table 7 shows the GOCI MTF test result. Significant margins are demonstrated with respect to specifications; similar margins are present in all spectral bands.
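A minimal sketch of an edge-based MTF estimate is given below: the edge spread function sampled across a bright/dark transition is differentiated into a line spread function, Fourier transformed, and read at the Nyquist frequency. It assumes an already aligned and averaged edge profile and ignores noise handling.

```python
# A minimal sketch of a knife-edge (coastline) MTF estimate at Nyquist.
import numpy as np

def mtf_at_nyquist(edge_profile):
    esf = np.asarray(edge_profile, dtype=float)
    lsf = np.gradient(esf)                          # edge -> line spread function
    lsf /= lsf.sum()                                # normalize so MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=1.0)        # cycles per pixel
    return np.interp(0.5, freqs, mtf)               # Nyquist = 0.5 cycles/pixel
```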



| Band 8, MTF @ Nyquist | EW | NS |
|---|---|---|
| Mean Value | 0.36 | 0.35 |
| Standard Error | 0.06 | 0.04 |
| Specification | 0.30 | 0.30 |
| (Mean - Spec.) / Spec. | 19% | 16% |

Table 7. Measured GOCI MTF in band 8

## **3.3.2 INR performances**

The INR IOT took a significant amount of time, as the final tuning required. The first positive result obtained from the first images was the number of landmarks automatically extracted by the INR software. During the development, it had been demonstrated that a minimum of typically 100 landmarks was necessary, and sometimes more than 600 landmarks could be found on images.

The INR performance is evaluated on the basis of landmark residuals (statistical error after landmark best fit). In order to verify the validity of this approach, the coast line from the images is checked against an absolute coast line (based on GSHHS). The figures in Table 8 illustrate the typical performances of COMS INR as observed during the IOT.



Table 8. COMS MI and GOCI INR performances (units in µrad)


Worth noting is the fact that the COMS AOCS pointing performances, as described in Section 3.1, provide a significant contribution to the final INR performances. Also worth noting is the timeliness requirement put on the MI INR processing. As mentioned in Section 2.2, the satellite serves as a telecommunication relay to broadcast corrected data to end users in the international formats called HRIT and LRIT. Both formats require the data to be rectified both radiometrically and geometrically. An allowance of 15 minutes is given to perform the ground processing before uploading the data again to the satellite. After a few inevitable tunings, the whole process is now typically performed in 12 minutes. For illustration purposes, two examples of shoreline matching are presented, for the MI visible channel and for one GOCI spectral band, in Fig. 21 and Fig. 22.

Fig. 21. MI shoreline matching (FD, VIS)

Fig. 22. GOCI shoreline matching. The reference shoreline is superimposed to the geometrically rectified GOCI image. Matching is better than 2 pixels over the whole area.


Further analysis and monitoring of INR performances have been performed since the start of the normal operation of COMS for the service to the end users, and Fig. 23 and Fig. 24 illustrate some of these typical COMS INR performances.

Fig. 23. MI INR performance during 1st April ~ 31st August. Mode: ENH, Channel: VIS and IR, and negative correlation between the number of LMKs and the average of residuals (courtesy of KMA)

Fig. 24. MI INR performance during 1st April ~ 31st August. Mode: ENH, Channel: VIS and IR, and negative correlation between the number of LMKs and the average of residuals: twilight effects (courtesy of KMA)

## **4. Application and suggestion**

It has been merely 8 months since the outset of the normal operation of COMS for the distribution and service of the images and image products to the end users and scientific communities. The activities in this period in terms of data processing, calibration, end product generation and the related studies and research have been exceedingly interesting, proactive and imaginative, to say the least, and in a word 'dynamic' in a very positive and rewarding sense. This section describes the application aspect of the COMS image data from MI and GOCI, addresses some technical challenges currently posed in the course of data application, summarizes some of the representative end products both from MI and GOCI, and discusses the way forward with some suggestions.

## **4.1 MI**


## **4.1.1 Generation of MI end products**

As mentioned in the previous sections, COMS MI Level 1B data are generated through radiometric and geometric calibrations, and then sixteen meteorological products (level 2) are produced by the CMDPS (COMS Meteorological Data Processing System), as shown in Fig. 25.

Fig. 25. COMS Meteorological Products

Part of the meteorological products from COMS MI have been generated operationally since April 1, 2011, together with COMS operation. Those products are cloud analysis (type, phase and amount), cloud top temperature/pressure, atmospheric motion vector, cloud detection, fog, and aerosol index. Four more products, which are sea surface temperature, rain intensity, outgoing longwave radiation, and upper tropospheric humidity, were generated additionally from 10 August 2011. These products are currently being validated through comparison between the satellite-derived products and ground in-situ data. For example, the detection area of the Asian dust (aerosol index) that occurred in April and May 2011 was compared with COMS GOCI and MODIS (Moderate Resolution Imaging Spectroradiometer) true color images or OMI (Ozone Monitoring Instrument) AOD (Aerosol Optical Depth). The other six products, which are land surface temperature, sea ice/snow cover, total precipitable water, insolation, clear sky radiance, and aerosol optical depth, will be operationally produced soon.

Fig. 26. Examples of COMS meteorological products: (a) cloud phase, (b) atmospheric motion vector and (c) rain intensity


## **4.1.2 Application to weather forecasting and analysis**

In the Korean peninsula, annual losses and damages, both human and material, are enormous due to the convective clouds accompanying summer heavy rainfall, which either flow in from the West Sea or originate locally. COMS can monitor and watch the origination and development of these convective clouds since it can observe the Korean peninsula with MI in a concentrated way eight times an hour. NMSC is supporting weather forecasting with a technique developed for Very Short Range Forecasting utilizing COMS MI meteorological data, which was introduced and derived from the technique of convective cloud rainfall intensity calculation and monitoring by the SAFNWC (Satellite Application Facilities Nowcasting) of EUMETSAT (European Organization for the Exploitation of Meteorological Satellites).

Fig. 27. Examples of COMS MI data applications: (a) convective rain intensity image combined with a radar rain map, (b) predicted location of convective cloud and lightning image

To analyze typhoons, which pass through the Korean peninsula two to three times a year, typically around July to September, such elements as the typhoon intensity, the radius of strong winds, the maximum wind speed, and the low pressure are needed. In this analysis, NMSC is utilizing the Advanced Dvorak Technique (ADT) in the site operations, which was developed by the Cooperative Institute for Meteorological Satellite Studies (CIMSS) of the University of Wisconsin (UW). The algorithm in this technique classifies the evolution phase of the tropical cyclone according to its intensity, as the formation phase, the development phase and the disappearance phase, based on the MI infrared (IR) images, and automatically analyses the typhoon intensity through pattern recognition, by applying the Fast Fourier Transform (FFT) on the resulting patterns from the different phases of the cyclone.

COMS MI data are also to be used in the generation of aeronautical meteorological products, as shown in Fig. 28. These products may have a relatively low accuracy, but they have the advantage of observing the broader area every hour. They provide level 2 information such as the cloud phase, cloud height and cloud top temperature along the air routes, as well as information from convective cloud monitoring; another technique is under development for generating information on the elements that can cause aircraft accidents, such as icing and turbulence.

Fig. 28. Examples of COMS MI aeronautical meteorological products (under development): (a) turbulence distribution, (b) aircraft icing area

## **4.2 GOCI**


The application of GOCI data is focused on the monitoring of long-term and short-term ocean change phenomena around the Korean peninsula and the north-eastern Asian seas. In the daytime, the hourly-produced GOCI data will be used for ocean/coast environmental monitoring, for the observation of ocean dynamics features, and for the management of ocean territory. Also, these GOCI data, when used in conjunction with ocean numeric models, would bring forth an increase of accuracy in ocean forecasting.

GOCI level 2 data products can be generated from GOCI level 1B with GDPS (GOCI Data Processing System) which is the data processing and analysis software developed by KORDI.

The GDPS derives the pure ocean signal (water-leaving radiance) by atmospheric correction using an aero-optics model and an oceano-optics model developed and modified by KORDI. It can extract the pure water signal as the normalized water-leaving radiance, which is the water-leaving radiance corrected by considering the satellite-sun relative geometry. For a geostationary satellite, this relative position of the sun and the satellite changes all the time, and then the ocean signal is distorted. To resolve this issue of signal distortion, some research was performed. The system can generate marine environment analysis data using specific algorithms for the target region. The data processing algorithms applied to the existing ocean satellite optical sensors, together with the new algorithms for GOCI, produce the latest marine environmental analysis results.
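The sketch below shows only the core of the normalization idea, scaling the retrieved water-leaving radiance to the reference geometry (Sun at zenith, mean Earth-Sun distance) with the solar zenith angle; the GDPS correction is more elaborate and also treats the viewing geometry and bidirectional effects.

```python
# A minimal sketch of normalizing water-leaving radiance for solar geometry.
import numpy as np

def normalized_water_leaving_radiance(lw, solar_zenith_deg, earth_sun_au=1.0):
    mu0 = np.cos(np.radians(solar_zenith_deg))      # cosine of solar zenith angle
    return lw * earth_sun_au ** 2 / mu0
```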

and thus the ocean signal is distorted. To resolve this issue of signal distortion, some research was performed. The system can generate marine environment analysis data using algorithms specific to the target region. The data processing algorithms applied to existing ocean satellite optical sensors, together with new algorithms developed for GOCI, produce the latest marine environmental analysis results.

Table 9 shows the list of GOCI level 2 data products which are currently being generated and used for each application purpose, and Table 10 lists the GOCI level 3 data products which can also be generated by GDPS. The algorithm to generate GOCI level 3 data is under final validation. Fig. 29 shows some typical examples of these end products, in the case of TSS and CDOM.


Table 9. GOCI level 2 data products:

- **Water-leaving radiance (Lw)**: the radiance assumed to be measured at the very surface of the water under the atmosphere; input data for the water analysis algorithms
- **Normalized water-leaving radiance (nLw)**: the water-leaving radiance assumed to be measured at nadir, as if there were no atmosphere and with the Sun at zenith; indispensable for water color analysis algorithms
- **Chlorophyll (CHL)**: concentration of phytoplankton chlorophyll in ocean water; ocean primary production estimation, dumping site monitoring, climate change monitoring
- **Total suspended sediment (TSS)**: total suspended sediment concentration in ocean water; coastal ocean environmental analysis, TSS movement and transfer monitoring
- **Colored dissolved organic matter (CDOM)**: CDOM concentration in ocean water; indicator of ocean pollution, ocean salinity estimation
- **K-coefficient** and **optical properties of water** (absorption coefficient a, backscattering coefficient bb): ocean optical properties analysis
- **Underwater visibility (VIS)**: degree of clarity of the ocean observed by the naked eye; navy tactics, ocean pollution map, sea rescue work
- **Atmospheric and earth environment**: yellow dust and vegetation index; atmospheric environment and land application
- **Daily composite of CHL, SS, CDOM**: daily composite of 8 images for a cloud-free mosaic image; climate change trend analysis
- **Fishing ground information (FGI)**: fishing ground probability; fishing ground environmental index and fishing ground prediction



Table 10. GOCI level 3 data products:

- **Sea surface current vector (WCV)**: sea surface current direction/speed; understanding of sea surface currents and estimation of pollutant movements
- **Water quality level (WQL)**: coastal water quality level; coastal water quality control/monitoring
- **Primary productivity (PP)**: the production of organic compounds from carbon dioxide, principally through the process of photosynthesis; carbon cycle and productivity of aquatic organisms in the environment
- **Red tide index (RI)**: red tide index information; ocean pollution and ecological monitoring of red tide

Fig. 29. Examples of GOCI level 2 end products, TSS (Total Suspended Sediment) and CDOM (Colored Dissolved Organic Matter)

GOCI products such as the ocean current vector and the ocean color properties would be provided to fisheries and related organizations for increasing the haul, for the effective management of fish stocks, and ultimately for increasing fisheries income. The GOCI data could also be useful for monitoring suspended sediment movement, pollution particle movement, ocean current circulation and the ocean ecosystem. GOCI will also contribute to international cooperation systems, such as GEOSS (Global Earth Observation System of Systems), for long-term research and applications related to ocean climate change, through data exchange and co-research among the related countries.

The Korea Ocean Satellite Center (KOSC) in KORDI, as the official GOCI operation agency, receives the GOCI data directly from the satellite and generates, stores, manages and distributes the processed standard products. KOSC will also continuously develop new ocean environmental analysis algorithms to apply to the imagery data of GOCI and of GOCI-II, the next generation of GOCI.

Through the normal operation of GOCI, KOSC can provide new, high-grade ocean environmental information in near-real time. It can be applied to the detection of abnormal ocean phenomena such as red tides and green tides. The primary
productivity derived from the GOCI chlorophyll and other products is key research information about ocean carbon circulation. The color RGB images and analysis images of GOCI products, with their high spatial resolution, are clearer and more recognizable than the monochrome images from other existing geostationary earth monitoring satellites, which have only one visible band. These images can be useful for land applications and for atmospheric remote sensing applications such as the monitoring of typhoons, sea ice, forest fires, yellow dust, etc.

Fig. 30. Standard RGB image of GOCI (left) and the analysis result of seawater chlorophyll density in the East Sea (right)

Table 11 shows the overall scheme of GOCI data application, and Fig. 31 and Fig. 32 exemplify some of the typical applications. In Fig. 32, several images of different dates were mosaicked to realize this cloud-free picture, and the numerical signals of the Yellow Sea (East China Sea), the East Sea (Japan Sea) and the Northwestern Pacific were processed differently to maintain a balanced tone throughout the whole coverage area of the GOCI.


Table 11. Application subjects of GOCI data:

- Red tide and green algae monitoring, contributing to the reduction of red-tide-related damage
- Low-salinity water monitoring and ecological monitoring
- Fisheries and fish-farm management, and management of fisheries resources
- El Niño / La Niña monitoring
- Carbon circulation monitoring: analysis of marine primary productivity and long-term climate change research in the ocean, utilized to secure carbon credits
- Oil spill monitoring: monitoring of the movement and distribution of pollution
- Sea-ice monitoring: development of sea-ice observation areas and monitoring
- Forest fire monitoring: land management, forest fire monitoring and forest resources monitoring
- Dust monitoring: dust, vegetation, and atmospheric and global environmental monitoring
- Current surveillance: seawater flow-rate information production, utilized for coastal water quality management
- Turbidity monitoring: an indicator of marine pollution
- Fishery information: fish and fishery distribution
- Hurricane (typhoon) watch: hurricane tracking and navigation path monitoring



Fig. 31. Land and Sea Features expressed by natural color on the full scene of the GOCI.

Fig. 32. Structure of Chlorophyll Distribution in the North-East Asian Seas.

## **5. Conclusion**


COMS is a unique bird in many ways: partly in that it is such a complex satellite accommodating three different payloads with rather conflicting missions on a single spacecraft bus, partly in that it employs a unique and novel INR system, and partly because it has GOCI on board, the world's first geostationary ocean colour imager. Through the joint effort of EADS Astrium and KARI, it was masterfully designed, developed, tested and launched, and is now behaving beautifully in orbit, exhibiting quite impressive and fruitful performance along with very useful and interesting image data and processed end products.

It is especially interesting to note that, with the co-existence of both MI and GOCI on board, the comparison and combination of data taken by these two sensors from the same geostationary location could open some new windows for further interesting research and development. In the case of GOCI, the benefits of geostationary observation compared to its LEO (Low Earth Orbit) counterparts have been notably demonstrated and largely appreciated by the end users so far, even with the relatively short accumulated time of normal service, and as the further activities on post-processing and related studies are refined and matured, it is expected that this trend will become even more prominent.

With these observations, findings and expectations at hand, it can be cautiously said that COMS image data and the processed end products will bring an added dimension to the world remote sensing community and the related fields of science and technology. To this end, the application of MI and GOCI data during the mission life of COMS will be fully exploited and maximized. It is hoped and believed that all aspects of the COMS development and operation, from the design, implementation, test and validation, launch and IOT, to the data processing, end product generation, data utilization and end user services, will continue to grow and be improved and expanded in their relevant realm into the next generation of geostationary remote sensing satellites.

## **6. Acknowledgment**

The COMS program has involved many different organizations, agencies, government bureaus and companies, and a wide spectrum of participating personnel with different cultures, characters and backgrounds. It is with great emotion and gratitude, along with a highly rewarding feeling and a sense of proud accomplishment, that we can now say we regard all the participating members in this program as one big 'COMS family' and, to some, very close life-long friends indeed. There were certainly some bumpy roads and rocky times along the course, but through them all we became real friends, and we are grateful that we can now look back upon those days with a sense of mutual respect and appreciation.

We feel deeply grateful and obliged to send our appreciation to our Korean government bureaus first and foremost: MEST (Ministry of Education, Science and Technology), KMA (Korea Meteorological Administration), KORDI (Korea Ocean Research & Development Institute), MLTM (Ministry of Land, Transport and Maritime Affairs) and MIC (Ministry of Information and Communication), among others, without whose support and dedication this grand program would not have been possible. We are especially thankful to MOSF (Ministry of Strategy and Finance) for continually providing the actual revenue sources throughout the entire course of this challenging program.


The author and co-authors of this chapter represent only a very small portion of all the COMS family members, and we believe that the authors of this chapter ought, in fact, to be all COMS family members; we thus feel deeply indebted to them. Our special thanks go to Mr. Seong-rae Jung and Ms. Jin Woo of KMA, and Mr. Hee-jeong Han and Mr. Seong-ik Cho of KORDI, for the charts in section 3.3.2 and for their great help and support in preparing and finalizing sections 4.1 and 4.2.

Last but clearly not the least, we remember our missing COMS family members, Mr. Daniel Buvat of EADS Astrium and Mr. Young-joon Chang and Mr. Sang-mu Moon of KARI, who abruptly departed from this life on earth in the course of COMS development and operation, leaving the rest of us in deep grief and helpless devastation. Along the lines of COMS history, with the trace of their sincere commitment and contribution to the success of COMS, they will always be remembered in our hearts. We dedicate this small chapter to them.



## **Hyperspectral Remote Sensing – Using Low Flying Aircraft and Small Vessels in Coastal Littoral Areas**

Charles R. Bostater, Jr., Gaelle Coppin and Florian Levaux *Marine Environmental Optics Laboratory and Remote Sensing Center, College of Engineering, Florida Institute of Technology, Melbourne, Florida USA* 

## **1. Introduction**


Large field of view sensors, as well as flight line tracks of hyperspectral reflectance signatures, are useful for helping to solve many land and water environmental management problems and issues. High spectral and spatial resolution sensing systems are useful for environmental monitoring and surveillance applications of land and water features, such as species discrimination, bottom type identification, and vegetative stress or vegetation dysfunction assessments. In order to help provide information for environmental quality or environmental security issues, it is safe to say that there will never be one set of sensing systems to address all problems. Thus an optimal set of sensors and platforms needs to be considered and then selected. The purpose of this paper is to describe a set of sensing systems that have been integrated and can be useful for land and water related assessments related to monitoring after an oil spill (specifically for weathered oil) and related recovery efforts. Recently collected selected imagery and data are presented from flights that utilize an aircraft with a suite of sensors and cameras. Platform integration, modifications and sensor mounting were achieved using designated engineering representative (DER) analyses and related FAA field approvals in order to satisfy safety needs and requirements.

## **2. Techniques**

#### **2.1 Imaging systems, sensor systems and calibration**

Sensors utilized have been: (1) a photogrammetric 9 inch mapping camera utilizing a 12 inch focal length cone and using AGFA X400PE1 color negative film that has been optimized for high resolution scanning (2400 dpi), together with an associated special glass plate from Scanatronics in the Netherlands, in order to reduce the effects of Newton rings; (2) forward and aft full high definition (HD) video cameras recording to solid state memory with GPS encoding; (3) a forward mounted Nikon SLR 12.3 megapixel digital camera with a vibration reduction zoom lens and GPS encoding; (4) a hyperspectral imaging (HSI) system with 1376 spatial pixels and 64 to 1040 spectral bands.


Fig. 1. Image (upper left) of a scanned AGFA color negative film (X400PE1) from an airborne flight March 21, 2011 over Barataria Bay, LA. The upper right image is a simultaneously collected hyperspectral RGB image (540, 532, 524 nm). Imagery indicates the ability to detect weathered oil in the area from oil spill remediation activities. The graph shows selected spectra in weathered oil impact areas. The lower right shows in-situ targets.

Fig. 2. Digital images of the *weathered oil* observed in early 2011 from the ground in the Barataria Bay, LA. areas shown above.

The HSI imaging system utilizes a pen tablet computer with custom software. The HSI pushbroom system is integrated into the computer with an external PCMCIA controller card for operating the temperature stabilized monochrome camera, which is bore sighted with a transmission spectrograph and a ~39 degree field of view lens. The HSI imaging system is gimbal mounted and co-located with one of the 30 Hz HD cameras. The HSI system runs at ~20 to 90 Hz and is also co-located with a ~100 Hz inertial measurement unit (IMU). The IMU is strap-down mounted to the HSI along the axis of view of the hyperspectral imager.

An additional 5 Hz WAAS GPS output is recorded as another data stream into the custom software, which allows on-the-fly changes to the integration time and spectral binning capability of the system. The HSI system is calibrated for radiance using calibration spheres and with spectral line sources for wavelength calibration. Flights are conducted with the 5 cameras in a fashion that allows simultaneous and/or continuous operation, with additional use of camera intervalometers that trigger the Nikon and photogrammetric cameras. Examples of imagery taken on March 21, 2011 are shown below, as well as spectral signatures and in-situ field targets that are typically utilized for processing imagery for subsurface or submerged water feature detection and enhancements.

Airborne imagery shown in this paper was collected at 1,225 m between 10 AM and 4 PM local time, with a 1/225 second shutter speed and the aperture adjusted for optimal contrast and exposure. The large format (9 inch) negatives, scanned at 2400 dpi using a scanner and a special glass plate obtained from Scanatron, Netherlands, allow minimization of "Newton rings" in the resulting ~255 megapixel multispectral imagery shown below (left image). Experience has shown that this method works well with AGFA X400PE1 film. The aerial negative scanning process is calibrated using a scanned target with known sub-millimeter scales (0.005 mm, i.e. 5 µm, resolution) using a 2400 dpi scanner. The film scanning process results in three band multispectral images with spectral response curves published by the film manufacturer (Agfa or Kodak).
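
As a quick worked illustration of the ground detail this configuration can resolve, the short sketch below combines the 2400 dpi scan, the 12 inch focal length and the 1,225 m flying height quoted above into an approximate photo scale and ground sample distance. The relation used is the standard photogrammetric scale ratio rather than a procedure described by the authors, so the numbers are indicative only.

```python
# Approximate photo scale and ground sample distance (GSD) of the scanned film.
# The altitude and focal length come from the text; the scale relation itself is
# standard photogrammetry, so treat this as an illustrative sketch only.

DPI = 2400                    # film scanning resolution (dots per inch)
FOCAL_LENGTH_MM = 12 * 25.4   # 12 inch focal length cone, in millimeters
ALTITUDE_M = 1225.0           # flying height above the (near sea level) scene

pixel_pitch_mm = 25.4 / DPI                    # size of one scanned pixel on the film
scale = ALTITUDE_M * 1000.0 / FOCAL_LENGTH_MM  # photo scale denominator (1 : scale)
gsd_cm = pixel_pitch_mm * scale / 10.0         # ground footprint of one scanned pixel

print(f"pixel pitch on film: {pixel_pitch_mm:.4f} mm")  # ~0.0106 mm
print(f"photo scale: 1:{scale:.0f}")                    # ~1:4019
print(f"approximate GSD: {gsd_cm:.1f} cm")              # ~4.3 cm per scanned pixel
```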

*In-situ* targets as shown in Figure 1 are used for calibration of the imagery using a combination of white, black or gray scale targets, as shown below in an airborne digital image (right). Airborne targets are used for calibrating traditional film and digital sensor data for spatial and spectral characteristics using *in-situ* floating targets in the water as shown below.

Targets (Figure 2) are placed along flight lines. These types of land and water targets are used for image enhancement techniques, for use as GPS georeferencing ground control points, and for georeferencing accuracy assessments. They are necessary in order to estimate shoreline erosion resulting from oil spill impacts along littoral zones.

Figure 2 below shows images of weathered oil taken in the Jimmy Bay area in January, 2011, eight months after the major spill was contained in the deep waters of the northern Gulf of Mexico.

## **2.2 Pushbroom imagery corrections for aerial platform motions**

Airborne pushbroom imagery collected aboard moving platforms (ground, air, sea, space) requires geometric corrections due to platform motions. These motions are due to changes in the linear direction of the platform (flight direction changes), as well as sensor and platform motion due to yaw, pitch and roll. Unlike frame cameras that acquire a 2-dimensional image, pushbroom cameras acquire one scan line at a time. A sequence of
scan lines acquired along the platform track allows the creation of a 2-dimensional image. The consequence of using this type of imaging system is that the scan lines collected produce spatial artifacts due to platform motion changes, resulting in scan line feature offsets. The following describes the roll induced problem to be corrected. Consider an airplane that is flying over a straight road indicated by the dark red vertical line in the left image below. Now assume the airplane or mobile platform undergoes unwanted platform roll motion; the resulting straight feature in the acquired scene is then curved, as suggested by the light blue line in the left image. One knows that the road was straight, so the image as shown in Figure 3 (right) indicates that a lateral scan line adjustment is required in order to straighten the feature (the blue line). This is accomplished by "shifting" the scan lines opposite to the platform roll motion and results in an image where the feature in the image is corrected. Thus, one needs to calculate the offset that corresponds to the shift the pixels undergo.

Fig. 3. The left figure shown in blue is a distorted road. The red line corresponds to the center of the scan line. The right image represents the corrected version of the left image. On this image the blue straight line is the road and the red curve is the actual position of the center pixels of the scan lines. In this example only the shift in the cross track direction is represented.

The offset mentioned previously can be corrected if the sensing geometry and the hyperspectral imaging (HSI) system orientations are known when the different scan lines were taken. To obtain the platform and sensor orientation changes and position, a 60 Hz update rate inertial measurement unit (IMU) was utilized and mounted to the gimbal mounted camera. An IMU is a device that comprises triads of accelerometers and gyroscopes. The accelerometers measure specific forces along their axes, which are accelerations due to gravity plus dynamic accelerations. The gyroscopes measure angular rates. The IMU (Motion Node, GLI Interactive LLC, Seattle, Washington) that is used also has 3 magnetometers and outputs the orientation immediately by using those 3 types of sensors. In addition, differential 5 Hz WAAS GPS position, directional deviations, altitude with respect to a specified datum, and platform speed are collected during the flights.

An adaptive Kalman filter is used to estimate the induced platform motions using the combined GPS and IMU sensor data. The filtering technique thus allows one to
obtain the relative position of each scan line and the corresponding spatial pixel shift that needs to be applied to correct the image. When a gimbal mounted HSI pushbroom camera is used, there are two main influences that cause the geometric distortions. These are the slowly varying directional changes of the platform and the roll induced motions. The first step in the algorithm is to use the GPS to calculate the position of the sensor (Ox,Oy,Oz) at every scan line. The second step accounts for the influence of the roll motion by using the IMU sensor data. The position of a pixel on the earth's surface can be estimated using:

$$\begin{aligned} x &= O_x + \frac{s_x}{s_z}\left(h_{DEM} - O_z\right)\\ y &= O_y + \frac{s_y}{s_z}\left(h_{DEM} - O_z\right)\\ z &= h_{DEM} \end{aligned} \tag{1}$$

where $(s_x, s_y, s_z)$ are the components of the unit central scan line ray vector, $(x, y, z)$ is the position in meters relative to the origin (the initial position of the center of the scan line) and $h_{DEM}$ is the surface elevation given in meters with respect to Mean Sea Level (MSL).

Fig. 4. This figure shows the position of the sensor (Ox, Oy, Oz) and the unit scan line ray vector in the reference coordinate system, as well as the body (sensor platform) coordinate system with the possible platform motions. The position (x, y, z) is the position of the surface point in the reference coordinate system that is located at the center of an HSI pixel.
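
A minimal sketch of equation (1) is given below. It intersects the (already rotated) unit ray of a scan-line pixel with a flat surface of elevation hDEM, using the sensor position estimated by the filter described in the following sections. The function name and the example numbers are illustrative assumptions, not part of the authors' processing chain.

```python
import numpy as np

def pixel_ground_position(sensor_pos, ray, h_dem=0.0):
    """Equation (1): intersect a scan-line pixel ray with the surface z = h_dem.

    sensor_pos : (Ox, Oy, Oz), sensor position in the local reference frame [m]
    ray        : (sx, sy, sz), scan-line ray in the reference frame (sz < 0, pointing down)
    h_dem      : surface elevation with respect to MSL [m]
    """
    ox, oy, oz = sensor_pos
    sx, sy, sz = ray / np.linalg.norm(ray)   # make sure the ray is a unit vector
    x = ox + (sx / sz) * (h_dem - oz)
    y = oy + (sy / sz) * (h_dem - oz)
    return np.array([x, y, h_dem])

# Example: sensor 1225 m above MSL with the ray tilted 2 degrees off nadir by roll.
roll = np.deg2rad(2.0)
ray = np.array([0.0, np.sin(roll), -np.cos(roll)])
print(pixel_ground_position(np.array([0.0, 0.0, 1225.0]), ray))  # ~42.8 m cross-track offset
```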


The reference coordinate system chosen in this paper is a local tangent plane with the x axis pointed in the initial along track direction; the y axis is 90 degrees clockwise to the x axis and corresponds to the initial cross track direction. In the results that are presented in this paper, shifts have only been applied in the cross-track direction. The shifts in meters are scaled to shifts in pixels as a function of the altitude (given by the GPS in meters), the field of view of the sensor (dependent upon the lens used) and the number of pixels in one scan line, as illustrated in the sketch below.
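
The following helper is a sketch of that meters-to-pixels scaling under a flat-ground, near-nadir assumption; the 39 degree field of view and the 1376 pixels per scan line are the sensor values quoted in Section 2.1, while the function name and the example numbers are ours.

```python
import numpy as np

def meters_to_pixels(shift_m, altitude_m, fov_deg=39.0, n_pixels=1376):
    """Convert a cross-track shift in meters into a shift in scan-line pixels.

    Assumes a near-nadir line sensor over flat ground, so one scan line covers a
    swath of 2 * altitude * tan(fov / 2) meters across track.
    """
    swath_m = 2.0 * altitude_m * np.tan(np.deg2rad(fov_deg) / 2.0)
    meters_per_pixel = swath_m / n_pixels
    return shift_m / meters_per_pixel

# Example: a 40 m cross-track displacement seen from 1225 m altitude.
print(meters_to_pixels(40.0, 1225.0))   # roughly 63 pixels
```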

In the following section a description of the system is given, as well as the assumptions made. Then the application of the Kalman filter to acquire the position and velocity of the sensor is described, with a detailed description of the vectors and matrices used. In the second paragraph of this section, the influence of roll is taken into account. A further paragraph then describes the image resampling phase applied to low flying airborne imagery in littoral areas.

In general, the Kalman filter is applied to acquire the position and velocity of the sensor, as described below with a detailed description of the vectors and matrices used; the influence of roll is then taken into account, followed by a nearest neighbor resampling of the HSI imagery for each band independently.

For the results that are presented in this paper, pixel shifts are only applied in the cross-track direction. Use of a gimbal sensor mount has reduced the HSI sensor motion corrections required; however, improved image corrections that include pitch and yaw motions have also been developed.

Fig. 5. This figure shows the different altitudes and heights used. Where h is the altitude of the platform with respect to the WGS84, hDEM is the surface elevation, halt is the altitude of the platform with respect to MSL and hH is the true altitude with respect to the earth's surface. In our applications we consider that the surface elevation is negligible as we take images of oil spills around MSL, so hH ≈ halt.

#### **2.3 Description of the platform dynamic system**

In order to model the movement of the platform, a discrete dynamic system described by the canonical state space equations is used:

$$\mathbf{x}\_{k+1} = A\_k \mathbf{x}\_k + B\_k \mathbf{u}\_k + \mathbf{w}\_k \tag{2}$$

$$\mathbf{z}\_k = \mathbf{H}\_k \mathbf{x}\_k + \mathbf{v}\_k \tag{3}$$

where:

**x**k = the state vector (6x1 matrix). **x**k = (Ox Oy Oz Vx Vy Vz)kT contains the position of the sensor (Ox,Oy,Oz) (in meters) and the velocity (Vx,Vy,Vz) (in meters per second) in the reference coordinate system.

Ak = the (6x6) matrix that gives the relation between the previous state vector and the current state vector when no noise and no input vector are considered. This relation is given below by equation (4) for x (and similarly for y and z).

$$\left(O_x\right)_{k+1} = \left(O_x\right)_k + \Delta t_k \left(V_x\right)_k \tag{4}$$

where:

∆tk = the time-interval (in seconds) between step k and k+1.

Bk = the (6xm) matrix that relates the optional input vector u to the current state. (m = the number of elements in the control input vector if external forces are considered).

**u**k = the control input vector (mx1 matrix); we assume that there are no external forces that act upon the system, so **u**k = 0 in our application (it is assumed that the drag is exactly compensated by the thrust, and gravity by the lift).

**z**k = the measurement vector (6x1 matrix). **z**k = (Oxm Oym Ozm Vxm Vym Vzm)kT contains the position of the sensor (Oxm, Oym, Ozm) (in meters) and the velocity (Vxm, Vym, Vzm) (in meters per second) in the reference coordinate system obtained by the GPS.

Hk = the measurement sensitivity (6x6) matrix also known as the observation matrix that relates the state vector to the measurement vector (**z**k = Hk**x**k).

**w**k = the process noise or also called dynamic disturbance noise (6x1 matrix) which is assumed white and Gaussian with covariance matrix Qk (6x6 matrix).

**v**k = the measurement noise of the GPS (6x1 matrix) which is also assumed white and Gaussian (detailed calculations of the covariances see below) and its associated covariance matrix Rk (6x6 matrix).

The subscript k refers to the time step at which the vector or matrix is considered and indicates the time dependence.

**x**k contains the real position and velocity, whereas **z**k contains the measured position and velocity. **z**k is thus always prone to measurement noise.

In the first step, GPS data is used to calculate the position of the sensor (Ox, Oy, Oz). The matrices are defined as follows in the reference coordinate frame:

$$A_k = \begin{pmatrix} I_3 & \Delta t_k\, I_3 \\ 0 & I_3 \end{pmatrix}$$

where $I_3$ is the 3x3 identity matrix, and $Q_k$ is the corresponding 6x6 process noise covariance matrix.


Here $\sigma_v^2$ and $\sigma_h^2$ are respectively the covariance of the vertical and horizontal position (in m²) given by the GPS; detailed calculations of the velocity covariances are given below.


$$H_k = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}, \qquad R_k = \begin{pmatrix} \frac{\sigma_h^2}{2} & 0 & 0 & 0 & 0 & 0 \\ 0 & \frac{\sigma_h^2}{2} & 0 & 0 & 0 & 0 \\ 0 & 0 & \sigma_v^2 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{\sigma_{Vh}^2}{2} & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{\sigma_{Vh}^2}{2} & 0 \\ 0 & 0 & 0 & 0 & 0 & \sigma_{Vv}^2 \end{pmatrix}$$

The covariances of the vertical and horizontal velocities $\sigma_{Vv}^2$ and $\sigma_{Vh}^2$ (in m² per second²) are however not given by the GPS, but are calculated using the following rule, where:

*A given quantity y is a function of x1, x2, … xN given by the formula y=f(x1,x2,...,xN). The uncertainties in xi are respectively e1, e2, …, eN. The absolute uncertainty ey is then given by* 

$$\left(e_y\right)^2 = \left(\frac{\partial f}{\partial x_1}\right)^2 \left(e_1\right)^2 + \left(\frac{\partial f}{\partial x_2}\right)^2 \left(e_2\right)^2 + \dots + \left(\frac{\partial f}{\partial x_N}\right)^2 \left(e_N\right)^2$$

From the above, one has for the vertical (z) direction $(V_z)_k = \frac{(O_z)_{k+1} - (O_z)_k}{\Delta t_k}$ and hence the covariance of the vertical velocity $(\sigma_{Vv}^2)_k$ equals $\frac{(\sigma_v^2)_{k+1} + (\sigma_v^2)_k}{\Delta t_k^2}$, since the velocity at time k equals the difference between the position at time k+1 and k, divided by the time interval. It is assumed that there is no uncertainty on the time interval. This is valid because one assumes that **x**k and **x**k+1 are statistically independent. In a similar manner, one can calculate the covariance of the horizontal velocity, where one assumes

$$\sigma_{Ox}^2 = \sigma_{Oy}^2 = \frac{\sigma_h^2}{2} \quad \text{from} \quad \sigma_h^2 = \sigma_{Ox}^2 + \sigma_{Oy}^2 .$$
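
Putting these measurement-noise pieces together, the sketch below assembles R_k from the GPS-reported position covariances, propagating them to velocity covariances by the differencing argument above. The function and variable names are ours and the numerical values are placeholders, not values from the text.

```python
import numpy as np

def measurement_covariance(sigma_h2, sigma_v2, sigma_h2_next, sigma_v2_next, dt):
    """Assemble R_k for the measurement vector (Ox, Oy, Oz, Vx, Vy, Vz).

    Position terms: the GPS horizontal covariance sigma_h^2 is split equally
    between x and y (sigma_Ox^2 = sigma_Oy^2 = sigma_h^2 / 2); sigma_v^2 is vertical.
    Velocity terms: since V_k = (O_{k+1} - O_k) / dt with independent fixes, the
    velocity covariance is the sum of the two position covariances divided by dt^2.
    """
    sigma_vh2 = (sigma_h2 + sigma_h2_next) / dt**2   # horizontal velocity covariance
    sigma_vv2 = (sigma_v2 + sigma_v2_next) / dt**2   # vertical velocity covariance
    return np.diag([sigma_h2 / 2.0, sigma_h2 / 2.0, sigma_v2,
                    sigma_vh2 / 2.0, sigma_vh2 / 2.0, sigma_vv2])

# Example: 5 Hz WAAS GPS (dt = 0.2 s) with 1.0 m^2 horizontal and 2.25 m^2 vertical covariance.
R_k = measurement_covariance(1.0, 2.25, 1.0, 2.25, dt=0.2)
print(np.diag(R_k))
```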

#### **3. Kalman filter and smoothing approach**

#### **3.1 Position and velocity estimations**

The Kalman filter consists of two steps: a temporal update step (also known as the "a priori" prediction step) and a measurement update step (also known as the "a posteriori" correction step). In the temporal step, given by equations 4 and 5, the estimated state vector x̂k− and the estimation covariance Pk− at time step k are predicted based on the current knowledge at time step k-1.

The state vector x̂k contains the estimated position of the sensor (Ox, Oy, Oz) (in meters) and the velocity (Vx, Vy, Vz) (in meters per second) in the reference coordinate system.

The predictive procedure step is given by:

$$\begin{aligned} \hat{\mathbf{x}}\_{k}^{-} &= A\_{k-1} \hat{\mathbf{x}}\_{k-1}^{+} \\ P\_{k}^{-} &= A\_{k-1} P\_{k-1}^{+} A\_{k-1}^{\prime} + Q\_{k-1} \end{aligned} \tag{5}$$

and the measurement update step (given by equation 6 below) corrects the predicted estimates x̂k− and Pk− using the additional GPS sensor measurements **z**k to obtain the corrected estimates of the state vector, x̂k+ and Pk+, or:

$$\begin{aligned} K\_{k} &= P\_{k}^{-} H\_{k}^{\prime} \left( H\_{k} P\_{k}^{-} H\_{k}^{\prime} + R\_{k} \right)^{-1} \\ \hat{\mathbf{x}}\_{k}^{+} &= \hat{\mathbf{x}}\_{k}^{-} + K\_{k} \left( \mathbf{z}\_{k} - H\_{k} \hat{\mathbf{x}}\_{k}^{-} \right) \\ P\_{k}^{+} &= \left( I - K\_{k} H\_{k} \right) P\_{k}^{-} \end{aligned} \tag{6}$$

where x̂k− and x̂k+ are respectively the predicted (-) and corrected (+) values of the estimated state vector (a 6x1 vector), and Pk− and Pk+ (6x6 matrices) are respectively the predicted and corrected values of the estimation covariance of the state vector, or:

$$P\_k^+ = \operatorname{diag} \left( \sigma\_{Ox}^2, \sigma\_{Oy}^2, \sigma\_{Oz}^2, \sigma\_{Vx}^2, \sigma\_{Vy}^2, \sigma\_{Vz}^2 \right)\_k$$

Kk (equation 6) is the Kalman gain (6x6 matrix). Hk is the (6x6) measurement matrix and Rk is the (6x6) measurement noise covariance matrix of the GPS measurement vector **z**k:

$$H\_k = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}, \quad R\_k = \begin{pmatrix} \frac{\sigma\_h^2}{2} & 0 & 0 & 0 & 0 & 0 \\ 0 & \frac{\sigma\_h^2}{2} & 0 & 0 & 0 & 0 \\ 0 & 0 & \sigma\_v^2 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{\sigma\_{Vh}^2}{2} & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{\sigma\_{Vh}^2}{2} & 0 \\ 0 & 0 & 0 & 0 & 0 & \sigma\_{Vv}^2 \end{pmatrix}\_k$$

where σ²h and σ²v are the horizontal and vertical position variances reported by the GPS. The covariances of the vertical and horizontal velocities σ²Vv and σ²Vh (in m² per second²) are however not given by the GPS but are calculated by propagation of uncertainty: *a given quantity y is a function of x1, x2, … xN given by the formula y = f(x1, x2, ..., xN); the uncertainties in xi are respectively e1, e2, …, eN; the absolute uncertainty ey is then given by*

$$\left(e\_y\right)^2 = \left(\frac{\partial f}{\partial x\_1}\right)^2 \left(e\_1\right)^2 + \left(\frac{\partial f}{\partial x\_2}\right)^2 \left(e\_2\right)^2 + \dots + \left(\frac{\partial f}{\partial x\_N}\right)^2 \left(e\_N\right)^2$$

From the above, one has for the vertical (z) direction Vz,k = (Oz,k+1 − Oz,k)/Δt, and hence the covariance of the vertical velocity σ²Vv,k equals (σ²v,k + σ²v,k+1)/Δt², since the velocity at time k equals the difference between the positions at times k+1 and k, divided by the time interval. It is assumed that there is no uncertainty on the time interval. This is valid because one assumes that **x**k and **x**k+1 are statistically independent. In a similar manner, one can calculate the covariance of the horizontal velocity, where one assumes σ²Ox = σ²Oy = σ²h/2 from σ²h = σ²Ox + σ²Oy.
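As a minimal sketch of this propagation-of-uncertainty step (the function name, the example variances and the 1 s GPS sampling interval are illustrative assumptions, not values from the chapter), the measurement covariance Rk could be assembled as follows:

```python
import numpy as np

def gps_measurement_covariance(sigma_h2, sigma_v2, dt=1.0):
    """Build a 6x6 GPS measurement covariance R_k (illustrative sketch).

    sigma_h2 : horizontal position variance reported by the GPS (m^2),
               split evenly between the x and y components.
    sigma_v2 : vertical position variance reported by the GPS (m^2).
    dt       : GPS sampling interval (s); an assumed value, not given in the text.
    """
    # Propagation of uncertainty for V_k = (O_{k+1} - O_k) / dt, assuming
    # consecutive fixes have equal variance and are statistically independent:
    # var(V_k) = (var(O_k) + var(O_{k+1})) / dt^2
    sigma_vv2 = 2.0 * sigma_v2 / dt**2   # vertical velocity variance
    sigma_vh2 = 2.0 * sigma_h2 / dt**2   # total horizontal velocity variance (split over Vx, Vy)
    return np.diag([sigma_h2 / 2, sigma_h2 / 2, sigma_v2,
                    sigma_vh2 / 2, sigma_vh2 / 2, sigma_vv2])

R_k = gps_measurement_covariance(sigma_h2=4.0, sigma_v2=9.0)   # example variances (m^2)
```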

The Kalman filter thus computes a weighted average of the predicted and the measured state vector by using the Kalman gain Kk. If one has an accurate GPS sensor, the uncertainty on the measurement will be small so there will be more weight given to the measurement and thus the corrected estimate will be close to the measurement. When one has a non-accurate sensor, the uncertainty on the measurement is large and more weight will be given to the predicted estimate.

A Kalman smoother has been applied as well; its equations are shown in (7) below. In addition to the past observations, a Kalman smoother also incorporates future observations to estimate the state vector:

$$\begin{aligned} \mathbf{C}\_{k} &= P\_{k}^{+} A\_{k}^{T} \left( P\_{k+1}^{-} \right)^{-1} \\ \hat{\mathbf{x}}\_{k}^{s} &= \hat{\mathbf{x}}\_{k}^{+} + \mathbf{C}\_{k} \left( \hat{\mathbf{x}}\_{k+1}^{s} - A\_{k} \hat{\mathbf{x}}\_{k}^{+} \right) \\ P\_{k}^{s} &= P\_{k}^{+} + \mathbf{C}\_{k} \left( P\_{k+1}^{s} - P\_{k+1}^{-} \right) \mathbf{C}\_{k}^{T} \end{aligned} \tag{7}$$

where:


x̂ks = the smoothed estimated state vector (6x1 vector).

Pks = the covariance (6x6) matrix of the smoothed estimated state vector.

Ck = the (6x6) matrix that determines the weight of the correction between the smoothed and non-smoothed state.
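The following sketch shows how the filter recursions of equations (5)-(6) and the smoother recursion of equation (7) fit together; it is a generic numpy illustration in which the transition matrix, noise levels and placeholder measurements are assumptions, not the chapter's calibrated values:

```python
import numpy as np

def kalman_filter(zs, A, H, Q, R, x0, P0):
    """Forward pass: prediction (eq. 5) and measurement update (eq. 6)."""
    xs_pred, Ps_pred, xs_corr, Ps_corr = [], [], [], []
    x, P = x0, P0
    for z in zs:
        # Temporal ("a priori") update, eq. (5)
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        # Measurement ("a posteriori") update, eq. (6)
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
        x = x_pred + K @ (z - H @ x_pred)
        P = (np.eye(len(x0)) - K @ H) @ P_pred
        xs_pred.append(x_pred); Ps_pred.append(P_pred)
        xs_corr.append(x);      Ps_corr.append(P)
    return xs_pred, Ps_pred, xs_corr, Ps_corr

def kalman_smoother(A, xs_pred, Ps_pred, xs_corr, Ps_corr):
    """Backward pass: eq. (7), blending corrected and future smoothed estimates."""
    n = len(xs_corr)
    xs_s, Ps_s = list(xs_corr), list(Ps_corr)
    for k in range(n - 2, -1, -1):
        C = Ps_corr[k] @ A.T @ np.linalg.inv(Ps_pred[k + 1])
        xs_s[k] = xs_corr[k] + C @ (xs_s[k + 1] - A @ xs_corr[k])
        Ps_s[k] = Ps_corr[k] + C @ (Ps_s[k + 1] - Ps_pred[k + 1]) @ C.T
    return xs_s, Ps_s

# Illustrative 6-state (position + velocity) setup; dt and noise levels are assumed.
dt = 1.0
A = np.eye(6); A[:3, 3:] = dt * np.eye(3)      # constant-velocity motion model
H = np.eye(6)                                  # GPS observes all six states
Q = np.diag([0.1] * 3 + [0.01] * 3)            # assumed process noise
R = np.diag([2.0, 2.0, 9.0, 4.0, 4.0, 18.0])   # assumed measurement noise
zs = [np.zeros(6) for _ in range(10)]          # placeholder GPS measurements
out = kalman_filter(zs, A, H, Q, R, np.zeros(6), np.eye(6))
x_smooth, P_smooth = kalman_smoother(A, *out)
```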

#### **4. Roll correction**

The second step in the algorithm for motion correction accounts for the influence of the roll motion by using the IMU orientation output. This is not included in the first Kalman filter because the IMU data is given at a higher frequency than the GPS data.


The state equations and Kalman filter/smoothing equations are given by equations (5), (6) and (7), with the state vector x̂k′ containing the estimated position of the center pixel of the scanline on the surface (in meters) and the tangent of the roll angle (nondimensional) in the reference coordinate system. The measurement vector **z**k′ contains the position of the sensor (Ox, Oy) (in meters) specified by the output of the previous Kalman filter in the reference coordinate system and the tangent of the roll angle rm (nondimensional) given by the orientation output of the IMU.

The matrices used are defined by:

$$\mathbf{x}'\_k = \begin{pmatrix} x \\ y \\ \tan r \end{pmatrix}\_k, \quad A'\_k = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}\_k, \quad Q'\_k = \begin{pmatrix} 0.1 & 0 & 0 \\ 0 & 0.1 & 0 \\ 0 & 0 & 0.001 \end{pmatrix}\_k$$

$$\mathbf{z}'\_k = \begin{pmatrix} O\_x \\ O\_y \\ \tan r\_m \end{pmatrix}\_k, \quad H'\_k = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & h\_{alt} \\ 0 & 0 & 1 \end{pmatrix}\_k, \quad R'\_k = \begin{pmatrix} \sigma^2\_{Ox} & 0 & 0 \\ 0 & \sigma^2\_{Oy} & 0 \\ 0 & 0 & \sigma^2\_{r} \end{pmatrix}\_k$$

where:

r = the roll angle (in radians).

halt = the altitude of the sensor (in meters) with respect to MSL.

σ²Ox and σ²Oy = respectively the covariance of the position in the x and y direction of the sensor given by the previous Kalman filter (in m²).

σ²r = the covariance of the roll angle given by the IMU (nondimensional).
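To make the roll-correction step concrete, the sketch below assembles the primed matrices defined above and runs a single predict/correct cycle; the altitude, noise variances and the placeholder measurement are illustrative assumptions, not values from the text:

```python
import numpy as np

h_alt = 150.0                        # assumed sensor altitude above MSL (m)
sigma_Ox2, sigma_Oy2 = 0.5, 0.5      # position variances from the first Kalman filter (assumed, m^2)
sigma_r2 = 1e-4                      # roll-angle variance from the IMU (assumed, nondimensional)

A_p = np.eye(3)                                      # random-walk state model
Q_p = np.diag([0.1, 0.1, 0.001])                     # process noise from the text
H_p = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, h_alt],                   # Oy = y + h_alt * tan(r)
                [0.0, 0.0, 1.0]])
R_p = np.diag([sigma_Ox2, sigma_Oy2, sigma_r2])

# One predict/update cycle on the state [x, y, tan r]:
x, P = np.zeros(3), np.eye(3)
z = np.array([10.0, 5.0, np.tan(np.deg2rad(2.0))])   # placeholder measurement (Ox, Oy, tan r_m)
x_pred, P_pred = A_p @ x, A_p @ P @ A_p.T + Q_p
K = P_pred @ H_p.T @ np.linalg.inv(H_p @ P_pred @ H_p.T + R_p)
x_corr = x_pred + K @ (z - H_p @ x_pred)
P_corr = (np.eye(3) - K @ H_p) @ P_pred
```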

## **5. Image resampling**

In some cases, it is desirable to apply only the cross-track shift corrections and not resample the image, in order to keep the pure spectral signatures of the measured pixels. Otherwise, 2D nearest neighbourhood resampling is used.

The cross-track shift corrections (which are in the y direction) ss on the surface in meters need to be converted to pixelshifts sp. The number of pixels in one scanline nN, the altitude of the sensor above the surface in meters hH and half of the angular field of view α are used. This is accomplished by defining a conversion ratio cr, the shift in meters on the surface of 1 pixel shift, or:

$$c\_r = \frac{w}{\frac{n\_N}{2}}\tag{8}$$

where:

w = hH tan α.

The pixelshift sp is then given by sp = ss / cr as depicted below:


Fig. 6. This image shows the conversion triangles used to calculate the shiftratio between the shift on the earth surface ss in meters and the pixelshift sp. hH is the altitude above the surface, α half of the angular field of view of the camera and nN the number of pixels in one line.
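A short sketch of this conversion from a surface shift to a pixel shift (equation 8); the altitude, half field of view and number of pixels per line used in the example are assumptions for illustration:

```python
import numpy as np

def surface_shift_to_pixels(s_s, h_H, alpha_deg, n_N):
    """Convert a cross-track shift on the surface (m) to a pixel shift, eq. (8)."""
    w = h_H * np.tan(np.deg2rad(alpha_deg))   # half swath width on the surface (m)
    c_r = w / (n_N / 2.0)                     # metres on the surface per pixel shift
    return s_s / c_r

# Example: 0.8 m surface shift seen from 3 m altitude, 20 deg half field of view, 1004 pixels/line.
print(surface_shift_to_pixels(0.8, 3.0, 20.0, 1004))
```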

## **6. Feature detection in hyperspectral images using optimal multiple wavelength contrast algorithms**

Hyperspectral signatures and imagery offer unique benefits in detection of land and water features due to the information contained in reflectance signatures that directly show relative absorption and backscattering features of targets. The reflectance spectra that will be used in this paper were collected *in-situ* on May 31st 2011 using a SE590 high spectral resolution solid state spectrograph and the HSI imaging system described above. Bidirectional Reflectance Distribution Function (BRDF) signatures were collected of weathered oil, turbid water, grass and dead vegetation. The parameters describing the function, in addition to the wavelength λ (368-1115 nm), were θi (solar zenith angle) = 71.5°, θ0 (sensor zenith angle) = 55°, Øi (solar azimuth angle) = 105° and Ø0 (sensor azimuth angle) = 270°. The reflectance BRDF signature is calculated from the downwelling radiance using a calibrated Lambertian diffuse reflectance panel and the upwelling radiance at the above specified viewing geometry for each target (oil, water, grass, dead vegetation) as described in the figure below.

The figures below show the results of measurements from 400 to 900 nm for a 1 mm thick surface weathered oil film, diesel fuel, turbid water (showing the solar induced fluorescence line height feature), dead vegetation, and field grass with the red edge feature common to vegetation and associated leaf surfaces. These BRDF signatures are used below to select optimal spectral channels and regions using optimally selected contrast ratio algorithms in order to discriminate oil from other land & water features in hyperspectral imagery.


Fig. 7. Illumination and viewing geometry defined for calculation of the BRDF signatures collected using the 252 channel SE590 high spectral and radiometric sensitivity solid state spectrograph and the hyperspectral imaging system, where θi is the incident solar zenith angle of the sun, θ0 is the sensor zenith angle, Øi is the solar azimuth angle from the north and Ø0 is the sensor azimuth angle as indicated above. In general, a goniometer measurement system is used to measure the BRDF in the field or laboratory environment as the sensor zenith and azimuth angles are changed during a collection period under given solar zenith conditions.

The above BRDF signatures were used to select optimal spectral regions in order to apply the results to hyperspectral imagery collected from a weathered oil impacted shoreline in Barataria Bay, LA. The first method used was to perform feature detection using the spectral contrast signature and HSI image contrast. The well known Weber's contrast definition is first used to determine the maximum (optimal) value of the contrast between a target *t* and a background *b* as a function of wavelength, or:

$$C\_t\left(\mathcal{\lambda}\_k\right) = \frac{BRDF\_t\left(\theta\_0, \phi\_0, \theta\_i, \phi\_i, \mathcal{\lambda}\_k\right) - BRDF\_b\left(\theta\_0, \phi\_0, \theta\_i, \phi\_i, \mathcal{\lambda}\_k\right)}{BRDF\_b\left(\theta\_0, \phi\_0, \theta\_i, \phi\_i, \mathcal{\lambda}\_k\right)}\tag{9}$$

The resulting contrasts calculated across the spectrum for each channel are shown below, using the 1 mm thick oil film as the target and the backgrounds of turbid water, dead vegetation (dead foliage), and field grass.

The result of the optimization of the contrast obtained from equation 9 yields an optimal channel and/or spectral region as a function of wavelength where the contrast is maximized between a specified target and specified background or feature in a hyperspectral image collected from a fixed or moving platform.
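As an illustration of this band-selection step, equation (9) and its optimisation over channels can be written as below; the BRDF arrays are random placeholders standing in for the measured target and background spectra:

```python
import numpy as np

def weber_contrast(brdf_target, brdf_background):
    """Weber contrast per channel, eq. (9)."""
    return (brdf_target - brdf_background) / brdf_background

def optimal_channel(wavelengths, brdf_target, brdf_background):
    """Channel where the absolute Weber contrast is largest."""
    c = weber_contrast(brdf_target, brdf_background)
    k = int(np.argmax(np.abs(c)))
    return wavelengths[k], c[k]

# Placeholder spectra on a 400-900 nm grid (random values, for illustration only).
wl = np.linspace(400.0, 900.0, 252)
rng = np.random.default_rng(0)
target = rng.uniform(0.05, 0.4, wl.size)
background = rng.uniform(0.05, 0.4, wl.size)
print(optimal_channel(wl, target, background))
```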


Fig. 8. Averaged (n=360) BRDF reflectance spectrums collected using a SE590 solid state spectrograph May 31, 2010. From upper left to right: BRDF spectrum of weathered oil (1 mm thick film) on clear water, diesel film (1mm thick film) on clear water, turbid water, with high chlorophyll content as indicated by the solar induced fluorescence line height, dead vegetation (dead leaves) and field grass showing the red edge. Solar angles were determined from DGPS location, time of day, and sensor position angles and measured angle from magnetic north direction.

Fig. 9. Resulting BRDF Weber contrast signatures between oil as the target and different backgrounds (left to right): turbid water, dead vegetation (dead foliage) and field grass.

A limitation with this common definition of the contrast is that one band is used out of all the possible combinations available in a hyperspectral image for the feature detection or extraction algorithm. This limitation can be overcome by defining an advantageous "*multiple-wavelength (or channel) contrast*" as:


$$\begin{split} \mathbf{C}\_{t}\left(\boldsymbol{\lambda}\_{k,m}\right) &= \frac{BRDF\_{t}\left(\boldsymbol{\theta}\_{0},\boldsymbol{\phi}\_{0},\boldsymbol{\theta}\_{i},\boldsymbol{\phi}\_{i},\boldsymbol{\lambda}\_{k}\right) - BRDF\_{b}\left(\boldsymbol{\theta}\_{0},\boldsymbol{\phi}\_{0},\boldsymbol{\theta}\_{i},\boldsymbol{\phi}\_{i},\boldsymbol{\lambda}\_{k\pm m}\right)}{BRDF\_{b}\left(\boldsymbol{\theta}\_{0},\boldsymbol{\phi}\_{0},\boldsymbol{\theta}\_{i},\boldsymbol{\phi}\_{i},\boldsymbol{\lambda}\_{k\pm m}\right)} \\ &= \frac{BRDF\_{t}\left(\boldsymbol{\theta}\_{0},\boldsymbol{\phi}\_{0},\boldsymbol{\theta}\_{i},\boldsymbol{\phi}\_{i},\boldsymbol{\lambda}\_{k}\right)}{BRDF\_{b}\left(\boldsymbol{\theta}\_{0},\boldsymbol{\phi}\_{0},\boldsymbol{\theta}\_{i},\boldsymbol{\phi}\_{i},\boldsymbol{\lambda}\_{k\pm m}\right)} - 1 \end{split} \tag{10}$$
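A sketch of the search implied by equation (10), scanning both the channel k and the offset ±m; the spectra are again random placeholders, and the cap of 7 on the offset mirrors the limit adopted later for the inflection operators:

```python
import numpy as np

def multi_wavelength_contrast(brdf_t, brdf_b, max_m):
    """Return the (k, shift, contrast) maximising |BRDF_t(k)/BRDF_b(k+shift) - 1|, eq. (10)."""
    n = brdf_t.size
    best = (0, 0, 0.0)
    for k in range(n):
        for m in range(1, max_m + 1):
            for shift in (+m, -m):
                j = k + shift
                if 0 <= j < n:
                    c = brdf_t[k] / brdf_b[j] - 1.0
                    if abs(c) > abs(best[2]):
                        best = (k, shift, c)
    return best   # (channel index, offset in channels, contrast value)

rng = np.random.default_rng(1)
t = rng.uniform(0.05, 0.4, 252)   # placeholder target spectrum
b = rng.uniform(0.05, 0.4, 252)   # placeholder background spectrum
print(multi_wavelength_contrast(t, b, max_m=7))
```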

The result of the optimization of this "*multiple-wavelength contrast algorithm*" is the optimal selection of a band ratio (located in a spectral region) minus one. Furthermore, a new definition of the inflection contrast spectrum (a numerical approximation of the second derivative) can be defined. The contrast inflection spectrum described in previous papers was given by:

$$I\_t\left(\lambda\_{k,m,n}\right) = \frac{BRDF\_t\left(\theta\_0, \phi\_0, \theta\_i, \phi\_i, \lambda\_k\right)^2}{BRDF\_t\left(\theta\_0, \phi\_0, \theta\_i, \phi\_i, \lambda\_{k+m}\right) BRDF\_t\left(\theta\_0, \phi\_0, \theta\_i, \phi\_i, \lambda\_{k-n}\right)} \tag{11}$$

where *m* and *n* are respectively defined as the forward and backward operators of a dilating wavelet filter, as described by Bostater, 2006. This inflection is used to estimate the second derivative of reflectance spectra. The underlying goal of computing an approximation of the second derivative is to utilize the nonlinear, derivative based, dilating wavelet filter to enhance the variations in the reflectance spectra signals, as well as in the contrast spectrum signals. These variations directly represent the target and background absorption (hence: concave up) and backscattering (hence: concave down) features within a hyperspectral reflectance image or scene, and form the scientific basis of the discrimination based non-contact optimal sensing algorithms. A practical limitation encountered using the definition above is that the inflection value of a concave-down (or backscattering) feature, as defined in equation (11), is greater than one, whereas that of a concave-up (or absorption) feature will be between 0 and 1. There is thus a difference in scale between concave-up and concave-down behavior. Consider the following example:

Fig. 10. Example concave-down (backscattering) feature (blue line) and a concave-up (absorption) feature (red line) of the same amplitude (Y axis) as a function of a spectral wavelength on the x axis.

In the case of the concave-down, the result of the inflection is:


$$I = \frac{1^2}{0.5 \ast 0.5} = 4$$

While in the case of the concave-up feature (of the same amplitude), the result will be:

$$I = \frac{0.5^2}{1 \ast 1} = 0.25$$

In order to give equal weight to absorption and backscatter features in the band selection process, a modified spectrum I\*(λ) is defined as:

$$I^{*} = \begin{cases} I & \text{for } I \geq 1 \\ -\dfrac{1}{I} & \text{for } 0 < I < 1 \end{cases} \tag{12}$$

Using this definition, both concavities will be on the same scale: a concave-down feature (hence: backscattering) will give a positive value (>1) while a concave-up feature (hence: absorption) will give a negative value (<-1), and the two are treated the same numerically. In the above example, the result for the new definition of the inflection or 2nd derivative estimator would be 4 and -4.

A second issue is to determine what values to assign to the forward and backward operators in the dilation filter. One could pick the optimal value for the inflection using all possible combinations of m and n. The problem with this method is that when m and n are large, the difference between the channel for which the inflection is calculated and the ones to which it is compared can be influenced by the lower signal to noise ratio at the low and high wavelengths in a typical camera/spectrograph system. The resulting optimal regions selected can then be scientifically or physically difficult to explain. Thus, from a practical point of view, a limit is placed on the maximum value of the m, n operators; the minimal value of m, n is 1. One can then select the optimal range of the m and n wavelet filter operators (either a maximum (backscattering) or a minimum (absorption)) for all combinations of m and n between 1 and the maximal value (in this paper this maximal value was selected as 7). The resulting derivative estimator spectra (inflection spectra) using equation 12 were calculated and are shown below, using the BRDF spectra shown in Figure 8 above.
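A sketch of the dilating-wavelet inflection of equation (11) and the rescaling of equation (12), with the m, n search capped at 7 as described above; the input spectrum is a random placeholder:

```python
import numpy as np

def inflection(brdf, k, m, n):
    """Second-derivative estimator of eq. (11): I = B(k)^2 / (B(k+m) * B(k-n))."""
    return brdf[k] ** 2 / (brdf[k + m] * brdf[k - n])

def rescale(I):
    """Eq. (12): put concave-up and concave-down features on the same scale."""
    return I if I >= 1.0 else -1.0 / I

def inflection_spectrum(brdf, max_mn=7):
    """For each channel, the extreme rescaled inflection over all 1 <= m, n <= max_mn."""
    out = np.zeros(brdf.size)
    for k in range(brdf.size):
        best = 0.0
        for m in range(1, max_mn + 1):
            for n in range(1, max_mn + 1):
                if k + m < brdf.size and k - n >= 0:
                    val = rescale(inflection(brdf, k, m, n))
                    if abs(val) > abs(best):
                        best = val
        out[k] = best
    return out

rng = np.random.default_rng(2)
spectrum = rng.uniform(0.05, 0.4, 252)   # placeholder reflectance spectrum
print(inflection_spectrum(spectrum)[:5])
```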

Fig. 11. BRDF Inflection spectra using the reflectance spectrums above. From left to right: an oil film (1mm thick) on clear water, turbid water, dead vegetation (dead leaves) and grass.

The inflection algorithm can also be applied to the contrast spectrums (to enhance variation in the contrast spectrum). The result of this calculation is given in the following figures.


Fig. 12. Inflection of the contrast spectra. The contrast target is weathered oil with different backgrounds. From left to right: turbid water, dead vegetation (dead leaves) and grass are the contrast backgrounds.

Once the inflection spectra are calculated, it is also possible to apply Weber's definition of the contrast to the inflection spectra instead of the BRDF. The resulting contrast spectrums are given in the following figure:

Fig. 13. Weber contrast of the inflection spectra. The target is weathered oil with different backgrounds. From left to right: turbid water, dead vegetation (dead leaves) and grass. Optimal bands and spectral regions are indicated by the greatest positive or negative values across the spectrums.

The result of the optimization procedures yields a band or band ratio for the different types of contrast (Weber's contrast, contrast of the inflection or inflection of the contrast). The optimal bands obtained using the different techniques, with the weathered oil film as the target and water, dead vegetation or grass as backgrounds, are shown in Table 1 and are used in processing the hyperspectral imagery collected using the methods in the results section of this paper.

## **7. Collection of hyperspectral imagery from littoral zone**

In order to detect and discriminate the presence of weathered oil on a near shore habitat or the spatial extent of weathered oil along a shoreline, a novel technique has been developed for collecting HSI imagery from a small vessel (anchored or underway), or with the sensor mounted in the littoral zone. The resulting HSI imagery produces pixel sizes or ground sampling distances (GSD) on the order of several mm to cm scales, depending upon the distance between the sensor and the shoreline. The purpose of collecting this type of imagery is to (1) reduce atmospheric effects and (2) minimize the influence of the "mixed pixel" and "adjacency effects" in selecting spectral regions for detection of weathered oil and for testing algorithms. The results are also immediately and directly applicable to low altitude airborne imagery, especially if the same sensor is used aboard the airborne platform.


Table 1. Resulting bands or band ratios for the optimization of: the contrast (Weber's definition), the inflection of the contrast, the contrast of the inflection spectra, the multiple wavelength contrast (as defined above) and the multiple wavelength contrast of the inflection spectra. In each case, weathered oil is the target and the background is: water, mixture of oil and mud, sand or vegetation.

The sensor used to view the shoreline can be directly mounted on the vessel or can be mounted above the water but near the shore using a tripod or in a vessel. In the case of a sensor mounted on a vessel, the vessel is anchored at two points, allowing movement in mainly one direction (for example, the boat is anchored to mainly allow motion due to waves in the pitching direction). Fixed platform mounting does not require motion correction; however, the data collected from the anchored vessel requires roll motion correction (in this case pitch correction).

In order to perform this correction, an IMU (inertial measurement unit) is attached to the HSI sensor and collects the sensor motion information while the pushbroom sensor sweeps or is rotated (using a rotation stage) along the shoreline being investigated. This correction is applied before any further processing with the contrast algorithms is applied to the imagery taken in the Northern Gulf of Mexico and shown below. An example of the measurement scheme that has been used to detect and discriminate weathered oil (as described above) is shown below.

The image below (right) is the resulting hyperspectral image 3 band RGB display of a shoreline that has been impacted by a recent oil-spill in the Gulf of Mexico region, near Bay Jimmy, Louisiana.



Fig. 14. The HSI imaging system (left) is placed upon a small vessel or a fixed platform (tripod) in shallow water types within viewing distance of a shoreline. The sensor sweeps the shoreline and the pushbroom sensor produces a hyperspectral image of the shoreline as shown in the above HSI 3 band image (right). Note the ability to see gravity and capillary waves, small grasses on the shoreline as well as weathered oil at the land-water margin. Image collected February 28, 2011 in Barataria Bay, Louisiana

In this case a vessel mounted sensor was used and the image was corrected for the platform motion (right). To illustrate the influence of the motion of a small vessel, and the necessary IMU corrections needed, a shoreline was imaged from a vessel (below left image) and from a fixed *in-situ* platform (right image) in April 2011.

Fig. 15. A hyperspectral image (left) 3 band RGB display of a littoral zone using a pushbroom sensor mounted on a vessel anchored at two points. During the acquisition of the hyperspectral image the sensor records the pitching effect of the anchored vessel, which needs to be corrected using an IMU sensor due to the water surface gravity waves. The influence of this motion can clearly be seen in the image if no correction is applied (left). The shoreline area (right) was acquired when the pushbroom sensor was mounted on a fixed platform above the water. In this case no correction needs to be applied to the image. Note the clarity of the water surface capillary and small gravity waves.

## **8. Conclusion**


The purpose of this paper has been to describe different calibration approaches and techniques useful in the development and application of remote sensing imaging systems. Calibration includes the use of laboratory and field techniques, including the scanning of photogrammetric negatives utilized in large format cameras, as well as *in-situ* targets and spectral wavelength and radiance calibration techniques. A newly integrated hyperspectral airborne pushbroom imaging system has been described in detail. Imagery from different integrated imaging systems was described for airborne remote sensing algorithm development using high spatial resolution (on the order of a few mm² to larger sub-meter pixel sizes) imaging systems. The high spatial and spectral resolution imagery shown in this paper are examples of technology for characterization of the water surface as well as subsurface features (such as weathered oil) in aquatic systems.

Other ongoing applications in the Marine & Environmental Optics Lab making use of data from the remote sensing systems described in this paper are (a) land surface vegetation studies needed for ongoing climate change research currently being conducted on coastal Florida scrub vegetation and (b) layered radiative transfer modeling of surface and subsurface oil signatures for sensor comparisons and related algorithm development to detect surface and subsurface oil using spectral and spatial data fusion and sharpening techniques.

## **9. Acknowledgments**

The work presented in this paper has been supported in part by the Northrop Grumman Corporation, NASA, Kennedy Space Center, KB Science, the National Science Foundation, the US-Canadian Fulbright Program, and the US Department of Education *FIPSE* & European Union *Atlantis STARS* (Sensing Technology and Robotics Systems) grant to the Florida Institute of Technology, the Budapest University of Engineering and Economics (BME) and the Belgium Royal Military Academy, Brussels, in order to support the involvement of undergraduate students in obtaining international dual US-EU undergraduate engineering degrees. Acknowledgement is also given to recent funding from the Florida Institute of Oceanography's BP Corporation research grant award in support of aerial image acquisition.

## **10. References**

Aktaruzzaman, A., "Simulation and Correction of Spectral Smile Effect and its Influence on Hyperspectral Mapping", MS Thesis, International Institute for Geo-Information Science and Earth Observation, Enschede, Netherlands, pp. 77 (2008)

Bostater, C., "Imaging Derivative Spectroscopy for Vegetation Dysfunction Assessments", SPIE Vol. 3499, pp. 277-285 (1998)

Bostater, C., Ghir, T., Bassetti, L., Hall, C., Reyier, R., Lowers, K., Holloway-Adkins, K., Virnstein, R., "Hyperspectral Remote Sensing Protocol for Submerged Aquatic Vegetation in Shallow Water", SPIE Vol. 5233, pp. 199-215 (2003)



Bostater, C., Jones, J., Frystacky, H., Kovacs, M., Joza, O., "Image Analysis for Water & Subsurface Feature Detection In Shallow Waters", SPIE Vol. 7825, pp. 7825-17-1 to 7 (2010)

## **CSIR – NLC Mobile LIDAR for Atmospheric Remote Sensing**

## Sivakumar Venkataraman

*Council for Scientific and Industrial Research, National Laser Centre, Pretoria; University of Pretoria, Department of Geography, Geoinformatics and Meteorology, Pretoria; University of Kwa-Zulu Natal, Department of Physics, Durban; South Africa*

## **1. Introduction**


Remote sensing is a technique for measuring, observing, or monitoring a process or object without physically touching the object under observation. Because the remote sensing instrumentation is not in contact with the object being observed, remote sensing allows a process to be measured without causing disturbance, and it allows large volumes to be probed economically and rapidly, for example providing global measurements of aerosols, air pollution, agriculture, environmental impacts, solar and terrestrial systems, ocean surface roughness and large-scale geographic features. Modern atmospheric remote sensing techniques make it possible to study atmospheric physics, chemistry and meteorology in detail. In general, observation, validation, and theoretical simulation are highly integrated components of atmospheric remote sensing. Active and passive remote-sensing techniques and theories/formulation methods for measuring atmospheric and environmental parameters have advanced rapidly in recent years. Active remote sensing instrumentation includes an energy source on which the measurement is based; in this case the observer can control the energy source, and examples of this class are RADAR, LIDAR, SODAR, SONAR etc. Passive remote sensors do not include the energy source on which the measurement is based; they rely on an external light source, which is beyond the control of the observer, and examples of this class are optical and radio telescopes, radiometers, photometers, spectrometers etc.

## **2. LIDAR as a remote sensing probe**

LIDAR (LIght Detection And Ranging) is also called "Optical RADAR" or "Laser RADAR". It is a powerful and versatile remote sensing technique for high resolution atmospheric studies. It complements the conventional RADAR for atmospheric studies by being able to probe the region not accessible to the RADAR and to study micro-scales of the atmosphere. LIDAR probing of the atmosphere started in the early 1960s and has been pursued intensively over the past five decades. *Fiocco and Smullins* (1963) used a ruby laser with a feeble energy of 0.5 J, obtained Rayleigh scattering signals from the atmosphere up to 50 km altitude and also detected dust layers in the atmosphere. *Ligda* in 1963 made LIDAR measurements of cloud heights in the troposphere. Recent developments leading to the availability of more powerful, relatively rugged and highly efficient solid state lasers, together with improvements in detector technology as well as data acquisition techniques, have established LIDAR as a potential tool for atmospheric studies. Both continuous wave and pulsed laser systems have been extensively used and they are currently operational for the study of atmospheric structure and dynamics, trace constituents, aerosols, clouds as well as boundary layer and other meteorological phenomena. Currently, laser systems are being used to probe the atmosphere from the surface (near the boundary layer) to lower thermospheric altitudes (up to ~100 km).

## **2.1 LIDAR principle**

LIDAR is one of the most powerful remote sensing techniques for probing the earth's middle atmosphere. The basic principle of probing the atmosphere by LIDAR is similar to that of RADAR. In the simplest form, LIDAR employs a laser as a source of pulsed energy of useful magnitude and suitably short duration. Typically, Q-switched ruby (wavelength 0.69 µm) or Neodymium (wavelength 1.06 µm) laser systems are used to generate pulses with peak powers of tens of megawatts and durations of 10-20 ns. Pulses with such energy (i.e. of the order of 1 joule) are directed into beams by suitable optical systems. The advantage of the laser is that its output is virtually monochromatic, highly coherent and well collimated.

As the transmitted laser energy passes through the atmosphere, the gas molecules and particles or droplets cause scattering. A small fraction of this energy is backscattered in the direction of the LIDAR system and is available for detection. The scattering of energy in directions other than the direction of propagation, or absorption by the gases and particles, reduces the intensity of the beam, which is said to be attenuated. Such attenuation applies to both the paths (to and fro) of the distant backscattering region.

The LIDAR backscattered energy is collected in a suitable receiver by means of reflective optics and transferred to a photo-detector (commonly referred to as a photo-multiplier). This produces an electrical signal whose intensity at any instant is proportional to the received LIDAR signal power. Since light travels at a known velocity, the range of the scattering region producing the signal received at any instant can be uniquely determined from the time elapsed since the transmitted pulse. The magnitude of the received signal is determined by the backscattering properties of the atmosphere at successive ranges and by the two-way atmospheric attenuation. Atmospheric backscattering in turn depends upon the wavelength of the laser energy used, and on the number, size, shape and refractive properties of the particles (droplets and molecules) intercepting the incident energy. Backscattering from an assemblage of scatterers is a complicated phenomenon; in general, the backscattering increases with increasing scatterer concentration.

The electrical signal from the photo-detector thus contains information on the presence, range and concentration of atmospheric scatterers. Various forms of presenting and analyzing such signals are available. In the simplest form they may be presented on an oscilloscope in a coordinate system showing received signal intensity as a function of range. Since such signals are transient (1 km of range is represented by a time interval of ~7 µs), it is necessary to photograph several such oscilloscope displays to obtain adequate data for presentation.

Figure 1 shows the schematic diagram of LIDAR probing of the atmosphere, in which P0 represents the laser-transmitted pulse energy. Consider scattering taking place at a range r: the intensity of the light pulse reaching that range is attenuated by a factor T. The radiation scattered in the backward direction is P0Tβ, where β is the backscattering coefficient (the sum of Rayleigh scattering by air molecules and Mie scattering by aerosol particles). Since the backscattered radiation travels the same distance r before being detected by the telescope, it undergoes attenuation by the same factor T once more. Thus the intensity of the backscattered signal detected at the telescope becomes $P_0 T^2 \beta \frac{A}{r^2}$, where A is the area of the telescope receiving the backscattered radiation.



Fig. 1. Schematic diagram showing the basic principle involved in LIDAR probing of the atmosphere.

LIDARs may be configured in two ways: (a) mono-static configuration, in which the transmitter and receiver are collocated; (b) bi-static configuration, in which the transmitter and receiver are separated by some distance.

#### **2.2 LIDAR equation**

The transmitted laser beam is scattered in all directions at all altitudes; the backscattered echoes are received by the telescope and their intensities are measured. The field of view of the telescope is kept larger than the beam divergence in order to accommodate the beam completely at all altitudes. The received signal intensity is described in terms of the LIDAR equation given by *Fiocco* (1984):

$$P(r) = P_0 \, \eta \left(\frac{A}{r^2}\right) \left(\frac{c\tau}{2}\right) \beta(r) \exp\left[-2\int_0^r \alpha(r')\, dr'\right] \tag{1}$$



Where P(r) is the instantaneous power received at time t from an altitude (range) r, P0 is the transmitted power and η is the system constant, which depends on the transmitter and receiver efficiencies. A is the area of the primary (collecting) mirror of the receiving telescope. The term $A/r^2$ is the solid angle subtended by the primary mirror at the range r; this simple expression for the solid angle is applicable only to the mono-static configuration, because then all the transmitted energy contributes to the backscattered signal from the range r. The term $c\tau/2$ gives the length of the illuminated path which contributes to the received power, where c is the velocity of light and τ is the pulse duration of the laser beam.

The $c\tau/2$ term determines the minimum spatial resolution available in the direction of the beam propagation. In the transverse direction the spatial resolution depends on the laser beam width at a particular altitude. In a typical LIDAR system the pulse duration of the laser beam is of the order of a few nanoseconds and the beam divergence is less than a milliradian, which corresponds to a scattering volume of a few cubic metres. This is the greatest advantage of the LIDAR technique and is not matched by any other atmospheric remote sensing technique.

β(r) is the volume backscattering coefficient of the atmosphere at range r. It gives the fractional amount of the incident energy scattered per steradian in the backward direction per unit atmospheric path length and has the dimension of m⁻¹ sr⁻¹. α is the volume attenuation coefficient of the atmosphere and has the unit of m⁻¹; twice its integral between the transmitter and the scattering volume gives the two-way (net) transmission.
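To make the roles of the quantities in Equation (1) concrete, the following minimal Python sketch evaluates the single-scattering LIDAR equation on a 10 m range grid. All profile shapes and constants here are illustrative assumptions, not the CSIR-NLC instrument values (only the 404 mm primary-mirror diameter is taken from the specifications given later).

```python
import numpy as np

# Illustrative (assumed) system constants -- not the CSIR-NLC instrument values.
P0  = 1.0e7                        # transmitted peak power [W]
eta = 0.1                          # overall system efficiency (dimensionless)
A   = np.pi * (0.404 / 2) ** 2     # telescope collecting area [m^2] (404 mm primary)
c   = 3.0e8                        # speed of light [m/s]
tau = 7e-9                         # laser pulse duration [s]

r = np.arange(100.0, 40_000.0, 10.0)          # range bins [m], 10 m resolution

# Assumed exponentially decaying profiles (purely illustrative).
beta  = 1.5e-6 * np.exp(-r / 8000.0)          # backscatter coefficient [m^-1 sr^-1]
alpha = 1.0e-4 * np.exp(-r / 8000.0)          # extinction coefficient [m^-1]

# Two-way transmission term: exp(-2 * integral of alpha from 0 to r), 10 m bins.
two_way_T = np.exp(-2.0 * np.cumsum(alpha) * 10.0)

# Single-scattering LIDAR equation (Eq. 1).
P_r = P0 * eta * (A / r**2) * (c * tau / 2.0) * beta * two_way_T

print(f"Received power at 10 km: {P_r[np.searchsorted(r, 10_000)]:.3e} W")
```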

The terms β and α include the contributions from air molecules, aerosols and the other atmospheric species. The difficulty with the LIDAR equation is that it contains two unknowns, β and α, which makes a general solution hard to obtain. Appropriate inversion methods (*Fernald et al*., 1984; *Klett,* 1981 & 1985) have been developed to solve the equation. The LIDAR equation, however, assumes only single scattering; the contribution from multiple scattering is important for high-turbidity cases such as clouds and fogs.
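As an illustration of such inversion schemes, the sketch below implements the backward (far-end) form of the Klett (1981) solution under the common simplifying assumption that β is proportional to α (a constant lidar ratio). The boundary value `alpha_m` and the synthetic signal are assumptions for demonstration only; an operational retrieval would follow the cited references in detail.

```python
import numpy as np

def klett_backward(P, r, alpha_m, dr=10.0):
    """Klett (1981) far-end solution assuming beta proportional to alpha.

    P       : received power profile (same shape as r)
    r       : range bins [m]
    alpha_m : assumed extinction at the far reference range r_m [m^-1]
    """
    S = np.log(P * r**2)                     # range-corrected log signal
    dS = S - S[-1]                           # referenced to the far end r_m
    expdS = np.exp(dS)
    # Backward cumulative integral from r to r_m of exp(S - S_m) dr'.
    tail_int = np.cumsum(expdS[::-1])[::-1] * dr
    return expdS / (1.0 / alpha_m + 2.0 * tail_int)

# Example with an assumed smooth profile (illustration only, not real data).
r = np.arange(500.0, 20_000.0, 10.0)
P = 1e-8 * np.exp(-r / 6000.0) / r**2
alpha = klett_backward(P, r, alpha_m=1e-5)
print(alpha[:3])
```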

## **2.3 LIDAR scattering / absorption mechanisms**

As the radiant energy passes through the atmosphere it undergoes transformations such as absorption and scattering. Absorption (or emission) of radiation takes place when atoms or molecules undergo a transition from one energy state to another. Scattering is the deflection of incoming radiation in all directions, and depends to a large extent on particle size. Several scattering / absorption mechanisms occur when the laser energy interacts with the atmosphere. The predominant scattering is quasi-elastic scattering from aerosols (Mie scattering) or molecules (Rayleigh scattering). The quasi-elastic nature arises from the motion of the molecules or aerosols along the direction of the laser beam: aerosols, which generally move with the air mass, give rise to smaller Doppler shifts, while molecules, which move at high speed, give rise to larger Doppler shifts. Another form of atmospheric elastic scattering is resonance fluorescence. Inelastic scattering includes Raman scattering and non-resonance fluorescence. These scattering processes, sometimes in combination with molecular absorption, form the basis for various types of LIDAR remote sensing techniques. The most well known is DIAL (DIfferential Absorption LIDAR), also called DASE (Differential Absorption Scattering Energy). Table 1 summarizes these mechanisms.


| Technique | Atmospheric measurements |
|---|---|
| Rayleigh scattering | Air density and temperature (above 35 km) |
| Mie scattering | Cloud, smog, dust, aerosols (below 35 km) |
| Raman scattering | N2, CO2, H2O and lower-atmosphere temperature (less than 20 km) |
| Differential Absorption LIDAR (DIAL) | Trace species, like O3, NO2, CO2, CH4, CO, H2O (up to 50 km) |

Table 1. Main scattering / absorption processes of laser-atmosphere interactions.

## **2.3.1 LIDAR scattering / absorption mechanisms**

## **Rayleigh scattering**



In the 1890s Lord Rayleigh showed that the scattering of light by air molecules is responsible for the blue colour of the sky. Rayleigh scattering applies when the size of the scatterer is small compared to the wavelength of the incident radiation, and in the atmosphere it mainly consists of scattering from the atmospheric gases. This type of scattering varies nearly as the inverse fourth power of the interacting wavelength and is directly proportional to the sixth power of the radius of the scatterer.
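The inverse fourth-power wavelength dependence is one reason the shorter Nd:YAG harmonics are attractive for Rayleigh work; a two-line check of the relative scattering strengths at the wavelengths used later in this chapter:

```python
# Relative strength of Rayleigh scattering, which varies as wavelength**-4.
wavelengths_nm = {"1064 nm (fundamental)": 1064,
                  "532 nm (2nd harmonic)": 532,
                  "355 nm (3rd harmonic)": 355}

ref = 1064.0
for label, lam in wavelengths_nm.items():
    rel = (ref / lam) ** 4          # scattering relative to 1064 nm
    print(f"{label}: {rel:.1f}x the Rayleigh scattering at 1064 nm")
```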

## **Mie scattering**

When the sizes of the scattering particles are comparable to or larger than the LIDAR wavelength, the scattering is governed by Mie theory. Pollen, dust, smoke, water droplets, and other particles in the lower portion of the atmosphere cause Mie scattering, which is responsible for the white appearance of clouds. Note that, for a given incident wavelength, as the size of the scatterer is reduced the scattering computed using Mie theory coincides with the results obtained using the Rayleigh formula; Rayleigh scattering is therefore a special case of Mie scattering. Mie scattering is directly proportional to the wavelength and to the volume of the scatterers.

### **Raman scattering**

Raman scattering is a process involving an exchange of a significant amount of energy between the scattered photon and the scattering species. Thus the Raman-scattered component is shifted from the incident wave frequency by an amount corresponding to the internal energy of the species. The Raman scatter has both down-shifted (Stokes) and up-shifted (anti-Stokes) lines in its spectrum. The cross section for Raman scattering is small; compared to Rayleigh scattering, it is smaller by about three orders of magnitude. Nevertheless, the LIDAR technique offers a valuable means of identifying and monitoring atmospheric constituents and also of measuring temperature in the lower atmosphere. The technique makes use of the Stokes line, since its intensity is much greater than that of the anti-Stokes line.



#### **Differential absorption technique**

The most sensitive and effective absorption method for the measurement and monitoring of air pollutants is the Differential Absorption LIDAR (DIAL) technique. In this technique, the pulsed laser transmitter emits signals at two wavelengths, one ("on") corresponding to an absorption line of the species of interest and the other ("off") lying outside the absorption line. The received backscatter power at the on and off wavelengths is given by

$$P_{on} = \frac{E_{on}\,\beta_{on}(r)\,C}{2r^2} \exp\left[-\int_0^r 2\alpha_{on}(r')\,dr'\right] \tag{2}$$

$$P_{off} = \frac{E_{off}\,\beta_{off}(r)\,C}{2r^2} \exp\left[-\int_0^r 2\alpha_{off}(r')\,dr'\right] \tag{3}$$

Where P is the received backscatter power at time t = 2r/c, r is the range, E is the transmitted laser pulse energy, β is the atmospheric backscatter coefficient, α is the atmospheric extinction coefficient and C is the system constant. The atmospheric absorption and extinction coefficients can be expressed in terms of aerosol and molecular components.

In this method, the range derivative of the logarithm of the ratio of the received backscattered powers at the off and on wavelengths is directly proportional to the number concentration of the molecular/gaseous pollutant.
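A minimal sketch of this relation, using the standard DIAL expression N(r) = (1/2Δσ) d/dr ln(P_off/P_on); the differential cross section and the synthetic profiles are placeholder assumptions, not measured values.

```python
import numpy as np

def dial_number_density(P_on, P_off, r, delta_sigma):
    """Standard DIAL retrieval: N(r) = (1 / (2*delta_sigma)) * d/dr ln(P_off / P_on).

    delta_sigma : differential absorption cross section (sigma_on - sigma_off) [m^2]
    """
    ratio = np.log(P_off / P_on)
    return np.gradient(ratio, r) / (2.0 * delta_sigma)

# Purely synthetic example (assumed values, not real measurements).
r = np.arange(100.0, 10_000.0, 10.0)
N_true = 5e17 * np.exp(-r / 2000.0)                  # assumed ozone profile [m^-3]
delta_sigma = 1e-22                                  # assumed cross section [m^2]
od = 2.0 * delta_sigma * np.cumsum(N_true) * 10.0    # two-way differential optical depth
P_off = 1e-6 / r**2
P_on  = P_off * np.exp(-od)
print(dial_number_density(P_on, P_off, r, delta_sigma)[:3])
```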

Table 2 lists the primary laser sources used for atmospheric applications, among which solid-state lasers are the most popular. The first laser systems used were flash-lamp-pumped, Q-switched ruby lasers; flash-lamp pumping has since also been implemented in Nd:YAG laser systems.


| Laser | Wavelength | Energy per pulse | Efficiency (%) |
|---|---|---|---|
| Ruby | 0.694 µm | 2-3 J at 0.5 Hz | 0.1 – 0.2 |
| Nd:YAG | 1.06 µm | 1 J at 10 Hz, 10 ns pulse | 1 – 2 \* |
| CO2 | 9-11 µm, multi-line | 1-10 J at 1-50 Hz | 10 – 30 |
| CO2 | Tunable | 0.1 J at 10 Hz | 5 |
| CO | 5 – 6.5 µm | Not very popular for pulsed operation | – |
| Dye lasers | Flash lamp pumped, 0.35-1.0 µm | 0.1 – 20 J | 1 |

\* Note: More recently, using diode-array pumping, efficiencies of more than 20 % have been achieved.

Table 2. Primary laser sources used for atmospheric applications

#### **2.4 Applications of LIDAR**

LIDARs are used in a variety of applications in the field of atmospheric science. Some of the main applications are outlined below.

## **LIDAR for the aerosol studies**



The LIDAR provides measurements of the optical backscattering cross section of air as a function of range and wavelength. This information may subsequently be interpreted to obtain profiles of the aerosol concentration, size distribution, refractive index, scattering, absorption and extinction cross sections, and shape. The scattering involved with aerosols is mainly Mie scattering, details of which are provided in the earlier section.

#### **LIDAR for the cloud studies**

LIDARs are well suited and widely used for determining the characteristics of clouds, especially high-altitude clouds, because of their high range resolution and high sensitivity to hydrometeors. The sharp enhancement in the Mie backscattered LIDAR signal makes possible the detection and characterization of clouds (*Fernald*, 1984). Although a single-channel LIDAR can define the physical boundaries of clouds, polarization diversity provides a fundamental means of distinguishing between the water and ice phases of clouds. LIDAR measurements of the scattering ratio and the linear depolarization ratio (LDR) provide the cloud parameters and information on the thermodynamic phase of the cloud particles.

#### **LIDAR to determine middle atmospheric temperature**

In the height range where the contribution from the Mie backscatter is negligible (above about 30 km), the recorded signal is due to Rayleigh backscatter, and its intensity, corrected for range and atmospheric transmission, is proportional to the molecular number density. Using the number density taken from an appropriate model at a specified height where the signal-to-noise ratio is fairly high, the constant of proportionality is evaluated and thereby the density profile is derived. Taking the pressure at the top of the height range (say 90 km) from the atmospheric model, the pressure profile is computed from the measured density profile, assuming the atmosphere to be in hydrostatic equilibrium. Adopting the perfect gas law, the temperature profile is then computed from the derived density and pressure profiles. The analysis closely follows the method described by *Hauchecorne and Chanin* (1980).
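A compact sketch of this density-to-temperature chain (top-down hydrostatic integration followed by the perfect gas law) is given below. The density profile, the constant value of g and the model pressure at the top are assumptions used only to demonstrate the procedure; it is not the authors' processing code.

```python
import numpy as np

kB = 1.380649e-23     # Boltzmann constant [J/K]
m_air = 4.81e-26      # mean mass of an air molecule [kg]
g = 9.5               # gravity, taken constant over 30-90 km for this sketch [m/s^2]

def rayleigh_temperature(n, z, p_top):
    """Derive T(z) from a calibrated number-density profile n(z), top-down.

    p_top is the model pressure adopted at the top altitude z[-1]."""
    p = np.empty_like(n)
    p[-1] = p_top
    # Hydrostatic equilibrium integrated downward (trapezoidal rule): dp = n*m*g*dz
    for i in range(len(z) - 2, -1, -1):
        dz = z[i + 1] - z[i]
        p[i] = p[i + 1] + 0.5 * (n[i] + n[i + 1]) * m_air * g * dz
    return p / (n * kB)           # perfect gas law: T = p / (n * kB)

# Self-consistency check with an assumed 240 K isothermal atmosphere.
z = np.arange(30_000.0, 90_000.0, 100.0)
H = kB * 240.0 / (m_air * g)                  # scale height
n = 8.0e21 * np.exp(-(z - z[0]) / H)          # assumed (synthetic) density profile
T = rayleigh_temperature(n, z, p_top=n[-1] * kB * 240.0)
print(f"Retrieved temperature at 50 km: {T[np.searchsorted(z, 50_000.0)]:.1f} K")
```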

#### **LIDAR to determine the wind speed**

Doppler LIDARs make use of the small change in the operating frequency of the LIDAR due to motion of the scatterers to measure their velocity. Using the technique called heterodyning, the returned backscattered signal is used with another laser beam so that they interfere, yielding a more easily measurable signal at radio wave frequency. The frequency of the radio wave will be equal to the difference between the frequencies of the transmitted and the received signals. The application of Doppler LIDAR in atmospheric remote sensing is to measure wind velocity, i.e., wind speed and direction in addition to other parameters.
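For backscatter geometry the Doppler shift is Δf = 2v/λ, so a small sketch suffices to relate a measured beat frequency to a radial wind speed; the 532 nm wavelength and the numerical values are assumptions for illustration.

```python
# Doppler shift for backscatter geometry: delta_f = 2 * v_radial / wavelength.
wavelength = 532e-9        # assumed operating wavelength [m]
v_radial = 10.0            # assumed radial wind speed [m/s]

delta_f = 2.0 * v_radial / wavelength
print(f"Doppler shift for {v_radial} m/s: {delta_f/1e6:.1f} MHz")

# Inverting a measured beat frequency back to a wind speed:
measured_shift = 25e6      # assumed heterodyne beat frequency [Hz]
print(f"Radial wind for a {measured_shift/1e6:.0f} MHz shift: "
      f"{measured_shift * wavelength / 2.0:.1f} m/s")
```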

#### **LIDAR for the measurements of vertical profile of ozone**

The DIAL technique has been used to provide vertical profiles of the ozone number density from the ground to the 40-50 km height level. The basic principle of the DIAL technique is described in the earlier section (Section 2.3.1). In this technique, the laser transmitter emits signals at two close wavelengths, on and off, corresponding to a peak and a trough, respectively, in the absorption spectrum of the species of interest. The ratio of the two received backscattered signals corresponds to the absorption produced by the species (O3) in the range cell defined by the laser pulse duration and receiver gate. The amount of absorption is directly related to the concentration of the constituent.


## **LIDAR for lower atmospheric temperature and minor constituents**

Raman LIDAR is useful for obtaining the molecular nitrogen concentration at low altitudes (below 30 km), where the Rayleigh LIDAR technique is not applicable due to the presence of aerosols. In the case of the Raman-scattered signal, only the radiation emerging from the N2 molecules is detected, and this is proportional to the number density of air molecules. Temperature can then be derived from the number density as in the case of the Rayleigh LIDAR. Raman scattering is also used to detect different molecular species present in the atmosphere.

### **LIDAR in space**

Ground-based LIDAR provides atmospheric data over a single viewing site, while LIDAR aboard an aircraft can gather data over an area confined to a region. Thus the ground-based and airborne LIDARs provide data over a limited area of a specified region of the earth. Space borne (satellite-based) LIDARs, on the other hand, have the potential for collecting data on a global scale, including remote areas like the open ocean, in a short period of time.

## **3. LIDAR activities in South Africa**

Although ground-based LIDAR systems exist in many developed countries and are largely concentrated at northern-hemisphere mid- and high latitudes, LIDAR is still a very novel technique for South Africa and other African countries. A recent survey of the available LIDAR systems around the world showed that there are currently two different LIDARs in South Africa, located in Pretoria and Durban (see Figure 2). Both LIDAR systems are similar in operation but differ in their specifications and measurement objectives. The Durban LIDAR is operated at the University of KwaZulu-Natal as part of a cooperation between the Reunion University and the Service d'Aéronomie (CNRS, IPSL, Paris) for atmospheric research, especially the study of upper-troposphere and lower-stratosphere (UTLS) aerosol structure and of middle-atmosphere temperature structure and dynamics. The Council for Scientific and Industrial Research (CSIR) National Laser Centre (NLC) in South Africa has recently designed and developed a mobile LIDAR system to contribute to lower-atmospheric research in South Africa and other African countries. The CSIR mobile LIDAR is an ideal tool for atmospheric remote sensing measurements from the ground to 40 km and for aerosol/cloud studies over Southern Hemisphere regions, and it will encourage collaboration with other partners in terms of space-borne and ground-based LIDAR measurements.

Fig. 2. Geographical position of LIDAR sites in Pretoria and Durban.

## **4. CSIR - NLC mobile LIDAR system**

## **4.1 System description**



The CSIR NLC mobile LIDAR has been configured as a mono-static system, which maximizes the overlap of the outgoing beam with the receiver field of view. The LIDAR system is mounted on a mobile platform (van) with a special shock-absorbing frame. Figure 3 shows a 3-D pictorial representation of the mobile LIDAR with its 2-D scanner. In general, any LIDAR system can be sub-divided into three main sections: a laser transmitter, an optical receiver and a data acquisition system.

Fig. 3. A 3-D pictorial representation of the CSIR-NLC mobile LIDAR with 2-D scanner


The main specifications of the LIDAR system are listed in Table 3.

| Parameters | Specifications |
|---|---|
| **Transmitter** | |
| Laser Source | Nd:YAG - Continuum® |
| Operating Wavelength | 532 nm and 355 nm |
| Average pulse energy | 120 mJ (at 532 nm), 80 mJ (at 355 nm) |
| Beam Expander | 5 x |
| Pulse width | 7 ns |
| Pulse repetition rate | 10 Hz |
| Beam Divergence | 0.12 mrad after Beam Expander |
| **Receiver** | |
| Telescope type | Newtonian |
| Diameter | 404 mm |
| Field of View | 0.5 mrad |
| Filter FWHM | 0.7 nm |
| PMT | Hamamatsu® R7400-U20 |
| Optical fibre | Multimode, 600 µm core |
| **Signal and Data Processing** | |
| Model | Licel® TR15-40 |
| Memory Depth | 4096 |
| Maximum Range | 40.96 km |
| Spatial Resolution | 10 m |
| Interface | Ethernet |
| **Scanner resolution (minimum)** | |
| X-axis (Horizontal) | 0.002 rad |
| Y-axis (Vertical) | 0.001 rad |
| **PC** | |
| Processor | Intel® Core2Duo 2.6 GHz |
| Operating system | Windows® XP Pro |
| Software Interface | NI LabVIEW® |
| **Application** | |
| Aerosol/Cloud study | 0.5 km to 40 km |
| Water Vapour | 0.5 km to 12 km (to be done) |
| Temperature | 0.5 km to 20 km (to be done) |

Table 3. Major specifications of the CSIR-NLC mobile LIDAR system

#### **4.1.1 Laser transmission**


The transmitter employs a Q-switched, flash-lamp-pumped Nd:YAG (Neodymium (Nd) impurity ions in an Yttrium Aluminium Garnet (YAG) host) solid-state pulsed laser (Continuum®, PL8010). Nd:YAG lasers operate at a fundamental wavelength of 1064 nm. Second and third harmonic conversion is sometimes required, depending on the application, and is accomplished by means of suitable non-linear crystals such as Potassium Di-hydrogen Phosphate (KDP). At present, the second (532 nm) and third (355 nm) harmonics are utilized and the corresponding laser beam diameter is approximately 8 mm. The laser beam is passed through a beam expander (expansion of 5 times) before being sent into the atmosphere, whereby the beam divergence is reduced by a factor of 5 (i.e., from 0.6 mrad to 0.12 mrad). The resultant expanded beam has a diameter of 40 mm and is then reflected upward using a flat, 45-degree turning mirror. The entire transmission setup is mounted on an optical breadboard. The power supply unit controls and monitors the operation of the laser. It allows the user to set up the laser's flash lamp voltage, Q-switch delay and the laser repetition rate. It also monitors system diagnostics such as the flow and temperature interlocks. The power supply also incorporates a water-to-water heat exchanger which regulates the temperature and quality of the water used to cool the flash lamps and laser rods. The in-built laser Control Unit (CU601) provides cooling-group interlocks, which sense water temperature, water level and water flow. A cooling-group interlock violation halts the laser operation and reports the violation to the remote box. At present, the laser is being operated at a pulse repetition rate of 10 Hz.
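A quick check of the beam-expander numbers quoted above, together with a small-angle estimate of the beam footprint at altitude (the footprint formula is a standard approximation, not a figure from the chapter):

```python
# Beam geometry after the 5x expander quoted in the text.
expansion = 5.0
d0_mm, div0_mrad = 8.0, 0.6          # beam diameter and divergence before expansion

d_mm = d0_mm * expansion             # expanded beam diameter
div_mrad = div0_mrad / expansion     # divergence reduced by the expansion factor
print(f"Expanded beam: {d_mm:.0f} mm diameter, {div_mrad:.2f} mrad divergence")

# Small-angle estimate of the beam footprint at altitude (illustrative).
for z_km in (1, 10, 40):
    footprint_m = d_mm / 1000.0 + div_mrad * 1e-3 * z_km * 1000.0
    print(f"Approximate beam diameter at {z_km:>2} km: {footprint_m:.2f} m")
```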

Fig. 4. Block diagram of CSIR-NLC mobile LIDAR illustrating different components.



## **4.1.2 Receiver section**

The receiver system employs a Newtonian telescope configuration with a 404 mm primary mirror. The backscattered signal is first collected and focused by the primary mirror of the telescope. The primary reflecting mirror has a 2.4 m radius of curvature and is coated with an enhanced aluminium substrate. The signal is then directed toward a secondary 45-degree plane mirror and coupled into an optical fibre. One end of the fibre is connected to an optical baffle which receives the return signal from the telescope; the other end is connected to an optical tube with collimation optics and the PMT. We have also employed a motorized 3-dimensional translation stage in order to accurately align the fibre under PC control.

## **4.1.3 Data acquisition**

A PMT is used to convert the optical backscatter signal to an electronic signal. The PMT is installed in an optical tube and is preceded by a collimation lens and a narrow band-pass filter. The PMT used is a Hamamatsu R7400-U20. It is a subminiature PMT which operates in the UV to NIR wavelength range (300 nm – 900 nm) and has a fast rise-time response of 0.78 ns. It is specially selected for minimal noise and low anode dark current.

Data acquisition is performed by a Licel transient recorder (TR). The system is favoured for its capability of simultaneously acquiring analog and photon-count signals, which makes it highly suited to LIDAR applications by providing a higher dynamic range. The model procured is the TR15-40; it is capable of 15 MHz sampling and has a memory depth of 4096 bins. The photon-count channel uses a high-pass filter to select the high-frequency component (>10 MHz) of the amplified PMT signal. The filtered component is then passed through a fast discriminator (250 MHz) and counter, enabling the detection of single photons. The Licel system together with a LabVIEW software interface allows the user to acquire signals without any immediate programming. As mentioned earlier, the Licel data acquisition system incorporates electronics capable of simultaneous acquisition of Analog Data (AD) and Photon Count (PC) data with a range resolution of 10 m. The combination of PC and AD electronics greatly extends the dynamic range of the detection channel, allowing the reduction or removal of neutral density filters, which in turn greatly improves the Signal-to-Noise Ratio (SNR). The measurements are usually done at night to minimize background noise.
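The 10 m range resolution and 40.96 km maximum range quoted for the TR15-40 follow directly from the 15 MHz sampling rate and the 4096-bin memory depth, as the short check below shows:

```python
c = 3.0e8                 # speed of light [m/s]
sampling_rate = 15e6      # Licel TR15-40 sampling rate [Hz]
memory_depth = 4096       # number of range bins

range_bin = c / (2.0 * sampling_rate)          # two-way travel per sample
max_range = range_bin * memory_depth
print(f"Range resolution: {range_bin:.1f} m")            # -> 10.0 m
print(f"Maximum range:    {max_range/1000.0:.2f} km")    # -> 40.96 km

# Converting a bin index to altitude for a vertically pointing beam:
bin_index = 750
print(f"Bin {bin_index} corresponds to {bin_index * range_bin / 1000.0:.1f} km altitude")
```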

## **4.2 Illustration**

In general, the laser beam is directed vertically upward into the sky, as depicted in Figure 3. The day illustrated here presented a cloudy sky with a passage of high-altitude cirrus, which is normally found at altitudes from 6 km to 15 km. Since these clouds range from optically transparent to opaque depending on their physical properties, the laser light either passes through them or is blocked. The observations were carried out for approximately four and a half hours, and the presence of clouds is clearly seen in the height-time backscattered signal returns for both the Analog Data (AD) and Photon Count (PC) data, presented in Figs. 5 and 6 respectively. The figures were obtained after modifying the provided Licel-LabVIEW software in-house to display an automatically updated height-time-backscatter colour map in real time. The advantage of such a program is that it allows the user to examine the data while the LIDAR system is in operation. The display can be easily visualized, and the available settings enable either the AD or the PC data to be displayed, as required.


Fig. 5. Original analog signal measured on 01 December 2010

Fig. 6. Same as fig. 5 but represents the original photon count signal




The simultaneous AD and PC acquisitions have been post-processed to merge or 'glue' the datasets into a single return signal. The combined AD and PC signals allow us to use the analog data in the high signal-to-noise ratio (SNR) regions and the PC data in the low-SNR regions. Since the output from the AD converter is a voltage (V) and the output from the photon counter is counts or count rates (MHz), a conversion factor between those outputs needs to be determined in order to convert the analog data to "virtual" count-rate units. First the PC data are corrected for pulse pile-up using a non-paralyzable assumption (dead-time correction). The dead-time-corrected PC data are then related to the analog signal through the linear relationship PC *= a \** AD *+ b,* over a range where the PC data respond linearly to the AD and where the AD is significantly above the inherent noise floor. A linear regression is applied to determine the gain and offset coefficients (gluing coefficients), *a* and *b.* Thereafter, the coefficients are used to convert the entire AD profile to a "virtual/scaled" photon count rate, referred to as the scaled analog signal, i.e. the term "*a \** AD" (see Figure 7); the term *b* accounts for the bin shift (offset). Commonly, the fitting range is determined from the data above the threshold signal and where the PC data (see Figure 8) are between 0.5 MHz and 10 MHz. The combined or glued signal then uses the dead-time-corrected PC data for count rates below some threshold (typically 10 MHz) and the converted/scaled AD data above this point. Figure 9 displays the glued data for the case presented above (see Figures 7 and 8). Here, the gluing is performed after obtaining the dead-time-corrected photon count (the dead time is 3.6 ns) and after adjusting a small bin shift between the AD and PC. The bin shift is basically a delay, measured in bins (corresponding to 10 m per bin), which occurs due to the detection electronics: filters in the pre-amplifier electronics result in a delay of the AD signal with respect to the PC signal, and the analog-to-digital conversion process may also cause further delay.
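The sketch below mirrors the processing chain just described: a non-paralyzable dead-time correction, a linear regression of PC against AD within the 0.5-10 MHz window, and the merge into a glued profile. The function name and the synthetic input profiles are assumptions for illustration; the thresholds and the 3.6 ns dead time are those quoted in the text.

```python
import numpy as np

DEAD_TIME = 3.6e-9          # non-paralyzable dead time quoted in the text [s]
PC_MIN, PC_MAX = 0.5, 10.0  # photon-count window used for the regression [MHz]

def glue(ad_signal, pc_mhz):
    """Merge analog (AD) and photon-count (PC) profiles as described in the text."""
    # 1. Non-paralyzable dead-time (pulse pile-up) correction, rates in MHz.
    pc_true = pc_mhz / (1.0 - pc_mhz * 1e6 * DEAD_TIME)

    # 2. Linear regression PC = a*AD + b over the region where PC responds linearly.
    mask = (pc_true > PC_MIN) & (pc_true < PC_MAX)
    a, b = np.polyfit(ad_signal[mask], pc_true[mask], 1)

    # 3. Scaled analog signal ("virtual" count rate) and the glued profile:
    #    use corrected PC below the threshold, scaled AD above it.
    ad_scaled = a * ad_signal + b
    return np.where(pc_true < PC_MAX, pc_true, ad_scaled)

# Synthetic example (assumed shapes, not real LIDAR data).
r = np.arange(10.0, 40_960.0, 10.0)
ad = 500.0 * np.exp(-r / 4000.0)                 # analog signal [mV]
pc = 40.0 * np.exp(-r / 4000.0)                  # measured count rate [MHz]
glued = glue(ad, pc)
print(glued[:3], glued[-3:])
```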

Fig. 7. Same as fig. 5 but represents the scaled analog signal


Fig. 8. Same as fig. 5 but represents the deadtime corrected photon count signal

Fig. 9. Same as fig. 5 but represents the glued photon count signal

To address the dynamic range of the instrument, the range-corrected glued signal (i.e., the signal multiplied by r²) is presented in Figure 10; that is, the figures shown here are the raw data multiplied by the square of the altitude, commonly referred to as the range-corrected signal. The range correction compensates the geometric $1/r^2$ factor relating the LIDAR transmitted and received backscatter signals (see Equation 1).
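Range correction itself is a one-line operation, shown here on a placeholder glued profile:

```python
import numpy as np

r = np.arange(10.0, 40_960.0, 10.0)          # range bins [m]
glued = 10.0 * np.exp(-r / 4000.0)           # placeholder glued profile [MHz]

# Range correction compensates the 1/r^2 geometric factor of Eq. (1):
range_corrected = glued * r**2
print(range_corrected[:3])
```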

Fig. 10. Same as fig. 9 but represents the Range Corrected glued photon count signal


The figures clearly distinguish the cloud observation from the normal scattering from background particulate matter. Sharp enhancements are observed around 7.5 km and above (~12 km), indicating the presence of cloud; such clouds are otherwise termed cirrus. An advantage of using LIDAR is the ability to observe the cloud thickness in addition to the cloud height, which is one of the important advantages of LIDAR measurements in comparison with other remote sensing techniques. The high resolution of the data (10 m) further enables accurate detection of cloud height and thickness, which is important for studying cloud morphology. In addition, the above measurements illustrate the dynamic range of the LIDAR signal up to 35 km (though the figure is presented here only up to 15 km). During day-time measurements, neutral density (ND) filters are employed to reject the background light signal; they also protect the PMT from saturation and allow the maximum return signal strength to be investigated.

The SNR is a key parameter for judging the capability of any instrument. Here, we have determined it for the mobile LIDAR from return signals acquired with and without emitting the laser beam. The results were obtained by operating the LIDAR on a clear sky with the laser ON (signal) and OFF (noise) for about twelve minutes in each case (see Figure 11a). Figure 11a illustrates the temporal evolution of the LIDAR signal returns when the laser is ON and OFF. While the laser was on (first twelve minutes), a large photon count signal was obtained; when the laser was switched off (next twelve minutes), only random noise photons were observed, due to the background scattering from the atmosphere.

The above individual observational data are then averaged temporally and presented as a height profile of photon count in Figure 11b. The figure represents both the signal (blue) and noise (red) profiles. It is clear from the figure that the signal remains more than two orders of magnitude above the noise level for the height region up to 40 km. From these results, one can conclude that the LIDAR provides reasonable measurements up to 40 km, with a signal to noise separation of about two orders of magnitude. Further integration of the signal may also improve the SNR and the dynamic range of the instrument.
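A minimal sketch of how the signal and noise profiles of Figure 11b can be turned into an SNR estimate is given below, assuming `profiles_on` and `profiles_off` are 2-D arrays (time × height bin) of photon counts recorded during the laser-ON and laser-OFF intervals; the Poisson-style noise estimate used here is an illustrative assumption, not the procedure stated in the text.

```python
import numpy as np

def snr_profile(profiles_on, profiles_off):
    """Estimate SNR per height bin from laser-ON and laser-OFF records."""
    signal_plus_bg = profiles_on.mean(axis=0)    # temporal average, laser ON
    background = profiles_off.mean(axis=0)       # temporal average, laser OFF
    signal = signal_plus_bg - background
    # Photon-counting (Poisson) noise of the total detected counts:
    noise = np.sqrt(signal_plus_bg + background)
    return signal / noise
```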


Fig. 11a. Temporal evolution of the return signal in laser ON and OFF modes.

Fig. 11b. Height profile of averaged photon count for the above presented temporal evolution in fig. 11a.


## **4.3 Scientific results**

## **4.3.1 LIDAR extinction co-efficient**

The altitude profiles of the aerosol extinction coefficient (α) or backscatter coefficient (β) from a backscattered LIDAR signal require the solution of the LIDAR equation (see Equation 1). As described in the LIDAR equation, α(z) = αa(z) + αm(z) and β(z) = βa(z) + βm(z), where αa and βa are the volume extinction and backscatter coefficients of the aerosols, and αm and βm are the volume extinction and backscatter coefficients of the air molecules. The values of αm and βm are calculated from meteorological data or from a standard atmosphere model. Determination of αa and βa requires an inversion of the LIDAR equation. The inversion is not a straightforward process since it involves two unknowns; a definite relationship between the two unknowns must therefore be assumed. The molecular contributions to backscattering and extinction have been estimated using a reference model atmosphere (MSISE-90). This is accomplished by normalizing the photon count with the molecular density, taken from the model (MSISE-90) at a specified height (which varies from day to day), and then applying the extinction correction to the backscattering coefficient profile using an iterative analysis of the LIDAR inversion equation. The estimation of the aerosol backscatter coefficient proceeds by downward progression from the reference altitude of ~40 km, where the aerosol concentration is assumed to be negligible. The backscattering coefficient profiles computed in this way are also employed for studying cloud characteristics. For studying aerosol concentrations, however, extinction profiles are computed by following the LIDAR inversion method described by *Klett* (1985).
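The backward (far-end) solution of Klett (1981, 1985) on which such an iterative analysis builds can be sketched as follows. This single-component form, with backscatter taken proportional to extinction and a reference extinction value at the far end (~40 km, where aerosol loading is negligible), is an illustration of the method and not the exact implementation used for the profiles shown here.

```python
import numpy as np

def klett_backward(p, r, alpha_ref, dr=None):
    """Klett (1981) backward inversion for the extinction profile alpha(r).

    p         : backscattered power (or glued counts) per bin
    r         : range of each bin (m), increasing
    alpha_ref : reference extinction at the far end r_m (near-molecular value)
    """
    if dr is None:
        dr = np.gradient(r)
    s = np.log(p * r**2)                       # range corrected log signal S(r)
    es = np.exp(s - s[-1])                     # exp[S(r) - S(r_m)]
    # integral from r to r_m of exp[S - S_m] dr', evaluated for every bin
    integral = np.cumsum((es * dr)[::-1])[::-1]
    return es / (1.0 / alpha_ref + 2.0 * integral)
```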

The LIDAR inversion technique was applied to the backscattered LIDAR signal for two continuous days of measurements, 30 and 31 August 2010, to determine the aerosol backscatter and extinction coefficients. Figure 12 shows the 10-minute averaged height profiles of the aerosol extinction coefficient retrieved from the LIDAR return signals on the 30th and 31st August 2010. Different height profiles are observed for measurements on the same day, showing that the aerosol loading was not stable over the measurement site; this results from changes in the aerosol loading driven by changes in humidity, temperature, etc. Furthermore, differences between measurements on different days are observed, which might be due to variations in each day's background conditions: temperature, humidity, wind, cloud, solar radiation, etc.

Fig. 12. Height profile of aerosol extinction coefficient retrieved from LIDAR returned signal for the 30th and 31st August 2010.

## **4.3.2 Detection of cloud**

Figure 13 shows an example of the detection of cloud by LIDAR for the night of 23 February 2008. The laser was directed vertically upward into the sky; the corresponding night was cloudy, with a passage of cumulus clouds, which are normally found in the height region from 3 km to 5 km. These clouds are generally optically dense, which prevents light from passing through. The present observations were carried out for more than two hours, and the presence of clouds is clearly seen in the height-time map of backscattered signal returns. The figure clearly distinguishes the cloud observation from the normal scattering from background particulate matter. It shows a sharp enhancement in the backscatter signal during the presence of cloud around 3.8 km, which slowly moved down to 3.5 km. The figure also demonstrates the capability of LIDAR to observe the cloud thickness (less than about 300 m), which is a unique feature of LIDAR in comparison to satellite detection. The measured high resolution data is also important when studying cloud physics and characteristics. Elsewhere, the lower height regions indicate high intensity signal returns due to the presence of fog or aerosols.
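A simple way to turn the sharp enhancement described above into cloud base and top estimates is a threshold on the range corrected signal relative to the surrounding clear-air level. This sketch is illustrative only and is not the detection scheme stated in the text; the enhancement factor of 3 is an assumed value.

```python
import numpy as np

def cloud_layer(rc_signal, z, enhancement=3.0):
    """Return (cloud_base, cloud_top) from a range corrected profile, or None.

    Bins whose signal exceeds `enhancement` times the median clear-air
    level are flagged as cloud."""
    clear_level = np.median(rc_signal)
    cloudy = rc_signal > enhancement * clear_level
    if not cloudy.any():
        return None
    idx = np.flatnonzero(cloudy)
    return z[idx[0]], z[idx[-1]]
```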


Fig. 13. Height-time-colour map of LIDAR signal returns for 23 February 2008.


## **4.3.3 Boundary layer detection**

The atmospheric Boundary Layer (BL) is the part of the lower troposphere where most living beings and natural/human activities occur. It varies with space and time, and changes with height mostly during the day due to variations in solar radiation (by several kilometers), while it is quite stable over night. It is well known that the aerosol content or particulate matter in the lower atmosphere fluctuates under different background conditions (e.g., temperature, humidity and solar radiation). Such fluctuations in aerosol content, and particularly the height of the boundary layer, can readily be determined by means of a LIDAR (Light Detection and Ranging) backscatter signal. Based on the LIDAR backscattered signal (and/or the range corrected signal) and by applying different criteria, one is able to identify the boundary layer height (BLH) and thus its temporal evolution. Here, we show a typical example of the deduction of the BLH based on two different methods, (a) statistical and (b) slope (a minimal sketch of both criteria follows the list), i.e.:

a. The statistical method applies the range (z) corrected (squared) LIDAR backscattered signal (Pr), i.e., Pr · z². The BLH is identified as the height at which the standard deviation of the range corrected signal is maximum. Here, the mean value is obtained by integrating 5 consecutive profiles (corresponding to 50 s) (*Chiang and Nee*, 2006).
b. The slope method is based on the LIDAR backscattered signal (Pr) and its gradient (dPr/dz). The minimum value of the slope dPr/dz identifies the BLH (*Egert*, 2008).
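The two criteria can be sketched as follows, assuming `profiles` is a 2-D array (time × height bin) of backscattered signal Pr and `z` the corresponding height vector; the 5-profile (50 s) grouping follows the text, while the array layout and variable names are illustrative assumptions.

```python
import numpy as np

def blh_statistical(profiles, z, group=5):
    """BLH as the height of maximum standard deviation of Pr * z^2,
    computed over groups of 5 consecutive profiles (~50 s)."""
    rc = profiles * z**2                               # range corrected signal
    n = (rc.shape[0] // group) * group
    blocks = rc[:n].reshape(-1, group, rc.shape[1])    # (blocks, 5, bins)
    std = blocks.std(axis=1)                           # std within each block
    return z[np.argmax(std, axis=1)]                   # one BLH per block

def blh_slope(profile, z):
    """BLH as the height of the minimum (most negative) gradient dPr/dz."""
    return z[np.argmin(np.gradient(profile, z))]
```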


Figure 14 shows the temporal (~2 hr) evolution of the LIDAR backscattered signal for 27 May 2011. The figure is superimposed with the deduced BLH based on the two methods, statistical (black circles) and slope (pink stars). It is clear from the figure that the BLH varies significantly over time. In general, the maximum BLH is found around noon, as expected: during the day the earth's surface heats up due to solar radiation, and this drives thermodynamic and chemical processes that cause turbulence in the PBL. The boundary layer height is therefore expected to vary more during the day and to stabilize after sunset. The slope method provided a higher value than the statistical method (based on the standard deviation), with a difference of ~1 km. We conclude that deduction of the BLH by the statistical method provides better results than the slope method.

Fig. 14. Height-Time-Colour map of LIDAR signal returns (arb. unit) for 27 May 2011. The figure is overlapped by the determined boundary layer height (Black: statistical method based on range corrected signal, Pink: slope method).

### **4.3.4 Comparison with satellite measurements**

The extinction profiles derived from the LIDAR were compared and validated against ground based and satellite borne instruments. Figure 15 presents the height profile of the extinction coefficient derived from the LIDAR data taken during the night of 25 February 2008. The profile is overlapped with Stratospheric Aerosol and Gas Experiment (SAGE-II) extinction data at 525 nm collected over the southern Africa region (latitude 15°S to 40°S and longitude 10°E to 40°E). The extracted mean aerosol extinction coefficients are from the version 6.20 series covering ~21 years (1984-2005); here, we have used the corresponding monthly-mean extinction profile (February). We have considered the SAGE-II profile as far as possible above 3-4 km, keeping in mind that the lower height region measurements are inaccurate due to a low signal to noise ratio (SNR) (*Formenti et al.*, 2002). The extinction profiles derived from LIDAR and SAGE-II are in close agreement with respect to trend and magnitude. The LIDAR profile has been terminated above 10 km due to thick cloud passage. One is able to observe the boundary layer peak at ~2.5 km which, as described earlier, is an important parameter for atmospheric mixing (including pollutants). The presence of a cloud results in a sharp enhancement of the extinction and backscatter coefficients to a high value, making the detection quite unambiguous. A small difference in the observed magnitude might be due to the different techniques employed by the LIDAR and the satellite, the time of observation, and the comparison of a mean satellite profile with a single-day LIDAR measurement. The above-mentioned height profiles of aerosol extinction coefficients obtained using the LIDAR and SAGE-II satellite data are integrated appropriately to obtain the aerosol optical depth (AOD). Generally, we considered the LIDAR profile for the lower height region, with respect to the SNR, and the SAGE-II data at higher altitudes. We found a value for February of ~0.264, which is in good agreement with the AOD measured by the photometer over Johannesburg (0.2966±0.06668).
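A minimal sketch of the AOD estimate described above is given below, assuming `alpha_lidar`/`z_lidar` and `alpha_sage`/`z_sage` are extinction profiles (per km) on their own height grids and that the two are simply joined at a hand-over altitude; the 4 km hand-over reflects the SNR argument in the text, but the joining scheme itself is an illustrative assumption.

```python
import numpy as np

def aerosol_optical_depth(z_lidar, alpha_lidar, z_sage, alpha_sage,
                          z_join_km=4.0):
    """Integrate extinction (km^-1) over height (km): LIDAR below the
    hand-over altitude, SAGE-II above it."""
    low = z_lidar <= z_join_km
    high = z_sage > z_join_km
    aod_low = np.trapz(alpha_lidar[low], z_lidar[low])
    aod_high = np.trapz(alpha_sage[high], z_sage[high])
    return aod_low + aod_high
```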



Fig. 15. Height profile of aerosol extinction coefficient derived from LIDAR for the night of 25 February 2008, superimposed by February monthly mean profile of SAGE-II.

#### **4.4 Future perspectives**

To our knowledge, there are no multi-channel LIDAR systems employed for atmospheric research in South Africa or other African countries. Our goal is to build a multi-channel LIDAR system to address aerosol/cloud, water vapour, lower atmosphere temperature and ozone measurements. LIDAR studies on particulate matter (0.5 and 0.3 microns) elucidate its distribution and concentration in the atmosphere. Particulate matter plays a key role in atmospheric physical and chemical processes from the local to the global scale. The complexity of these processes has been widely reviewed in the literature, and LIDAR measurements have contributed substantially to a better understanding of the role of atmospheric dynamics and particle microphysics. Making observations on a pre-determined spatial scale (from sites to regions) may make it possible to calculate atmospheric mass transport and, through trajectory analysis, to back-track the location of plume sources, e.g. biomass burning. Atmospheric backscatter measurements of aerosols can be used to identify the stratification of pollutants and will enable the classification of source regions, such as industrial, biological and anthropogenic sources. Later, the plan is to upgrade the system to measure water vapour concentrations in the atmosphere and their localized variations in the lower troposphere. Water vapour affects global climate change and global warming both directly (water is a primary greenhouse gas) and through its impact on ecosystems, where vegetation sensitivity plays an important feedback role.

Further, the ongoing plan is to integrate a 2-D scanner into the present LIDAR system (see Figure 2); it will be implemented in the near future using a cable/pulley system and an electric winch to lift and lower the scanner. The integration of the scanner will assist us in terms of:
- X-Y dimensional mapping of the atmosphere (horizontal or vertical cross-section)


Successful implementation of the scanner will contribute to LIDAR technology worldwide since, with few exceptions, X-Y dimensional mapping of the atmosphere has not been fully explored. The plan is to include online control of the scanner and to incorporate the position of its axes into the present data-acquisition system. An attempt will also be made to modify the present data-acquisition software to capture the X-Y cross-sectional display during real-time measurements.

## **5. Acknowledgments**


We are thankful to the various South African funding agencies, in addition to the Council for Scientific and Industrial Research – National Laser Centre (CSIR-NLC): the Department of Science and Technology (DST), the National Research Foundation (NRF) (Grant nos. 65086 and 68668), the Southern Educational Research Alliance (SERA), the African Laser Centre (ALC), the Centre National de la Recherche Scientifique (CNRS) (France) and the French Embassy in South Africa (France).

## **6. References**

Chiang, C.W. & Nee, J.B. (2006). Boundary layer height by LIDAR aerosol measurements at Chung-Li (25ºN, 121ºE), *Proceedings of the 23rd International Laser RADAR Conference*, 5O-6.

Egert, S. & Peri, D. (2008). Automatic retrieval of the atmospheric boundary layer height, *Proceedings of the 24th International Laser RADAR Conference*, 320-323.

Fernald, F. G. (1984). Analysis of atmospheric lidar observations – some comments, *Applied Optics*, 23, 652-653.

Fiocco, G. & Smullin, L.D. (1963). Detection of scattering layers in the upper atmosphere (60–140 km) by Optical RADAR, *Nature*, 199, 1275-1276.

Fiocco, G. (1984). Lidar systems for aerosol studies, An outline, MAP Handbook, Vol. 13 (ed. R.A. Vincent), pp. 56-58.

Formenti, P., Winkler, H., Fourie, P., Piketh, S., Makgopa, B., Helas, G. & Andreae, M.O. (2002). Aerosol optical depth over a remote semi-arid region of South Africa from spectral measurements of the daytime solar extinction and nighttime stellar extinction, *Atmospheric Research*, 62, 11-32.

Hauchecorne, A. & Chanin, M. L. (1980). Density and Temperature Profiles Obtained by Lidar Between 35 and 70 km, *Geophys. Res. Lett.*, 7, 565-568.

Klett, J.D. (1981). Stable analytical inversion solution for processing LIDAR returns, *Appl. Opt.*, 20, 211.

Klett, J.D. (1985). LIDAR inversion with variable backscatter to extinction ratios, *Appl. Opt.*, 24, 1638-1645.

Ligda, M.G.H. (1963). Proceedings of the first conference on laser technology, U.S. Navy, ONR, 63-72.



## **Active Remote Sensing: Lidar SNR Improvements**

Yasser Hassebo
*LaGuardia Community College of the City University of New York, USA*

## **1. Introduction**


**RA**dio **D**etection **A**nd **R**anging (RADAR), **SO**und **NA**vigation **a**nd **R**anging (SONAR), and **LI**ght **D**etection **A**nd **R**anging (LIDAR) are active remote sensing systems used for earth observations (planes' and ships' locations and velocity information, air traffic control, oceanographic and land information), bathymetric mapping (e.g., hypsometry, ocean depth (echo-sounding), SHOALS, and seafloor), and topographic mapping. Integrating laser with RADAR techniques – laser RADAR or LIDAR – after World War II introduced scientists to a new era of remote sensing technologies. LIDAR is one of the most widely used active remote sensing systems for attaining elevation information, which is an essential component of geographical data. While RADAR transmits a long-wavelength signal (i.e., radio or microwave: cm scale) into the atmosphere and then collects the backscattered energy signal, LIDAR transmits a short-wavelength laser beam (i.e., nm scale) into the atmosphere and then detects the backscattered light signal(s). More lidar principles and a comparison between active remote sensing techniques are introduced in section 1.1 of this chapter.

## **2. Lidar background**

## **2.1 Lidar historical background**

After World War II the first **LI**ght **D**etection **A**nd **R**anging (lidar) system was invented (Jones 1949). The light source was a flash of light between aluminum electrodes driven by a high-voltage transmitter, and the receiver optics were two mirrors; afterward a photoelectric cell was used as the detector. During daylight, this system was used to measure the height of the cloud ceiling up to 5.5 km. At that time the acronym *lidar* didn't exist (Middleton 1953). The real revolution of lidar began with the invention of the laser (light amplification by stimulated emission of radiation) in 1960. Using a laser as the source of light in a lidar system is referred to as "*Laser Radar, or Ladar, or Lidar*". Lidar operates in a wide band region of the electromagnetic spectrum: ultraviolet (225 nm-400 nm), visible (400 nm-700 nm), and infrared radiation (700 nm-1200 nm). Lidar systems are used as ground based stations (stationary or mobile), or can be carried on platforms such as airplanes or balloons (in-situ operations), or on satellites. National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) aircraft and satellites are the most famous lidar platforms in the United States of America. Some other platforms are employed around the world by groups such as the European Space Agency (ESA), the Japanese National Institute for Environmental Studies (NIES), and the National Space Development Agency of Japan (NASDA).


## **What is a lidar**

**LI**ght **D**etection **A**nd **R**anging (lidar) is an optical remote sensing system for probing the earth's atmosphere with a laser transmitter, using elastic and/or inelastic scattering techniques. Most remote sensing lidar systems consist of three functional subsystems, as shown in Figure 1, which vary in their details based on the particular application. These subsystems are: (1) the transmission subsystem, (2) the receiver subsystem, and (3) the electronics subsystem.

Fig. 1. Essential elements of a lidar system

In the **transmission** subsystem, a laser (pulsed or continuous wave (CW)) is used as the light source. More than one laser can be used according to the lidar type and the objective of the measurements. Laser pulses, in the ideal case, are very short pulses with narrow bandwidth, high repetition rate and very high peak power, and are propagated with a small degree of divergence. The laser pulse is transmitted through transmission optics to the atmospheric object of interest. The essential function of the output optics is to improve the output laser beam properties and/or control the outgoing beam polarization. Elements such as lenses and mirrors are used to improve the beam collimation. Beam expansion is used to reduce the beam divergence and the area density of the laser pulse. Fiber optic cables, filters, and cover shields or housings serve the dual purpose of preventing the receiver detectors from saturation due to any unwanted transmitted radiation and of protecting the user's eyes against injury. Wavelength selective devices, such as harmonic generators, are also used to create the second, third and fourth harmonics. A polarizer can be used to control the transmitted beam polarization, and polarization measurement equipment is used as well. The experimental results in this chapter were produced using two types of pulsed laser: a Q-Switched (an optical on-off switch) Nd:YAG (Continuum Infinity 40-100) and a Q-Switched Nd:YAG (Surelite) at CCNY.


The **receiver** subsystem consists of an optical telescope to gather and focus the backscattered radiation, and receiver optics to provide the detector (PhotoMultiplier Tube (PMT) or Avalanche Photo Diode (APD)) with a desirably collimated and/or focused, strongly polarized signal. Components such as mirrors, collimating lenses, an aperture (field stop), ND (*N*eutral *D*ensity) filters and *I*nterference *F*ilters (IF) provide spectral filtering against sky background radiation; analyzers (polarization selection components) are needed to select the necessary polarizations based on the application and/or to discriminate against unwanted background noise (as shown in chapter 7); and electro-optical elements (detectors) convert light energy into electrical energy. There are two basic types of detectors for lidar systems: the photomultiplier tube (PMT) and the avalanche photo diode (APD). In addition to the optics mounts and manual operation aids, automated alignment capabilities are needed for long-term unattended lidar operations.

The **electronic** subsystem consists of the data acquisition (mostly multiple channels), display units, *A*nalog to *D*igital (A/D) signal conversion, the radar and radar circuit, and the control system, especially for our polarization discrimination technique, presented in this dissertation (Chapter 7), which tracks the azimuth angles to improve the SNR. In addition, software (Labview and Matlab) is needed for signal processing purposes, as well as some hardware such as platforms (a van for a ground based mobile lidar, an airplane or balloon for in-situ airborne lidar, and a satellite for higher altitude space-based scanning lidar), a temperature control unit, orientation stability elements, storage units and some additional equipment depending on the lidar's type and measurement objective.

## **2.2 How does lidar work?**

Using the well known fact that laser energy at optical frequencies is highly monochromatic and coherent, and following the development of Q-switching by McClung and Hellwarth in 1962 (McClung 1962), the laser has the capability of producing pulses of very short duration, narrow bandwidth and very high peak energy that propagate into the atmosphere with a small degree of divergence. This prompted the development of backscattering techniques for measuring the composition and structure of the environment and/or atmosphere (aerosol, ozone, cloud plumes, smoke plumes, dust, water vapor and greenhouse gases (e.g. carbon dioxide), temperature profile, wind speed, gravity waves, etc.), together with their distributions and concentrations. These measurement techniques are to some extent analogous to radar, except that light waves are used as an alternative to radio waves; consequently, scientists refer to lidar as laser radar. The essential idea of lidar operations and measurements is based on the shape of the detected backscattered lidar signal at wavelength *λ* when the transmitted laser beam of wavelength *λL* is scattered back from distance *R*. This backscattering shape depends on the properties of the lidar and the specifications of the atmosphere. The transmitted lidar signal can be absorbed, scattered and shifted, or its polarization can be changed, by the atmospheric constituents, and it is scattered in all directions, with some of the signal scattered back to the lidar receiver. Two parameters in the lidar return equation relate the detected signal power to the atmospheric specifications: the extinction and scattering coefficients, *α(λ, R)* and *β(λ, R)* respectively. By solving the lidar equation for those coefficients one can determine various atmospheric properties. An example of these determination processes, which depends on the lidar type and the physical process used in the measurements, is introduced in this chapter.
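For reference, a commonly used single-scattering form of the elastic lidar equation relating the detected power to these two coefficients is reproduced below; the notation (system constant *K*, overlap function *O(R)*) is generic and is not necessarily identical to the symbols used for the lidar equation elsewhere in this chapter.

$$P(\lambda, R) = K\, O(R)\, \frac{\beta(\lambda, R)}{R^{2}} \exp\!\left[-2 \int\_{0}^{R} \alpha(\lambda, r)\, dr\right]$$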


## **3. Lidar classifications**

Ways to classify lidar systems include: (1) the kind of physical process (Rayleigh, Mie, elastic and inelastic backscattering, absorption, fluorescence, etc.); (2) the type of laser employed (dye or Nd:YAG); (3) the objective of the lidar measurements (aerosol and cloud properties, temperature, ozone, humidity and water vapor, wind and turbulence, etc.); (4) the atmospheric parameters that the lidar can measure (atmospheric density, gaseous pollutants, atmospheric temperature profiles); (5) the wavelength used in the measurements (ultraviolet (UV), infrared (IR), and visible); (6) the lidar configuration (monostatic, biaxial, coaxial, vertically pointed and scanning lidars, and bi-static); (7) the measurement mode (analogue, digital); (8) the platform type (stationary in laboratories, mobile in vehicles, in situ (balloon and aircraft), and satellite); and (9) the number of wavelengths (single or multiple). The following section gives brief descriptions of various types of lidar, focusing mainly on those of interest to our research.

## **4. Types of lidar returns**

If light is redirected towards other directions because of interaction with matter without loss of energy (though losing intensity), the fundamental physical process is called *scattering* of light. Light scattering occurs at all wavelengths in the electromagnetic spectrum and in all directions. If lidars sense only the radiation scattered in the backward direction (scattering angle θs = 180° for a monostatic vertically pointed lidar), we call it lidar *backscattering* radiation or signal. In terms of the return signals, lidars have been classified into the following types: Rayleigh, Mie, Raman, DIAL, Doppler, and fluorescence lidars.

#### **4.1 Rayleigh scattering lidar**

In 1871, Lord Rayleigh discovered a significant physical law of light scattering with a variety of applications; the most famous applications of this discovery are the explanations of the blue sky and of the partial polarization of sky light. Rayleigh scattering is elastic (no wavelength shift) scattering from atmospheric molecules (particle radius much smaller than the incident radiation wavelength, i.e. *rp* ≪ *λ*): the sum of Cabannes scattering (the sum of the coherent, isotropic, polarized scattering, approximately 96% of the total) and rotational Raman S and S' branch scattering (only 4% of the total). Based on the Rayleigh-Jeans law [the Planck radiance is linearly proportional to the temperature, *B(T)* = (2*ν*²*κ*<sub>B</sub>/*c*²)*T*, where *B(T)* is the Planck function, *κ*<sub>B</sub> is the Boltzmann constant (1.3806 × 10⁻¹⁶ erg deg⁻¹), *ν* is the oscillator frequency, *c* the speed of light, and *T* the absolute temperature] (Liou 2002), the Rayleigh lidar technique can be used to derive the atmospheric temperature profile above the aerosol free region (*R* > 30 km). Since molecular scattering (Rayleigh, or aerosol-free, scattering) is proportional to the atmospheric density, the atmospheric temperature profile can be simply derived from the atmospheric density in the range above the aerosol layers (above 30 km to below 80 km). Unfortunately, temperature measurements above 80 km require a powerful transmitter laser (up to 20 W) and a large receiver telescope (up to 4 m aperture), which are difficult for mobile or airborne platforms (Fujii and Fukuchi 2005). Finally, assuming the atmosphere consists of molecules only and outside the gaseous absorption bands of the atmosphere, the atmospheric optical thickness can be approximated by

$$\tau\_{m}(\lambda) \approx 0.008569\, \lambda^{-4}\left(1 + 0.0113\, \lambda^{-2} + 0.00013\, \lambda^{-4}\right)$$

where *λ* is measured in micrometers.
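As a quick numerical illustration of this approximation (the wavelengths chosen here are common lidar wavelengths, not values given in the text):

```python
def rayleigh_optical_thickness(wavelength_um):
    """Molecular (Rayleigh) optical thickness; wavelength in micrometers."""
    lam = wavelength_um
    return 0.008569 * lam**-4 * (1 + 0.0113 * lam**-2 + 0.00013 * lam**-4)

for lam_nm in (355, 532, 1064):
    print(lam_nm, round(rayleigh_optical_thickness(lam_nm / 1000.0), 4))
# roughly 0.59 at 355 nm, 0.11 at 532 nm and 0.007 at 1064 nm
```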

Rayleigh scattering strongly depends on the wavelength of the transmitted light (*λ*⁻⁴), which explains the blue color of the sky: the scattering efficiency is proportional to *λ*⁻⁴, i.e. it increases rapidly with decreasing *λ*. This behavior causes air molecules to scatter blue light more strongly than red light.

## **4.2 Mie backscatter lidars**


For particle radius (*r<sub>p</sub>*) larger than *λ*/2 (i.e., *r<sub>p</sub>* > *λ*/2, where *λ* is the wavelength of the radiation), Rayleigh scattering is not applicable but Mie scattering applies (Mie 1908; Measures 1984; Liou 2002). Mie scattering is elastic scattering that is suitable for the detection of large spherical and non-spherical aerosol and cloud particles, mainly in the troposphere (Barber 1975), (Wiscombe 1980). The backscattering signals from aerosols or molecules and the absorption from molecules are very strong in the lower part of the atmosphere (below 30 km), which is enough to determine various properties of the atmosphere. Micrometer-sized aerosols and clouds are good indicators of atmospheric boundary-layer phenomena, where they show strong backscattering interaction. By Mie scattering theory, the optical properties of water droplets can be evaluated for any wavelength in the electromagnetic spectrum (from solar to microwave) (Deirmendjian 1969). Clouds cover about 50% of the earth (Liou 2002). Clouds also have an important impact on global warming when they trap the outgoing terrestrial radiation and produce a greenhouse effect. Mie backscattering lidar measures the radiation backscattered from aerosol and cloud particles, and its polarization as well (Mie 1908; Liou 2002). Its operation is similar to that of radar: a laser pulse of energy is transmitted, interacts with different objects, and is then backscattered (scattering angle = 180°) to the receiver detector. The detected backscattering signals are related to properties of the scattering object (even at low concentrations or for small changes in concentration of dust or aerosol). Mie scattering follows (*λ*<sup>0</sup> to *λ*<sup>-2</sup>), i.e., it is not significantly dependent on the wavelength.
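Whether Rayleigh or Mie theory applies is often judged with the dimensionless size parameter x = 2π*r*/*λ*. The sketch below classifies a particle radius and wavelength into a rough regime; the numeric thresholds are illustrative rules of thumb, not sharp physical limits, and the example radii are assumptions.

```python
import math

def size_parameter(radius_um, wavelength_um):
    """Dimensionless size parameter x = 2*pi*r / lambda."""
    return 2.0 * math.pi * radius_um / wavelength_um

def scattering_regime(radius_um, wavelength_um, x_small=0.1, x_large=50.0):
    """Crude regime classification based on the size parameter (thresholds are illustrative)."""
    x = size_parameter(radius_um, wavelength_um)
    if x < x_small:
        return "Rayleigh regime (molecules, very small particles)"
    if x > x_large:
        return "geometric-optics regime (large cloud droplets)"
    return "Mie regime (aerosol and small cloud particles)"

# Molecule-scale scatterer, sub-micrometer aerosol, and a 10 um cloud droplet probed at 532 nm
for r_um in (0.0003, 0.5, 10.0):
    print(f"r = {r_um} um -> {scattering_regime(r_um, 0.532)}")
```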

#### **4.3 Raman (inelastic backscattering) lidars**

Raman scattering is inelastic scattering with a cross section up to three orders of magnitude smaller than the Rayleigh cross section. A Raman scattered signal is shifted in frequency from the incident light (Raman-shifted frequency). The Raman scattering coefficient is proportional to the atmospheric density when an air molecule (nitrogen or oxygen) is used as the Raman scatterer (Fhjii and Fukuchi 2005). Generally speaking, Raman lidar measures intensity at a shifted wavelength (Stephens 1994), and it detects selected species by monitoring the wavelength-shifted molecular return produced by vibrational Raman scattering from the chosen molecules. Raman lidar was originally developed for the NASA Tropical Ozone Transport Experiment / Vortex Ozone Transport Experiment (TOTE/VOTE) for methane (CH4) and ozone measurements (Heaps 1996). It has also been used to correct the microwave temperature profile in the stratosphere (Heaps 1997). Typically, inelastic scattering (such as Raman scattering) is very weak; therefore daytime measurements are difficult due to the strong
background solar radiation. This restricts Raman lidar measurements to nighttime use, when background solar radiation is absent. On the other hand, Raman lidar is a powerful remote sensing tool used to measure and trace constituents where elastic lidar cannot identify the gas species (Fhjii and Fukuchi 2005). The Raman-Mie lidar technique is also used to determine the extinction and backscattering coefficients, assuming knowledge of the air pressure (Ansmann 1992). In this chapter I introduce a polarization technique to improve the lidar Signal-to-Noise Ratio (SNR) by reducing the background noise during daytime measurements. This will help toward successful diurnal operation of Raman lidar.

#### **4.4 DIfferential absorption lidar (DIAL)**

**D**ifferential **A**bsorption and **S**cattering (DAS) is a good combination for detecting water vapor in the atmosphere with good resolution using the H2O absorption line at 690 nm (Schotland 1966; Measures 1984). The DAS technique is one of the best methods for detecting constituents in long-range monitoring; it is based on a comparison between the atmospheric backscattering signals from two adjacent wavelengths that are absorbed differently by the gas of interest (Measures. R. M. 1972). The wavelength closest to the absorption line of the molecule of interest (i.e., the strongly absorbing spectral location due to the presence of an absorbing gas) is usually called the on-line wavelength and denoted (*λ<sub>ON</sub>*), and the other laser wavelength is called the off-line wavelength and denoted (*λ<sub>OFF</sub>*). The **DI**fferential **A**bsorption **L**idar (DIAL) technique is a unique method to measure and trace gaseous concentrations in the Planetary Boundary Layer (PBL) in three-dimensional (3D) mode using the DAS principle. The gas number density *N<sub>x</sub>*(*R*) can be derived from the differential absorption cross section of the molecular species of interest (Δ*σ* = *σ*(*λ<sub>ON</sub>*) − *σ*(*λ<sub>OFF</sub>*)) in the DIAL equation (Fhjii and Fukuchi 2005)

$$N_x(R) = \frac{1}{2\Delta\sigma} \frac{d}{dR} \ln \frac{P(R, \lambda_{OFF})}{P(R, \lambda_{ON})} \tag{1}$$

where *P*(*R*, *λ<sub>ON</sub>*) and *P*(*R*, *λ<sub>OFF</sub>*) are the backscattered powers received from distance *R* at the two wavelengths. Special care must be taken when selecting the adjacent wavelengths: the difference between the two wavelengths should preferably be < 1 cm<sup>-1</sup>, otherwise two additional terms must be considered in the DIAL equation. DIAL, as a range-resolved remote sensing technique, can detect many pollutants and greenhouse gases (H2O, SO2, O3, CO, CO2, NO, NO2, CH4, etc.) which play a big role in climate change and the earth's radiative budget. DIAL is possible in the UV (200 to 450 nm), the visible, the near IR (1 to 5 micrometers), and the mid-IR (5 to 11 micrometers). For example, to measure ozone, a greenhouse gas with a direct harmful effect on human health particularly in the troposphere, DIAL can be used in two appropriate bands: the UV band (at 256 nm) and the mid-IR band (960 to 1070 cm<sup>-1</sup>). DIAL operates successfully both day and night, detecting gas and aerosol profiles simultaneously. It can be operated from ground, airborne, and space-based platforms.
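The DIAL retrieval in equation (1) amounts to a range derivative of the log-ratio of the two backscattered power profiles. A minimal numerical sketch is shown below; the synthetic profiles, the cross-section value, and the function name are illustrative assumptions, not measured data.

```python
import numpy as np

def dial_number_density(R, P_on, P_off, delta_sigma):
    """Gas number density N_x(R) from the DIAL equation:
    N_x(R) = 1/(2*delta_sigma) * d/dR ln[ P(R, lambda_OFF) / P(R, lambda_ON) ].

    R           : range bins (cm), monotonically increasing
    P_on, P_off : backscattered power at the on-line and off-line wavelengths
    delta_sigma : differential absorption cross section (cm^2)
    Returns N_x in cm^-3.
    """
    log_ratio = np.log(P_off / P_on)
    return np.gradient(log_ratio, R) / (2.0 * delta_sigma)

# Illustrative synthetic case: constant gas layer of 1e12 cm^-3, delta_sigma = 1e-20 cm^2
R = np.linspace(1e5, 5e5, 200)                        # 1-5 km expressed in cm
delta_sigma, N_true = 1e-20, 1e12
P_off = np.exp(-2 * 1e-6 * R)                         # common (non-absorbing) attenuation
P_on = P_off * np.exp(-2 * delta_sigma * N_true * R)  # extra on-line absorption
print(dial_number_density(R, P_on, P_off, delta_sigma)[:3])   # ~1e12 cm^-3
```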

#### **4.5 Doppler lidars**

Atmospheric laser Doppler velocimetry, including measurements of tornadoes, storms, wind, turbulence, global wind cycles, and the atmospheric temperature, is one of the most important remote sensing techniques (Measures 1984). Doppler broadening is due to the
Doppler shift associated with the thermal motion of radiating (absorbing) species in the mesopause region such as Na, K, Li, Ca, and Fe (Measures 1984). Furthermore, the atmospheric temperature can be detected by measuring the Doppler broadening, and the global wind pattern can be determined by measuring the Doppler shift of laser-induced fluorescence from atmospheric metal atoms such as Na in the middle and upper atmosphere (Bills 1991; She and Yu 1994). Using the Doppler broadening of the Na D2 line structure (measured by narrowband lidar) to determine the range-resolved, high-resolution temperature profile of the mesopause region (75-115 km, also called the MLT, for Mesosphere and Lower Thermosphere) was proposed by Gibson et al. in 1979. The principal idea is that the absorption line will be broadened because of the Doppler effect for a single Na

atom. The Doppler-broadened line width is given by *σ<sub>D</sub>* = (1/*λ*<sub>0</sub>)√(*κ<sub>B</sub>T*/*M*), where *M* is the mass of a single Na atom, *κ<sub>B</sub>* is the Boltzmann constant, *λ*<sub>0</sub> is the mean Na D2 transition wavelength, and *T* is the temperature. As this shows, the Doppler-broadened width *σ<sub>D</sub>* is a function of temperature. Therefore, if we measure the *σ<sub>D</sub>* line-width, we can derive the temperature of the Na atoms in the mesopause, which is equal to the temperature of the surrounding atmosphere because the Na atoms are in equilibrium with it in the mesopause region (Fhjii and Fukuchi 2005).
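Rearranging the width expression gives *T* = *Mλ*<sub>0</sub><sup>2</sup>*σ<sub>D</sub>*<sup>2</sup>/*κ<sub>B</sub>*. The sketch below evaluates this for an assumed, purely illustrative measured width; the physical constants are standard values, but the input width is not a measurement from this chapter.

```python
K_B = 1.380649e-23               # Boltzmann constant, J/K
M_NA = 22.989769 * 1.66054e-27   # mass of a single Na atom, kg
LAMBDA_0 = 589.158e-9            # mean Na D2 transition wavelength, m

def na_temperature(sigma_d_hz):
    """Temperature (K) from the rms Doppler width sigma_D (Hz) of the Na D2 line:
    sigma_D = (1/lambda_0) * sqrt(k_B * T / M)  =>  T = M * (lambda_0 * sigma_D)**2 / k_B.
    """
    return M_NA * (LAMBDA_0 * sigma_d_hz) ** 2 / K_B

# An illustrative width of ~0.5 GHz gives a mesopause-like temperature of a few hundred kelvin
print(na_temperature(0.5e9))
```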

#### **4.6 Resonance fluorescence lidars**

A Rayleigh lidar signal is useless above ~85 km because of the low atmospheric density above that altitude. The backscattering cross section exploited by resonance fluorescence lidars is about 10<sup>14</sup> times higher than the Rayleigh backscattering cross section for the same transmitter and receiver specifications, so resonance fluorescence lidars can be used for upper-atmosphere measurements. Resonance fluorescence lidars measure intensity at a shifted wavelength using the Doppler technique (Bills 1991; She and Yu 1994) or the Boltzmann technique (Gelbwachs 1994). Fluorescence lidar is used to measure metallic species in the upper layer of the atmosphere (~90 km) such as Na, K, Li (Jegou, M.Chanin et al. 1980), Ca and Fe (Granier, J. P. Jegou et al. 1989; Gardner, C. S. et al. 1993), and/or volcanic stratospheric aerosol, polar stratospheric clouds (PSCs), gravity waves, and the stratospheric ozone layer. This lidar has high sensitivity and accuracy. It is also used for the determination of wind and temperature and for the study of thermal structure and complex atmospheric dynamics.

## **5. Lidar wavelengths**

Based on the wavelengths used in the lidar measurements, one can classify lidars into elastic, inelastic, multiple wavelength, and femto-second white light lidars. Brief descriptions are introduced in the following sub-sections.

#### **5.1 Elastic lidar**

Elastic scattering is defined as light scattering with no apparent wavelength shift or change relative to the incident wavelength. Elastic backscatter lidar operation, as one of the most popular lidar systems, is based on the elastic scattering physical process. It detects the total atmospheric backscatter from molecules and particles together, without separation. Hence, the elastic backscattering lidar signal is the sum of Rayleigh and Mie scattering. The main
disadvantage of elastic lidar is the difficulty of separating the Mie from the Rayleigh signals. More details on how to overcome this disadvantage are given as follows.

1. It is difficult to determine accurately the volume extinction coefficient of the particles or aerosol, because we cannot separate the Mie from the Rayleigh signals. In this case, we have to assume a value for the particle lidar ratio *S<sub>a</sub>(R)*, where *α<sub>a</sub>(R)* = *S<sub>a</sub>(R)* *β<sub>a</sub>(R)*, to solve the lidar equation for the aerosol extinction coefficient *α<sub>a</sub>(R)*. This assumption cannot be made reliably, since the aerosol lidar ratio *S<sub>a</sub>(R)* varies strongly with altitude (*S<sub>a</sub>(R)* varies between 20 and 100) due to the increase of relative humidity with altitude (the S ratio depends on chemical, physical, and morphological properties of the particles, which are relative-humidity dependent). As shown in Table 1 (Kovalev and Eichinger 2004), large variations of the typical aerosol lidar ratio for different aerosol types have been determined at the 532 nm wavelength using Raman lidar. Figure 2 shows a lidar return signal on June 30, 2004 at the CCNY site. The figure also shows an example of the retrieved aerosol lidar ratio *S<sub>a</sub>(R)* between 20 and 100.


| Aerosol (particle) type | Aerosol lidar ratio *S<sub>a</sub>(R)* (sr) |
|---|---|
| Marine particles | 20-35 |
| Saharan dust | 50-80 |
| Less absorbing urban particles | 35-70 |
| Absorbing particles from biomass burning | 70-100 |

Table 1. Different aerosol types and the corresponding aerosol lidar ratio *S<sub>a</sub>(R)*

Fig. 2. CCNY lidar retrieval of the *S<sub>a</sub>(R)* ratio, June 30, 2004

To determine the aerosol lidar ratio *S<sub>a</sub>(R)*, we can (a) use Raman lidar or High Spectral Resolution Lidar (HSRL) to get the particle extinction profile and then *S<sub>a</sub>(R)*; alternatively, (b) a sun-photometer observatory can be used to obtain the optical depth, and a solution is then sought by back integration. More details are given below.

a. Using Raman lidar and High Spectral Resolution Lidar (HSRL) to determine the particle extinction profile; the particle backscatter coefficient can be obtained directly as well. These two lidars detect separate backscatter signals from particles and molecules.

b. Using a Sun-photometer observatory to obtain the optical depth (the integral over the extinction coefficient profile) for both aerosols and molecules. Initially, in this method, we consider the reference boundary condition at the top of the lidar range
(*R*<sub>max</sub>, where the particle backscatter coefficient *β<sub>a</sub>*(*R*<sub>max</sub>) is negligible compared to the known molecular backscatter value) to be constant. Second, we seek a solution by back integration (Klett 1981), which is more stable than the corresponding forward solution. Therefore, given the data set {*S<sub>a</sub>*, *α<sub>a</sub>*(*R*<sub>max</sub>)}, the lidar signal can be inverted to obtain both *α<sub>a</sub>*(*R*) and *β<sub>a</sub>*(*R*). Consequently, an estimate of the data set {*S<sub>a</sub>*, *α<sub>a</sub>*(*R*<sub>max</sub>)} is required, and the approach used to analyze the lidar signals and estimate the optical coefficient error is outlined in (Hassebo et al, 2005). Finally, elastic scattering is unable to identify the gas species but can detect and measure particles and clouds (Fhjii and Fukuchi 2005).
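A minimal numerical sketch of the backward (Klett 1981) inversion is given below for the single-component case, in which the profile is referenced to an assumed boundary extinction at the far end of the range; the synthetic profile, function name, and boundary value are illustrative assumptions only (a full aerosol-plus-molecular treatment would also use the lidar ratio *S<sub>a</sub>*).

```python
import numpy as np

def klett_backward(P, r, alpha_ref):
    """Backward (Klett, 1981) inversion of an elastic lidar signal, single-component case.

    P         : received power profile P(r)
    r         : range bins (m), increasing
    alpha_ref : assumed boundary extinction at the far range r[-1] (m^-1)
    Returns the retrieved extinction profile alpha(r) in m^-1.
    """
    S = np.log(P * r**2)              # range-corrected logarithmic signal S(r) = ln(r^2 P)
    e = np.exp(S - S[-1])             # exp[S(r) - S(r_max)]
    integral = np.zeros_like(e)       # backward integral from r to r_max (trapezoid rule)
    for i in range(len(r) - 2, -1, -1):
        integral[i] = integral[i + 1] + 0.5 * (e[i] + e[i + 1]) * (r[i + 1] - r[i])
    return e / (1.0 / alpha_ref + 2.0 * integral)

# Synthetic test: constant extinction of 1e-4 m^-1 with backscatter proportional to extinction
r = np.linspace(500.0, 10000.0, 500)
alpha_true = 1e-4
P = (alpha_true / r**2) * np.exp(-2.0 * alpha_true * r)   # simple single-component lidar signal
print(klett_backward(P, r, alpha_true)[:3])               # ~1e-4 m^-1 near the start of the profile
```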

## **5.2 Inelastic backscattering lidar**

The transmitted wavelength is different from the detected wavelength in inelastic lidars. An example of an inelastic lidar is the Raman lidar. A Raman signal is very weak; therefore Raman lidar operations are restricted to nighttime because of the strong background solar radiation during the daytime. There are three ways to overcome this difficulty: (1) running the Raman lidar within the solar-blind region (230-300 nm), (2) applying a narrow-bandpass filter or a Fabry-Perot interferometer, and (3) operating the Raman lidar in the visible band of the spectrum during the daytime and subtracting the background solar radiation noise.

1. The first method is running the Raman lidar within the solar-blind region (230-300 nm), where the ozone layer in the stratosphere (20-30 km) absorbs the lethal solar radiation in this spectral interval. Consequently, the lidar can be operated diurnally in the solar-blind region without being affected by the solar background noise. However, the main drawback of running a lidar in this region is the attenuation of the transmitted and returned signals by the stratospheric ozone. Another drawback is the eye-hazard issue. Using this technique, in the 1980s there were some attempts to measure water vapor and temperature using multiple wavelengths in the solar-blind region (Renaut 1980; Petri 1982).
2. The second method is applying a narrow-bandpass filter or a Fabry-Perot interferometer (Kovalev 2004). But the filter will attenuate the signal strength as well, which is considered the main disadvantage of this method.
3. The third method was proposed by Hassebo et al. in 2005 and 2006. The principal idea is to operate the Raman lidar in the visible band (607 nm for N2, 407 nm for water vapor, and 403 nm for liquid water) of the spectrum and then subtract the background solar radiation noise optimally and simultaneously during the daytime. This objective can be accomplished by using a polarization discrimination technique to discriminate between the sky background radiation noise and the backscattering signal. This can be achieved using two polarizers at the transmitter and the receiver optics (Hassebo, B. Gross et al. 2005; Hassebo, Barry M. Gross et al. 2005; Hassebo, B. Gross et al. 2006). This technique improved the lidar Signal-to-Noise Ratio (SNR) by up to 300% and the attainable lidar range by up to 34%. A discussion of this technique is introduced in section 2 of this chapter.


#### **5.3 Multiple wavelength lidar**

If the lidar transmitter is a single wavelength laser, the lidar is called a single wavelength lidar. However, the lidar is referred to as a multiple wavelength lidar if it transmits more than one wavelength. All light transmitted into the atmosphere with a wavelength shorter than 300 nm is absorbed by ozone and oxygen (the solar-blind region). Wavelengths shorter than 300 nm
are fatal wavelengths. Consequently, the minimum wavelength for elastic lidar is approximately 300 nm. The commonly used wavelengths in lidar operations are near infrared (1064 nm), visible (532 nm), and ultraviolet (355 nm) for backscatter lidars, 607 nm for N2, 407 nm for water vapor, and 403 nm for liquid water. A multiple wavelength backscatter lidar can be used to distinguish between fine particles (emitted from fog, combustion, plumes, and burning smoke) and big particles such as water droplets or clouds. This differentiation can be achieved using the Angstrom coefficient (Hassebo, Y. Zhao et al. 2005).
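One common way to express this size discrimination is the Angstrom exponent computed from optical depths at two wavelengths: a small exponent indicates large particles, a large exponent fine particles. The sketch below is purely illustrative; the sample values are not measurements from this chapter.

```python
import math

def angstrom_exponent(tau1, lambda1_nm, tau2, lambda2_nm):
    """Angstrom exponent a from tau(lambda) ~ lambda**(-a) measured at two wavelengths."""
    return -math.log(tau1 / tau2) / math.log(lambda1_nm / lambda2_nm)

# Fine (accumulation-mode) particles typically give a ~ 1-2, large droplets/clouds a ~ 0
print(angstrom_exponent(0.30, 355, 0.15, 532))   # steep spectral slope -> fine particles
print(angstrom_exponent(0.20, 355, 0.19, 532))   # flat slope -> large particles
```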

Another example for multiple wavelength backscatter lidar is the DIfferential Absorption Lidar (DIAL). DIAL is used to measure concentrations of chemical species such as ozone, water vapor, and pollutants in the atmosphere. A DIAL lidar uses two distinct laser wavelengths which are selected so that one of the wavelengths is absorbed strongly by the molecule of interest while the other wavelength is not. The difference in intensity of the two return signals can be used to deduce the concentration of the molecule being investigated.

## **5.4 Femto-second white light lidar**

Extremely high optical power (terawatt) can be created from a femto-second (1 fsec = 10<sup>-15</sup> sec) laser pulse with 1 mJ of energy. That is the femto-second white light lidar (fsec-lidar). In the era of global warming and climate change, fsec-lidar is used to detect and analyze aerosol size and aerosol phase (by measuring the depolarization), water vapor, and to better understand forecasting, snow, and rain. The inaccessibility of 3-D analysis is a disadvantage of the Differential Optical Absorption Spectrometer (DOAS) and of Fourier Transform Infrared spectroscopy (FTIR). This disadvantage has been overcome by fsec white light lidar, which at the same time has the multi-component analysis capability of DOAS and FTIR by using a wide-band light spectrum (from UV to IR, including the visible) (Wöste, Wedekind et al. 1997; Rodriguez, R. Sauerbrey et al. 2002). An example of an fsec-lidar, based on the well-known chirped pulse amplification (CPA) technique, is a 350 mJ pulse with 70 fsec duration and a peak power of 5 TW at a wavelength of 800 nm (Fhjii and Fukuchi 2005).
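As a quick check of the pulse quoted above, peak power is roughly pulse energy divided by pulse duration; the one-liner below is only a back-of-the-envelope illustration.

```python
energy_j = 350e-3      # 350 mJ pulse energy
duration_s = 70e-15    # 70 fs pulse duration
peak_power_w = energy_j / duration_s
print(f"peak power ~ {peak_power_w:.1e} W")   # about 5e12 W, i.e. roughly 5 TW
```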

## **6. Purposes of lidar measurements**

The purpose of the lidar measurements is an additional way to classify lidars. Aerosol, cloud, and velocity and wind lidars are introduced briefly in the following sub-sections.

## **6.1 Aerosol lidar**

The atmosphere contains not only molecules but also particulates and aerosols, including clouds, fog, haze, plumes, ice crystals, and dust. Aerosols vary in radius from a few nanometers to several micrometers. The bigger the aerosol, the more complex the calculation of its scattering properties. Aerosol concentration varies considerably with time, type, height, and location (Stephens 1994). Aerosols absorb and scatter solar radiation (all aerosols show some degree of absorption in the ultraviolet and visible bands) and provide cloud condensation sites (Charleon 1995). The degree of aerosol absorption indicates the aerosol type. Atmospheric aerosol altitude, size, distribution, and transportation are major global uncertainties due to their effects on controlling the earth's climate stability and global warming. In addition to the impact of aerosols on atmospheric global climate change (Charlson, J. Langner et al. 1991; Charlson, S. E. Schwartz et al. 1992), they also affect human
health with diseases such as lung cancer, bronchitis, and asthma. These have been essential motivations to study aerosol properties and transportation. Lidars have been successfully applied to study stratospheric aerosols, mainly sulfuric-acid/water droplets (Zuev V., V. Burlakov et al. 1998), tropospheric mixtures of natural (interplanetary dust particles and marine) and anthropogenic (sulfate and soot particles) aerosols (Barnaba F and Gobbi 2001), and climate gases such as stratospheric ozone (Douglass L. R., M.R. Schoeberl et al. 2000), as well as for analyzing cloud properties (Stein, Wedekind et al. 1999). Aerosol sources include nitrate particles, sea-salt particles, and volcanic ash and debris. Aerosol particle sizes were categorized as Aitken, large, and giant particles (Junge 1955), where: (a) dry radii < 0.1 µm are Aitken particles, (b) dry radii 0.1 µm < r < 1 µm are large particles, and (c) dry radii r > 1 µm are giant particles. Aerosol concentration decreases with increasing altitude; 80% of the aerosols are concentrated in the lowest two kilometers of the troposphere (i.e., within the Planetary Boundary Layer (PBL)), as shown in Fig 3 for New York City on August 11, 2005.

Source: CCNY lidar system

Fig. 3. New York City aerosol PBL, Aug 11, 2005

The extinction profile is considered a high-quality indicator (in the cloud-free case) of aerosol concentration. A principle of measuring aerosol is to use wavelengths between 300 and 1100 nm to determine the particle extinction and backscattering profiles. A good example of a lidar that has been used to monitor aerosol unattended is the Micro-Pulse Lidar (MPL) (Spinhirne 1991; Spinhirne 1993; Welton, Campble et al. 2001). The CUNY MPL at LaGuardia Community College will play a significant role in studying the impact of anthropogenic aerosol on human health, life, air quality, climate change, and the earth's radiation budget once it is deployed. High Spectral Resolution Lidar (HSRL) can be used, as well, to measure the aerosol scattering cross section, optical depth, and backscatter phase function in the atmosphere. This can be achieved by separating the Doppler-broadened molecular backscatter return from the un-broadened aerosol return. The molecular signal is then used as a calibration target which is available at each point in the lidar profile.

## **6.2 Cloud lidar**

Cloud particle radii are larger than 1 µm (between about 2 µm and around 30 µm), which is bigger than the lidar wavelengths (300-1100 nm). Therefore lidars cannot measure the cloud size distribution (Fhjii and Fukuchi 2005). However, lidars can detect the cloud ceiling, thickness, and vertical profile, since the lidar return signal from the cloud is very strong (because the cloud behaves as an obstruction in the laser propagation path).

As shown in Fig 4, using three wavelengths of 355, 532, and 1064 nm, the CCNY stationary lidar detected the vertical structure of clouds between 3.5 and 4.5 km height, and the planetary boundary layer, on January 25, 2006.

Source: CCNY lidar

Fig. 4. CCNY lidar data shows cloud ceiling, thickness, and structure, Jan 25, 2006

Clouds cover approximately 50% of the earth (Liou 2002). Depending on the altitude (i.e., temperature), clouds form in liquid or solid (crystal) phases. Clouds, their interaction with aerosols, and their impact on local and global climate change encouraged NASA to create various projects to monitor and study cloud distribution, thickness, and transportation, and to observe transitional forms of clouds or combinations of several forms and varieties. The Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO), the Micro-Pulse Lidar (MPL), and the Polarization Diversity Lidar (PDL is a lidar with two channels to detect two polarizations (Fhjii, 2005; Sassen, 1994)) are well-known lidars used to measure and detect clouds. Measuring cloud phase is based on Mie scattering theory: the backscattering from non-spherical (e.g., crystal phase) particles changes the polarization strongly, but spherical (water droplet) particles do not (Sassen, K. et al. 1992; Sassen 1994). Both spherical and non-spherical cloud particles have a degree of depolarization (*δ* = *I*<sub>⊥</sub>/*I*<sub>∥</sub>) due to multiple scattering effects, where *I*<sub>⊥</sub> and *I*<sub>∥</sub> are respectively the perpendicular and parallel intensity components relative to the incident light. But the degree of depolarization of non-spherical cloud particles is greater than that of spherical particles (*δ<sub>NS</sub>* > *δ<sub>S</sub>*). Polarization lidars are used to differentiate between cloud liquid and solid phases.
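The depolarization ratio mentioned above is simply the ratio of the cross-polarized to the co-polarized return. The sketch below computes it and applies a rough phase flag; the threshold is a common rule of thumb, not a value taken from this chapter, and the intensities are made up.

```python
def depolarization_ratio(i_perp, i_par):
    """Linear depolarization ratio delta = I_perp / I_parallel of the backscattered light."""
    return i_perp / i_par

def likely_phase(delta, ice_threshold=0.25):
    """Crude phase flag: spherical droplets keep delta low, ice crystals raise it."""
    return "ice (non-spherical)" if delta > ice_threshold else "liquid water (spherical)"

for i_perp, i_par in [(0.02, 1.0), (0.40, 1.0)]:
    d = depolarization_ratio(i_perp, i_par)
    print(f"delta = {d:.2f} -> {likely_phase(d)}")
```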

Fig 5 shows thin-cloud signals that were obtained by Hassebo et al. on January 10, 2006 using the elastic Mie scattering stationary lidar at the City College of NY site (longitude 73.94 W, latitude 40.83 N) at the 355, 532, and 1064 nm wavelengths. Comparing the thin-cloud signal (Fig 5) with Fig 4 (thick-cloud signal), we note that, because the laser is rapidly attenuated while penetrating the cloud, in the thin-cloud case the visible beam had sufficient intensity to open a channel with high optical transparency to a higher altitude. In contrast, in Fig 4 the cloud was thick enough to prevent the laser beams from increasing their depth of penetration into the layer beyond the cloud. That explains the useless, noisy (UV and IR) signals above the cloud ceiling (R = 4.5 km and R = 11.5 km) in both cases, and for the visible signal in the heavy-cloud case even though the altitude is low (4.5 km). We also note that the PBL shows clearly in both cases, since the aerosol loading in New York City is always high.

Fig. 5. CCNY Lidar backscattering signals show thin cloud at 11km, Jan 10, 2006.

## **6.3 Velocity and wind lidar**

Doppler lidar can be used to provide the velocity of a target. When the light transmitted from the lidar hits a target moving towards or away from the lidar, the wavelength of the light reflected/scattered off the target is changed slightly. This is known as a Doppler shift, hence Doppler lidar. If the target is moving away from the lidar, the returned beam will have a longer wavelength (sometimes referred to as a red shift); on the other hand, if the target is moving towards the lidar, the returned light will be at a shorter wavelength (blue shifted). The target can be either a hard target or an atmospheric target. The same idea is used to measure the wind velocity, since the atmosphere contains many microscopic dust and aerosol particles (atmospheric targets) that are carried by the wind.
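For a monostatic backscatter geometry the detected frequency shift is Δf = 2v<sub>r</sub>/λ, where v<sub>r</sub> is the line-of-sight (radial) velocity of the scatterers. The helper below simply inverts that relation; the wavelength and shift used in the example are illustrative assumptions.

```python
def radial_velocity(delta_f_hz, wavelength_m):
    """Line-of-sight velocity from the Doppler shift of a monostatic backscatter lidar:
    delta_f = 2 * v_r / lambda  =>  v_r = delta_f * lambda / 2.
    Positive delta_f (blue shift) means the scatterers move towards the lidar.
    """
    return 0.5 * delta_f_hz * wavelength_m

# A 10 m/s wind along the beam at 1.55 um produces a shift of about 12.9 MHz
print(radial_velocity(12.9e6, 1.55e-6))   # ~10 m/s
```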

## **7. Lidar types based on platform**

## **7.1 Ground-based lidar**

The PBL is the most important layer to study in the earth's atmosphere. Ground-based lidar (stationary in laboratories and mobile in vehicle stations) provides continuous, stable, and high-resolution measurements of most of the lower-atmosphere parameters. Ground-based lidar has made an important contribution to correcting satellite data and completing the missing parts of satellite images. A good example in chapter 7 of this thesis shows how the ground-based lidar signature supports satellite operations to discriminate between cloud (big particles) and smoke plume (fine particles) and to determine the plume height and thickness, which the satellite cannot provide. The main drawbacks of ground-based lidar are the limitation on running during bad weather (rain or snow) and air control regulation issues.

## **7.2 Air-borne lidar**

Due to the uncertainty in the validation of some remote sensing methodologies from ground-based lidar stations, particularly for detecting clouds and measuring their properties, in situ probes are useful techniques. The inaccessibility of the object of interest from ground-based or space-based systems is another reason to use air-borne lidars. The air-borne lidar platforms are aircraft, balloons, and helicopters. Air-borne lidars are used to measure aerosol, clouds, temperature profiles, metals in the mesopause, ozone in the stratosphere, wind, PSCs, and H2O, as well as water depth, submarine tracks, oil slicks, etc. (Fhjii and Fukuchi 2005). One of the disadvantages of these platforms is the vibration problem.

### **7.3 Space-based lidar**

Ground-based lidar provides a measurement at one spot on the earth's surface at a given moment. Air-borne lidars are limited to one country or a specific region and are restricted by the weather or sometimes by political circumstances. The merit of space-based lidar is that it gives global and/or continental images of the earth's atmosphere properties, structure, and activities. Certainly, space-based lidar needs very sophisticated, extremely expensive equipment, especially for remote control of the unattended operations and for adaptive optics. In addition to the extremely important understanding of global-scale phenomena (H2O and carbon cycles, climate change, global warming, etc.) we have gained, we can reach
areas that are inaccessible to air-borne and/or ground-based stations, such as the oceans and the north and south poles.

## **8. Lidar configurations**

Essentially, there are two basic configurations for lidar systems: monostatic and bistatic.

## **8.1 Monostatic lidar**

The monostatic configuration is the typical configuration for modern systems. It is employed with a pulsed laser source, providing very good vertical resolution and beam collimation compared with the bistatic configuration. In monostatic configurations, the transmitter and receiver are at the same location (see Fig. 6). Monostatic systems can be classified into two categories, coaxial systems and biaxial systems. The monostatic system was first used in 1938.

## **8.1.1 Monostatic coaxial lidar**

In the monostatic coaxial configuration, the axis of the transmitted laser beam is coincident with the receiver telescope's Field Of View (FOV), as shown in Figure 6 (a). The main disadvantages of this configuration are the detector saturation problem that occurs once the lidar laser beam is fired, the unwanted signal detected from reflection of the transmitted light at the transmitter optics at the top of the receiver telescope, and the portion of the image - at short range - that is blocked by the secondary mirror.

Fig. 6. Field of view arrangements for lidar laser beam and detector optics: (a) monostatic coaxial, (b) monostatic biaxial, (c) bistatic

#### **8.1.2 Monostatic biaxial lidar**

In the monostatic biaxial arrangement, the transmitter and receiver are located adjacent to each other. Under this arrangement the laser beam intersects the receiver telescope FOV only beyond a specific range *R*. This range can be predetermined from the separation between the laser axis and the telescope FOV axis. This configuration is quite useful in preventing saturation of the receiver photomultiplier (PMT) detectors by the near-field laser returns (the coaxial lidar disadvantage). However, in a biaxial lidar system, the detected signals are negatively affected by the geometrical form factor (GF) at shorter ranges. This effect makes near-field measurements impossible (Measures 1984). Hassebo et al. proposed two techniques to overcome the problems of the geometrical form factor (Hassebo, R. Agishev et al. 2004).
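To make the biaxial geometry concrete, the following sketch estimates the range beyond which the transmitted beam lies entirely within the receiver FOV; the axis separation, beam divergence and FOV values are illustrative assumptions, not the parameters of the system described in Section 9.

```python
import math

def full_overlap_range(separation_m, beam_div_mrad, fov_mrad, tilt_mrad=0.0):
    """Rough estimate of the range R beyond which a biaxial lidar beam lies
    entirely within the receiver field of view (thin-beam, small-angle model).

    separation_m : distance between transmitter and telescope axes (m)
    beam_div_mrad: full divergence of the laser beam (mrad)
    fov_mrad     : full receiver field of view (mrad)
    tilt_mrad    : inclination of the beam toward the telescope axis (mrad)
    """
    half_fov = 0.5e-3 * fov_mrad
    half_div = 0.5e-3 * beam_div_mrad
    tilt = 1e-3 * tilt_mrad
    closing_rate = half_fov + tilt - half_div   # how fast the gap closes per metre of range
    if closing_rate <= 0:
        return float("inf")                     # beam never fully enters the FOV
    return separation_m / closing_rate

# Illustrative numbers only: 0.4 m axis separation, 0.5 mrad beam divergence,
# 2 mrad receiver FOV, beam parallel to the telescope axis.
print(f"full overlap beyond ~{full_overlap_range(0.4, 0.5, 2.0):.0f} m")
```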

## **8.2 Bistatic configuration**


The bistatic lidar configuration involves a considerable separation between the laser transmitter and the receiver subsystems. This configuration was originally used to support lidars employing continuous wave (cw) laser sources, overcoming the inability of a cw laser to resolve the height variation of the atmospheric density (Fujii and Fukuchi 2005). Currently, this arrangement is rarely used (Measures 1984).

A summary of some lidar physical processes, their corresponding applications and measurement objectives is given in Table 2.


Table 2. Lidar classification and related research (lidar type, underlying scattering process, wavelength, measurement objective, platform and configuration)

## **9. Improving lidar signal-to-noise ratio during daytime operations**

In this section, the impact and potential of a polarization selection technique to reduce sky background signal for linearly polarized monostatic elastic backscatter lidar measurements are examined. Taking advantage of naturally occurring polarization properties in scattered sky light, a polarization discrimination technique was devised. In this technique, both the lidar transmitter and receiver track and minimize the detected sky background noise while maintaining maximum lidar signal throughput. Experimental lidar elastic backscatter measurements, carried out continuously during daylight hours at 532 nm, show as much as a factor of 10 improvement in signal-to-noise ratio (SNR) and up to a 34% increase in attainable lidar range over conventional unpolarized schemes. Results show that, for vertically pointing lidars, the largest improvements are limited to the early morning and late afternoon hours. The resulting diurnal variations in SNR improvement sometimes show asymmetry with solar angle, which analysis indicates can be attributed to changes in observed relative humidity that modify the underlying aerosol microphysics and observed optical depth.


#### **9.1 Introduction**

This work describes a technique which is designed to improve the operation of conventional elastic backscatter lidars in which the transmitted signal is generally linearly polarized. The technique requires the use of a polarization sensitive receiver. Polarization selective lidar systems have, in the past, been used primarily for separating and analyzing polarization of lidar returns for a variety of purposes, including examination of multiple scattering effects and differentiation between different atmospheric scatterers and aerosols (Schotland, K. Sassen et al. 1971; Hansen and Travis 1974; Sassen 1974; Platt 1977; Sassen 1979; Platt 1981; Kokkinos and Ahmed 1989; Gobbi 1998; Roy, G. Roy et al. 2004). In the approach described here, the polarized nature of the sky background light is used to devise a polarization selective scheme to reduce the sky background power detected in a lidar. This leads to improved signal-to-noise ratios (SNR) and attainable lidar ranges, which are important considerations in daylight lidar operation (Hassebo, B. Gross et al. 2005; Hassebo, Barry M. Gross et al. 2005; Ahmed, Y. Hassebo et al. 2006; Ahmed, Yasser Y. Hassebo et al. 2006; Hassebo, B. Gross et al. 2006). The approach discussed here is based on the fact that most of the energy in linearly polarized elastically backscattered lidar signals retains the transmitted polarization (Schotland, K. Sassen et al. 1971; Hansen and Travis 1974; Kokkinos and Ahmed 1989), while the received sky background power (Welton, Campbell et al. 2001) observed by the lidar receiver shows polarization characteristics that depend on both the scattering angle, $\theta_{sc}$, between the direction of the lidar and the direct sunlight, and the orientation of the detector polarization relative to the scattering plane. In particular, the sky background component polarized parallel to the scattering plane is the minimum one, while the difference between the in-plane and perpendicular components (i.e., the degree of polarization) depends solely on the scattering angle. For a vertically pointing lidar, the scattering angle $\theta_{sc}$ is the same as the solar zenith angle $\theta_s$ (Fig. 7). The degree of polarization of the sky background signal observed by the lidar is largest for solar zenith angles near 90° and smallest at solar noon. The essence of the proposed approach is therefore, at any time, to first determine the parallel component of the received sky background (Pb) with a polarizing analyzer on the receiver, thus minimizing the detected Pb, and then to orient the polarization of the outgoing lidar signal so that the polarization of the received lidar backscatter signal is aligned with the receiver polarizing analyzer. This ensures unhindered passage of the primary lidar backscatter returns, while at the same time minimizing the received sky background Pb, and thus maximizing both SNR and attainable lidar ranges.
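To illustrate why the benefit is largest near sunrise and sunset, the sketch below evaluates the single-scattering Rayleigh degree of linear polarization, $\mathrm{DoP}(\theta) = \sin^2\theta / (1 + \cos^2\theta)$, at the scattering angle seen by a vertically pointing lidar (equal to the solar zenith angle). Real skies contain aerosols and multiple scattering, so these numbers should be read only as a qualitative upper bound.

```python
import math

def rayleigh_dop(scattering_angle_deg):
    """Degree of linear polarization of singly Rayleigh-scattered skylight."""
    theta = math.radians(scattering_angle_deg)
    return math.sin(theta) ** 2 / (1.0 + math.cos(theta) ** 2)

# For a vertically pointing lidar, the scattering angle equals the solar zenith angle.
for sza in (10, 30, 50, 70, 89):
    print(f"solar zenith {sza:2d} deg -> DoP = {rayleigh_dop(sza):.2f}")
```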

The experimental approach and system geometry to implement the polarization discrimination scheme are described in the next section. Section 9.3 presents results of elastic lidar backscatter measurements for a vertically pointing lidar at 532 nm, taken on a clear day in the New York City urban atmosphere, which examine the range of application of the technique. In particular, the diurnal variations in Pb as functions of different solar angles are given, and the SNR improvement is shown to be consistent with the results predicted from the measured degree of linear polarization, with maximum improvement restricted to the early morning and late afternoon. Section 9.4 examines the situations in which asymmetric diurnal variations in sky Pb are observed, and demonstrates the possibility that an increase in relative humidity (Halldorsson and Langerholc 1978), consistent with measured increases in precipitable water vapor (PWV) and aerosol optical depth (AOD), may account for the asymmetry. Analysis of the overall results is presented in Section 9.5, where the SNR improvement factor is compared with a single scattering radiative transfer theory. Possible modifications due to multiple scattering are also explored.

In Section 9.6, the diurnal variation of the polarization rotation angle is compared to the theoretical result and an approach for automation of the technique based on theory is discussed. Conclusions and a summary are presented in Section 9.7.

Fig. 7. Sky background suppression geometry for a vertically pointing lidar: $\theta_s$ is the solar zenith angle (equal to the scattering angle for this geometry); $\phi_s$ is the solar azimuth angle; and OAB is the solar scattering plane

## **9.2 Experimental approach and system geometry**

The City University of New York (CUNY) has developed two ground-based lidar systems, one mobile and one stationary, that operate at multiple wavelengths for monostatic elastic backscatter retrievals of aerosol and cloud characteristics and profiles. Lidar measurements are performed at the Remote Sensing Laboratory of the City College of New York, (CCNY). The lidar systems are designed to monitor enhanced aerosol events as they traverse the eastern coast of the United States, and form part of NOAA's Cooperative Remote Sensing Center (NOAA-CREST) Regional East Atmospheric Lidar Mesonet (REALM) lidar network. The lidar measurements, reported here, were carried out with the mobile elastic monostatic biaxial backscatter lidar system at the CCNY site (longitude 73.94 W, latitude 40.83 N), at 532 nm wavelength. The lidar transmitter and the receiver subsystems are detailed in Table 3.

The lidar return from the receiver telescope is detected by a photomultiplier (PMT R11527P) with a 1 nm bandwidth optical filter (532F02-25 Andover) centered at the 532 nm wavelength. For extended ranges, data is acquired in the photon counting (PC) mode, typically averaging 600 pulses over a one-minute interval and using a Licel TR 40-160 transient recorder with a 40 MHz sampling rate for A/D conversion and a 250 MHz photon counting sampling interval. Fig. 8 shows the arrangement used to implement the polarization-tracking scheme. To select the polarization of light entering the detector, a polarizing beam splitter is located in front of the collimating lens that is used in conjunction with a narrow band filter (alternatively, dichroic material polarizers were also used).

| **Transmitter** | | **Receiver** | |
|---|---|---|---|
| Laser | Q-switched Nd:YAG, Continuum Surelite II-10 | Telescope | CM-1400 Schmidt-Cassegrain |
| Wavelength | 1064, 532, 355 nm | Telescope aperture | 35.56 cm |
| Energy/pulse | 650 mJ at 1064 nm; 300 mJ at 532 nm; 100 mJ at 355 nm | Focal length | 3910 mm |
| Pulse duration | 7 ns at 1064 nm | Detectors | 532 nm: Hamamatsu PMT R11527P; 355 nm: PMT R758-10; 1064 nm: APD |
| Repetition rate | 10 Hz | Data acquisition | LICEL TR 40-160 |
| Harmonic generation | Surelite Doubler (SLD), Surelite Third Harmonic (SLF) | Photon counting | LICEL TR 40-160 |
| ½ wave plate | to rotate outgoing signal polarization | | |

Table 3. Lidar system specifications

Fig. 8. Schematic diagram of the polarization experiment set-up for the elastic biaxial monostatic lidar (mobile lidar system)

This polarizing beam splitter (analyzer) is then rotated to minimize the detected sky background Pb. Cross-polarized extinction ratios on the receiver analyzer were approximately 10<sup>-4</sup>. On the transmission side, a half-wave plate at the output of the polarized laser is then used to rotate the polarization of the outgoing lidar beam so as to align the polarization of the backscattered lidar signal with the receiver polarizing analyzer and hence maximize its throughput (i.e., at the minimum Pb setting). This procedure was repeated for all measurements, with appropriate adjustments being made in the receiver polarization analyzer alignment and a corresponding tracking alignment in the transmitted beam polarization to adjust for different solar angles at different times of the day, and hence minimize the detected Pb and maximize the lidar SNR.
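The alignment procedure just described can be viewed as a two-step search: scan the receiver analyzer for the angle of minimum background, then set the transmitter half-wave plate so that the backscatter polarization matches that analyzer. The sketch below assumes a hypothetical `read_background(angle)` readout function standing in for the PMT measurement; it is not the author's control software.

```python
import math

def align_polarization(read_background, step_deg=2.0):
    """Return (analyzer_angle, waveplate_angle) that minimize sky background.

    read_background(angle_deg) -> mean detected background power with the
    receiver analyzer at `angle_deg`; here it is a stand-in for the PMT readout.
    """
    # Step 1: coarse scan of the receiver analyzer over 180 deg of unique states.
    angles = [i * step_deg for i in range(int(180 / step_deg))]
    analyzer = min(angles, key=read_background)
    # Step 2: a half-wave plate rotates the outgoing polarization by twice its
    # own angle, so set it to half the analyzer angle to align the backscatter
    # (which largely retains the transmitted polarization) with the analyzer.
    waveplate = analyzer / 2.0
    return analyzer, waveplate

# Toy background model: minimum when the analyzer is parallel to the scattering plane.
scattering_plane = 37.0  # deg, illustrative
toy = lambda a: 1.0 + 0.8 * math.sin(math.radians(a - scattering_plane)) ** 2
print(align_polarization(toy))
```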

## **9.3 Results**


Figures 9 to 11 show experimental results with the receiver analyzer oriented to minimize Pb and a corresponding tracking lidar polarization orientation to maximize the detected backscattered lidar signal and its SNR at different times on 07 October 2004 (6:29 PM, 3 PM, and noon). All times given are Eastern Standard Time (EST).

Fig. 9. Comparison of max Pb versus min Pb lidar signals at 6:29 PM on 07 October 2004.

The detected lidar signal is the sum of the atmospheric backscatter of the laser pulse and the detected background light. The upper trace corresponds to the receiver polarization analyzer oriented to minimize Pb and the lidar transmitter polarization oriented to maximize the detected backscattered lidar signal, while the lower trace is the result when orthogonal orientations of both receiver analyzer and lidar polarization are used, minimizing the sky background component in the return signal. Similar measurements were made at 3:00 PM and noon on the same day, as shown in Figures 10 and 11 respectively.

Fig. 12 shows the resulting return signals in the far zone where the sky background signal is the dominant component (20-30 km range) for these times and for both orthogonal polarizations.

Fig. 10. Comparison of max Pb (NMax) versus min Pb (NMin) lidar signals at 3 PM (EST) on 07 Oct 2004: range 35 km, lidar signal in linear scale

Fig. 11. Comparison of max Pb (NMax) versus min Pb (NMin) lidar signals at noon (EST) on 07 Oct 2004: range 35 km, lidar signal in linear scale; the two signals overlap

Fig. 12. Comparison of experimental return signals at 6:29 PM, 3 PM and noon on 07 Oct 2004, range of 20-30 km; both orthogonal cases are shown.

The relative impact of the polarization discrimination scheme on the sky background signal, Pb, is seen to be largest at 6:29 PM, when the lidar solar angle is large (89°), while at noon it is minimal. The detected signal for maximum Pb is much noisier than the detected signal with minimum Pb, except in the noon measurement. This is consistent with the shot noise limit applicable to PMTs, where the detected noise amplitude $\Delta P$ (standard deviation) is proportional to the square root of the mean detected background signal $\bar{P}$ (i.e., $\Delta P \propto \sqrt{\bar{P}}$), where $P$ is the detector output, whose mean value is proportional to Pb. This relation is most conveniently expressed in terms of the ratio of the detected signals at the orthogonal polarization states, $R = P_b^{\max}/P_b^{\min}$, in which case the shot noise condition becomes $\Delta P^{\max}/\Delta P^{\min} = \sqrt{R}$. This relation has been verified in our experiments and the results are summarized in Table 4.


| Time | $\bar{P}_b^{\min}$ | $\Delta P^{\min}$ | $\bar{P}_b^{\max}$ | $\Delta P^{\max}$ | $R$ | $\Delta P^{\max}/\Delta P^{\min}$ | $\sqrt{R}$ |
|---|---|---|---|---|---|---|---|
| Noon | 6.7 | 0.46 | 6.83 | 0.46 | 1.2 | 1.019 | 1.09 |
| 3:00 PM | 1.41 | 0.22 | 5.27 | 0.22 | 3.72 | 1.82 | 1.9 |
| 6:29 PM | 0.159 | 0.074 | 0.795 | 0.074 | 5.2 | 2.2 | 2.2 |

Table 4. Comparison of experimental results to verify shot noise operation ($\Delta P^{\max}/\Delta P^{\min} \approx \sqrt{R}$)

In assessing the extent to which the polarization discrimination detection scheme can improve the SNR and the operating range, I compare the detected SNR with a polarizer to that which would be obtained if no polarization filtering were used. When shot noise from background light is large compared to that from the lidar signal backscatter, the SNR improvement can be expressed in terms of an SNR improvement factor ($G_{imp}$), written in terms of the maximum and minimum Pb measurements ($P_b^{\max}$, $P_b^{\min}$) as:

$$G_{imp} = \frac{SNR_{Max}}{SNR_{Unpol}} = \sqrt{\frac{P_b^{\min} + P_b^{\max}}{P_b^{\min}}} \tag{2}$$
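As a numerical check of Eq. (2) and of the shot-noise relation, the snippet below uses the 6:29 PM means and standard deviations quoted above; it gives $G_{imp} \approx 2.4$ and a measured noise-amplitude ratio close to $\sqrt{R}$, consistent with Table 4.

```python
import math

# 6:29 PM measurements from the text (mean and standard deviation, detector units).
p_min, dp_min = 0.15973, 0.07448   # analyzer set for minimum sky background
p_max, dp_max = 0.79552, 0.16441   # orthogonal (maximum background) setting

R = p_max / p_min                            # ratio of mean backgrounds
G_imp = math.sqrt((p_min + p_max) / p_min)   # Eq. (2): polarized vs. unpolarized SNR

print(f"R           = {R:.2f}")
print(f"sqrt(R)     = {math.sqrt(R):.2f}  (expected noise-amplitude ratio)")
print(f"dPmax/dPmin = {dp_max / dp_min:.2f}  (measured noise-amplitude ratio)")
print(f"G_imp       = {G_imp:.2f}")
# A G_imp of 2.5 would cut the averaging time needed for a given SNR by (1/2.5)**2.
```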

To examine how the decreased Pb translates into a SNR improvement, Fig. 13 shows the range dependent SNR obtained for both maximum and minimum noise polarization orientations for a representative lidar measurement. The results show that for SNR=10, the range improvement resulting from polarization discrimination resulted in an increase in lidar operating range from 9.38 km to 12.5km (a 34% improvement). Alternatively, for a given lidar range, say 9 km, the SNR improvement was 250%.

Another useful way of looking at the effect of the SNR improvement is to note that the SNR improves as the square root of the detector's averaging time. Thus a 250% improvement in SNR is equivalent to reducing the required averaging time by a factor of $(1/2.5)^2$.

#### **9.4 SNR Improvement with respect to solar zenith angle**

The SNR improvement factor ($G_{imp}$) is plotted as a function of the local time in Fig. 14 and of the solar zenith angle in Fig. 15. Since the solar zenith angle retraces itself as the sun passes through solar noon, it would be expected that the improvement factor would be symmetric before and after solar noon and depend solely on the solar zenith angle. This symmetry is observed in Figs. 14 and 15 for measurements made on 19 February 2005 and is supported by the relatively small changes in aerosol optical depth (AOD) values obtained from a collocated shadow band radiometer (morning AOD ≈ 0.08, afternoon AOD ≈ 0.11).

Fig. 13. Experimental range-dependent SNR for maximum and minimum polarization orientations

Fig. 14. Gimp at the detection wavelength of 532 nm versus local time on 19 February 2005

Fig. 15. Gimp at the detection wavelength of 532 nm versus solar zenith angle on 19 February 2005

### **9.5 Effect of variable precipitable water vapor on SNR**

Symmetry was, however, not always observed in our experimental results. Fig. 16 shows Gimp plotted as a function of the solar zenith angle for 23 February 2005. Small asymmetries were observed. These appear to be related to changes in humidity, which can modify the scattering properties and lead to enhanced multiple scattering effects. The results are supported by the variation in Precipitable water vapor (PWV) shown in Fig. 17, obtained from the CCNY Global Positioning System GPS measurements which were processed by the NOAA Forecast Systems Laboratory (FSL) (NOAA Web) for both days.

Fig. 16. Gimp at the detection wavelength of 532 nm versus solar zenith angle on 23 February 2005

Fig. 17. PWV (cm) loading versus local time on 19 February 2005 and 23 February 2005

On 23 February, the aerosol optical depth measurements from the shadow band radiometer show larger proportional changes (morning AOD ≈ 0.16, afternoon AOD ≈ 0.09) than those of 19 February, which is consistent with the asymmetry in the PWV, with higher optical depths corresponding to high PWV (and RH%) conditions.

#### **9.6 Azimuthal dependence of the SNR improvement**

Within single scattering theory, the polarization orientation at which the minimum Pb occurs should equal the azimuth angle of the sun (see Fig. 7). To validate this result, the polarizer rotation angle was tracked (by rotating the detector analyzer) over several seasons since February 2004 and compared with the azimuth angle calculated using the U.S. Naval Observatory standard solar position calculator (Applications); the example shown is for 14 April 2005. As expected, the polarizer rotation angle needed to achieve a minimum Pb closely tracks the azimuth angle (Fig. 18).


Fig. 18. Comparison between solar azimuth angle and angle of polarization rotation needed to achieve minimum Pb: 14 April 2005

This relationship is important since it allows us to conceive of an automated approach that makes use of a pre-calculated solar azimuth angle as a function of time and date to automatically rotate and set both the transmitted lidar polarization and the detector polarizer at the orientations needed to minimize Pb. With an appropriate control system, it would then be possible to track the minimum Pb by rotating the detector analyzer and the transmission polarizer simultaneously to maximize the SNR, achieving the same results as would be done manually as described above.
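A minimal sketch of such an automated setting is shown below. It uses a textbook low-precision solar-position approximation (declination plus hour angle, ignoring the equation of time and atmospheric refraction) to produce the azimuth that the analyzer and half-wave-plate controller would track; an operational system would instead query a standard ephemeris such as the U.S. Naval Observatory calculator cited above.

```python
import math

def solar_azimuth_deg(day_of_year, solar_hour, lat_deg):
    """Approximate solar azimuth (deg east of north) for the given local solar time.

    Low-precision formulas: declination from day of year, hour angle from solar
    time; good to a degree or two, which is adequate for rotating a polarizer.
    """
    decl = math.radians(-23.45) * math.cos(2 * math.pi * (day_of_year + 10) / 365.0)
    lat = math.radians(lat_deg)
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    sin_alt = math.sin(lat) * math.sin(decl) + math.cos(lat) * math.cos(decl) * math.cos(hour_angle)
    alt = math.asin(sin_alt)
    cos_az = (math.sin(decl) - sin_alt * math.sin(lat)) / (math.cos(alt) * math.cos(lat))
    az = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
    return az if hour_angle < 0 else 360.0 - az   # mirror into the afternoon sector

# CCNY site latitude, a mid-April day, a few solar hours through the day.
for h in (8, 10, 12, 14, 16):
    print(f"solar hour {h:2d}: azimuth ~ {solar_azimuth_deg(104, h, 40.83):.1f} deg")
```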

## **9.7 Conclusions and summary**

SNR improvements can be obtained for lidar backscatter measurements, using a polarization selection/tracking scheme to reduce the sky background component. This approach can significantly increase the far range SNR as compared to un-polarized detection. This is equivalent to improvements in effective lidar range of over 30% for a SNR threshold of 10. The improvement is largest for large scattering angles, which for vertical pointing lidars occur near sunrise/sunset. Asymmetric skylight reduction sometimes observed in experimental results is explained by the measured increase in PWV and subsequent modification of aerosol optical depth by dehydration from morning to afternoon. It was also demonstrated that the orientation of the scattering plane defining the minimum noise state does not change in multiple scattering but follows the solar azimuth angle even for high aerosol loading. Therefore, it is quite conceivable to automate this procedure simply by using solar position calculators to orient the polarization axes.

## **10. Acknowledgment**

I would like to express my sincere appreciation and thanks to almighty God, Allah. I am also grateful to Drs. S. Ahmed, B. Gross and Moshary for their support during this research at The City University of New York. This work was supported under contract from NOAA # NA17AE1625.

## **11. References**

Ahmed, S., Y. Hassebo, et al. (2006). *Examination of Reductions in Detected Skylight Background Signal Attainable in Elastic Backscatter Lidar Systems Using Polarization Selection*. 23rd International Laser Radar Conference (ILRC), Nara, Japan.

Ahmed, S. A., Y. Y. Hassebo, et al. (2006). *Potential and range of application of elastic backscatter lidar systems using polarization selection to minimize detected skylight noise*. SPIE, Sweden.

Ansmann, A., U. Wandinger, M. Riebesell, C. Weitkamp and W. Michaelis (1992). "Independent measurement of extinction and backscatter profiles in cirrus clouds by using a combined Raman elastic-backscatter lidar." *Appl. Opt.* 33: 7113-7131.

Applications, U. S. N. O. A. "U.S. Naval Observatory Astronomical Applications," http://aa.usno.navy.mil/data/docs/AltAz.html.

Barber, P. and C. Yeh (1975). "Scattering of Electromagnetic Waves by Arbitrarily Shaped Dielectric Bodies." *Appl. Opt.* 14: 2864-2872.

Barnaba, F. and G. Gobbi (2001). "Lidar estimation of tropospheric aerosol extinction, surface area and volume: Maritime and desert-dust cases." *J. Geophys. Res.* 106 (D3): 3005-3018.

Bills, R., C. Gardner and C. She (1991). "Narrowband lidar technique for sodium temperature and Doppler wind observations of the upper atmosphere." *Opt. Eng.* 30(a): 13-21.

Charlson, R. J., J. Langner, et al. (1991). "Perturbation of the northern hemisphere radiative balance by backscattering from anthropogenic sulfate aerosols." *Tellus* 43AB: 152-163.

Charlson, R. J., S. E. Schwartz, et al. (1992). "Climate forcing by anthropogenic aerosols." *Science* 255: 423-430.

Charlson, R. J. E. (1995). *Aerosol forcing of climate*. New York, J. Wiley.

Deirmendjian, D. (1969). *Electromagnetic Scattering on Spherical Polydispersions*. New York.

Douglass, L. R., M. R. Schoeberl, et al. (2000). "A composite view of ozone evolution in the 1995-1996 northern winter polar vortex developed from airborne lidar and satellite observations." *J. Geophys. Res.* 106 (D9): 9879-9895.

Fujii, T. and T. Fukuchi (2005). *Laser Remote Sensing*. Taylor and Francis Group.

Gardner, C. S., et al. (1993). "Simultaneous observations of sporadic E, Na, Fe, and Ca+ layers at Urbana, Illinois: Three case studies." *J. Geophys. Res.* 98: 16,865-16,873.

Gelbwachs, A. (1994). "Iron Boltzmann factor lidar: proposed new remote sensing technique for atmospheric temperature." *Appl. Opt.* 33: 7151-7156.

Gobbi, G. P. (1998). "Polarization lidar returns from aerosols and thin clouds: a framework for the analysis." *Appl. Opt.* 37: 5505-5508.

Granier, C., J. P. Jegou, et al. (1989). "Iron atoms and metallic species in the Earth's upper atmosphere." *Geophys. Res. Lett.* 16: 243-246.

Halldorsson, T. and J. Langerholc (1978). "Geometrical form factors for the lidar function." *Appl. Opt.* 17: 240-244.

Hamamatsu, http://www.hamamatsu.com

Hansen, J. and L. Travis (1974). "Light Scattering in Planetary Atmospheres." *Space Science Reviews* 16: 527-610.

Hassebo, Y., R. Agishev, et al. (2004). *Optimization of biaxial Raman lidar receivers to the overlap factor effect*. Third NOAA CREST Symposium, Hampton, VA, USA.

Hassebo, Y. Y., B. Gross, et al. (2005). *Polarization discrimination technique to maximize LIDAR signal-to-noise ratio*. Polarization Science and Remote Sensing II, SPIE.

Hassebo, Y. Y., B. Gross, et al. (2006). "Polarization discrimination technique to maximize LIDAR signal-to-noise ratio for daylight operations." *Appl. Opt.* 45: 5521-5531.

Hassebo, Y. Y., B. M. Gross, et al. (2005). *Impact on lidar system parameters of polarization selection/tracking scheme to reduce daylight noise*. Lidar Technologies, Techniques, and Measurements for Atmospheric Remote Sensing, SPIE.

Hassebo, Y. Y., Y. Zhao, et al. (2005). *Multi-wavelength Lidar Measurements at the City College of New York in Support of the NOAA-NEAQS and NASA-INTEX-NA Experiments*, IEEE.

Heaps, W. S. and J. Burris (1996). "Airborne Raman lidar." *Appl. Opt.* 35: 7128-7137.

Heaps, W. S., J. Burris and J. French (1997). "Lidar technique for remote measurement of temperature by use of vibrational-rotational Raman spectroscopy." *Appl. Opt.* 36: 9402-9405.

Jegou, J., M. Chanin, et al. (1980). "Lidar measurements of atmospheric lithium." *Geophys. Res. Lett.* 7: 995-998.

Jones, F. E. (1949). "Radar as an aid to the study of the atmosphere." *Royal Aeronautical Society* 53: 433-448.

Junge, C. (1955). "The size distribution and aging of natural aerosol as determined from electrical and optical data on the atmosphere." *J. Meteorol.* 12: 13-25.

Klett, J. D. (1981). "Stable analytical inversion solution for processing lidar returns." *Appl. Opt.* 20: 211-220.

Klett, J. D. (1985). "Lidar inversion with variable backscatter/extinction ratios." *Appl. Opt.* 24: 1638-1643.

Kokkinos, D. S. and S. A. Ahmed (1989). *Atmospheric depolarization of lidar backscatter signals*. Lasers '88 International Conference, Lake Tahoe, NV, STS Press.

Kovalev, V. and H. Moosmüller (1994). "Distortion of particulate extinction profiles measured with lidar in a two-component atmosphere." *Appl. Opt.* 33: 6499-6507.

Kovalev, V. and W. Eichinger (2004). *Elastic Lidar: Theory, Practice, and Analysis Methods*. New Jersey, Wiley.

Liou, K. N. (2002). *An Introduction to Atmospheric Radiation*. California, Academic Press.

McClung, F. J. and R. W. Hellwarth (1962). "Giant Optical Pulsations from Ruby." *Appl. Phys.* 33: 828-829.

Measures, R. M. (1984). *Laser Remote Sensing: Fundamentals and Applications*. NY, Wiley.

Measures, R. M. and G. Pilon (1972). "A Study of Tunable Laser Techniques for Remote Mapping of Specific Gaseous Constituents of the Atmosphere." *Opto-electronics* 4: 141-153.

Middleton, W. E. K. and A. F. Spilhaus (1953). *Meteorological Instruments*. Toronto, University of Toronto Press.

Mie, G. (1908). *Annalen der Physik* 24: 376-445.

MODIS Collection 5 Aerosol Retrieval Theoretical Basis Document.

NOAA, http://www.fsl.noaa.gov

NOAA-CREST, http://earth.engr.ccny.cuny.edu/noaa/wc/DailyData/

Petri, K., A. Salik and J. Cooney (1982). "Variable-Wavelength Solar-Blind Raman Lidar for Remote Measurement of Atmospheric Water-Vapor Concentration and Temperature." *Appl. Opt.* 21: 1212-1218.

Platt, C. M. R. (1977). "Lidar observation of a mixed-phase altostratus cloud." *J. Appl. Meteorol.* 16: 339-345.

Platt, C. M. R. (1981). *Transmission and reflectivity of ice clouds by active probing*. Clouds, Their Formation, Optical Properties, and Effects, San Diego, Calif., Academic.

Renaut, J., C. Pourny and R. Capitini (1980). "Daytime Raman-lidar measurements of water vapor." *Optics Letters* 5: 233-235.

Rodriguez, M., R. Sauerbrey, et al. (2002). *Optics Letters* 27: 772.

Roy, N., G. Roy, et al. (2004). "Measurement of the azimuthal dependence of cross-polarized lidar returns and its relation to optical depth." *Appl. Opt.* 43: 2777-2785.

Sassen, K., et al. (1992). "Simulated polarization diversity lidar returns from water and precipitating mixed phase clouds." *Appl. Opt.* 31: 2914-2923.

Sassen, K. (1974). "Depolarization of laser light backscattered by artificial clouds." *J. Appl. Meteorol.* 13: 923-933.

Sassen, K. (1979). "Scattering of polarized laser light by water droplet, mixed-phase and ice crystal clouds. 2. Angular depolarization and multiple scatter behavior." *J. Atmos. Sci.* 36: 852-861.

Sassen, K. (1994). "Advances in polarization diversity lidar for cloud remote sensing." *Proc. IEEE* 82: 1907-1914.

Sassen, K. and R. L. Petrilla (1986). "Lidar depolarization from multiple scattering in marine stratus clouds." *Appl. Opt.* 25: 1450-1459.

Schotland, R. M. (1966). *Some Observations of the Vertical Profile of Water Vapor by a Laser Optical Radar*. 4th Symposium on Remote Sensing of the Environment, Univ. of Michigan.

Schotland, R. M., K. Sassen, et al. (1971). "Observations by lidar of linear depolarization ratios by hydrometeors." *J. Appl. Meteorol.* 10: 1011-1017.

She, C. and J. Yu (1994). "Simultaneous three-frequency Na lidar measurements of radial wind and temperature in the mesopause region." *Geophys. Res. Lett.* 21: 1771-1774.

Spinhirne, J. D. (1991). *Lidar aerosol and cloud backscatter at 0.53, 1.06 and 1.54 μm*. Presented at the 29th Aerospace Sciences Meeting, Reno, NV.

Spinhirne, J. D. (1993). "Micro pulse lidar." *IEEE Transactions on Geoscience and Remote Sensing* 31: 48-54.

Stein, B., C. Wedekind, et al. (1999). "Optical classification, existence temperatures, and coexistence of different polar stratospheric cloud types." *J. Geophys. Res.* 104 (D19): 23983-23993.

Stephens, G. L. (1994). *Remote Sensing of the Lower Atmosphere: An Introduction*. New York, Oxford Univ. Press.

Velotta, R., B. Bartoli, et al. (1998). "Analysis of the receiver response in lidar measurements." *Appl. Opt.* 37: 6999-7007.

Welton, E., J. Campbell, et al. (2001). *First Annual Report: The Micro-pulse Lidar Worldwide Observational Network*. Project Report.

Wiscombe, W. J. (1980). "Improved Mie Scattering Algorithms." *Appl. Opt.* 19: 1505.

Wöste, L., C. Wedekind, et al. (1997). *Laser und Optoelektronik* 29(5): 51.

Zuev, V., V. Burlakov, et al. (1998). "Ten Years (1986-1995) of lidar observations of temporal and vertical structure of stratospheric aerosol over Siberia." *J. Aerosol Sci.* 29: 1179-1187.


## **Smart Station for Data Reception of the Earth Remote Sensing**

## Mykhaylo Palamar

*Department of Devices and Control-Measurement Systems, Information Technique and Intelligent Systems Research Laboratory Ternopil National Technical University Ukraine* 

## **1. Introduction**


The technology of Earth remote sensing (ERS) provides huge information resources and has the potential to influence socio-economic development as well as security and defence. However, the mass use of remote sensing technologies demands the creation of a network with the technical means of reception and online access to remote sensing data for consumers. The primary source of remote sensing data is an aerial station (AS) that receives information coming from a spacecraft (SC). Typically, these stations are special objects (mainly military) intended to receive, process and disseminate remote sensing data.

For the effective use of ERS data, it is necessary to bring it closer to the end user. This requires universal compact antenna stations of a consumer class, including mobile ones.

This chapter reviews the principles, structures, models and analysis of various technical solutions and the key features, basic functions and control algorithms that are used to create universal automatic ASs (terminals with remote control) and software to control such ASs so as to get remote sensing information from the spacecraft.

The idea of an intelligent "personal" aerial station for information reception is proposed, proceeding from its function. Such a station can be used by small groups or individual researchers directly engaged in contextual information processing, i.e. university laboratories, scientific centers and other organizations interested in such information.

The results of the author's practical experience in the creation of remote sensing ASs with different types of rotary support devices and with parabolic reflectors of various diameters (from 3 to 12 m) are given. Experimental results of the operation of control systems for remote sensing stations using artificial neural network algorithms are presented.

## **2. The structure and principle of the functioning of terrestrial antenna stations for remote sensing data reception**

The following conditions are necessary for an ERS system to function:

1. Low-orbital satellites with filming and recording equipment onboard;


Satellite trajectories, which are calculated for the next session, are loaded into the PC control unit in a table view before the session with the spacecraft. The control data includes codes for the antenna's angular position and the corresponding angular velocity codes. They are transferred from the PC to the high-level equipment of the antenna control system via a communication interface. The PC monitors the antenna position by means of the angular coordinates received from the respective antenna sensors. Moreover, it is necessary to monitor the status of limit switches, the track time, speed and other parameters. The control system needs to be synchronised with a GPS time system in order to ensure the management of the antenna system in real time.
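A schematic illustration of how such a pre-computed trajectory table might be consumed by the control software is sketched below; the record layout, the `drive` and `clock` interfaces and the 0.5° tolerance are illustrative assumptions, not details of the actual station hardware.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TrajectoryPoint:
    t_s: float          # seconds from session start (synchronised to GPS time)
    azimuth_deg: float
    elevation_deg: float
    az_rate_dps: float  # commanded angular velocities between table points
    el_rate_dps: float

def run_tracking_session(table: List[TrajectoryPoint], drive, clock):
    """Feed a pre-computed satellite pass to the antenna drive, point by point.

    `drive` must expose command(az, el, az_rate, el_rate) and read_position();
    `clock` must expose now_s(); both stand in for the real control interface.
    """
    for point in table:
        while clock.now_s() < point.t_s:      # wait for the time tag of the entry
            pass
        drive.command(point.azimuth_deg, point.elevation_deg,
                      point.az_rate_dps, point.el_rate_dps)
        az, el = drive.read_position()        # verify against the angle sensors
        if abs(az - point.azimuth_deg) > 0.5 or abs(el - point.elevation_deg) > 0.5:
            raise RuntimeError("pointing error exceeds tolerance; abort the pass")
```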

Information is transmitted via the communication network to the computer after the session's end. The computer has to perform zero-level processing (unpacking the flow and binding the onboard time to terrestrial time) and referencing to geographical coordinates.

As was noted in the studies of the India Space Department, more often than not remote sensing technology has not yet been effectively used, despite the whole complex of remote

The main causes of this are the isolation of consumers from the remote sensing data processing centre, the lack of remote sensing receiving stations and the difficulties involved in gaining access to RS data. Moreover, the important factors are: an insufficient amount of software products and qualified staff in the field of contextual RS data processing, though

Currently, a mainly centralised access method for remote sensing information is used. This approach involves the receiving, processing and dissemination of data only through big centres for space information receiving, often involving military organisations. Such data centres can be compared with the big computer centres from the 1970s that acted as service providers for complex calculations on request. These were non-dynamic structures and ineffective for a wide range of customers. As such, the genuine active development and implementation of informational technologies into daily life began with the popularisation of personal computers, when a wide range of interested consumers became involved in

However, other technologies related to the distributed method of reception and processing of information are emerging. This information is received locally by organizations interested in such information by means of their own aerial terminals. In such cases the information reaches the user more quickly and more users can work on data processing and analysis concerning their subject-matter. To use this technology, it is necessary to provide users with inexpensive and easy-to-use 'personal' RS data receiving stations. Such stations can

The structure of the remote sensing antenna complex includes the following main blocks:



**2.1 The concept of smart "personal" earth stations for remote sensing** 

partially this is a consequence of the reasons already addressed.


management of the antenna system in real-time.

sensing satellites available for the country.

working with information.


The general scheme of a satellite monitoring system is shown in Fig. 1.

Fig. 1. General scheme of a satellite monitoring system (a). Receiving antenna where the reflector has a diameter of 12 m (b).

According to NASA reports, at present 6,130 artificial satellites have been launched into space, 957 of which are operating in different Earth orbits. Nearly 7 %, i.e. more than fifty of them, are intended for remote sensing. Nearly 40 countries are directly involved in programmes involving satellite observations, and their number is constantly growing. The trend is that the number of spacecraft is growing and the resolution of the recording equipment is increasing (several tens of cm). New technologies of satellite monitoring have appeared (e.g., the miniaturisation of equipment, the usage of micro- and nano-satellites, satellite clusters and the integration of different projects). University (student) satellites and those of other branch research organizations are being launched. New technologies for surveying the territories ordered by customers are being applied (Hnatyshyn & Shparyk, 2000).

Ground infrastructure remote sensing systems consist of centres receiving and processing data from spacecraft, with web portals to access the catalogues, archives and operational information from space. The necessary components are: the marketing of software products for thematic data processing systems and the training of qualified personnel.

Highly sensitive antenna systems and equipment for the reception, demodulation and decoding of the electromagnetic microwave signals from spacecraft, and for the extraction of the encrypted data streams, are also necessary in order to receive data from satellites.

The technology of ERS data reception is more difficult than data reception from geostationary satellites due to the need for tracking remote sensing spacecraft.

Antenna systems with hardware and software controls should automatically direct the focal axis of the reflector of the antenna system towards the predicted location point of the spacecraft so as to ensure its tracking. The signals from the satellites are received by the antenna during the spacecraft's tracking.

The structure of the remote sensing antenna complex includes the following main blocks:

*(Block diagram: converter (12-18 GHz / 8.192 GHz) with a 465 МHz output, unit of receiving and decoding, control unit of pointing of the AS, PC control unit, and a PC for registration, archiving, visualization and transmission of the information.)*



The general scheme of a satellite monitoring system is shown in Fig. 1.

Fig. 1. General scheme of a satellite monitoring system: (a) structure of the ERS system; (b) receiving antenna with a reflector diameter of 12 m.


Satellite trajectories - which are calculated for the next session - are loaded into the PC control unit in a table view before the session with the spacecraft. The control data includes codes for the antenna's angular position and velocity codes for the change. They are transferred to the high-level equipment of the antenna control system from the PC via a communication interface. The PC monitors the antenna position by broadcasting the angular coordinates received from the respective antenna sensors. Moreover, it is necessary to monitor the status of limit switches, the track time, speed and other parameters. The control system needs to be synchronised with a GPS time system in order to ensure the management of the antenna system in real-time.
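
As an illustration only, the pre-computed trajectory could be held as a table of time-stamped pointing commands that the PC loads before the session; the record layout and field names below are hypothetical and are not taken from this chapter.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PointingRecord:
    """One row of the pre-session pointing table (hypothetical layout)."""
    t_s: float            # session time in seconds, synchronised with GPS time
    azimuth_deg: float    # commanded antenna azimuth
    elevation_deg: float  # commanded antenna elevation
    az_rate_dps: float    # commanded azimuth rate, deg/s
    el_rate_dps: float    # commanded elevation rate, deg/s

def load_pointing_table(rows: List[List[float]]) -> List[PointingRecord]:
    """Convert a plain numeric table (e.g. parsed from a file) into pointing records."""
    return [PointingRecord(*row) for row in rows]

# Minimal usage sketch with two fictional rows.
table = load_pointing_table([
    [0.0, 120.0, 5.0, 0.4, 0.1],
    [1.0, 120.4, 5.1, 0.4, 0.1],
])
```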

Information is transmitted via the communication network to the computer after the session's end. The computer has to perform zero-level processing (unpacking the flow and binding the onboard time to terrestrial time) and referencing to geographical coordinates.

## **2.1 The concept of smart "personal" earth stations for remote sensing**

As was noted in the studies of the India Space Department, more often than not remote sensing technology has not yet been effectively used, despite the whole complex of remote sensing satellites available for the country.

The main causes of this are the isolation of consumers from the remote sensing data processing centre, the lack of remote sensing receiving stations and the difficulties involved in gaining access to RS data. Moreover, the important factors are: an insufficient amount of software products and qualified staff in the field of contextual RS data processing, though partially this is a consequence of the reasons already addressed.

Currently, a mainly centralised access method for remote sensing information is used. This approach involves the receiving, processing and dissemination of data only through big centres for space information receiving, often involving military organisations. Such data centres can be compared with the big computer centres from the 1970s that acted as service providers for complex calculations on request. These were non-dynamic structures and ineffective for a wide range of customers. As such, the genuine active development and implementation of informational technologies into daily life began with the popularisation of personal computers, when a wide range of interested consumers became involved in working with information.

However, other technologies related to the distributed method of reception and processing of information are emerging. This information is received locally by organizations interested in such information by means of their own aerial terminals. In such cases the information reaches the user more quickly and more users can work on data processing and analysis concerning their subject-matter. To use this technology, it is necessary to provide users with inexpensive and easy-to-use 'personal' RS data receiving stations. Such stations can significantly change their activities in relation to a number of areas connected to the use of space informational technologies, as with the appearance of the PC.

A personal RS data receiving station is a relatively cheap, automated, simple-to-use (including a mobile version) antenna station designed for use by groups directly engaged in subject-matter data processing and decision making (or in preparing guidelines for decision-making by management departments). These may be universities, research laboratories, institutes or departments in control organisations. The key characteristic features of such stations should be:

- The use of standard PC configurations;
- Compactness and simplicity of operation and maintenance;
- Integration with processing technologies and the storage and thematic analysis of data;
- Affordable price.

Personal stations allow for the reduction of the access time to remote sensing data and the cheapening and loosening of access for a wide range of users. This solves one of the main requirements of remote sensing data – the efficiency of the acquisition of actual space information about the earth's surface and its objects.

Connecting a wider range of consumers - including the involvement of university science departments and the practical training of staff in the area of thematic data processing - allows for the more effective usage of satellite monitoring data for the stable growth and security of countries (according to the GEOSS and GMES programmes, etc.).

The availability of such systems will make remote sensing data an effective information tool for assessing situations and decision-making.

Important features of a personal remote sensing data receiving antenna station should include:

1. The prediction and calculation of the trajectory of spacecraft which are selected by their orbital data from the spacecraft catalogues and the coordinates of the station;
2. Software calibration and the accompaniment of the selected spacecraft on its trajectory with the minimal acceptable error;
3. The tracking of the signal maximum from the spacecraft during its accompaniment and correction of the calculated accompaniment trajectory if necessary;
4. The reception and demodulation of the radio signal and the selection of the information flow;
5. Real-time data processing;
6. Data visualisation, archiving and storage;
7. Self-checking and the self-diagnosis of the units and the station as a whole;
8. Adaptiveness to the effects of various factors, both external and internal;
9. Connectivity with other stations and external terminals for synchronisation and coordination.

Such functionality would allow the staff to focus on online access and contextual information processing instead of focusing on hardware.

Below, the technical problems that we had to solve while creating a series of antenna stations for satellite tracking, the reception of remote sensing data and the broadcasting of command information to the satellite are described.

## **2.2 Features and problems that must be addressed during the station's creation**

Since the position of the spacecraft for low-orbit remote sensing changes all the time, both hardware and software tools for the controlling and tracking of a satellite in its orbit play an important role in the structure of terrestrial receivers. The required accuracy and acceptable errors in coordinate tracking depend on the direction chart of the aerial and the diameter of its mirror.

Problems involved in AS creation for tracking the remote sensing satellite are caused by the following factors: the low-orbital trajectory of the remote sensing satellite requires the use of a high-dynamic supporting-rotating device for the antenna with the relevant control systems, and an increase in the image spatial resolution from the satellite requires an acceleration of the information flow transmission rate, which in its turn leads to the enlargement of the diameter of the reflecting surface of the antenna reflector (diameters varying from 3 m up to 12 m) and of its weight as well (Garbuk & Gershenson, 1997).

The speed of information flow is defined as:

$$\mathbf{C} = \frac{\mathbf{L} \cdot \mathbf{V}}{r^2} \cdot \mathbf{I} \cdot \mathbf{N} \cdot \mathbf{K} \; \tag{1}$$

where:


L – the width of the Earth's view;

V – the velocity of the sub-satellite point;

I – the number of bits per pixel of the image;

N – the amount of information channels;

K – the coefficient of the coding noise immunity type;

r – the resolution of the Earth's surface survey capability:

$$r = \frac{\lambda}{D}\cdot H \tag{2}$$

where:

λ – the wavelength;

H – the height of the spacecraft;

D – the diameter of the lens.
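
A minimal sketch of formulas (1) and (2); the numeric values in the usage example are purely illustrative assumptions, not parameters quoted in this chapter.

```python
def ground_resolution_m(wavelength_m: float, aperture_m: float, height_m: float) -> float:
    """Formula (2): r = (lambda / D) * H."""
    return wavelength_m / aperture_m * height_m

def data_rate_bps(swath_m: float, ground_speed_mps: float, r_m: float,
                  bits_per_pixel: float, channels: int, coding_factor: float) -> float:
    """Formula (1): C = (L * V / r**2) * I * N * K."""
    return swath_m * ground_speed_mps / r_m ** 2 * bits_per_pixel * channels * coding_factor

# Illustrative values only (assumed): a 0.5 m optical aperture at 670 km altitude.
r = ground_resolution_m(wavelength_m=0.5e-6, aperture_m=0.5, height_m=670e3)
C = data_rate_bps(swath_m=20e3, ground_speed_mps=7000.0, r_m=r,
                  bits_per_pixel=8, channels=1, coding_factor=1.2)
print(f"r = {r:.2f} m, C = {C / 1e6:.0f} Mbit/s")
```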

The larger the diameter of the reflector, the narrower the antenna direction chart becomes, which leads to the need to increase the dynamic pointing accuracy. For instance, for the AS TNA-57 used for receiving data from the remote sensing Ukrainian satellite 'Sich-2' in the Centre for Space Information Monitoring and Navigation Field Control (CSIM and NFC), the diameter of the antenna reflector is 12 m, its weight is 5,500 kg, while the total weight of the AS is close to 70,000 kg (Fig. 1b). The width of the antenna direction chart at the 3 dB level is equal to 14 arcmin. Thus, it is necessary to provide speeds of up to 10 degrees/sec with a dynamic tracking error of not more than 1.5 arcmin.
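
As a rough cross-check, using the common approximation for the half-power beamwidth of a parabolic reflector (this rule of thumb is an assumption introduced here, not a figure from the chapter), the quoted 14 arcmin beamwidth of the 12 m reflector corresponds to a wavelength of roughly 4 cm, i.e. an X-band downlink:

$$\theta_{3\,\mathrm{dB}} \approx 70^{\circ}\,\frac{\lambda}{D} \;\;\Rightarrow\;\; \lambda \approx \frac{\theta_{3\,\mathrm{dB}}\cdot D}{70^{\circ}} = \frac{(14/60)^{\circ}\cdot 12\ \mathrm{m}}{70^{\circ}} \approx 0.04\ \mathrm{m}.$$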

The provision of a large dynamic range of motion for large antennas (a reflector with a diameter of 3 m to 12 m) and the need to ensure a small dynamic error for spacecraft guidance and tracking are contradicting requirements. Thus, this leads to a more complicated structure and management system for the AS, which increases the cost of the station.

In addition, for classical azimuth-elevation supporting-rotating devices (Fig.1b) there are "dead" zones for spacecraft tracking, for those trajectories that are close to the zenith relative to the location of the terrestrial stations (Belyanstyi & Sergeev, 1980).

## **3. Structure and algorithms for new constructions of ERS stations**

This section discusses some variants of the construction and algorithms of station control systems – as designed by ourselves - which solve the above mentioned problems in order to create effective stations for receiving information from remote sensing spacecraft. The experimental results of their work are given.

## **3.1 Principles for the functioning of an AS with 3 axes pointing without 'dead zones' accompanying the spacecraft through the zenith**

To reduce the high speeds of ASs and to avoid signal loss in the "dead zones", we developed an AS with a 3-axis Support-Rotating Device (SRD) in which an additional azimuth axis E1 is implemented with a slope γ = 15° relative to the direct azimuth axis E3 and with a rotation range in the horizontal plane the same as that of the basic azimuth axis, ±170° (Fig. 2a).

Fig. 2. An AS with a 3-axial SRD: (а) the "EgyptSat-1" antenna; (b) a simulation model ("EgyptSat-1") of spacecraft accompaniment through the zenith.

The aerial control system should perform an orientation of the direction chart of the reflector towards the spacecraft in real-time, according to the law of the spacecraft's motion relative to the AS's coordinates. As the basis for the calculation of the orbital motion of the spacecraft, a Keplerian model of point motion around a static attracting object is accepted. The satellite trajectory is described through the Keplerian orbit elements (Fig. 3), where:

*i* – the inclination of the orbiting satellite;

Ω – the longitude of the ascending node from Greenwich at the epochal time moment *Т*;

ω – the angular distance of the perigee from the ascending node;

*p* – the orbit parameter, related to the semi-major axis *a* by p = a·(1 − e²);

*e* – orbit eccentricity;


*T* – the epochal time (time moment) at which the satellite passes through the point of the ascending node (the intersection of the equator when moving from south to north).

Fig. 3. Parameters of Satellite orbits

However, in reality the movement of the spacecraft is affected by a series of disturbing factors, the most significant of them being: perturbations from the gravitational anomalies of the Earth, the effect of friction in the upper atmosphere, the influence of the gravity of the Sun and the Moon, and the pressure of sunlight. The spacecraft's motion is described by a system of six first-order differential equations that takes these varying factors into consideration. The task of forecasting the spacecraft's movement at every moment of time is thus reduced to the numerical integration of this sixth-order system of differential equations with initial conditions at a given time t0 (Reshetnev et al., 1988).

Continuously updated data on the spacecraft's orbital parameters, from which the trajectory calculation is performed, is presented in the two-line element format (*.TLE) and can be obtained from the informational satellite catalogues, for instance at http://celestrak.com/NORAD.
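
A minimal sketch of reading a few orbital elements from such a two-line element set; the fixed column layout is that of the standard NORAD TLE format, while the file name is a hypothetical placeholder (further propagation of the orbit would use, for example, an SGP4 implementation).

```python
def parse_tle_line2(line2: str) -> dict:
    """Extract basic orbital elements from line 2 of a NORAD two-line element set."""
    return {
        "inclination_deg": float(line2[8:16]),                # i
        "raan_deg": float(line2[17:25]),                      # longitude of the ascending node
        "eccentricity": float("0." + line2[26:33].strip()),   # e, implied decimal point
        "arg_perigee_deg": float(line2[34:42]),               # angular distance of the perigee
        "mean_anomaly_deg": float(line2[43:51]),
        "mean_motion_rev_per_day": float(line2[52:63]),
    }

# Usage sketch: 'satellite.tle' is a hypothetical file holding the name line and lines 1-2.
with open("satellite.tle") as f:
    name, line1, line2 = [f.readline().rstrip() for _ in range(3)]
elements = parse_tle_line2(line2)
```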

The control system calculates the trajectory according to the orbital parameters data in a topocentric coordinate system in the form of an aimer (pointing) table **R**[tj, αj, βj], where αj and βj are the azimuth angle and the elevation angle of the beam pointing direction of the aerial towards the spacecraft at a time tj.

In order to accompany the spacecraft with this antenna, the control system needs to transform the input coordinates αj, βj from the topocentric azimuth-elevation coordinate system into the local coordinate system of each axis of the AS (array **R**[tj, α1j, α2j, α3j]), where α1j, α2j and α3j are the rotation angles of the axes E1, E2 and E3 of the SRD at time tj.


To target the spacecraft, the control system controller performs a coordinate conversion according to the algorithm:

$$\alpha_2 = \operatorname{arctg}\left(\frac{\cos\gamma\cdot\sin\beta-\sin\gamma\cdot\cos\beta\cdot\cos(\alpha-\alpha_3)}{\sqrt{1-\left(\cos\gamma\cdot\sin\beta-\sin\gamma\cdot\cos\beta\cdot\cos(\alpha-\alpha_3)\right)^2}}\right)+\gamma \tag{3}$$

$$\alpha_1 = \begin{cases} \alpha'_1, & \text{if } X_A \ge 0; \\ \alpha'_1 + 180^{\circ}, & \text{if } X_A < 0 \text{ and } Z_A \ge 0; \\ \alpha'_1 - 180^{\circ}, & \text{if } X_A < 0 \text{ and } Z_A < 0; \end{cases} \tag{4}$$

where:

$$\alpha'_1 = \operatorname{arctg}\left(\frac{\cos\beta\cdot\sin(\alpha-\alpha_3)}{\cos\gamma\cdot\cos\beta\cdot\cos(\alpha-\alpha_3)+\sin\gamma\cdot\sin\beta}\right) \tag{5}$$

$$X_A = \cos\gamma\cdot\cos\alpha_3\cdot\cos\beta\cdot\cos\alpha + \sin\gamma\cdot\sin\beta + \cos\gamma\cdot\sin\alpha_3\cdot\cos\beta\cdot\sin\alpha\,,$$

$$Y_A = -\sin\gamma\cdot\cos\alpha_3\cdot\cos\beta\cdot\cos\alpha + \cos\gamma\cdot\sin\beta - \sin\gamma\cdot\sin\alpha_3\cdot\cos\beta\cdot\sin\alpha\,,$$

$$Z_A = -\sin\alpha_3\cdot\cos\beta\cdot\cos\alpha + \cos\alpha_3\cdot\cos\beta\cdot\sin\alpha\,.$$

α1 – the rotation angle of the main (inclined) azimuth axis Е1,

α2 – the rotation angle of the elevation axis Е2, and

α3 – the rotation angle of the azimuth at the vertical axis Е3.

γ = 15° – the angle of the axis E1 relative to the axis E3.

The range of angle changes:

α – (0…360°), β – (0…90°), α1, α3 – (±170°), α2 – (0…120°).
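
A minimal Python sketch of the conversion (3)-(5) from the topocentric angles (α, β) to the axis angles (α1, α2) for a fixed α3; the compact cos(α − α3) form used here is algebraically equivalent to the expanded XA, YA, ZA expressions above, and atan2 applies the quadrant correction of (4) automatically. This is an illustrative sketch, not the station's actual controller code.

```python
import math

def topocentric_to_axes(alpha_deg: float, beta_deg: float,
                        alpha3_deg: float, gamma_deg: float = 15.0):
    """Pointing angles (alpha, beta) -> tilted-axis angles (alpha1, alpha2)
    for a 3-axis SRD whose azimuth axis E1 is inclined by gamma (Eqs. 3-5)."""
    a, b = math.radians(alpha_deg), math.radians(beta_deg)
    a3, g = math.radians(alpha3_deg), math.radians(gamma_deg)

    # Components of the unit pointing vector expressed in the frame of the inclined axis E1.
    x_a = math.cos(g) * math.cos(b) * math.cos(a - a3) + math.sin(g) * math.sin(b)
    y_a = -math.sin(g) * math.cos(b) * math.cos(a - a3) + math.cos(g) * math.sin(b)
    z_a = math.cos(b) * math.sin(a - a3)

    alpha1 = math.degrees(math.atan2(z_a, x_a))          # Eqs. (4)-(5) folded into atan2
    alpha2 = math.degrees(math.asin(y_a)) + gamma_deg    # Eq. (3) written in arcsin form
    return alpha1, alpha2

# Example: a near-zenith point with the vertical axis pre-rotated to alpha3 = 107 deg.
print(topocentric_to_axes(alpha_deg=110.0, beta_deg=85.0, alpha3_deg=107.0))
```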

During the execution of the accompaniment of a spacecraft with a given aimer table (array **R**[tj, αj, βj]), the controller of the control system has to convert it into the format of local coordinates (array **R**[tj, α1j, α2j, α3j]).

To determine the real data about the AS's position, to compare it with the given aimer table and to issue it to the control and information processing centre, it is necessary to perform the inverse transformation of the "local" coordinates of the axes into the topocentric coordinate system pointing to the spacecraft, according to the correspondences below:

$$\alpha = \begin{cases} \alpha', & \text{if } X_B \ge 0,\; Z_B \ge 0; \\ \alpha' + 360^{\circ}, & \text{if } X_B \ge 0 \text{ and } Z_B < 0; \\ \alpha' + 180^{\circ}, & \text{if } X_B < 0; \end{cases} \tag{6}$$

Where:


$$\alpha' = \operatorname{arctg}\left(\frac{Z_B}{X_B}\right),$$

$$\begin{aligned} X_B &= \cos\gamma\cdot\cos\alpha_3\cdot\cos(\alpha_2-\gamma)\cdot\cos\alpha_1 - \sin\gamma\cdot\cos\alpha_3\cdot\sin(\alpha_2-\gamma) - \sin\alpha_3\cdot\cos(\alpha_2-\gamma)\cdot\sin\alpha_1\,;\\ Y_B &= \sin\gamma\cdot\cos(\alpha_2-\gamma)\cdot\cos\alpha_1 + \cos\gamma\cdot\sin(\alpha_2-\gamma)\,;\\ Z_B &= \cos\gamma\cdot\sin\alpha_3\cdot\cos(\alpha_2-\gamma)\cdot\cos\alpha_1 - \sin\gamma\cdot\sin\alpha_3\cdot\sin(\alpha_2-\gamma) + \cos\alpha_3\cdot\cos(\alpha_2-\gamma)\cdot\sin\alpha_1\,. \end{aligned} \tag{7}$$

$$\beta = \operatorname{arctg}\left(\frac{\cos\gamma\cdot\sin(\alpha_2-\gamma)+\sin\gamma\cdot\cos(\alpha_2-\gamma)\cdot\cos\alpha_1}{\sqrt{1-\left(\sin\gamma\cdot\cos(\alpha_2-\gamma)\cdot\cos\alpha_1+\cos\gamma\cdot\sin(\alpha_2-\gamma)\right)^2}}\right) \tag{8}$$

The control system of such an AS needs to calculate and execute the required angle α3 of the vertical azimuth axis E3 after every calculation, or after receiving the trajectory of the spacecraft via the communication channel, taking into account the mechanical limits of the rotation range of this axis, as follows:

$$\alpha_3 = \begin{cases} \alpha_M, & \text{if } 0 \le \alpha_M \le \alpha_{\theta+}; \\ \alpha_{\theta+}, & \text{if } \alpha_{\theta+} < \alpha_M \le 180^{\circ}; \\ \alpha_{\theta-}, & \text{if } 180^{\circ} < \alpha_M < 190^{\circ}; \\ \alpha_M - 360^{\circ}, & \text{if } 360^{\circ}+\alpha_{\theta-} \le \alpha_M \le 360^{\circ}; \end{cases}$$

where:

αθ+, αθ– – the angles of triggering the limit switches that constrain the rotation of the antenna by the angle α3 (around the axis Е3) in the "plus" and "minus" directions respectively (αθ+ = +170°; αθ– = –170°);

αM – the value of the azimuth corresponding to the maximum elevation angle of the spacecraft (αM = α(t) at max β(t)), determined from the pointing table that is calculated for the selected spacecraft.
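
A small sketch of the α3 selection rule above, with the limit-switch angles (±170°) taken as parameters; the chosen value simply pre-rotates the vertical axis E3 towards the azimuth of the pass culmination.

```python
def select_alpha3(alpha_m_deg: float, limit_pos: float = 170.0, limit_neg: float = -170.0) -> float:
    """Pre-session position of the vertical azimuth axis E3 for a pass whose
    maximum-elevation azimuth is alpha_m_deg (0..360), respecting the limit switches."""
    if 0.0 <= alpha_m_deg <= limit_pos:
        return alpha_m_deg
    if limit_pos < alpha_m_deg <= 180.0:
        return limit_pos
    if 180.0 < alpha_m_deg < 360.0 + limit_neg:   # i.e. 180..190 deg for a -170 deg limit
        return limit_neg
    return alpha_m_deg - 360.0                    # 190..360 deg wraps to the negative side

print(select_alpha3(106.5))   # e.g. the zenith pass discussed below: E3 set to ~106.5 deg
```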

The calculation of the angles α1(t) and α2(t) is performed using the angles α(t), β(t) and α3. Such an AS design and algorithm are implemented in the terrestrial bilateral AS for managing and controlling the telemetry of the remote sensing spacecraft «EgyptSat-1», which is installed and operated in Egypt (Fig. 2a).

Fig. 4a shows the diagram of the "Terra" spacecraft's tracking trajectory through the zenith (the maximum lifting angle β = 90°) in the azimuth-elevation coordinates **R**[t, α, β] of the topocentric coordinate system. The crimson curve represents the targeted angles in azimuth and the yellow one in angular altitude. Fig. 4b represents the diagrams of tracking after the trajectory conversion from the topocentric coordinate system into the coordinate system of the antenna axes **R**[t, α1, α2, α3]. Before the beginning of the session for the given trajectory, the bottom straight azimuth axis E3 is rotated in azimuth towards the direction of the maximum elevation of the spacecraft for the chosen trajectory, constituting in this case an angle of 106°30′.

Fig. 4. Graphs of the trajectory of spacecraft tracking via the zenith: (a) for a 2-axis AS in topocentric coordinates **R**[t, α, β]; (b) for the local axes E1, E2: **R**[t, α1, α2, α3] with the axis Е3 fixed at 107º during the session; (c) the tracking trajectory shown on the map.

As can be seen from the graphs, at a spacecraft's zenith point the velocity of the azimuth axis for a classic 2-axial AS tends towards infinity (Fig. 4a). After the conversion to a 3-axis coordinate system (Fig. 4b), the maximal accompaniment speed of the inclined azimuth axis is not more than 2.5 degrees/sec. This enables the reduction of dynamic errors during the tracking of the spacecraft.

In addition to the software method of tracking along a pre-calculated trajectory of the spacecraft, the AS control system implements the tracking of the spacecraft by an auto-tracking (direction-finding) method, with the goal of maintaining the maximum value of the signal. It is also possible to use a compound method of software tracking with automatic correction of the tracking table according to the signal and additional manual control.

A total-difference (monopulse) type of aerial-feeder device (Fig. 5) is used in the designed aerial system for the execution of automatic satellite tracking according to the direction-finder signal. Besides the main total informational signal, the difference signals for each coordinate, forming the aerial direction-finding characteristic, are received at its output. The difference signal provides information about the value and the sign of the error deviation of the AS from the signal maximum.
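
As an illustration of how the sum and difference signals could close the auto-tracking loop (the loop gain and the signal names are assumptions, not values from the chapter), the normalised error on each coordinate is the ratio of the difference signal to the total signal, which carries both the value and the sign of the deviation.

```python
def monopulse_correction(sum_signal: float, delta_az: float, delta_el: float,
                         gain_deg: float = 0.05):
    """Return (d_azimuth, d_elevation) corrections in degrees from the total (sum)
    signal and the per-coordinate difference signals of the monopulse feed."""
    if sum_signal <= 0.0:
        return 0.0, 0.0                      # no usable signal: fall back to programme tracking
    err_az = delta_az / sum_signal           # normalised error, sign gives the direction
    err_el = delta_el / sum_signal
    return gain_deg * err_az, gain_deg * err_el

print(monopulse_correction(sum_signal=1.0, delta_az=0.02, delta_el=-0.01))
```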

Fig. 6 shows a graph of the error of the antenna beam's angular deviation from the desired trajectory in angular minutes (over the time t = 220 s), which, as seen from the graph, does not exceed 4 angular minutes.

In general, the total combined error of the tracking is a function of time and depends upon the parameters of the control system and the characteristics of the controlling and disturbance signals that affect the system during the process of tracking the spacecraft. As such, the maximum efficiency of remote sensing information reception is achieved with the minimum total tracking error.

Subsection 4 is devoted to a search for the structures and algorithms for efficient system operation employing the use of artificial neural networks.


Fig. 5. Block-scheme of an antenna-feeder device of a total-difference (mono-impulse) type.

Fig. 6. Graph of the error of the antenna beam's angular deviation from the desired trajectory in angular minutes (over the time *t*=220 s).

Due to the enhancement of the AS design and control algorithms, the speed of moving-object tracking at the culminating moment of the spacecraft's pass is significantly reduced, which reduces the requirements for the electromechanical components of the AS and allows the reduction of the dynamic errors involved in tracking. These structural and algorithmic solutions are implemented and tested in the AS "EgyptSat-1".

### **3.2 Antenna System with a rotary device based on the six-axis Stewart platform (Hexapod scheme)**

The disadvantages of all types of classic two-axial and modified three-axial SRD constructions of ASs involve their complexity and the high requirements for the accuracy of rotating mechanisms with a large diameter. This makes antenna systems too ponderous, their support-rotating devices too complex to manufacture and assemble, and their cost too high.

Recently, for tracking along complicated trajectories, mechanisms of manipulators with parallel kinematic units, especially those based on the six-axis Stewart platform (Fig. 7), have become widely used in robotics, machine-tool construction, test benches and other equipment (Stewart, 1965; Fichter, 1986). Such mechanical systems consist of platforms connected by a system of variable (controlled) length sections, and they have certain advantages over rotary mechanisms: for example, a combination of hardness and compactness, reliability, ease of design, manufacturability and study (Nair & Maddocks, 1994; Kolovsky et al., 2000; Afonin et al., 2001). The Stewart platform is the subject of many scientific studies. There are examples of their use in some application problems, provided by the data from the booklets of companies and technical exhibits, but the use of parallel kinematic mechanisms based on the Stewart platform in the mechanisms of the SRD of ASs for tracking various spacecraft trajectories - including low-orbital remote sensing satellites - has not yet been investigated.

Below we consider the construction and an imitation (simulation) model of the AS support-rotating device based on the six-degree-of-freedom Stewart platform (Hexapod scheme) as an alternative to traditional support-rotating devices. We investigated the possibilities and features of such an AS in performing the tracking of low-orbital satellites.

### **3.2.1 Specifics of the schema and construction of an AS with a support-rotating device Hexapod**

A support-rotating device based on a linear drive (Fig. 7) consists of two platforms (one of them is the basis of the SRD and the other is the basis for mounting the antenna reflector) and six actuators, each attached to the upper and lower platforms via a cardan joint.

Fig. 7. Six-axis Stewart platform.

In our laboratory, we developed a research model for the construction of an AS with a support-rotating device based on the Stewart platform (Hexapod) and a control system for it (Fig.8).

The carcass of this support-rotating mechanism has six degrees of freedom, which allows it to rotate the reflector in the air with high accuracy.

A support-rotating device of this construction has benefits compared with classic rotary mechanisms:

- Simplicity of mechanical construction, toughness, easy access to the mechanical units of the aerial, absence of cable twisting;
- No "dead" zones during satellite tracking;
- The low speed of driving actuators for any tracking trajectories of a satellite;
- No restrictions on rotation on the azimuth axis;
- The ability to work in difficult conditions;
- High accuracy in aiming;
- Relatively low cost.

Fig. 8. Antenna System with a support-rotating device based on the Stewart platform (Hexapod).


The main disadvantages of this type of support-rotating device include some limitations at low tilt angles of the reflector and the complexity of the simultaneous motion control of six actuators. Unlike classical AS support-rotating devices, the control of the support-rotating device based on a linear drive demands the precise coordination of the parallel movement of all six actuators simultaneously. The stroke of every actuator must always lie within its admissible range, otherwise the construction may be destroyed or the actuators may fail.

## **3.2.2 Algorithm to control AS based on linear circulating platform**

In the common case, to point the aerial beam at the given azimuth and elevation angle it is necessary to set the lengthening of each actuator to a certain value. In order to find the motion laws of the actuators, let us solve the inverse problem.

Let us define a plane of the support-rotating device in a Cartesian coordinates system with x, y, z, axes to which the reflector of the antenna is mounted. Since the physical size of the upper platform and the mount points of the actuators on it are known, it is possible to find the coordinates of the hinges. Similarly, let us set the base of the support-rotating device (the lower platform) basis and determine the coordinates of the lower hinges.
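
A minimal sketch of this inverse problem: rotate and translate the upper hinge points for the desired platform pose and take the distance to the corresponding lower hinges as the required actuator lengths. The hinge radii and their angular placement below are hypothetical, not the geometry of the laboratory model.

```python
import numpy as np

def hinge_ring(radius: float, angles_deg, z: float) -> np.ndarray:
    """Hinge coordinates placed on a circle of the given radius at height z."""
    a = np.radians(np.asarray(angles_deg, dtype=float))
    return np.stack([radius * np.cos(a), radius * np.sin(a), np.full_like(a, z)], axis=1)

def actuator_lengths(base_hinges: np.ndarray, top_hinges: np.ndarray,
                     rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Inverse kinematics of the Stewart platform: leg lengths for a given pose
    (rotation about the upper-platform centre followed by a translation)."""
    top_world = top_hinges @ rotation.T + translation
    return np.linalg.norm(top_world - base_hinges, axis=1)

# Hypothetical geometry: six hinges per platform, grouped in pairs.
base = hinge_ring(1.2, [5, 115, 125, 235, 245, 355], z=0.0)
top = hinge_ring(0.8, [55, 65, 175, 185, 295, 305], z=0.0)

tilt = np.radians(10.0)                              # tilt the reflector by 10 deg about X
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(tilt), -np.sin(tilt)],
              [0.0, np.sin(tilt), np.cos(tilt)]])
t = np.array([0.0, 0.0, 1.0])                        # assumed mid-stroke platform height, m
print(actuator_lengths(base, top, R, t))
```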


At the maximal lengthening of the actuators, the planes will be maximally remote from each other. At the minimum lengthening the distance between them, it will be at the minimum (Fig. 9). In extreme positions, the planes can be located only when parallel to each other.

Fig. 9. Location of the platform plane at the different lengthening of actuators.

It is clear that the upper platform has to be in the middle position in order to achieve the maximal possible turn of the antenna reflector. As such, the equal motion of the actuator is kept both upwards and downwards.

Let us perform a turn of the upper plane, with the hinges mounted on it, making use of affine isometric transformations of the coordinates.

Three parameters are needed to perform an arbitrary rotation in space: a fixed point of the rotation, the direction of the rotation axis and the rotation angle.


Let us choose a point in the centre of the upper platform as a fixed point that passes into itself (as a result of rotation) (Fig.9b). Consider a vector (i.e., the centre of rotation) set by two points p1 and p2:

$$\mathbf{v} = \mathbf{p}2 - \mathbf{p}1\tag{9}$$

The direction is determined by the order of using these points. Only the direction of this vector is important. Its position in space does not affect the rotation result.

Let us perform a rotation axis vector normalisation to simplify the operation's execution: replace it with the vector of unit length. The second vector has the same direction in space as the first one:

$$\begin{aligned} S &= \sqrt{X^2 + Y^2 + Z^2} \\ X\_N &= X/S \\ Y\_N &= Y/S \\ Z\_N &= Z/S \end{aligned} \tag{10}$$


The rotation is partly simplified if the fixed point (together with the rotation object) is at the origin of the coordinates. Thus, the first operation of the transformation is T(-p0) and the last is T(p0), where T(-p0) and T(p0) are the corresponding translation matrices (Shikin & Boreskov, 1995):

$$T(P\_0) = \begin{bmatrix} 1 & 0 & 0 & a\_x \\ 0 & 1 & 0 & a\_y \\ 0 & 0 & 1 & a\_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{11}$$

$$T(-P\_0) = \begin{bmatrix} 1 & 0 & 0 & -a\_x \\ 0 & 1 & 0 & -a\_y \\ 0 & 0 & 1 & -a\_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{12}$$
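A short sketch of the translation matrices (11) and (12) in homogeneous coordinates (NumPy assumed); the fixed point used in the check is hypothetical.

```python
import numpy as np

def T(p0):
    """Homogeneous 4x4 translation matrix T(p0) of equation (11);
    T(-p0) of equation (12) is obtained by passing the negated point."""
    m = np.eye(4)
    m[:3, 3] = p0                      # the last column holds the shift (ax, ay, az)
    return m

p0 = np.array([0.1, 0.2, 0.3])         # hypothetical fixed point of the platform
assert np.allclose(T(-p0) @ T(p0), np.eye(4))   # T(-p0) undoes T(p0)
```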

Rotation around an arbitrary axis reduces to consecutive rotations around the particular coordinate axes; the main problem is to find the rotation angle for every axis. Let us execute the first two rotation operations to align the rotation axis **v** with the coordinate axis Z, then rotate the object around the axis Z by the necessary angle, and finally execute the previous two turns in reverse order. Accordingly, the matrix of the complex transformation has the form:

$$\mathbf{M} \equiv \mathbf{R}\_x(-\theta\_x)\mathbf{R}\_y(-\theta\_y)\mathbf{R}\_z(\theta\_z)\mathbf{R}\_y(\theta\_y)\mathbf{R}\_x(\theta\_x) \tag{13}$$

The determination of the matrices *Rx(θx)* and *Ry(θy)* forms the most difficult part of the calculations.

Let us consider the components of the vector **v**. Since **v** is a vector of unit length:

$$a\_x^2 + a\_y^2 + a\_z^2 = 1\tag{14}$$

Let us draw a segment from the origin of the coordinates to the point (ax, ay, az). This segment has unit length and the same direction as the vector **v**. Drop perpendiculars from the point (ax, ay, az) to every coordinate axis, as represented in Fig. 10. The three direction angles φx, φy, φz are the angles between the vector **v** and the coordinate axes. The relations between the direction cosines and the components of the vector **v** are:

$$\begin{cases} \cos\varphi\_x = a\_x \\ \cos\varphi\_y = a\_y \\ \cos\varphi\_z = a\_z \end{cases} \tag{15}$$

Only two direction angles are independent, because:

$$\cos^2\varphi\_x + \cos^2\varphi\_y + \cos^2\varphi\_z = 1\tag{16}$$


Fig. 10. Direction angles of elevation.

Knowing the values of the direction cosines, it is possible to calculate the values of the Θx and Θy angles. As seen in Fig. 11, the rotation of the point (ax, ay, az) about the X axis brings the segment into the plane y = 0. The length of the segment's projection (before the turn) onto the plane x = 0 is equal to d.

Fig. 11. Rotation angle placement according to the X axis.

Since the rotation matrix contains sines and cosines instead of angles, there is no need to find the Θx value itself, so the rotation matrix Rx(θx) will be:

$$R\_x(\Theta\_x) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & a\_z/d & -a\_y/d & 0 \\ 0 & a\_y/d & a\_z/d & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{17}$$

And the inversed rotation matrix Rx(–θx) will be:


$$R\_{\mathbf{x}}(-\Theta\_{\mathbf{x}}) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & \sin\theta & 0 \\ 0 & -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & a\_z/d & a\_y/d & 0 \\ 0 & -a\_y/d & a\_z/d & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{18}$$

The elements of the Ry(θy) matrix are calculated in a similar way (Fig.12).

Fig. 12. Rotation angle placement according to the Y axis.

The corresponding rotation matrices are:

$$R\_y(\Theta\_y) = \begin{bmatrix} \cos \theta & 0 & \sin \theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin \theta & 0 & \cos \theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} d & 0 & -a\_x & 0 \\ 0 & 1 & 0 & 0 \\ a\_x & 0 & d & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{19}$$

$$R\_y(-\Theta\_y) = \begin{bmatrix} \cos \theta & 0 & -\sin \theta & 0 \\ 0 & 1 & 0 & 0 \\ \sin \theta & 0 & \cos \theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} d & 0 & a\_x & 0 \\ 0 & 1 & 0 & 0 \\ -a\_x & 0 & d & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{20}$$

Thus the rotation axis (vector **v**) now coincides with the axis Z. Let us then perform the rotation by the needed elevation angle (the angle of the aerial reflector beam pointing):

$$R\_z(\Theta\_z) = \begin{bmatrix} \cos(\Theta) & -\sin(\Theta) & 0 & 0 \\ \sin(\Theta) & \cos(\Theta) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{21}$$
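A hedged NumPy illustration of equations (17)-(21): the elementary matrices are assembled directly from the components (ax, ay, az) of the unit axis vector, with d taken as the length of the axis projection onto the plane x = 0 (Fig. 11); the numerical axis used in the check is hypothetical.

```python
import numpy as np

def rot_x_y(a):
    """Rx(θx) and Ry(θy) of equations (17) and (19), built from the unit axis a = (ax, ay, az).
    d is the length of the projection of a onto the plane x = 0; the case d = 0
    (axis already along X) would need separate handling."""
    ax, ay, az = a
    d = np.sqrt(ay**2 + az**2)
    Rx = np.array([[1,    0,     0, 0],
                   [0, az/d, -ay/d, 0],
                   [0, ay/d,  az/d, 0],
                   [0,    0,     0, 1]])
    Ry = np.array([[ d, 0, -ax, 0],
                   [ 0, 1,   0, 0],
                   [ax, 0,   d, 0],
                   [ 0, 0,   0, 1]])
    return Rx, Ry

def rot_z(theta):
    """Rotation by the elevation angle around Z, equation (21)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

# Check: the first two turns bring the (hypothetical) unit axis onto the Z axis
a = np.array([0.3, 0.4, np.sqrt(1 - 0.3**2 - 0.4**2)])
Rx, Ry = rot_x_y(a)
assert np.allclose(Ry @ Rx @ np.append(a, 1.0), [0.0, 0.0, 1.0, 1.0])
```

The inverse turns Rx(–θx) and Ry(–θy) of equations (18) and (20) are the transposes of these matrices, so the composite matrix of equation (13) can be evaluated as `Rx.T @ Ry.T @ rot_z(theta_z) @ Ry @ Rx`.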


After that we carry out the reverse transformations *Ry(–θy)*, *Rx(–θx)*, *T(–p0)* and obtain the top plane rotated by the given pointing angle corresponding to the aerial beam elevation angle. As a result of the multiplication of all the discovered transformation matrices, we will get the complex matrix *M*:

$$\mathbf{M} \equiv \mathbf{T}(-\mathbf{p}\_0)\mathbf{R}\_x(-\theta\_x)\mathbf{R}\_y(-\theta\_y)\mathbf{R}\_z(\theta\_z)\mathbf{R}\_y(\theta\_y)\mathbf{R}\_x(\theta\_x)\mathbf{T}(\mathbf{p}\_0) \tag{22}$$

Multiplying an arbitrary point of three-dimensional space by the specified complex matrix will turn it around the chosen fixed point in the same space.

After the rotation of the upper platform, we receive new coordinates of the upper ends of the actuator hinges used to mount it to the platform. Having the coordinates of the upper and lower hinges in space, we calculate the distance between them using relation (23) (the actuator lengthening that it was necessary to find):

$$S = \sqrt{(x\_2 - x\_1)^2 + (y\_2 - y\_1)^2 + (z\_2 - z\_1)^2} \tag{23}$$
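A minimal sketch of relation (23), with hypothetical hinge coordinates; in the full algorithm the upper point would first be transformed by the complex matrix M of equation (22).

```python
import numpy as np

def actuator_length(upper, lower):
    """Distance between the upper and lower hinge points, relation (23)."""
    d = np.asarray(upper, dtype=float) - np.asarray(lower, dtype=float)
    return float(np.sqrt(np.sum(d**2)))

# Hypothetical hinge coordinates after the platform rotation
upper_hinge = [0.31, 0.18, 0.83]
lower_hinge = [0.35, 0.25, 0.00]
print(actuator_length(upper_hinge, lower_hinge))
```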

On the basis of the resulting algorithm, a simulation program was developed. It calculates and displays the positions of the actuators and the rates of their movement depending on the azimuth and elevation angles, together with a three-dimensional model of the supporting-turning device and the antenna (Fig. 13). In this model it is possible to set different geometrical parameters of the support-rotating device's construction (Fig. 14). Different dimensions of the construction can be set in the model in order to determine the optimum trade-off between the minimum values of the inclination angles, the speeds and the accuracy of the actuator motion control when constructing the control system.

The control of a supporting-rotating device of the Hexapod type requires precise (coordinated in time) cooperation between the position sensors and the delivery system of the control signal for all six drives. This is needed in order to preserve the system's integrity and to avoid physical damage.

Fig. 13. A three-dimensional model of the support-rotating device.


Fig. 14. Modelling of the constructional parameters of the support-rotating device.

The six actuators form a single system. The control system must provide the simultaneous coordinated parallel control of the 6 drives. The developed control system implements the algorithms of parallel operation on the basis of an FPGA (programmable logic integrated circuit). The block diagram with the cooperation chart of the control system's basic nodes is represented in Fig. 15.

Fig. 15. Interaction scheme between the main units of the AS control system with a Hexapod.

The computer of the control system generates the array of points for each actuator which creates the trajectory. Every point is transferred to an FPGA in which six logical channels are generated. Every channel is responsible for the work of the corresponding actuator and consists of a PID regulator, a PWM controller, a processing module for the actuator sensor signals and a calculation module for the actuator's current position.


In order to call the resources of every channel, a module is created. It provides an interface to access the periphery, provides its own address space for every channel and ensures the integrity of the data passed. Additionally, an interrupt controller is created so as to increase the reaction speed of the whole system. This controller signals emergency events to the control processor.

All of the channels of the control block work synchronously. This provides for simultaneous data reading from the sensors with processing and control actions for all of the actuators. It provides work for all 6 actuators as a single system for tracking the pointing trajectory of the spacecraft.

The graphs of the aimer table transformations from the topocentric system are shown in Fig. 16. A trajectory is set by the arrays of the azimuth and elevation coordinates (**R**[tj,j,j]). These arrays are transformed into the local movement coordinates for each actuator (array **R**[tj,1j,2j,3j,4j,5j,6j]).

Fig. 16. Graphs of the aimer table transformation in the topocentric coordinate system (**R**[tj,j,j]) and in the local coordinate system **R**[tj,1j,2j,3j,4j,5j,6j].

The control program on the control system computer provides the visualisation of a movement diagram for each actuator and their speed; it also provides the calculation of trajectory tracking errors (fig.17).

So, the supporting-rotating device of an aerial system constructed on the basis of a Stewart platform (parallel kinematics structure Hexapod) considerably simplifies the mechanical construction of the AS, but increases the requirements for the schema and algorithms of the control system.

## **4. The use of neural network technology in the control systems of ERS aerial stations**

The calculation of an AS's dynamic parameters for the construction of apparatus-programming devices for aerial guidance control according to the classical method, especially for six-wheeled or six-drive traversing mechanisms such as the Hexapod, is connected with technical difficulties relating to the determination of a series of the AS's real parameters.


Fig. 17. Graph of the tracking of axis 1 of the actuator.

These include the modulus inertia moments, the changes of the resistance friction depending on the inclination angle and on the ratio of the aerial modulus position for the various axes, the rigidity changes of the mechanical transmissions, clearances, the instability of the electric drive characteristics, the stochastic influence of wind loadings, the possible instability of time-sampling and program data processing during coordinate transformation, etc. Such mechanical systems have an essentially non-linear character. The methodological support for the control of multidimensional interconnected dynamic units of such mechanical systems has not been developed sufficiently.

## **4.1 The AS model and its separate elements in the control system**

One of the most effective and important methods for the control of dynamic objects with imprecisely determined parameters is the use of a proportional-integral-differential (PID) controller algorithm with the adaptive adjustment of the PID coefficients:

$$u(t) = K\_p \left[ \varphi(t) + \frac{1}{T\_I} \int\_{-T - \Delta t}^{T} \varphi(t) dt + T\_D \frac{d\varphi(t)}{dt} \right],\tag{24}$$

This expression is converted into a digital form convenient for program realisation on the microcontroller:

$$u(t) = u(t-1) + K\_P(e(t) - e(t-1)) + K\_I e(t) + K\_D(e(t) - 2e(t-1) + e(t-2))\tag{25}$$

where u(t) – the regulator output signal;

*φ(t)* – the deflection of the angular position from the needed target;
*e(t) = r(t) – y(t)* – the regulation error;
*r(t)*, *y(t)* – the target and the actual value of the output signal for the object guidance;
*K*p – the amplification factor in the return circuit;
*T*I, *T*D – the integration and differentiation time constants;
*K*P, *K*I, *K*D – PID coefficients requiring optimal adjustment.
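A minimal Python sketch of the incremental (velocity-form) PID law of equation (25); the coefficients and the setpoint/feedback values below are purely illustrative, not the adjusted values used for the AS.

```python
class IncrementalPID:
    """Velocity-form PID regulator, equation (25):
    u(t) = u(t-1) + KP*(e(t)-e(t-1)) + KI*e(t) + KD*(e(t)-2e(t-1)+e(t-2))."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.u_prev = 0.0
        self.e_prev1 = 0.0   # e(t-1)
        self.e_prev2 = 0.0   # e(t-2)

    def step(self, r, y):
        e = r - y            # regulation error e(t) = r(t) - y(t)
        u = (self.u_prev
             + self.kp * (e - self.e_prev1)
             + self.ki * e
             + self.kd * (e - 2 * self.e_prev1 + self.e_prev2))
        self.u_prev, self.e_prev2, self.e_prev1 = u, self.e_prev1, e
        return u

pid = IncrementalPID(kp=2.0, ki=0.1, kd=0.5)    # illustrative coefficients
print(pid.step(r=1.0, y=0.0))                    # first control action
```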


The discrete transfer function of such a controller is determined by the expression:

$$\mathcal{W}\_p(z) = k\_p \left[ 1 + \frac{T\_0(1 + z^{-1})}{2T\_I(1 - z^{-1})} + \frac{T\_D}{T\_0}(1 - z^{-1}) \right] \tag{26}$$

T0 is the quantisation time, which can be adjusted adaptively depending on the divergence angle while approaching a given coordinate.
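Equation (26) can be examined numerically: the sketch below (with illustrative parameter values) evaluates the discrete transfer function Wp(z) on the unit circle z = exp(jωT0) to obtain the controller's frequency response.

```python
import numpy as np

def Wp(z, kp, TI, TD, T0):
    """Discrete PID transfer function of equation (26)."""
    zi = z**-1
    return kp * (1 + T0 * (1 + zi) / (2 * TI * (1 - zi)) + (TD / T0) * (1 - zi))

kp, TI, TD, T0 = 2.0, 1.5, 0.2, 0.01             # illustrative controller settings
omega = np.linspace(0.1, 100.0, 5)               # rad/s, away from the integrator pole at omega = 0
z = np.exp(1j * omega * T0)
print(np.abs(Wp(z, kp, TI, TD, T0)))             # magnitude response at the chosen frequencies
```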

However, in dynamic processes with variable parameters and interferences, it is rather difficult to ensure optimal coefficient adjustments. Very often, parameters for adaptive control should be chosen by a method of trial and error. There are a wide range of methods and algorithms for PID-controller self-adjustment, mostly resulting in the complication of algebraic calculations and requiring the introduction of many new system parameters (Kuncevych, 1982).

One of the alternatives to the classical models and methods is the creation of a control model based on the use of artificial neural networks (ANNs). ANNs are a group of algorithms described and modelled according to principles analogous to the work of human brain neurons. A neural network is able to compare its output signal with a given training signal and carry out self-adjustment according to certain criteria by means of the automatic selection of various internal weighting factors aimed at minimising the difference between the actual output signal and the training signal.

The functional characteristics of neural networks show that this technology can provide control results much better than those obtained by means of classical controllers and software (Miroshnik et al., 2000; Callan, 2001). The great value of ANN use lies in its universal solution for various types of control objects distinguished by different parameter sets, i.e., the different electro-mechanical modules of ASs and the various types of mounting-traversing device structures and loadings (Golovko, 2001; Zaichenko, 2004). ANNs are not programmed but taught, which is why their solution quality depends mainly upon the quality and quantity of the data needed for teaching.

#### **4.2 Neural network use for the optimisation of control parameters**

The idea of using ANNs in aerial movement control systems is that the main control parameters (PID coefficients, etc.) are ANN outputs adjusted while working through a series of test orbits of AS movements, i.e., ANN teaching (Omata et al., 2000). The scheme of ANN use in an AS's axes control circuit is shown in Fig. 18.

Fig. 18. A scheme for neuron control with self-adjustment.


Fig.19 reflects the structural scheme of a 3-contour AS control system for each of 3 axes of the aerial station "EgyptSat-1" using a neuro-controller for optimal coefficient adjustment in an external control contour.

Fig. 19. Structural-algorithmic scheme of an AS control contour with a neuro-controller.

The internal contour is directly closed in the frequency regulator which controls the voltage and the current of the electric drive for local rotation control. The second is the contour of the AS's axes rotation speed control. The external control contour is closed on the angular position of the AS axes.

A model of the AS control system and the submodels of its separate units (aerial, controller, frequency regulator, motor; Fig. 20) are constructed in the MATLAB/Simulink program complex.

Fig. 20. General AS model with a control system.


The unit for the adjustment and optimisation of the PID-controller's parameters Optimum\_1 is introduced into a submodel of the controller Speed controller (Fig.21).

Fig. 21. Model of a guidance controller.

Error limits of 0.2 degrees on the AS movement deviations from the test sinusoidal guidance table are set in the optimisation unit Block Parameter (Fig. 22).

Fig. 22. The process of PID-control coefficients optimisation.


From the previous results on the initial sections of the GT (guidance table), we can observe that considerable deviations occur as a result of the dynamic resistance moments during the AS's acceleration. To perform an optimal coefficient adjustment, the error limits are extended up to 1.0 degree on the initial orbit section (Fig. 22); otherwise the ANN cannot adjust.

As the result of modelling, the deviation error diagram from the GT can be obtained (Fig.23).

Fig. 23. The modelling of deviation errors for AS tracking along the sinusoidal GT.

Fig. 24. Adjustments of the rate regulation of the impulse functions on 2 axes (1, 2 = 8).


The results of the control PID-coefficient adjustment were tested on the 3-axis AS "EgyptSat-1" by working through various test orbits: specially generated impulse functions (Fig. 24, Fig. 26), sinusoidal functions (Fig. 25), special "high-speed" tables of target designations (Fig. 27) and real satellite orbits.

Fig. 25. Adjustments of the rate regulation on sinusoidal functions (2 = 60).


Fig. 26. Diagram of the impulse AS orbit perfection along the axis.


Fig. 27. Test orbit with a maximum tracking speed of 5 degree/sec.

## **4.3 Neural network use in the contour of aerial axes control**

Another structure for the neural control of the AS as a dynamic object is offered. In this structure, a neural network and a common PID-controller are used at the same time (Fig. 28).

Fig. 28. Parallel scheme of a neuro-controller.

A typical two-layer perceptron with 10 neurons in an intermediate layer was chosen for the contour of the AS's axes control. Synthesis was carried out with the NNTOOL utility in the MATLAB environment. A functional model of the system with a PID and a neuro-controller was created with the Simulink program (Fig. 29). The neuro-controller emulates the operation of the PID-controller. Neural network teaching was executed via the method of backward error propagation. For this purpose, a set of teaching pairs - "input vector"/"right output" - was generated. In such a case, the input vector enters the network input, the states of all the intermediate neurons are calculated in series, and the output vector formed at the exit is compared with the right one. The deviation provides errors which propagate in the reverse direction along the network connections; afterwards, the weighting factors are corrected to reduce them. After repeating this procedure a thousand times, we managed to teach the neural network.
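The following sketch is not the authors' NNTOOL/Simulink model; it is a self-contained NumPy illustration of the same idea: a two-layer perceptron with 10 hidden neurons is taught, by backward error propagation, to reproduce the output of a PID-type mapping from a set of "input vector"/"right output" teaching pairs. All data and hyper-parameters are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Teaching pairs: input = (e(t), e(t-1), e(t-2)), "right output" = incremental PID action
KP, KI, KD = 2.0, 0.1, 0.5                      # illustrative PID coefficients
X = rng.uniform(-1.0, 1.0, size=(1000, 3))
Y = (KP * (X[:, 0] - X[:, 1]) + KI * X[:, 0]
     + KD * (X[:, 0] - 2 * X[:, 1] + X[:, 2])).reshape(-1, 1)

# Two-layer perceptron: 3 inputs -> 10 hidden (tanh) -> 1 linear output
W1 = rng.normal(0, 0.5, size=(3, 10)); b1 = np.zeros(10)
W2 = rng.normal(0, 0.5, size=(10, 1)); b2 = np.zeros(1)
lr = 0.01

for epoch in range(1000):                       # "repeating this procedure a thousand times"
    h = np.tanh(X @ W1 + b1)                    # hidden-layer state
    out = h @ W2 + b2                           # network output
    err = out - Y                               # deviation from the "right output"
    # Backward error propagation and weight correction
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)              # error propagated through the tanh layer
    dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", float((err**2).mean()))
```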



Fig. 29. Functional comparison model of the systems with a PID and a neuro-controller.

Fig. 30 depicts the results of the neural network's operation. Evidently, a simple multilayer perceptron (red graph) gave worse results in comparison with the PID control. The application of a recurrent perceptron, which differs from the previous one by the presence of delay lines on its inputs, gives better results (Fig. 31). However, its disadvantage is insufficient teaching stability. Simulation modelling shows that, with an optimal selection of the neural network topology and the teaching algorithms, it is possible to use it for the effective control of complex dynamic objects, such as large-sized aerial complexes.

Fig. 30. Comparison of the PID-controller's operation with a multilayer perceptron.

By introducing the neuron network into the control scheme, it can be used for the more effective operative adjustment of control parameters by means of its teaching of various test orbits. The strategy of neuron control with self-adjustment can be used for different types of AS drives with various dynamic characteristics.

Fig. 31. Comparison of the PID-controller's operation with a recurrent perceptron.

## **5. Conclusion**


The investigation and search for optimal structures of mounting-traversing devices and control systems for the construction of aerial stations for remote sensing data reception have been carried out in this work. As shown, the models and results of the operation of two types of mounting-traversing AS devices have numerous advantages when compared with classical models and can be used for the creation of personal aerial stations for remote sensing data reception. The application of neural networks in the control systems of ASs for remote sensing data reception can provide for the more accurate operation of control systems for satellite guidance and tracking along the orbit in spite of the uncertainties relating to the constructional and dynamic AS parameters. The use of a neural network in a control circuit also provides considerable advantages over traditional control systems due to the fact that its realisation does not require accurate mathematical models of the control objects.

#### **6. References**


Afonin V.L., Krainov A.F., Kovalev V.E., Lyakhov D.M., Sleptsov V., Processing equipment of new generation. Design concept. – Moscow: Mashinostroenie, 2001. – 256 p.

Belyanstyi P.V., Sergeev B.G., Control of terrestrial antennas and radio telescopes. – M.: Sov. Radio, 1980. – 280 p.

Callan R., The basic concept of neural networks. – Moscow: Publishing House "Williams", 2001. – 288 p.

Fichter E.F., A Stewart platform-based manipulator: general theory and practical construction. – International Journal of Robotics Research, 1986, Vol. 5, No. 2, pp. 157–182.

Garbuk S.V., Gershenson V.E., Space remote sensing. – M.: Publishing house A and B, 1997. – 296 p.

Golovko V.A., Neural networks: training, organization and application. – M.: IPRZHR, 2001.

Hnatyshyn A.M., Shparyk Y.S., Position and tasks of remote sensing (RS) according to the requirements of Derzhheolkarty. – 200

Kolovsky M.Z., Evgrafov A.N., Semenov Yu.A., Slousch A.V., Advanced Theory of Mechanisms and Machines. – Springer-Verlag, 2000. – 394 p.

Kuncevych V.M., Adaptive control to indeterminate dynamic objects // Adaptive control to dynamic objects. – Kiev: Science thought, 1982.

Miroshnik I.V., Fradkov A.L., Nikiforov V.O., Nonlinear and adaptive control of complex dynamic systems. – St. Petersburg: Nauka, 2000. – 653 p.

Nair R., Maddocks J.H., On the forward kinematics of the parallel manipulators. – The International Journal of Robotics Research, 1994, Vol. 13, No. 2, pp. 171–188.

Omata S., Khalid M., Rubiya Y., Neuro-control and its applications. – M.: Radio, 2000. – 272 p.

Reshetnev M.F. and others, Control and navigation satellites in circular orbits. – M.: Engineering, 1988.

Shikin E., Boreskov A., Computer Graphics. Dynamic, realistic images. – M.: "Dialog MIFI", 1995. – 288 p.

Sich-2 Space System: Tasks and Application Areas. – K.: SSAU, 2011. – 48 p. – Ukr. and Eng.

Stewart D., A Platform with Six Degrees of Freedom. – UK Institution of Mechanical Engineers Proceedings 1965-66, Vol. 180, Pt 1, No. 15.

Zaichenko Y.P., Fundamentals of intelligent systems. – K.: Publishing House "Word", 2004. – 352 p.


## **Atmospheric Propagation of Terahertz Radiation**

Jianquan Yao, Ran Wang, Haixia Cui and Jingli Wang

*Tianjin University, China*

## **1. Introduction**


Terahertz (THz) radiation, sandwiched between traditional microwave and visible light, is the part of the electromagnetic spectrum with frequencies from 0.1 to 10 THz (1 THz = 10<sup>12</sup> Hz). Until recently, due to the difficulty of the generating and detecting techniques in this region, the THz frequency band remained unexplored compared to other ranges, and tremendous effort has been made in order to fill in the "THz gap". (Zhang & Xu, 2009)

Recent advances provide new opportunities and widespread potential applications of THz in information and communication technology (ICT), material identification, imaging, nondestructive examination, global environmental monitoring as well as many other fields. The rapid development can be attributed to the nature of terahertz radiation, which offers the advantages of both microwave and light wave. The characteristics of THz atmospheric propagation now rank among the most critical issues in the principal application of space communication and atmospheric remote sensing. (Tonouchi, 2007)

Terahertz communication will benefit from the high-bit-rate wireless technology which takes advantage of higher frequency and broader information bandwidth allowed in this range than microwave. It is possible for such a system to achieve data rate in tens of gigabits per second. (Lee, 2009) However, as shown in Figure 1, the atmospheric opacity severely limits the communication applications at this range (Siegel, 2002) and it is the commercial viability rather than technological issues that will undoubtedly determine whether THz communication will be carried out into practical application.

The overview of the THz remote sensing from the National Institute of Information and Communications Technology (NICT) in Japan is given in Figure 2. (Yasuko, 2008) Many biological and chemical compounds exhibit distinct spectroscopic responses in the THz range, which presents tremendous potential in the environmental monitoring of atmospheric chemical compositions (water, oxygen, ozone, chlorine and nitrogen compounds, etc.) and the identification of climate evolution in the troposphere and lower stratosphere. (Tonouchi, 2007) The knowledge about atmospheric attenuation will indicate the optimum frequency bands for sensing systems, while the material database will discriminate atmospheric components.

Based on these considerations, there are three fundamental problems as follows (Foltynowicz et al., 2005): (1) to confirm the atmospheric transparency in the THz range and find out the air transmission windows for communicating and sensing systems; (2) to collect the spectroscopic fingerprinting of atmospheric molecules for terahertz atmospheric monitoring; (3) to improve the signal-to-noise ratio and restore the original signal from the observed signal by the process of deconvolution. (Ryu and Kong, 2010)


Fig. 1. Atmospheric transmission in the terahertz region at various locations and altitudes: (a) 0-500 GHz, (b) 600-2000 GHz.

Fig. 2. Overview of NICT THz remote sensing.

It is essential to understand the actual effects on the amplitude and phase of THz radiation propagating through the atmosphere, which depend on the frequency of the incident wave, the gas components, and the ambient temperature or barometric pressure in different atmospheric conditions.

This chapter aims to provide the theoretical instructions for the applications above and to illuminate the characteristics of THz atmospheric propagation. The fundamental theory is introduced systematically, with the physical processes of the Lambert-Beer law, Mie scattering theory and so on. Atmospheric absorption, scattering, emission, refraction and turbulence are taken into account, and a special focus is put on the detailed derivation and physical significance of the radiative transfer equation. Additionally, several THz atmospheric propagation models, including Moliere, SARTre and AMATERASU, are introduced and compared with each other. The conclusions are drawn by giving the future evolutions and suggestions for further study in this region.

## **2. Fundamental theories of terahertz atmospheric propagation**

The framework of the fundamental physical concepts and theories in the process of THz atmospheric propagation is shown in Figure 3. The three fundamental physical concepts (atmospheric extinction, atmospheric emission and background radiation) on the left can be uniformly expressed in the radiative transfer equation, which is the foundation of the THz atmospheric propagation model and describes the processes of energy transfer along a given optical path. The other elements (atmospheric refraction and turbulence) result in a correction and optimisation of the integration path-length and of the radiative transfer algorithm in the practical solution procedure.

Fig. 3. The fundamental physical concepts and theories

## **2.1 Fundamental physical processes**

## **2.1.1 Atmospheric extinction**

In the process of the interaction between an electromagnetic wave and a medium, THz radiation is attenuated by absorption as well as by scattering out of its straight path.


The atmospheric extinction is described by the Lambert-Beer law and mainly causes the energy attenuation of the incident wave. The differential and integral forms of the mathematical expression are

$$dI(\nu) = -\alpha\_\nu(z)I(\nu)dz \qquad I\_{r\_1}(\nu) = I\_{r\_0}(\nu)e^{-\int\_{r\_0}^{r\_1} \alpha\_\nu(z)dz} \tag{1}$$

*Ir0(v)* denotes the incident radiance entering the optical path *(r0,r1)* at the frequency *v* and *Ir1(v)* is the outgoing radiance. The opacity or optical thickness is defined as

$$\tau\_\nu(r\_0, r\_1) = \int\_{r\_0}^{r\_1} \alpha\_\nu(z)dz \tag{2}$$

and the transmission is

$$\eta\_{r\_0, r\_1} = \frac{I\_{r\_1}}{I\_{r\_0}} = e^{-\tau\_v(r\_0, r\_1)}\tag{3}$$

The extinction coefficient α*v*(z) can be expressed mathematically as the sum of the absorption and scattering coefficients, α*a* and α*s*, separately:

$$
\alpha\_e = \alpha\_a + \alpha\_s \tag{4}
$$
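A brief numerical illustration of equations (1)-(4): given an illustrative extinction-coefficient profile along the path, the opacity of equation (2) is obtained by trapezoidal integration and the transmission follows from equation (3).

```python
import numpy as np

def transmission(alpha, z):
    """Opacity of equation (2) by trapezoidal integration of the extinction profile,
    and the transmission of equation (3)."""
    tau = float(np.sum(0.5 * (alpha[1:] + alpha[:-1]) * np.diff(z)))
    return tau, np.exp(-tau)

# Illustrative 1 km path with an exponentially decaying extinction coefficient (1/m)
z = np.linspace(0.0, 1000.0, 501)
alpha_a = 1e-3 * np.exp(-z / 400.0)              # absorption part
alpha_s = 2e-4 * np.exp(-z / 400.0)              # scattering part
tau, eta = transmission(alpha_a + alpha_s, z)    # alpha_e = alpha_a + alpha_s, equation (4)
print(f"opacity = {tau:.3f}, transmission = {eta:.3f}")
```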

The atmospheric absorption, particularly from water vapor, involves the linear absorption and continuum absorption, while the atmospheric scattering mainly depends on aerosols.

#### **2.1.1.1 The absorption of water vapor**

The linear and continuum absorption constitute the THz atmospheric absorption, which is dominated by water vapor. The former comprises most of the absorption lines in the air, which are due to molecular rotational transitions. The absorption lines of water vapor are characterized by spectroscopic parameters, including the centre frequency, oscillator intensity and pressure broadening coefficient. (Yasuko and Takamasa, 2008) Most of these optical properties have been conveniently catalogued in databases, such as JPL (Jet Propulsion Laboratory) and HITRAN (Rothman et al., 2009), to simulate the line-by-line absorption.

The atmospheric absorption spectrum doesn't correspond to the simple accumulation of water vapor absorption lines. The continuum absorption is what remains after subtraction of the linear contributions from the total absorption that can be measured directly. (Rosenkranz, 1998) It may be observed over a wide electromagnetic spectrum (from microwave to infrared) and cannot be described by water vapor absorption lines. Its generating mechanism is not sufficiently understood, while several theories have been proposed, including anomalous far-wing absorption, (Ma and Tipping, 1992) absorption by dimers and larger clusters of water vapor, and absorption by collisions between atmospheric molecules. (Ma and Tipping, 1992) A semi-empirical CKD model is applicable in a wide frequency range and has been proven successful in some aspects. (Clough et al., 1989) For simulations at frequencies below 400 GHz, the Liebe model could be used for the dry air and water vapor continua. (Liebe, 1989) Figure 4 illustrates the discrepancy between the radio-wave and infrared-wave propagation models. The radio-wave model is calculated with the JPL line catalog and the Liebe model for continuum absorption, while the infrared model is on the basis of the HITRAN line catalog and the CKD continuum model. (Yasuko and Takamasa, 2008)


Fig. 4. The linear and continuum absorption of THz wave from NICT


#### **2.1.1.2 The scattering of aerosol**

In parallel, the scattering effect also results in energy attenuation along the optical path. It comprises molecular Rayleigh scattering and Mie scattering by aerosols and water vapor coagula. As the wavelength of THz radiation is of the same order as the size of aerosol particles, only Mie scattering needs to be taken into consideration. Aerosol particles mainly refer to the solid and liquid particles suspended in the atmosphere, for example dust, salts, ice particles and water droplets, and the Mie scattering effect mainly depends on their size distribution, complex refractive index and the wavelength of the incident radiation.

It is difficult to simulate the scattering by aerosols because of their large variations in time and space. The size distribution is an important concept for describing aerosols, and commonly used distribution models include:

#### *2.1.1.2.1 Revision spectrum*

$$\frac{dN(r)}{dr} = ar^{\alpha} \exp(-br^{\gamma})\tag{5}$$

where *N* is the particle number in a unit volume, *r* is the particle radius, and *a*, *b*, *α*, *γ* are constants that depend on the origin of the aerosol, including continental (Haze L), maritime (Haze M) and high-altitude (Haze H) types.

#### *2.1.1.2.2 Junge spectrum*

$$\frac{dN}{d\log r} = cr^{-v} \tag{6}$$

In the expression above, *v* is the spectrum parameter, usually taking values between 2 and 4. The parameter *c* relates to the total density of aerosols.
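For illustration, both size distributions can be evaluated numerically as below. The constants used here are placeholders only; actual values of *a*, *b*, *α*, *γ*, *c* and *v* depend on the aerosol type.

```python
import numpy as np

def dn_dr(r, a, alpha, b, gamma):
    """dN(r)/dr of Eq. (5); a, b, alpha, gamma depend on the aerosol origin (Haze L/M/H)."""
    return a * r**alpha * np.exp(-b * r**gamma)

def dn_dlogr_junge(r, c, v):
    """dN/dlog(r) of the Junge distribution, Eq. (6)."""
    return c * r**(-v)

r = np.logspace(-2, 1, 200)                                  # particle radii, 0.01-10 micrometres
n1 = dn_dr(r, a=1.0e3, alpha=2.0, b=15.0, gamma=0.5)         # placeholder constants
n2 = dn_dlogr_junge(r, c=1.0e2, v=3.0)                       # v usually between 2 and 4
```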


## **2.1.1.3 Terahertz spectroscopic measurement technology**

The THz spectroscopic parameters above directly influence the accuracy of the atmospheric propagation model and should be precisely measured in laboratory experiments. Currently, Terahertz Time-domain Spectroscopy (THz-TDS) and Fourier-transform Infrared Spectroscopy (FT-IR) have attracted a great deal of attention. A typical THz-TDS arrangement includes a femtosecond (fs) laser, a THz emitter source, a THz detector, focusing and collimating parts, a motorized delay line, a lock-in amplifier, and a data acquisition system.

As shown in Figure 5, the femtosecond laser is split into THz generation and detection arms. Coming from the same source, the pump and probe pulses have a defined temporal relationship. The THz radiation is excited by focusing the pulse onto a photoconductive antenna, and the emitted THz pulses are collimated and focused onto the sample by a pair of parabolic mirrors; samples can be scanned across the focus to build up a two-dimensional image, with spectral information recorded at each pixel. (Baxter, 2011) The reflected or transmitted THz pulse is then collected and focused with another pair of parabolic mirrors onto a detector, which is a second photoconductive antenna or a sampling electro-optical crystal. The probe beam is measured with a quarter wave-plate, a Wollaston polarization (WP) splitting prism, and two balanced photodiodes. Lock-in techniques can be used to measure the photodiode signal with the modulated bias field of the photoconductive emitter as a reference. Furthermore, by measuring the signal as a function of the time delay between the arrival of the THz and probe pulses, the THz time-domain electric field can be reconstructed. A computer controls the delay lines and records data from the lock-in amplifier, and the Fourier transform yields the frequency spectrum of the THz radiation. (Davies et al., 2008)

Fig. 5. Schematic experimental setups for THz-TDS system

Fourier transform infrared (FTIR) spectroscopy is a technique to obtain an infrared spectrum of absorption, emission, photoconductivity or Raman scattering of the samples. It consists of an incoherent high-pressure mercury arc lamp, a far-IR beam splitter (free-standing wire grid or Mylar), focusing and collimating optical parts for the far infrared, a thermal detector, a motorized delay line, and a data acquisition system, as Figure 6 shows. The source is a broadband lamp containing the full spectrum of wavelengths; the light shines into a Michelson interferometer, which allows some wavelengths to pass through but blocks others due to wave interference. Computer processing is required to turn the original data into the desired result.


Fig. 6. Schematic experimental setups for far-IR Fourier transform spectroscopy


Compared to other spectroscopic techniques, THz-TDS presents a series of advantages. The THz pulse has a picosecond duration, resulting in intrinsically high temporal resolution, which makes it very suitable for dynamic spectroscopic measurements. THz-TDS provides coherent spectroscopic detection and a direct record of the THz time-domain pulse. It enables the determination of the complex permittivity of a sample, comprising both amplitude and phase, without requiring the Kramers-Kronig relationship. (Zhang & Xu, 2009) Additionally, time-gating in sampling the THz pulses dramatically suppresses the background noise, which is especially useful for spectroscopy with background radiation comparable to or even stronger than the signal. In terms of signal-to-noise ratio, THz-TDS is advantageous at low frequencies below 3 THz, while Fourier transform spectroscopy works better at frequencies above 5 THz. (Han et al., 2001)
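The last processing step mentioned above, Fourier transforming the sampled time-domain field into a complex spectrum (amplitude and phase), can be sketched as follows; the synthetic pulse here merely stands in for a measured THz-TDS trace, and the sampling step is an assumed value.

```python
import numpy as np

# Synthetic stand-in for a measured THz time-domain trace E(t); in a real THz-TDS scan this
# array would come from stepping the delay line and reading the lock-in amplifier.
dt = 50e-15                                   # assumed 50 fs sampling step
t = np.arange(0.0, 20e-12, dt)                # 20 ps scan window
E_t = np.exp(-((t - 5e-12) / 0.3e-12) ** 2) * np.cos(2 * np.pi * 1e12 * (t - 5e-12))

E_f = np.fft.rfft(E_t)                        # complex spectrum: amplitude and phase together
freq = np.fft.rfftfreq(len(E_t), dt)          # frequency axis in Hz

amplitude = np.abs(E_f)
phase = np.unwrap(np.angle(E_f))              # phase is obtained directly, no Kramers-Kronig step
```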

#### **2.1.2 Atmospheric emission**

THz radiation propagating in the atmosphere also experiences a process of enhancement. THz emission is described by the source term J, comprising the thermal emission JB and the scattering source term JS. In contrast with the attenuation caused by scattering out of the line-of-sight, scattering into the path acts as a source of radiation as well, comprising the single-scattering source JSS under direct radiation and the multiple-scattering source JMS. (Mendrok, 2006) The expression of the source terms is

$$J = J\_B + J\_S = J\_B + J\_{SS} + J\_{MS} \tag{7}$$

The thermal emission term is defined as

$$J\_B = (1 - \alpha\_0)B(T) \tag{8}$$


*B(T)* denotes the Planck emission term which is given by Planck's function describing the radiation of a black-body at temperature *T*:

$$B\_v(T) = \frac{2hv^3}{c^2} \frac{1}{e^{hv/k\_BT} - 1} \tag{9}$$

where *h* is Planck's constant, *c* the speed of light, and *kB* denotes Boltzmann's constant. *α0* is the single-scattering albedo of the "mixed" atmospheric medium along the line-of-sight, which is calculated from molecular and particle optical properties:

$$\alpha\_0 = \frac{\alpha\_s^{par}}{\alpha\_s^{par} + \alpha\_a^{par} + \alpha\_s^{mol}} \tag{10}$$

where α*s* and α*<sup>a</sup>* are the scattering and absorption coefficients, with superscripts 'mol' and 'par' denoting properties of molecular and particulate matter, respectively.
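A minimal numerical sketch of Eqs. (8)-(10) is given below; the scattering and absorption coefficients are assumed values used only for illustration.

```python
import numpy as np

H = 6.62607015e-34    # Planck's constant, J s
C = 2.99792458e8      # speed of light, m/s
KB = 1.380649e-23     # Boltzmann's constant, J/K

def planck(v, T):
    """Planck function B_v(T) of Eq. (9)."""
    return 2.0 * H * v**3 / C**2 / (np.exp(H * v / (KB * T)) - 1.0)

def albedo(alpha_s_par, alpha_a_par, alpha_s_mol):
    """Single-scattering albedo of the mixed medium, Eq. (10)."""
    return alpha_s_par / (alpha_s_par + alpha_a_par + alpha_s_mol)

# Illustrative numbers only (assumed coefficients in 1/m).
a0 = albedo(alpha_s_par=5e-5, alpha_a_par=2e-4, alpha_s_mol=1e-4)
J_B = (1.0 - a0) * planck(v=1.0e12, T=288.0)    # thermal emission source term, Eq. (8)
```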

The scattering source term into the optical path is described as:

$$J\_s(\Omega) = \frac{\alpha\_s}{\alpha\_e} \frac{1}{4\pi} \int\_0^{4\pi} P(\Omega, \Omega') I(\Omega') d\Omega' \tag{11}$$

It comprises radiation incident from all directions Ω*'* scattered into the direction of interest Ω. While the scattering coefficient α*<sup>s</sup>* accounts for the scattered fraction of radiation, the phase function *P(*Ω*,*Ω*')* can be interpreted as the probability of incident radiation being scattered from direction Ω*'* into direction Ω, with the normalizing condition:

$$\frac{1}{4\pi} \int\_0^{4\pi} P(\Omega, \Omega') d\Omega' = 1 \tag{12}$$

*I(*Ω*')* describes the incident radiation field in terms of incident direction for the calculation of the scattering source term.
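The chapter does not prescribe a particular phase function; as an illustration of the normalizing condition of Eq. (12), the sketch below uses the Henyey-Greenstein form, which is a common but here purely exemplary choice.

```python
import numpy as np

def phase_function(cos_theta, g=0.7):
    """Example phase function (Henyey-Greenstein); the text does not fix a specific P."""
    return (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5

# Check Eq. (12): (1/4pi) * integral of P over all solid angles = 1.
# For an azimuthally symmetric P this reduces to (1/2) * integral over cos(theta) from -1 to 1.
mu = np.linspace(-1.0, 1.0, 20001)
norm = 0.5 * np.trapz(phase_function(mu), mu)
print(norm)   # approximately 1
```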

#### **2.1.3 Background radiation**

Remote observations of the atmosphere can be performed at different geometries, as Figure 7 shows. The case in which the line-of-sight goes through a long tangential atmospheric path above the ground is commonly referred to as limb-sounding geometry. If the line-of-sight crosses the surface, it is called nadir-sounding geometry. The up-looking case can be obtained by inverting the sense of the nadir observation. The background radiation of the THz wave in the atmosphere mainly results from the many kinds of electromagnetic radiation in interstellar space or from the planet surface. For limb-sounding and up-looking geometries it is the cosmic background radiation at 3 K, and for nadir-sounding (or down-looking) it is the Earth surface emission.

Fig. 7. Geometry including limb-sounding and nadir-sounding

#### **2.2 Radiative transfer equation**

Radiative transfer is the physical phenomenon of energy transfer in the form of electromagnetic radiation. The propagation of radiation through a medium is affected by the three concepts (attenuation, enhancement, and background radiation) occurring along the line-of-sight, and the equation of radiative transfer describes these interactions mathematically. It is the foundation of the THz atmospheric propagation model, and the derivation is as follows (Thomas & Stamnes, 2002):


Fig. 8. The input and output optical intensity

The fundamental quantity which describes a field of radiation is the spectral intensity. Consider a very small area element in the radiation field, as in Figure 8; the radiant energy of the incident light on surface Ⅰ of an infinitesimal volume is:

$$dE^{in} = I\_v\,d\omega\,dv\,d\sigma\,dt \tag{13}$$

where *Iv* is the radiant intensity, *dω* the solid angle, *dv* the frequency interval, *d*σ the area element, and *dt* denotes the duration of radiation (polarization is ignored for the moment). The emergent radiant energy from surface Ⅱ is:

$$dE^{out} = (I\_v + dI\_v)\,d\omega\,dv\,d\sigma\,dt \tag{14}$$


According to the Lambert-Beer law, with the absorption coefficient α*<sup>v</sup>*, the radiant energy absorbed by the medium is:

$$dE\_a = -\alpha\_v\,dE^{in}\,dr = -\alpha\_v I\_v\,d\omega\,dv\,d\sigma\,dt\,dr \tag{15}$$

 With the emission coefficient *jv*, the radiant energy of medium emission is:

$$dE\_e = j\_v\,d\omega\,dv\,d\sigma\,dt\,dr \tag{16}$$

In accordance with energy conservation law, we get:

$$dE^{\rm out} = dE^{\rm in} + dE\_e + dE\_a \tag{17}$$

Substituting equations (13)–(16) into equation (17) gives:

$$dI\_v\,d\omega\,dv\,d\sigma\,dt = j\_v\,d\omega\,dv\,d\sigma\,dt\,dr + (-\alpha\_v I\_v)\,d\omega\,dv\,d\sigma\,dt\,dr \tag{18}$$

A particularly useful simplification of the radiative transfer equation occurs under the conditions of local thermodynamic equilibrium (LTE). In this situation, the atmosphere consists of massive particles which are in equilibrium with each other, and therefore have a definable temperature. For the atmosphere in LTE, the emission coefficient and absorption coefficient are functions of temperature and density only, and the source function is defined as *Sv*≡*jv/*α*<sup>v</sup>*. It equals the Planck function according to Kirchhoff's law:

$$S\_v \equiv j\_v \;/\; \alpha\_v = B\_v(T) \tag{19}$$

Given the definition of opacity or optical thickness: *d*τ*v=*α*vdr*, we get the differential form of radiative transfer equation from equation (18):

$$\frac{dI\_v}{d\tau\_v} = S\_\nu - I\_\nu \tag{20}$$

To solve this first-order differential equation along the integral path *(r0,r1)*, with the integration variable *r*, we obtain the integral form of the radiative transfer equation:

$$I\_{\nu}(r\_1) = I\_{\nu}(r\_0)e^{-\int\_{r\_0}^{r\_1} \alpha\_{\nu}(r) dr} + \int\_{r\_0}^{r\_1} e^{-\int\_{r}^{r\_1} \alpha\_{\nu}(r') dr'} S\_{\nu}(r) \alpha\_{\nu}(r) dr \tag{21}$$

Under the assumption of LTE, the equation can be written as:

$$I\_{\nu}(r\_1) = I\_{\nu}(r\_0)e^{-\int\_{r\_0}^{r\_1} \alpha\_{\nu}(r) dr} + \int\_{r\_0}^{r\_1} B\_{\nu}(T) \alpha\_{\nu}(r) e^{-\int\_{r}^{r\_1} \alpha\_{\nu}(r') dr'} dr \tag{22}$$

The physical significance of the radiative transfer equation lies in the processes of absorption and emission of the atmosphere at the position *r* along a given optical path *(r0,r1)*, with the first term on the right side describing the background radiation attenuated by the atmosphere and the second standing for atmospheric emission and absorption. *Iv(r1)* is the outgoing radiance arriving at the sensor at the frequency *v*, and *Iv(r0)* corresponds to the background radiance entering the optical path.
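Under LTE, Eq. (22) can be evaluated with a straightforward discretisation of the path. The sketch below is only illustrative — the extinction and temperature profiles are assumed — and is not the numerical scheme used by any of the models discussed in Section 3.

```python
import numpy as np

def planck(v, T):
    """Planck function B_v(T), Eq. (9)."""
    h, c, kb = 6.62607015e-34, 2.99792458e8, 1.380649e-23
    return 2.0 * h * v**3 / c**2 / (np.exp(h * v / (kb * T)) - 1.0)

def radiative_transfer_lte(r, alpha, T, I0, v):
    """Discretised Eq. (22): radiance leaving the path r[0]..r[-1] at frequency v."""
    dr = np.diff(r)
    a_mid = 0.5 * (alpha[1:] + alpha[:-1])      # segment-mean absorption coefficient
    T_mid = 0.5 * (T[1:] + T[:-1])              # segment-mean temperature
    dtau = a_mid * dr                           # optical thickness of each segment
    tau_total = dtau.sum()
    # Opacity from the end of each segment to the end of the path (the exponential weight).
    tau_to_end = tau_total - np.cumsum(dtau)
    background = I0 * np.exp(-tau_total)                              # first term of Eq. (22)
    emission = np.sum(planck(v, T_mid) * dtau * np.exp(-tau_to_end))  # second term of Eq. (22)
    return background + emission

# Illustrative profiles only (assumed, not measured).
r = np.linspace(0.0, 10e3, 200)               # 10 km path, in metres
alpha = 2e-4 * np.exp(-r / 2e3)               # assumed absorption coefficient, 1/m
T = 288.0 - 6.5e-3 * r                        # assumed linear temperature profile, K
I1 = radiative_transfer_lte(r, alpha, T, I0=0.0, v=1.0e12)
```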


As the radiative transfer equation results from the energy conservation law, it is applicable to the whole electromagnetic spectrum, from radio waves to visible light. In the course of this work, radiation has only been discussed in terms of scalar intensity. If polarization is considered, the radiation is described by the four components (I, Q, U, V) of the Stokes vector, which gives a complete description of the interaction between the medium and the radiation. However, scalar radiative transfer is usually a good approximation for most situations in radiative transfer modeling.

## **2.3 Elements to promote the algorithm**

## **2.3.1 Atmospheric turbulence**

Turbulence is a flow regime characterized by chaotic and stochastic behaviour, so the associated problems are treated statistically rather than deterministically. The optical properties of the turbulent atmosphere change in time and space, resulting in fluctuations of the atmospheric refractive index. The essence of the turbulence effect is the influence of medium disturbance on the transmission of the incident THz radiation, including beam drift, jitter, flickering, distortion, and degradation of the spatial coherence.

The consequence of turbulence mainly depends on the relationship between the turbulent scale *l* and the characteristic dimension of the incident radiation *dB*.

On condition that *l>>dB*, the THz beam deflects during propagation through turbulence, mainly causing beam drift at the receiver. When the turbulent scale *l* is comparable to the characteristic dimension *dB*, the light beam also experiences stochastic deflection, resulting in image spot jitter. If *l<<dB*, the influence of scattering and diffraction leads to intensity flickering of the THz beam. (Yao & Yu 2006)
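This qualitative rule can be written compactly as below; the factor of 10 separating the regimes is an arbitrary illustrative choice, not a value given in the text.

```python
def dominant_turbulence_effect(l, d_b, factor=10.0):
    """Dominant effect from the ratio of turbulent scale l to beam dimension d_b (Yao & Yu 2006)."""
    if l > factor * d_b:
        return "beam drift"            # l >> d_B
    if l < d_b / factor:
        return "intensity flickering"  # l << d_B
    return "image spot jitter"         # l comparable to d_B
```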

Additionally, in terms of the incident radiation, fully coherent light beams are sensitive to the properties of the medium through which they propagate, and turbulence-induced spatial broadening is the major limiting factor in most applications. Partially coherent beams are less affected by atmospheric turbulence than fully coherent ones. (Shirai 2003)

## **2.3.2 Atmospheric refraction**

Atmospheric refraction results from the uneven distribution of air in the horizontal and vertical directions. When passing through the atmosphere, the line of sight is refracted and bent towards the surface of the planet. Taking refraction into account corrects and refines the radiative transfer path using some elementary geometrical relationships, as plotted in Figure 9.

Fig. 9. The radiation path and its modification due to atmospheric refraction

In conclusion of Section 2, the general idea for solving the problems above is to study the various effects independently and superpose them. Currently, most research is focused on the atmospheric extinction and on the establishment of the radiative transfer model.

## **3. THz atmospheric propagation model**

#### **3.1 Moliere**

Microwave Observation Line Estimation and Retrieval (Moliere), developed at the Bordeaux Astronomical Observatory (France), is a versatile forward and inversion model for millimeter and sub-millimeter wavelength observations on board the Odin satellite, including a non-scattering radiative transfer model, a receiver simulator and an inversion code. The forward models comprise spectroscopic parameters, an atmospheric radiative transfer model, and instrument characteristics in order to model and compute the searched atmospheric quantities. In parallel, inversion techniques have been developed to retrieve geophysical parameters such as temperature and trace gas mixing ratios from the remotely measured spectra. (Urban et al., 2004)

Moliere is presently applied to data analysis for ground-based and space-borne heterodyne instruments and to definition studies for future limb sensors dedicated to Earth observation and Mars exploration. However, this code cannot be used when both up-looking and down-looking geometries must be considered together, nor for limb geometry if the receiver is inside the atmosphere, such as on a balloon or airplane.

## **3.2 SARTre**


The new radiative transfer model [Approximate] Spherical Atmospheric Radiative Transfer model (SARTre) has been developed to provide a consistent model that accounts for the influence of aerosols and clouds, e.g. water droplets or ice particles. It includes emission and absorption as well as scattering as sources/sinks of radiation from both solar and terrestrial sources in the spherical shell atmosphere and is able to analyze data measured over the spectral range from ultraviolet to microwaves. (Mendrok et al., 2008) SARTre is designed for monochromatic, high spectral resolution forward modeling of arbitrary observing geometries, especially for the limb observation technique.

The line-by-line calculation of molecular absorption cross sections has been adapted from the radiative transfer package MIRART (Modular Infrared Atmospheric Radiative Transfer). And the DISORT (Discrete Ordinate Radiative Transfer Model) package is used for the calculation of the incident radiation field when taking multiple scattering into account, under the assumption of a locally plane-parallel atmosphere. (Mendrok et al., 2008)

## **3.3 AMATERASU**

The Advanced Model for Atmospheric Terahertz Radiation Analysis and Simulation (AMATERASU) is developed by the National Institute of Information and Communications Technology (NICT) THz project. This project aims to develop THz technology for various applications concerning telecommunications, atmospheric remote sensing to retrieve geophysical parameters, and the study of thermal atmospheric emission in the Earth energy budget. The framework of AMATERASU is shown in Figure 10, mainly consisting of the spectroscopic parameters and the radiative transfer equation, as mentioned above.

Fig. 10. The framework of AMATERASU from NICT


AMATERASU has a strong heritage from the two models above, respectively in the non-scattering and the scattering case. The first stage concerns a non-scattering and homogeneous atmosphere, based on the original Moliere receiver simulator and retrieval codes. The absorption coefficient module has been extended to the THz region, and a more general radiative transfer module has been implemented to handle different geometries of optical paths and any location of the receiver. (Baron et al., 2008) The advanced version has taken the scattering effect into consideration. Modules related to the optical properties of atmospheric particles and to scattering have been adapted from SARTre. The complex refractive index data of aerosols in the THz region should be emphasized as a crucial parameter for radiative transfer algorithms. (Mendrok et al., 2008)

As for practical applications, the THz atmospheric propagation models above should be compared with each other and validated against real laboratory measurements in order to verify the data accuracy and the correctness of the algorithm hypotheses. (Wang et al., 2011)

## **4. Conclusion**

In this chapter, we have discussed the fundamental theory of THz atmospheric propagation. Several kinds of THz atmospheric propagation models have been introduced as well. The critical issues lie in the construction of the radiative transfer algorithm, the collection of accurate spectral parameters, such as the linear and continuum absorption and the complex refractive index in the THz region, and the standardization of measurement procedures. The ultimate objective is to construct atmospheric propagation models for different kinds of climatic conditions on the basis of the theoretical analysis and the material database.

## **5. Acknowledgment**

This program is supported by the National Basic Research Program of China under Grant No. 2007CB310403.

## **6. References**


Baron P.; Mendrok J. & Yasuko K. (2008). AMATERASU: Model for Atmospheric TeraHertz Radiation Analysis and Simulation. Journal of the National Institute of Information and Communications Technology, Vol.55, No.1, (March 2008), pp. 109-121

Baxter J. & Guglietta G. (2011). Terahertz Spectroscopy. Analytical Chemistry, Vol.83, No.12, (June 2011), pp. 4342-4368

Clough S.; Kneizys F. & Davies R. (1989). Line shape and the water vapor continuum. Atmospheric Research, Vol.23, No.3, (October 1989), pp. 229-241

Davies A.; Burnett A. & Fan W. (2008). THz spectroscopy of explosives and drugs. Materials Today, Vol.11, No.3, (March 2008), pp. 18-26, ISSN 1369-7021

Foltynowicz, R.; Wanke, M. & Mangan, M. (2005). Atmospheric Propagation of THz Radiation, Sandia National Laboratories, New Mexico, America

Han P.; Tani M. & Usami M. (2001). A direct comparison between terahertz time-domain spectroscopy and far-infrared Fourier transform spectroscopy. Journal of Applied Physics, Vol.89, No.4, (February 2001), pp. 2357-2359, ISSN 0021-8979

Lee, Y. (2008). Principles of Terahertz Science and Technology, Springer Science+Business Media, ISBN 978-0-387-09539-4, New York, America

Liebe H. (1989). MPM - An atmospheric millimeter-wave propagation model. International Journal of Infrared and Millimeter Waves, Vol.10, No.6, (February 1989), pp. 631-650

Ma Q. & Tipping R. (1992). A far wing line shape theory and its application to the foreign-broadened water continuum absorption. Journal of Chemical Physics, Vol.97, No.2, pp. 818-828, ISSN 0021-9606

Ma Q. & Tipping R. (1999). The averaged density matrix in the coordinate representation: application to the calculation of the far-wing line shapes for H2O. Journal of Chemical Physics, Vol.111, No.13, (June 1999), pp. 5909-5921, ISSN 0021-9606

Mendrok J. (2006). The SARTre Model for Radiative Transfer in Spherical Atmospheres and its Application to the Derivation of Cirrus Cloud Properties, Freie Universität, Berlin, Germany

Mendrok J.; Baron P. & Yasuko K. (2008). The AMATERASU Scattering Module. Journal of the National Institute of Information and Communications Technology, Vol.55, No.1, (March 2008), pp. 123-132

Rosenkranz P. (1998). Water vapor microwave continuum absorption: a comparison of measurements and models. Radio Science, Vol.33, No.4, (July 1998), pp. 919-928

Rothman L.; Gordon I. & Barbe A. (2009). The HITRAN 2008 molecular spectroscopic database. Journal of Quantitative Spectroscopy and Radiative Transfer, Vol.110, No.9, (June 2009), pp. 533-572

Ryu, C. & Kong, S. (2010). Atmospheric degradation correction of terahertz beams using multiscale signal restoration. Applied Optics, Vol.49, No.5, (February 2010), pp. 927-935

Shirai T. (2003). Mode analysis of spreading of partially coherent beams propagating through atmospheric turbulence. Journal of the Optical Society of America A, pp. 1094-1102

Siegel, P. (2002). Terahertz Technology. IEEE Transactions on Microwave Theory and Techniques, Vol.50, No.3, (March 2002), pp. 910-928, ISSN 0018-9480

Thomas G. & Stamnes K. (2002). Radiative Transfer in the Atmosphere and Ocean, Press Syndicate of the University of Cambridge, ISBN 0-521-40124-0, Cambridge, United Kingdom

Tonouchi, M. (2007). Cutting-edge terahertz technology. Nature Photonics, Vol.1, No.2, (February 2007), pp. 97-105, ISSN 1749-4885

Urban J.; Baron P. & Lautié N. (2004). Moliere (v5): a versatile forward- and inversion model for the millimeter and sub-millimeter wavelength range. Journal of Quantitative Spectroscopy & Radiative Transfer, Vol.83, No.4, (February 2004), pp. 529-554, ISSN 0022-4073

Wang R.; Yao J. & Xu D. (2011). The physical theory and propagation model of THz atmospheric propagation. Journal of Physics: Conference Series, Vol.276, No.1, (March 2011), pp. 012223, ISSN 1742-6596

Yao, J. & Yu Y. (2006). Optoelectronic Technology, Higher Education Press, ISBN 7-04-019255-1, Beijing, China

Yasuko, K. (2008). Terahertz-Wave Remote Sensing. Journal of the National Institute of Information and Communication Technology, Vol.55, No.1, (March 2008), pp. 79-81

Yasuko K. & Takamasa S. (2008). Atmospheric Propagation Model of Terahertz-Wave. Journal of the National Institute of Information and Communications Technology, Vol.55, No.1, (March 2008), pp. 73-77

Zhang, X. & Xu J. (2009). Introduction to THz Wave Photonics, Springer Science+Business Media, ISBN 978-1-4419-0977-0, New York, America


## **Road Feature Extraction from High Resolution Aerial Images Upon Rural Regions Based on Multi-Resolution Image Analysis and Gabor Filters**

Hang Jin1, Marc Miska1, Edward Chung1, Maoxun Li2 and Yanming Feng3

*1Smart Transport Research Centre, Queensland University of Technology, Brisbane*
*2College of Urban Economics and Public Administration, Capital University of Economics and Business, Beijing*
*3Faculty of Science and Technology, Queensland University of Technology*
*1,3Australia*
*2PR China*

#### **1. Introduction**


Accurate, detailed and up-to-date road information is of special importance in geo-spatial databases as it is used in a variety of applications such as vehicle navigation, traffic management and advanced driver assistance systems (ADAS). The commercial road maps utilized for road navigation or the geographical information system (GIS) today are based on linear road centrelines represented in vector format with poly-lines (i.e., series of nodes and shape points, connected by segments), which present a serious lack of accuracy, contents, and completeness for their applicability at the sub-road level. For instance, the accuracy level of the present standard maps is around 5 to 20 meters. The roads/streets in the digital maps are represented as line segments rendered using different colours and widths. However, the widths of line segments do not necessarily represent the actual road widths accurately. Another problem with the existing road maps is that few precise sub-road details, such as lane markings and stop lines, are included, whereas such sub-road information is crucial for applications such as lane departure warning or lane-based vehicle navigation. Furthermore, the vast majority of road maps are modelled in 2D space, which means that some complex road scenes, such as overpasses and multi-level road systems, cannot be effectively represented. In addition, the lack of elevation information makes it infeasible to carry out applications such as driving simulation and 3D vehicle navigation.

Traditional methods for acquiring road information include i) ground surveying and ii) delineating roads from remotely sensed imagery (Zhang & Couloigner, 2004). Ground surveying can be carried out by using devices such as total stations and GPS receivers. As both devices are point-based, rendering this method labour-intensive and time-consuming, and therefore more suitable for detailed road surveying for small areas rather than for large-scale road mapping. Road information can be delineated from remote sensing images in three

digital road map. A novel road surface and lane marking extraction approach is presented in Section 4, which detects road surface from VHR aerial images based on support vector machine (SVM) classification method, and the lane markings are further generated using 2D anisotropic Gaussian filter as well as Otsu's thresholding algorithm. Concluding remarks and

<sup>389</sup> Road Feature Extraction from High Resolution Aerial Images

Upon Rural Regions Based on Multi-Resolution Image Analysis and Gabor Filters

The review conducted by Mena (2003) cites more than 250 road extraction studies, and classifies different road extraction approaches based on three principal factors: i) the preset objective, ii) the extraction technique applied, and iii) the type of sensors utilized. Although the developed approaches exhibit a variety of methodologies and techniques, different categorizations for road extraction work can still be sought in order to better match the available data and methods to its ultimate purpose. In this review, we consider the use of major state-of-the-art data sources, aerial imagery, airborne LiDAR data, and categorize the existing road extraction methods into two classes, i) road detection in rural or non-urban regions, and ii) urban area road extraction. As the aerial imagery and LiDAR data are usually collected in the same flight missions, the extraction of road information from LiDAR data only is uncommon. This review is by no means exhaustive; instead, it focuses mainly on commonly

Subsection 2.1 examines the work on rural area road extraction, and the review of road detection in urban regions is presented in Subsection 2.2. In addition, a brief summary of the road pavement marking extraction algorithms is provided in Subsection 2.3. Last but not least, the qualitative and quantitative evaluation of results is reviewed in Subsection 2.4.

Roads in rural or non-urban areas have characteristics such as constant widths, continuous curvature changes, and homogeneous local orientation distributions, which can moderate the complexity of their extraction. Basically, rural road extraction approaches, either semi-automatic or automatic, can be classified into i) artificial intelligent, ii) multi-resolution

An automatic road verification approach based on digital aerial images as well as GIS data is developed in (Wiedemann & Mayer, 1996) as a part of the update procedure for GIS data. The candidates for roadsides, which are obtained by searching the surroundings of GIS road-axes in the image based on profiles, are tested, and a measure of confidence is also calculated. However, user interaction is still required, as the results of the method are far from perfect.

In (Doucette et al., 2001), a fully automated road extraction strategy based on Kohonen's self-organizing map (SOM) is proposed to detect road information in high-resolution multi-spectral aerial imagery. The core algorithms implemented include i) anti-parallel edge centerline extractor, ii) fuzzy organization of elongated regions, and iii) self-organizing road finder. A covariance-based principal component analysis (PCA) is performed to determine the intrinsic dimensions of the image bands, and to classify the image using a maximum likelihood classifier with manually selected training samples. The extraction results over

future work recommendations are given in Section 5.

**2. Review of the related work**

used road extraction techniques.

**2.1 Rural road extraction techniques**

analysis, iii) snakes, iv) classification, and v) template matching.

Roads that do not exist in the GIS data will not be detected.

ways: i) manual delineation, ii) semi-automated extraction, iii) and fully automated detection. Manual extraction of roads from remotely sensed imagery is a simple stretching operation. However, the operation is impractically time consuming when the scenes are very complex. In addition, not only are such complex maps required for large geographic areas, frequent updating is also needed. In the semi-automatic road extraction method, approximations or seed points are given manually followed by an automatic algorithm which uses these approximations as input to enable them to automatically extract the road. Approximations can be a starting point, an ending point, intermediate points, road directions, road widths, and prior knowledge from a GIS database (Zhang, 2003). Full automatic road feature extraction is pursed by automating the selection of the necessary initial information.

As well as the advancement of innovative sensors and platforms, road network spatial information can be acquired from aerial and satellite imagery, synthetic aperture radar (SAR) imagery, airborne light detection and ranging (LiDAR) data, and from image sequences taken from ground-based mobile mapping systems (MMS) with different spatial and spectral resolutions (Quackenbush, 2004). Aerial images and LiDAR point clouds are promising data sources for generating road maps and updating available maps to support various activities and missions of government agencies and consumers (Mokhtarzade & Zoej, 2007). However, it has often been the case that while large amounts of high resolution aerial images and dense LiDAR data are being collected, piled up and remain unprocessed or unused, new data sets are continuously being gathered. This phenomenon is caused by the fact that development of automatic techniques for processing aerial imagery and LiDAR data is far behind that of the hardware sensor technologies. Object extraction for full exploitation of these data sources is very challenging. There are more challenges for automatic road information extraction in urban areas due to its much more complex circumstances.

Research on road feature extraction from aerial and satellite images can be traced back to the 1970s (Bajcsy & Tavakoli, 1976). Over three decades, a large number of automatic and semi-automatic algorithms have been attempted. Although many different approaches have been developed for the semi-automatic or automatic extraction of road information, none of these can solve all the problems without human interactions. This is because of the wide variations of roads (urban, rural, precipitous) and the complexities of their environment (occlusions caused by cars, trees, buildings, shadows etc.) (Poullis & You, 2010). It is worth noting that the existing road feature generation algorithms are all task-based and data-based. For instance, road surfaces have a quite different appearance from pavement markings; thus, approaches that are suitable for road surface extraction usually cannot be applied in the detection of pavement markings without modification. Due to the inherent difference in the data style, methods utilized for road extraction in aerial images may not be appropriate for LiDAR data sets. Therefore, in this work, an effective road information extraction system, which deals with road features in rural and urban regions respectively, is proposed based on very high resolution (VHR) aerial images.

The remainder of this chapter presents the main contributions as follows. Section 2 provides a review of the relevant work published over the past 20 years; road feature extraction for rural and urban areas from high spatial resolution remotely sensed imagery is discussed separately. In Section 3, an effective road network extraction method is presented: a homogeneity histogram thresholding algorithm is utilized to detect road surfaces from VHR aerial images, and the detected road features are then thinned and vectorized to reconstruct the digital road map. A novel road surface and lane marking extraction approach is presented in Section 4, which detects road surfaces from VHR aerial images with a support vector machine (SVM) classification method and then generates the lane markings using a 2D anisotropic Gaussian filter together with Otsu's thresholding algorithm. Concluding remarks and future work recommendations are given in Section 5.

## **2. Review of the related work**

The review conducted by Mena (2003) cites more than 250 road extraction studies, and classifies the different road extraction approaches based on three principal factors: i) the preset objective, ii) the extraction technique applied, and iii) the type of sensors utilized. Although the developed approaches exhibit a variety of methodologies and techniques, different categorizations for road extraction work can still be sought in order to better match the available data and methods to the ultimate purpose. In this review, we consider the major state-of-the-art data sources (aerial imagery and airborne LiDAR data) and categorize the existing road extraction methods into two classes: i) road detection in rural or non-urban regions, and ii) road extraction in urban areas. As aerial imagery and LiDAR data are usually collected in the same flight missions, the extraction of road information from LiDAR data alone is uncommon. This review is by no means exhaustive; instead, it focuses mainly on commonly used road extraction techniques.

Subsection 2.1 examines the work on rural area road extraction, and the review of road detection in urban regions is presented in Subsection 2.2. In addition, a brief summary of the road pavement marking extraction algorithms is provided in Subsection 2.3. Last but not least, the qualitative and quantitative evaluation of results is reviewed in Subsection 2.4.

### **2.1 Rural road extraction techniques**

Roads in rural or non-urban areas have characteristics such as constant widths, continuous curvature changes, and homogeneous local orientation distributions, which can moderate the complexity of their extraction. Basically, rural road extraction approaches, either semi-automatic or automatic, can be classified into i) artificial intelligence, ii) multi-resolution analysis, iii) snakes, iv) classification, and v) template matching.

An automatic road verification approach based on digital aerial images as well as GIS data is developed in (Wiedemann & Mayer, 1996) as a part of the update procedure for GIS data. The candidates for roadsides, which are obtained by searching the surroundings of GIS road-axes in the image based on profiles, are tested, and a measure of confidence is also calculated. However, user interaction is still required, as the results of the method are far from perfect. Roads that do not exist in the GIS data will not be detected.

In (Doucette et al., 2001), a fully automated road extraction strategy based on Kohonen's self-organizing map (SOM) is proposed to detect road information in high-resolution multi-spectral aerial imagery. The core algorithms implemented include i) an anti-parallel edge centerline extractor, ii) fuzzy organization of elongated regions, and iii) a self-organizing road finder. A covariance-based principal component analysis (PCA) is performed to determine the intrinsic dimensions of the image bands, and the image is classified using a maximum likelihood classifier with manually selected training samples. The extraction results over several different areas and sensors show that the highest extraction quality and correctness rates are obtained from anti-parallel edge analysis of the spectral band and class layers, respectively.

Rellier et al. (2002) propose a model to locally register cartographic road networks on a SPOT satellite image based on Markov random fields (MRF), so as to correct errors and improve map accuracy. The method first translates the road network into a graph whose nodes are characteristic points of the roads. Local registration is then performed by defining a model in a Bayesian framework. One interesting point of the model is that the registration is done locally, which is very useful when the map exhibits local errors. The biggest problem with the model is still the computational time, which remains too long due to the frequent computation of paths between nodes.

To extract roads from aerial images, Amo et al. (2006) employ the region competition algorithm, a mixed approach which combines region growing techniques with active contour models. Region growing makes the first step faster and region competition delivers more accurate results. However, this method is appropriate for roads in agricultural fields only, where roads are quite homogeneous and their homogeneity is sufficiently different from that of their surroundings.

Mayer et al. (1998) utilize the ribbon snake for the extraction of salient roads from aerial images based on the extracted lines at a coarse scale and the variation of road width at a fine scale. Non-salient roads are extracted by connecting two adjacent ends of salient roads with a road hypothesis, which is then verified based on homogeneity and the constancy of width. Finally, a closed snake is initialized inside the central area of the junction and expanded until delineating the junction borders. Mayer's method can overcome some problems such as extraction of shadowed and occluded roads, but it cannot deal with the complex road scenario in urban areas.

Laptev et al. (2000) use ribbon snakes to remove irrelevant structures extracted by a preliminary line detection algorithm at a coarse resolution. The method initializes a ribbon snake for each detected line and sets the width property to zero. The snake positions are optimized at a coarse scale to get a rough approximation of the road position. A second optimization process is used at a finer scale, where the road position precision is increased and the width property is expanded up to the structure boundary. Finally, road width thresholding is applied in order to discard any irrelevant structures.

A prior work for road detection based on image segmentation is conducted by Wang and Newkirk (1988), where a system is developed for automated highway network extraction from Landsat Thematic Mapper (TM) imagery supported by knowledge analysis and expert system. Three steps are involved in the system: i) binary image production, ii) tracing and feature extraction, and iii) highway identification. K-means clustering is employed to classify the image into two categories: road and non-road features. Analysis and processing are then performed on the linear patterns which are generated by labeling the binary image using a tracing algorithm. The proposed method is fairly simple and fully automatic, but the experiments are limited to the extraction of highways in rural areas.

Amini et al. (2002) utilize a segmentation method called the split and merge algorithm to automatically extract roadsides from large-scale image maps. The proposed method consists of two stages: i) straight line extraction, and ii) road skeleton extraction. The authors first generate a simpler image by grey scale morphological algorithms. Then the split and merge algorithm is applied on the simplified image, which is converted to a binary image. After that, the binary image map objects are labeled using connected component analysis (CCA), and the skeletons of roads are extracted in the classified image by morphological operations. The roadsides are finally extracted by combining the skeleton of roads and the generated straight line segments.

Steger et al. (1995) propose a multi-resolution road extraction approach, where a different extraction method is utilized for each scale level. One method is applied on a fine scale with 25 cm GSD, while the other is applied at a lower resolution, which is reduced by a factor of eight. The larger scale method extracts roads based on a structural model matching technique, while the smaller scale method detects lines based on the image intensity level. Finally, the outputs are combined by selecting roads that are extracted at both levels.

An approach based on particle filtering is proposed in (Ye et al., 2006) to automatically extract roads from high resolution imagery. The road edges are extracted by the Canny detector, then the edge point distribution and the similarity of grey value are integrated into the particle filter to deal with complex scenes. To handle road appearance changes, the tracking algorithm is allowed to update the road model during temporally stable image observations.

Baumgartner et al. (1999) extract roads from multi-resolution images based on the work of Heipke (1995). In this paper, they emphasize the concept of "road model" comprising explicit knowledge about geometry, radiometry, topology, and context. They firstly segment the aerial image into global contexts (forest, rural and urban) to guide the extraction process in the various regions. In the coarse image, the line features are extracted using Steger's algorithm (1998). In the fine image, parallel edges are extracted and grouped into rectangles, which are then connected into the road segments. Finally, roads are generated through grouping road segments and closing gaps between them.

Dal-Poz et al. (2005) present an automatic method for road seed extraction from medium and high resolution images of rural scenes. The roadside candidates are first detected by the Canny edge detector; the road objects are then built based on a set of rules constructed from prior road knowledge. The rules used to identify and build road objects consist of anti-parallelism, parallelism and proximity, homogeneity, contrast, superposition, and fragmentation. Due to incompatibility with any road objects, road crossings cannot be extracted.

### **2.2 Road extraction in urban areas**

Roads in urban areas have some unique characteristics absent in rural areas. There are often many shadows and occluded regions on road surfaces in urban areas due to the obstruction of tall buildings, vehicles, and trees. Furthermore, the contrast between roads and surrounding objects deteriorates significantly, since roads, side-walks, building roofs, and parking lots are usually constructed using similar materials, such as concrete and asphalt. Therefore, road extraction in urban areas cannot simply copy or enhance the methods and procedures which have been effective in rural road extraction, such as the algorithms discussed above. Instead, it is necessary to develop an automatic system that can extract road information accurately as well as deal with the effects of background objects like cars, trees, or buildings. The key techniques used to reconstruct the urban road model include road tracking, segmentation and classification, mathematical morphology, and model based road extraction, which will be described in detail in the following paragraphs.

Shukla et al. (2002) apply a path-following method to extract roads from high-resolution satellite imagery by initializing two points to indicate the road direction. Scale space and edge-detection techniques are used as pre-processing for segmentation and estimation of road width. A cost minimization technique is used to determine the road direction and generate the next seeds. This method performs better than the work of (Kim et al., 2002) because it can generate seeds in different directions at intersections. Its limitation is that the algorithm may not work on roads on which shadows are cast.

Zhao et al. (2002) present a semi-automatic method that matches a rectangular road template with both a road mask and road seeds to extract roads from IKONOS imagery. The road mask is the set of road pixels generated from maximum likelihood classification, and the road seeds can be generated by tracing the long edges of the road mask. The problem is that not all of the extracted road mask pixels are road areas, and not all of the extracted long edges are road edges; this results in misclassification.

Kim et al. (2004) initialize one seed point on the centerline of the road to determine the position of the reference template. The orientation of the road centerline, which is calculated with Burns' algorithm, guides the optimal target window. A least squares template matching approach, which puts emphasis on the central part of the road, is utilized to determine the new location of the next road template. The limitations of this algorithm are i) that it cannot work with shadows, which may terminate the tracking process, ii) that the operator must select the initial seeds on road centerlines, and iii) that one seed can be used to extract only one direction, leading to too many seeds when the scene is large and complex.
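
Although the exact least squares template matching used by Kim et al. is not reproduced here, the following minimal sketch illustrates the underlying idea of template-matching based road tracking with plain normalized cross-correlation; the function name, image variables and the search strategy sketched in the final comment are illustrative assumptions only.

```python
# Illustrative sketch only: normalized cross-correlation template matching,
# not the least squares template matching of Kim et al. (2004).
import numpy as np
from skimage.feature import match_template

def next_template_position(search_window, template):
    """Return (row, col, score) of the best match of `template` in `search_window`."""
    # pad_input=True makes the response the same size as the search window,
    # with the peak located at the centre of the best-matching patch.
    response = match_template(search_window, template, pad_input=True)
    row, col = np.unravel_index(np.argmax(response), response.shape)
    return row, col, response[row, col]

# A road tracker would cut `template` around the last accepted centreline point
# and call this function on a search window placed further along the road direction.
```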

Hu et al. (2004) present a semi-automatic road extraction method based on a piecewise parabolic model with zero-order continuity, which is constructed by seed points placed by a human operator. Road extraction becomes a problem of estimating the unknown parameters for each piece of the parabola, which could be solved by least square template matching based on the deformable template and the constraint of the geometric model. In densely populated areas, where roads have sharp turns and orthogonal intersections, many seed points need to be located, which results in degrading the efficiency.

Shi and Zhu (2002) propose an approach to extract road networks in urban areas from high-resolution satellite images. The basic procedures include binary image production by interactive threshold selection, and a line segment match for road network processing. Binary image production is not automatic and the threshold parameter may change with the variation of the input image, so the method lacks automation and robustness, and further improvement is required. Grey-scale mathematical morphology is tested as one of the potential solutions in the proposed approach.

Haverkamp (2002) extracts road centerlines in urban areas from road segments and intersections based on size, eccentricity, length of the object and spatial relationships between neighboring intersections. A vegetation mask is derived from multi-spectral IKONOS imagery, and these objects are generated by grouping pixels with similar road directional information, based on texture analysis in a panchromatic IKONOS imagery. This method requires the predetermination of road width, which is tuned to detect roads with a specific level of contrast and a low along-road variance.

Two novel methods are developed in (Wang, 2004) to extract roads from high-resolution satellite images. One is a semi-automated road extraction method based on profile matching optimized by an auto-tuning Kalman filter, and the other is based on edge-aided multi-spectral classification. Experimental results from several aerial images show that both methods could accurately extract road networks from IKONOS and QuickBird satellite images, and could significantly eliminate the misclassification caused by small driveways, house roofs connected with the road networks, and extensive paved grounds.

Based on the fact that structural information obtained using mathematical morphological operators can provide complementary information to improve discrimination of different urban features that have a spectral overlap, Jin and Davis (2004) present applications of mathematical morphology for urban feature extraction from high-resolution satellite imagery. To efficiently extract the road networks, directional morphological filtering is exploited to mask out those structures shorter than the distance of a typical city block. A directional top-hat operation is employed to mask out bright structures shorter than a city block; similarly, dark structures shorter than a city block can be masked out by thresholding the directional bottom-hat images.
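
As a rough illustration of the idea behind such directional filtering (and not Jin and Davis's actual implementation), the sketch below applies a white and a black top-hat with a linear structuring element; only axis-aligned orientations are shown, and the structuring-element length is an assumed value standing in for the city-block distance.

```python
# Hedged sketch: highlight bright/dark structures shorter than a linear
# structuring element, i.e. structures in which the line does not fit.
import numpy as np
from skimage.morphology import white_tophat, black_tophat

def directional_tophat(image, length=101, horizontal=True):
    # Linear structuring element of `length` pixels (assumed value).
    footprint = np.ones((1, length) if horizontal else (length, 1), dtype=bool)
    bright_short = white_tophat(image, footprint)   # bright structures the line does not fit in
    dark_short = black_tophat(image, footprint)     # dark structures the line does not fit in
    return bright_short, dark_short
```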

Zhu et al. (2005) extract road network from 1-meter spatial resolution IKONOS satellite images based on the mathematical morphology and a line segment match method. The authors firstly generate the binary road image by adopting morphological leveling. Secondly, the coarse road network is detected using the proposed "Line Segment Match Method", which determines straight parallel line segments corresponding to roads. The holes are finally filled by using mathematical morphological operation. The proposed algorithm is based on the assumption that roads are a darker tone compared with the surrounding features, which may induce some problems in different situations.

Valero et al. (2010) propose a method for extracting roads in very high resolution (VHR) remotely sensed images, based on the assumption that roads are linear connected paths. Two advanced directional morphological operators, path opening and path closing, are utilized to extract structural pixel information; these remain flexible enough to fit rectilinear and slightly curved roads segments, due to their independence from the choice of a structural element shape. Morphological profiles are used to analyze object size and shape features so as to determine candidate roads in each level, since the morphological profiles of pixels on the roads are similar. Finally, a classical post-processing is employed to link the disconnected road segments using higher level representations (Tupin et al., 1998).

A Gibbs point process framework, which is able to simulate and detect thin networks from remotely sensed images, is constructed in (Stoica et al., 2004) to form a line-network for the road segments connection. The estimate for the network is found by minimizing an energy function. In order to avoid local minima, a simulated annealing algorithm based on a Monte Carlo Dynamics is utilized for finite point processes.

Based on Gaussian scale-space theory, a Gaussian comparison function is developed for extracting linear road features from urban aerial remote sensing images (Peng & Jin, 2007). The curvilinear structures of the roads are verified, grouped and extracted based on locally oriented energy in continuous scale-space, combining geometric and radiometric features. The system can significantly reduce the computational complexity of line tracking, and can effectively suppress the zero drift caused by Gaussian smoothing compared with other edge-based line detection algorithms. The proposed curvilinear feature detection method is shown to be superior to the Canny operator and the Kovesi detector, in that it can detect not only urban highways but also non-salient rural roads.
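
Peng and Jin's oriented-energy formulation is not reproduced here; as a loosely related illustration of multi-scale curvilinear structure detection, the following sketch uses a standard ridge filter, with the file name, scales and thresholding step being assumptions.

```python
# Illustration only: a generic multi-scale ridge filter for road-like curvilinear
# structures; this is not the locally oriented energy method of Peng & Jin (2007).
from skimage import io, filters

image = io.imread("aerial_gray.tif", as_gray=True)       # hypothetical input image
ridge_response = filters.sato(image, sigmas=(2, 4, 8),   # assumed road widths in pixels
                              black_ridges=False)        # roads assumed brighter than background
road_candidates = ridge_response > filters.threshold_otsu(ridge_response)
```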

Peng et al. (2008) update digital road maps in dense urban areas by extracting the main road network from VHR QuickBird panchromatic images. A multi-scale statistical data model, which integrates the segmentation results from both coarse and fine resolution, is employed to overcome the difficulties caused by the complexity of information contained in VHR images. Furthermore, an outdated GIS digital map is utilized to provide specific prior knowledge of the road network. The experiments indicate that the combination of generic and specific prior knowledge is essential when working at full resolution.

### **2.3 Lane marking extraction techniques**

The popular method for road pavement marking reconstruction is through a vehicle-based mobile mapping system (MMS), where the road lane markings can be detected and reconstructed in the field using laser scanners or close range photogrammetric imagery. Due to the difference in devices used and types of features fused, approaches developed for lane feature extraction have been quite distinct from one another. For instance, lane markings are extracted based on structures (Lai & Yung, 2000), image classification (Jeong & Nedevschi, 2005), and frequency analysis (Kreucher & Lakshmanan, 1999). An exhaustive review of road marking reconstruction approaches using MMS can be seen in (Soheilian, 2008). Although accurate lane features can be obtained through MMS, it is costly and time-consuming to produce lane data over large areas.

Lane information reconstruction through feature extraction from remote sensed images has been a long-standing research topic within the photogrammetry and remote sensing community. However, due to the limitation of the ground resolution of images, the majority of existing approaches concentrate on the detection of road centerline rather than sub-road details. Research efforts have been focused in a number of institutions, resulting in various approaches to the problem, including multi-scale approaches (Baumgartner et al., 1999), knowledge-based extraction (Trinder & Wang, 1998) and context cues (Hinz & Baumgartner, 2000).

Only a few approaches involve the detection of lane markings in road extraction. Steger et al. (1997) extract collinear road markings as bright objects, with the algorithm given in (Steger, 1996), from large scale photographs when the roadsides exhibit no visible edges. The graph search strategy is adapted to extract road markings automatically, and a best-first search from a few salient road markings is also utilized. The strategy adds a road marking to the best connection evaluation only; adding a global evaluation step after each marking, and trying to add a new road marking, would help when the directions of the road markings are not extracted perfectly.

In a more recent work, Kim et al. (2006) build a system to extract pavement information in complex urban areas relying on a set of simple image processing algorithms. The pavement information includes lane and symbol markings that guide direction, and the geometric properties of the pavement markings and their spatial relationships are analyzed. Moreover, road construction manuals and a series of cutting-edge algorithms, including template matching, are involved in the analysis. The evaluation of accuracy by comparing the results with manually plotted ground truth data validates that road information can be extracted efficiently to an extent in a complex urban region.

Tournaire et al. (2009) propose a specific approach for dashed line and zebra crossing reconstruction. This approach relies on external knowledge introduced in the detection and reconstruction process, and is based on primitives extracted from the images. The core of the approach lies in defining geometric, radiometric and relational models for dashed line objects. The model also deals with the interactions between the different objects making up a line, which means that the algorithm introduces external knowledge taken from specifications. To sample the energy function, the authors use Green's algorithm, combined with simulated annealing, to find its minimum.

### **2.4 Result evaluations**

Internal diagnosis and external evaluation for the extracted road models are two important aspects of assessment of the relevant automatic road extraction system (Wiedemann et al., 1998). However, relatively little work has been carried out in this area.

In (Heipke et al., 1997) and (Wiedemann et al., 1998), an external evaluation approach for automatic road extraction algorithms is developed by comparing the extracted roads to manually plotted linear road axes used as reference data. The quality measures proposed for the automatically extracted road data comprise completeness, correctness, quality, redundancy, planimetric RMS differences, and gap statistics, and are all aimed at exhaustive evaluation as well as assessing geometrical accuracy. The proposed evaluation method is tested by comparing evaluations of three different automatic road extraction approaches, demonstrating its applicability.
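
For concreteness, a minimal sketch of the length-based completeness, correctness and quality measures is given below; it assumes that the matched lengths have already been obtained by a buffer-based comparison between extraction and reference (the buffer width and the matching procedure itself are application specific and not shown).

```python
# Hedged sketch of the usual length-based road extraction quality measures.
def road_extraction_scores(matched_extraction_len, extraction_len,
                           matched_reference_len, reference_len):
    completeness = matched_reference_len / reference_len    # share of the reference that was found
    correctness = matched_extraction_len / extraction_len   # share of the extraction that is correct
    # Quality combines both: correct extraction against extraction plus missed reference.
    unmatched_reference = reference_len - matched_reference_len
    quality = matched_extraction_len / (extraction_len + unmatched_reference)
    return completeness, correctness, quality

# Hypothetical lengths in metres:
print(road_extraction_scores(900.0, 1000.0, 850.0, 1100.0))
```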

An in-depth usability evaluation of a semi-automated road extraction system is presented in (Wilson et al., 2004), highlighting both strengths and areas for improvement. The evaluation is principally conducted on the timing and statistical analysis as well as on factors that affect the extraction speed. Peteri et al. (2004) present a method to guide the determination of a reference based on statistical measures from several image interpretations. A tolerance zone representative of the variations in interpretation is defined that allows both the determination of the uncertainty of the reference object and the possibility of defining criteria for a quantitative evaluation. A few criteria defined by Musso and Vuchic (1988), including the size, form, and topology indices of the road network, are employed to carry out evaluation of the planimetric accuracy and the spatial characterization of a road network.

To qualitatively evaluate the performance of the semi-automatic road extraction algorithms, four criteria (correctness, completeness, efficiency, and accuracy) are utilized in (Zhou et al., 2006) and further in (Zhou et al., 2007). Completeness and correctness are the priority criteria in cartography, while the efficiency measurement principally takes the savings of human input into consideration. Tracking accuracy is assessed as the root mean square error between the road tracker and the human input.

To sum up, the typical result evaluation approach for road extraction has been carried out by comparing the generated roads with manually plotted reference data. Correctness and completeness are the two most frequently used criteria, while other measurements are dependent on specific road extraction algorithms and objectives.

## **3. Road extraction in rural regions**

In this section, we develop a new approach for automatic road network extraction, where both spatial and spectral information from aerial photographs or pan-sharpened QuickBird images is systematically considered and fully used. The proposed approach is performed in the following three main steps: (i) the image is classified based on homogeneity histogram segmentation to roughly identify the road network profiles; (ii) morphological opening and closing is employed to fill tiny holes and filter out small road branches; and (iii) the extracted road surface is further thinned by a thinning approach, pruned by a proposed pruning method, and finally simplified with the Douglas-Peucker algorithm.

Fig. 1. Flowchart of the proposed method: the input colour image (R, G, B bands) is segmented (homogeneity feature calculation and histogram generation, 2D Gaussian smoothing, segmentation in the homogeneity domain), and the road network is then extracted (morphological opening and closing, thinning, vectorization of the road skeleton, pruning of short segments, and simplification with the Douglas-Peucker algorithm).

As a popular technique for image segmentation, histogram based thresholding only takes the occurrence of the gray levels into account, without any local information. In contrast, segmentation based on the property of image homogeneity involves both the occurrence of the gray levels and the neighbouring homogeneity values among pixels; it is therefore employed in this study to obtain a more homogeneous segmentation result. A Gaussian smoothing algorithm is then applied to the obtained homogeneity histogram, which, in turn, eases the threshold finding procedure for segmentation. After achieving image segmentation, morphological opening and closing is utilized to remove small holes and noise from the road surface as well as narrow pathways connected to the main road. A thinning method is then applied to extract the skeleton of the road network. Finally, the generated road network is vectorized, and then pruned and simplified respectively by a proposed pruning method and the Douglas-Peucker algorithm. Fig. 1 illustrates the flowchart of the developed approach. Basically, the procedure includes two individual processes, namely image segmentation and road network extraction, which will be elaborated in the following sections.
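
A minimal sketch of the post-segmentation steps is given below, assuming a binary road mask produced by the homogeneity-based segmentation; the structuring-element sizes and tolerances are assumed values, and the chapter's own pruning method and skeleton-to-polyline vectorization are not reproduced.

```python
# Minimal sketch, assuming `road_mask` is the boolean result of the segmentation.
import numpy as np
from skimage.morphology import (binary_opening, binary_closing, disk,
                                remove_small_objects, skeletonize)
from skimage.measure import approximate_polygon

def clean_and_thin(road_mask):
    # Opening removes narrow spurs and noise; closing fills tiny holes in the surface.
    cleaned = binary_closing(binary_opening(road_mask, disk(3)), disk(3))
    cleaned = remove_small_objects(cleaned, min_size=200)   # drop small isolated blobs
    return skeletonize(cleaned)                              # one-pixel-wide road skeleton

# Douglas-Peucker simplification of one vectorized centreline (ordered (row, col) points).
centreline = np.array([[0, 0], [1, 10], [2, 21], [1, 30], [0, 40]], dtype=float)
simplified = approximate_polygon(centreline, tolerance=2.0)
```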

### **3.1 Image segmentation**


Road network is detected using homogeneity histogram segmentation, which comprises the following two basic operations: contrast stretching, homogeneity histogram construction and smoothing.

### **Contrast stretching**

Colour images can be represented in the linear RGB colour space or a non-linear transformation of RGB, e.g. HSI (hue, saturation and intensity). It is, in general, easier to discriminate highlights and shadows in a colour image by using the HSI colour space than the RGB colour space, but the hue is rather unstable at low saturation and makes the segmentation unreliable. Although the three basic RGB components are highly correlated, the RGB colour space is applied in this paper due to its efficiency in distinguishing small variations in colour.

All of the RGB channels, especially the blue channel, in an original aerial photo (Fig. 2 (a)) have relative contrast deficiency which will impose challenges to the segmentation process. Therefore, contrast stretching is individually applied to each channel by assigning 5% and 95% in the histogram as the lower and upper bounds over which the image is to be normalized. It is clear that the contrast stretched images (shown in Fig. 2 (b), (c) and (d)) have significantly higher contrast than the original RGB channels.
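As a rough illustration of this step, the following Python sketch (our own, not part of the original processing chain) stretches each channel of an 8-bit RGB image between its 5th and 95th percentiles; the percentile bounds match those stated above, while the array layout and function names are assumptions.

```python
import numpy as np

def stretch_channel(channel, low_pct=5, high_pct=95):
    """Linearly stretch one 8-bit channel between the given percentiles."""
    lo, hi = np.percentile(channel, [low_pct, high_pct])
    stretched = (channel.astype(np.float64) - lo) / max(hi - lo, 1e-9)
    return (np.clip(stretched, 0.0, 1.0) * 255).astype(np.uint8)

def stretch_rgb(image):
    """Apply the stretch independently to the R, G and B channels."""
    return np.dstack([stretch_channel(image[..., b]) for b in range(3)])
```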

### **Homogeneity histogram construction**

A general concept of the homogeneity histogram can be found in Cheng (2000). The homogeneity histogram takes into account not only the gray level but also the spatial information of pixels with respect to each other. Therefore, homogeneity histogram thresholding tends to be more effective in finding homogeneous regions than ordinary histogram thresholding approaches.

The homogeneity vector of each pixel with its eight neighbours is calculated by the Z-function, allowing the homogeneity histogram to be defined by normalization of the homogeneity vector. The normalized homogeneity histograms for the Red, Green and Blue channels are shown in Fig. 3.

It is still difficult to detect the modes of the homogeneity histogram in the above normalized homogeneity histograms when they are corrupted by noise. Therefore, once the homogeneity histograms for the R, G and B channels are established, a Gaussian filter is first applied to smooth them, instead of finding the thresholds directly by the complex peak finding algorithm proposed by Cheng (2000). In the Gaussian filtering process, the spread parameter *σ*, which determines the amount of smoothing, is determined with the algorithm proposed by Lin et al. (1996). Each peak in the homogeneity histogram represents a unique region. Accordingly, the valleys in the homogeneity histogram can be used as the thresholds for segmentation, as they can be easily found in the smoothed homogeneity histogram (see Fig. 4).

Fig. 2. The original aerial photo and its Red, Green, and Blue channels.

Fig. 3. Normalized homogeneity histograms for the Red, Green and Blue channel images.

Fig. 4. Smoothed homogeneity histograms for the Red, Green and Blue channel images.
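A minimal sketch of the histogram construction and threshold finding described above is given below. The Z-function of Cheng (2000) is not reproduced in this chapter, so the snippet substitutes a simple 3×3 local standard deviation as an illustrative homogeneity measure, and it finds valleys as local minima of the Gaussian-smoothed histogram; the function names and the smoothing width are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter1d

def homogeneity_histogram(channel, bins=256):
    """Build a normalized homogeneity-weighted histogram for one 8-bit channel.

    The local 3x3 standard deviation is used here as a stand-in for the
    Z-function of Cheng (2000), which is not reproduced in the chapter.
    """
    x = channel.astype(np.float64)
    mean = uniform_filter(x, size=3)
    var = uniform_filter(x * x, size=3) - mean ** 2
    homogeneity = 1.0 - np.sqrt(np.clip(var, 0, None)) / 255.0  # in [0, 1]
    hist = np.bincount(channel.ravel().astype(np.int64),
                       weights=homogeneity.ravel(), minlength=bins)
    return hist / hist.sum()

def valley_thresholds(hist, sigma=3.0):
    """Smooth the histogram with a Gaussian filter and return its valleys,
    which can then be used as segmentation thresholds."""
    smooth = gaussian_filter1d(hist, sigma)
    interior = np.arange(1, len(smooth) - 1)
    is_valley = (smooth[1:-1] < smooth[:-2]) & (smooth[1:-1] < smooth[2:])
    return interior[is_valley]
```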

Each colour channel is segmented using the above obtained thresholds separately, and then all three segmented channel images are fused to yield the final result of segmentation (see e.g., Fig. 5). It is observed from Fig. 5 (d) that almost all the road networks are correctly extracted, but there are still many small family driveways connected to road networks and many house roofs are misclassified into the road network. These make it impossible to obtain an accurate road network without further processing.

#### **3.2 Road network extraction**

Up until now we have obtained the segmented result for road objects (see e.g. Fig. 6(a)), but the probability of misclassification is still relatively high and many small holes enclose the main road network. These holes and pathways must be removed to correctly extract the road skeleton. In this section, a novel road network extraction approach is developed to accurately extract road networks from a segmented road image. This extraction process includes two main steps: morphological operation, and thinning and vectorization.

Fig. 5. The segmented Red, Green and Blue channel images, and the final fused result.

Fig. 6. The noise removal of the segmented result: (a) segmented result of road objects; (b) connected component analysis result; (c) result of morphological operation.


#### **Morphological operation**

Mathematical morphology is a structure-based mathematical set theory that uses set operations such as union, intersection and complementation, so it is favoured for high-resolution image processing (Mohammadzadeh et al., 2006). Connected component analysis is firstly used to group pixels into different components based on pixel connectivity, and components whose surface area is smaller than a given threshold are then removed. The filtered image is shown in Fig. 6 (b); it can be clearly seen that all the misclassified objects unconnected to the main road network were removed. Morphological closing is then applied to remove small holes and noise from the road surface, while an opening operation is used to eliminate small pathways with a structuring element size that is smaller than the main road's width but larger than those of the pathways, resulting in the extracted road network as shown in Fig. 6 (c).
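The following sketch illustrates the noise removal described above with SciPy's morphology routines; the area threshold and structuring element sizes are placeholder values chosen only for illustration, since the chapter selects them relative to the road and driveway widths.

```python
import numpy as np
from scipy import ndimage

def clean_road_mask(mask, min_area=500, closing_size=5, opening_size=15):
    """Connected component filtering followed by closing and opening.

    min_area, closing_size and opening_size are illustrative placeholders;
    the opening structuring element should be narrower than the main road
    but wider than the driveways to be removed.
    """
    # Connected component analysis: drop components smaller than min_area.
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.arange(1, n + 1)[sizes >= min_area])
    # Closing fills small holes and noise inside the road surface.
    closed = ndimage.binary_closing(keep, structure=np.ones((closing_size, closing_size)))
    # Opening removes pathways narrower than the structuring element.
    return ndimage.binary_opening(closed, structure=np.ones((opening_size, opening_size)))
```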

#### **Thinning and vectorization**

After the morphological operation, we further employ the thinning algorithm proposed by Wang and Zhang (1989) to extract the road skeleton, where the road surface is replaced by its one-pixel-wide centreline. To remove short dangling branches of the centrelines caused by driveways, a novel pruning algorithm is performed as follows.


Fig. 7. Pixel P and its eight neighbours.

First of all, we introduce the definitions of four-neighbourhood and eight-neighbourhood neighbours for point P in Fig. 7. Here the four-neighbourhood refers to N[1], N[3], N[5] and N[7], while the eight-neighbourhood neighbours involve N[0], N[2], N[4] and N[6].

The pruning algorithm includes three steps:

**Step 1** Find all the intersection points

1. Scan the image (top to bottom, left to right); if the current pixel P has at least three foreground neighbours, namely {*N*[*xi*] | *i* = 1, 2, ··· , *k*; *k* ≥ 3, *xi* = 0, 1, ··· , 7}, go to 2.
2. Initialize the feature point counter *c* = 0, and then, for *i* = 1 to *k*, set *c* = *c* + 1 if either condition (a) or (b) is satisfied:
	- (a) *N*[*xi*] is a four-neighbourhood neighbour of P.
	- (b) *N*[*xi*] is an eight-neighbourhood neighbour of P and neither *N*[*xi* − 1] nor *N*[*xi* + 1] is a foreground pixel.

	P is regarded as an intersection point if *c* ≥ 3.

**Step 2** Line tracking

1. If there is no intersection point in the image, then go to 3.
2. Tracking lines from the intersection point.
	- (a) Start from the intersection point P found in Step 1 and initialize n (the number of P's feature points) arrays to store the lines started from P.
	- (b) Set the current tracking pixel to background after storing its position into the array, and go on using the condition in Step 1 to find the next pixel on the current tracking line until reaching an endpoint or another intersection point.
3. Tracking lines from the endpoint.
	- (a) Scan the image (top to bottom, left to right).
	- (b) Find an endpoint (an endpoint's number of feature points is 1, using the condition in Step 1), start line tracking from it and set the pixels on the line to background.
	- (c) Go on scanning until the end of the image.

**Step 3** Small line pruning

1. Delete a line from the line array if both of the following conditions are satisfied:
	- (a) The length of the line is shorter than the threshold T.
	- (b) Both endpoints of the line are not intersection points; then go to Step 1.
2. Output the final result.
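As an illustration of Step 1, the sketch below counts the feature points of a skeleton pixel P and decides whether it is an intersection point. The exact neighbour indexing of Fig. 7 is not reproduced here, so the clockwise ordering used below (with N[1], N[3], N[5] and N[7] taken as the edge-adjacent pixels, as stated in the text) is an assumption, and the skeleton array is assumed to carry a background border.

```python
import numpy as np

# Clockwise offsets for N[0]..N[7] around P; assumed ordering in which the
# odd indices are the edge-adjacent (four-neighbourhood) pixels.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
FOUR_NEIGHBOUR_IDS = {1, 3, 5, 7}

def is_intersection(skeleton, r, c):
    """Step 1 of the pruning algorithm: feature-point counting at pixel P.

    `skeleton` is a boolean array with a background border so that the
    neighbour lookups never fall outside the image.
    """
    if not skeleton[r, c]:
        return False
    fg = [i for i, (dr, dc) in enumerate(OFFSETS) if skeleton[r + dr, c + dc]]
    if len(fg) < 3:
        return False
    count = 0
    for i in fg:
        prev_on = skeleton[r + OFFSETS[i - 1][0], c + OFFSETS[i - 1][1]]
        next_on = skeleton[r + OFFSETS[(i + 1) % 8][0], c + OFFSETS[(i + 1) % 8][1]]
        # Condition (a): edge-adjacent neighbour; condition (b): diagonal
        # neighbour with both of its adjacent neighbours off.
        if i in FOUR_NEIGHBOUR_IDS or (not prev_on and not next_on):
            count += 1
    return count >= 3
```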


Finally, the Douglas-Peucker simplification algorithm, which not only decreases the number of data points but also keeps the simplified shape as similar to the original one as possible, is applied to the pruned line network. The whole procedure of vectorization and simplification is shown in Fig. 8. The vectorization process consists of two steps, intersection point searching and line tracking, followed by small line pruning and simplification. The final result is shown in Fig. 9. It can be seen that this approach works quite well: all the small road branches are removed.
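For completeness, a compact recursive sketch of the Douglas-Peucker simplification used in this last step is given below; the tolerance parameter and the function name are ours.

```python
import numpy as np

def douglas_peucker(points, tolerance):
    """Recursively simplify a polyline: keep the point farthest from the
    start-end chord if its perpendicular distance exceeds `tolerance`,
    then simplify both halves."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return pts.tolist()
    start, end = pts[0], pts[-1]
    dx, dy = end - start
    length = np.hypot(dx, dy)
    if length == 0.0:
        dists = np.hypot(pts[:, 0] - start[0], pts[:, 1] - start[1])
    else:
        # Perpendicular distance of every point to the line through start-end.
        dists = np.abs(dx * (pts[:, 1] - start[1]) - dy * (pts[:, 0] - start[0])) / length
    idx = int(np.argmax(dists))
    if dists[idx] > tolerance:
        left = douglas_peucker(pts[:idx + 1], tolerance)
        right = douglas_peucker(pts[idx:], tolerance)
        return left[:-1] + right
    return [pts[0].tolist(), pts[-1].tolist()]
```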

#### **3.3 Experimental results and evaluation**

In order to demonstrate the performance of the procedures outlined in this paper, two additional experiments have been carried out on QuickBird satellite images, and their extraction accuracies are also evaluated. The final road networks extracted using the proposed method are shown in Fig. 10. Almost all the main roads are correctly extracted. However, the developed method still experiences difficulties in road extraction from images where the contrast between the road surface and its surroundings is indistinct, or where shadows exist. This is another important research topic to be resolved.


Table 1. Evaluation of the test results.

| Variables | Completeness | Correctness | Quality |
|---|---|---|---|
| Figure 9 | 98.5% | 96.2% | 94.7% |
| Figure 10 (a) | 98.8% | 99.3% | 98.1% |
| Figure 10 (b) | 81.9% | 98.2% | 80.7% |
| Means | 93.1% | 97.9% | 91.2% |

Based on the method developed by Wiedemann (1996) for evaluating automatic road extraction systems, we use three indexes to assess the quality of the generated road network. The completeness is defined as the percentage of the correctly extracted data over the reference data, and the correctness represents the ratio of correctly extracted road data over the total extracted data. The quality


is a more general measure of the final result combining the completeness and correctness. The optimum values for the above three defined indexes are all equal to one. Comparing automatically achieved results from the proposed process with the manual ones, the following quantified indicators have been calculated and presented in Table 1. The results demonstrate


that the proposed method achieved a significantly high level of accuracy.

Fig. 8. Flowchart for implementation of the vectorization and pruning.

Fig. 9. Final centreline lying on the original road surface.

Fig. 10. Road extraction tests on QuickBird images: (a) Riyadh, Saudi Arabia; (b) Hurghada, Egypt.


#### **3.4 Summary**


In this section, we have presented a new approach for road extraction from large scale remote sensing images. The tests have demonstrated that considerable success can be achieved by adopting the overall flowchart presented in this paper, particularly when the contrast between road surface and background is distinct, and there is a significant proportion of road surface in the image. Importantly, a novel algorithm is developed to vectorize and prune the extracted road network. The experimental results for road extraction from aerial photo and QuickBird satellite images demonstrate that the proposed approach could extract most of the main roads despite the fact that some roads are missing or are slightly distorted.

### **4. Road detection in urban areas**

Accurate and detailed road models are of great importance in many applications, such as traffic monitoring and advanced driver assistance systems. However, the majority of road feature extraction approaches have only focused on the detection of road centerline rather than the lane details. Only a few approaches involved the detection of lane markings in the road extraction. For instance, Steger et al. (1997), Hinz and Baumgartner (2003), and Zhang (2004) extracted the road markings in their attempts to obtain clues as to the presence of road surface. Consequently, important requirements (Tournaire & Paparoditis, 2009) such as robustness, quality, completeness, are achieved less consistently compared to the lane level applications. In more recent works, Kim et al. (2006) and Tournaire et al. (2009) presented


systems for pavement information extraction from remote sensing images with high spatial resolution.

In this section, the support vector machine (SVM) and Gabor filters are introduced into a framework for precise road model reconstruction from aerial imagery. Experiments using a data set of aerial images acquired in Brisbane, Queensland, are used to evaluate the effectiveness of the proposed strategy.

#### **4.1 Methodology**

Supervised SVM image classification technique is employed to segment the road surface from other ground details, and the road pavement markings are detected on the generated road surface with Gabor filters.

An SVM is basically a linear learning machine based on the principle of optimal separation of classes (Vapnik, 1998). The goal is to find a linear separating hyperplane that separates the classes of interest, provided the data are linearly separable. The hyperplane is a plane in a multidimensional space and is also called a decision surface, an optimal separating hyperplane or a maximal margin hyperplane.

Consider a set of l labelled training patterns (*x*1, *y*1),(*x*2, *y*2), ··· ,(*xi*, *yi*), ··· ,(*xl*, *yl*), where *xi* denotes the *i*-th training sample and *yi* ∈ {1, −1} denotes the class label. If the data are not linearly separable in the input space, a non-linear transformation function Φ (·) is used to project *xi* from the input space to a higher dimensional feature space. An optimal separating hyperplane is constructed in the feature space by maximizing the margin between the closest points Φ (*xi*) of two classes. The inner-product between two projections is defined by a kernel function *K* (*x*, *y*) = Φ (*x*) · Φ (*y*). The commonly used kernels include polynomial, Gaussian RBF, and Sigmoid kernels. Further details about kernels can be found in (Vapnik, 1998).

The decision function of the SVM is defined as

$$f\left(\mathbf{x}\right) = w \cdot \Phi\left(\mathbf{x}\right) + b = \sum\_{i=1}^{l} \alpha\_i y\_i K\left(\mathbf{x}, \mathbf{x}\_i\right) + b$$

subject to $\sum_{i=1}^{l} \alpha_i y_i = 0$ and $0 \leq \alpha_i \leq C$, where *C* denotes a positive value determining the constraint violation during the training process.
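A small numerical sketch of this decision function with a Gaussian RBF kernel is shown below; the support vectors, coefficients and bias are made-up values standing in for those produced by training (e.g. by LIBSVM), and the function names are ours.

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian RBF kernel K(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def svm_decision(x, support_vectors, alphas, labels, b, gamma=0.5):
    """Evaluate f(x) = sum_i alpha_i * y_i * K(x, x_i) + b for a trained SVM."""
    return sum(a * y * rbf_kernel(x, sv, gamma)
               for a, y, sv in zip(alphas, labels, support_vectors)) + b

# Illustrative use with made-up support vectors for a two-class problem.
sv = np.array([[0.2, 0.1], [0.8, 0.9]])
f = svm_decision(np.array([0.3, 0.2]), sv, alphas=[0.7, 0.7], labels=[+1, -1], b=0.0)
print("class:", +1 if f >= 0 else -1)
```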

Due to its non-parametric nature, sparsity, and intrinsic feature reduction, the SVM is superior to conventional classifiers, such as the maximum likelihood classifier, for image classification in very high resolution (VHR) remotely sensed data, since the distribution function estimated by such classifiers usually assumes the normal distribution, which may not represent the actual distribution of the data (Huang & Zhang, 2008).

#### **4.1.1 Gabor filters**

2D Gabor filters, extended from the 1D Gabor filter by Daugman (1985), have been successfully applied to a variety of image processing and pattern recognition problems, such as texture analysis and image segmentation. 2D Gabor filters can be used to extract the road lane markings thanks to the following properties: (i) they are tuneable to specific orientations, (ii) they have an adjustable orientation bandwidth, and (iii) they are robust to noise. Furthermore, they have optimal joint localization in both the spatial and frequency domains. Therefore, Gabor filters can be considered as orientation and scale tunable edge and line (bar) detectors (Manjunath & Ma, 1998), which makes them a superior tool for detecting geometrically restricted linear features, such as road pavement markings.

#### **Gabor functions**


The general functionality of the 2D Gabor filter family can be represented as a Gaussian function modulated by a complex sinusoidal signal. Specifically, the 2D Gabor filter can be defined in both the spatial domain *g* (*x*, *y*) and the frequency domain *G* (*u*, *v*). The 2D Gabor function in spatial domain can be formulated as (Cai & Liu, 2000):

$$g\left(x, y\right) = \exp\left\{-\pi\left(\frac{x\_r^2}{\sigma\_x^2} + \frac{y\_r^2}{\sigma\_y^2}\right)\right\}\exp\left\{j2\pi\left(u\_0 x + v\_0 y\right)\right\}$$

Its 2D Fourier transform is expressed as

$$G\left(u, v\right) = \exp\left\{-\pi\left[\left(u - u\_0\right)\_r^2\sigma\_x^2 + \left(v - v\_0\right)\_r^2\sigma\_y^2\right]\right\},$$

where *j* = √−1; (*x*0, *y*0) indicates the peak of the Gaussian envelope; *σx*, *σy* are the two axis scaling parameters of the Gaussian envelope; (*u*0, *v*0) represents the spatial frequencies of the sinusoid carrier in Cartesian coordinates, which can also be expressed in polar coordinates as (*f*, *φ*), where $f = \sqrt{u_0^2 + v_0^2}$, *φ* = arctan(*v*0/*u*0), and the subscript r stands for a rotation operation as follows:

$$x\_r = x\cos\theta + y\sin\theta$$

$$y\_r = -x\sin\theta + y\cos\theta$$

where *θ* is the rotation angle of the Gaussian envelope.
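The following sketch samples the complex Gabor function above on a discrete grid. The kernel size and the mapping from the envelope orientation *θ* to the carrier frequencies (*u*0, *v*0) (taken along *θ*, i.e. perpendicular to the road direction introduced in the next subsection) are assumptions made only for illustration.

```python
import numpy as np

def gabor_kernel(size, sigma_x, sigma_y, f, theta):
    """Sample the complex 2D Gabor function g(x, y) on a size x size grid.

    The carrier (u0, v0) is taken along the envelope orientation theta,
    which is an assumption consistent with the perpendicularity rule of
    the parameter-determination subsection.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotated coordinates x_r, y_r of the Gaussian envelope.
    x_r = x * np.cos(theta) + y * np.sin(theta)
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-np.pi * (x_r ** 2 / sigma_x ** 2 + y_r ** 2 / sigma_y ** 2))
    u0, v0 = f * np.cos(theta), f * np.sin(theta)
    carrier = np.exp(2j * np.pi * (u0 * x + v0 * y))
    return envelope * carrier
```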

#### **Determination of Gabor filter parameters**

Road markings, which are presented as linear features with certain widths and orientations within local areas, can be considered as rectangular pulse lines. The correct determination of the Gabor filter parameters is the central issue in the lane pavement marking extraction process. In order to effectively and accurately extract road lane markings with different sizes and thicknesses from aerial images using Gabor filters, we proposed an efficient method to determine the Gabor filter parameters.

#### Determination of *θ*

*θ* stands for the orientation of the span-limited sinusoidal grating. The orientation *θ* (*θ* ∈ [0, *π*)) of Gaussian envelope is given as perpendicular to the direction *ϕ* (*ϕ* ∈ [0, *π*)) of the road surface by:


$$\theta = (\varphi + \pi/2)\,\%\,\pi$$

where % is the modulo operator.

#### Determination of *f*

*f* is the frequency of the sinusoid, which determines the 2D spectral centroid positions of the Gabor filter. This parameter is derived with respect to the width of road lane markings. In order to produce a single peak for the given lane line as well as discard other ground objects, such as white vehicles, the frequency *f* of the Gabor filter must satisfy the following conditions:

$$1/W' < f \leq 1/W\_m$$

where *Wm* is the width of the road marking in pixels and *W*′ is the width of other white features. The details of the proof can be found in (Liu et al., 2003).

In our experiments, we set *f* = 1/*Wm*, which will produce only a single peak in the output of the filter on road markings regardless of the values of *σx* and *σy*.

#### Determination of *σx* and *σy*

The parameters *σx* and *σy* determine the spread of the Gabor filter in the *x* and *y* directions, respectively. According to (Liu et al., 2003), *σx* and *σy* have the following parameter constraint:

$$
\sigma\_y = k \sigma\_x
$$

where *k* is a constant. As the road lane markings have strict orientation and enough distance between adjacent lanes, we set *k*=1 to simplify the calculation.

The relationship between the orientation bandwidth Δ*θ* and the frequency *f* within the frequency domain is illustrated in figure 1, which can be given by:

$$
\triangle \theta = 2 \arctan \left( \frac{l}{f} \right)
$$

where Δ*θ* is the orientation bandwidth. It gives:

$$l = f \tan\left(\triangle\theta/2\right)$$

Applying the 3 dB frequency bandwidth in the *v* direction when *φ* = 90° to the frequency-domain expression *G* (*u*, *v*), we have

$$G\left(u\_0, h\right)\big|\_{\phi=90^{\circ}} = \exp\left[-\pi\left(h\sigma\_x\right)^2\right] = \sqrt{2}/2$$

It gives

$$
\sigma\_x = \frac{\sqrt{\frac{\ln 2}{2\pi}}}{d\tan\left(\triangle\theta/2\right)}
$$

According to the orientation bandwidths of cat cortical simple cells (Liu et al., 2003), the mean angle covers a range from 26° to 39°. After examining the line extraction results over the above range, we find it appropriate to set Δ*θ* = 30°. Then *σx* and *σy* can be further obtained by:

$$\sigma\_{\chi} = \sigma\_{y} = 0.58/f$$
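Putting the three rules together, a minimal helper that derives (*θ*, *f*, *σx* = *σy*) from the local road direction and the marking width might look as follows; the function name and the example values are illustrative only.

```python
import numpy as np

def gabor_parameters(road_direction_phi, marking_width_px):
    """Derive the Gabor parameters following the rules of this section:
    theta perpendicular to the road, f = 1/Wm, and sigma_x = sigma_y = 0.58/f
    for a 30-degree orientation bandwidth."""
    theta = (road_direction_phi + np.pi / 2.0) % np.pi   # orientation
    f = 1.0 / marking_width_px                            # frequency
    sigma = 0.58 / f                                      # = 0.58 * Wm
    return theta, f, sigma

# Example roughly matching Section 4.2: markings about 6 px wide, road
# direction about 130 degrees.
theta, f, sigma = gabor_parameters(np.deg2rad(130), 6)
print(np.rad2deg(theta), round(f, 2), round(sigma, 2))  # ~40 deg, ~0.17, ~3.5
```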

#### **4.2 Experiments and discussion**


The objective of the experiment is to determine the performance of the proposed road feature extraction approach quantitatively over the study area. A dataset of aerial images located in South Brisbane, Queensland, has been selected as the study area. The selected aerial images consist of three bands, Red, Blue and Green, with a Ground Sampling Distance (GSD) of 7 cm. Fig. 11 shows one of the testing images.

Fig. 11. One testing site (4096×4096 pixels).

Several training samples were used to train the support vector machine and the resulting model was used to classify the whole image into two classes: road and non-road. For the implementation of the SVM, the software package LIBSVM by Chang and Lin (2003) was adopted. A Gaussian RBF was used as the kernel function, and the constraint violation parameter *C* was set to 10. After the image classification, connected component analysis was used to remove small noise regions misclassified into the road class.
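A sketch of this classification step using scikit-learn's SVC (which wraps LIBSVM) is shown below; the training samples are made up, while the RBF kernel and *C* = 10 follow the text.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical training data: per-pixel RGB samples with road (1) / non-road (0)
# labels drawn from manually selected training regions.
X_train = np.array([[120, 118, 115], [130, 129, 127],
                    [60, 140, 70], [55, 90, 200]], dtype=float)
y_train = np.array([1, 1, 0, 0])

# Gaussian RBF kernel and C = 10, as in the experiment.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X_train, y_train)

def classify_image(image, model):
    """Classify every pixel of an H x W x 3 image into road / non-road."""
    flat = image.reshape(-1, 3).astype(float)
    return model.predict(flat).reshape(image.shape[:2])
```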

To this point, the road surface has been obtained using SVM classification. A Gabor filter was then utilized to extract the lane marking features while restraining the effects of other ground objects. To reduce the computational complexity, Principal Component Analysis (PCA) was applied to the colour image and only the first component was chosen for Gabor filtering. The parameters of the Gabor filters are determined as outlined in the previous section. For instance, the orientation of the lane markings shown in Fig. 11 is approximately 130 degrees. The average width of the road markings is 6 pixels, thus the frequency *f* is set to 0.17, while the axis scaling parameters *σx* and *σy* of the Gaussian function are set to 3.4. The filtered image is illustrated in Fig. 12, which was then masked by the road surface acquired in the previous step.

Fig. 12. Gabor filtered result.

Finally, the Gabor filtered image was segmented by Otsu's thresholding algorithm, and directional morphological opening and closing were utilized to remove misclassified features. Some white linear features such as house roof ridges may be misclassified as lane markings, so we further utilized the road surface extracted in the previous step as a mask to remove these kinds of objects. The lane segments may also be corrupted by many factors: occlusion, e.g. trees above the road surfaces; worn-out painting of lane lines; and dirty markings on the road surfaces. We eliminated the effects of vehicles on the road marking extraction by utilizing the following two indicators: (i) elongation, the ratio of the major axis to the minor axis of the polygon, and (ii) the lengths of the major and minor axes. The elongation measure of a vehicle is smaller than that of the road lane markings, and the lengths of the major and minor axes of a vehicle are within certain ranges. In this experiment, the major axis length of a vehicle is set to be within 2 to 10 m, while the minor axis is set to be between 1.5 m and 3 m. The extracted pavement markings are superimposed on the road surfaces, as given in Fig. 13.

Fig. 13. Final result.
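The post-processing described above could be sketched as follows with scikit-image; Otsu's threshold and the vehicle size ranges follow the text, whereas the elongation test is omitted and the helper name and GSD handling are ours.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def filter_marking_candidates(gabor_response, gsd_m=0.07,
                              vehicle_major_m=(2.0, 10.0),
                              vehicle_minor_m=(1.5, 3.0)):
    """Threshold the Gabor response with Otsu's method and drop vehicle-like
    blobs using their major/minor axis lengths, converted from pixels to
    metres with the ground sampling distance."""
    binary = gabor_response > threshold_otsu(gabor_response)
    keep = np.zeros_like(binary, dtype=bool)
    for region in regionprops(label(binary)):
        major = region.major_axis_length * gsd_m
        minor = region.minor_axis_length * gsd_m
        looks_like_vehicle = (vehicle_major_m[0] <= major <= vehicle_major_m[1]
                              and vehicle_minor_m[0] <= minor <= vehicle_minor_m[1])
        if not looks_like_vehicle:
            keep[tuple(region.coords.T)] = True
    return keep
```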

The quantitative evaluation of the experimental results is achieved by comparing the automated (derived) results against a manually compiled, high quality reference model. Following the concept of the error matrix, the evaluation metrics for the accuracy assessment of road surface detection can be defined at the pixel level as follows:

1. Detection rate

$$d = \frac{TP}{TP + FN}$$

2. False alarm rate

$$f = \frac{FP}{TP + FP}$$

3. Quality

$$q = \frac{TP}{TP + FP + FN}$$

In the above equations, *TP* (true positive) is the number of road surface pixels correctly identified, *FN* (false negative) is the number of road surface pixels identified as other objects, and *FP* (false positive) is the number of non-road pixels identified as road surfaces.
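A direct translation of these pixel-level measures into code, assuming binary masks for the extracted and reference road surfaces, is given below; the function name is ours.

```python
import numpy as np

def pixel_level_scores(extracted, reference):
    """Detection rate, false alarm rate and quality from two binary masks."""
    extracted = extracted.astype(bool)
    reference = reference.astype(bool)
    tp = np.count_nonzero(extracted & reference)
    fp = np.count_nonzero(extracted & ~reference)
    fn = np.count_nonzero(~extracted & reference)
    d = tp / (tp + fn) if tp + fn else 0.0
    f = fp / (tp + fp) if tp + fp else 0.0
    q = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return d, f, q
```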

The evaluation of the extracted pavement marking accuracy is carried out by comparing the extracted pavement markings with manually plotted road markings used as reference data as presented in (Wiedemann et al., 1998), and both data sets are given in vector representation. The buffer width is predefined to be the average width of the road markings, and we set it to be 15 cm in our experiment. Then the accuracy measures are given as:

1. Detection rate

$$d = \frac{\text{length of the matched reference}}{\text{length of reference}}$$

2. False alarm rate

$$f = \frac{\text{length of the unmatched extraction}}{\text{length of extraction}}$$

3. Quality

$$q = \frac{\text{length of the matched reference}}{\text{length of extraction} + \text{length of the unmatched reference}}$$

Road boundaries and road markings are firstly digitized from the test images and used as ground truth. Three measures of the extraction results for both road surfaces and pavement markings are given in Table 2.


Table 2. Evaluation of the test results.

| Test image | Road features | Detection rate | False alarm rate | Quality |
|---|---|---|---|---|
| Image I | Surface | 91.8% | 12.9% | 80.3% |
|  | Markings | 93.3% | 10.6% | 83.7% |
| Image II | Surface | 93.2% | 7.2% | 88.5% |
|  | Markings | 94.5% | 2.7% | 92.7% |
| Image III | Surface | 88.3% | 2.2% | 86.2% |
|  | Markings | 83.5% | 15.2% | 71.8% |

For the entire four test sites, nearly 90% of the road surfaces are correctly detected, and the relevant false alarm rate is about 10%. The completeness of road pavement marking extraction reaches above 87%, except for test site IV, which is seriously affected by shadows. The shadows on the road surfaces can reduce the intensity contrast between pavement markings


and the road surface background, which makes it difficult to enhance the road markings using the Gabor filter. The average false alarm rate of the four test sites is about 10%.

#### **4.3 Summary**

In this section, an automatic road surface and pavement marking extraction approach for aerial images with high spatial resolution is proposed. The developed method, which is based on SVM image classification as well as Gabor filtering, can generate accurate lane-level digital road maps automatically. The experimental results using the aerial image dataset with a ground resolution of 7 cm have demonstrated that the proposed method works satisfactorily. Further work will concentrate on the processing of severely curved road surfaces and large images, which may be achieved by using knowledge based image analysis and image partitioning techniques.

#### **5. Conclusions and future work**

#### **5.1 Conclusions**

In conclusion, we have presented an integrated approach for road feature extraction from both rural and urban areas. Road surface and lane markings have been extracted from very high resolution (VHR) aerial images based on homogeneity histogram thresholding and Gabor filters. The homogeneity histogram image segmentation method takes into account not only the colour information but also the spatial relation among pixels to explore the features of an image. We further proposed a road network vectorization and pruning algorithm, which can effectively eliminate short track segments. In the urban area, the road surface is firstly classified by the SVM image segmentation method, and then a Gabor filter is further employed to enhance the road lane markings whilst constraining the effects of other ground features. The experimental results from several VHR satellite images in rural areas have indicated that over 95% of road networks have been correctly extracted. The omission of road features is a result of occlusions, poor contrast with the surroundings, and partial shadows over the road. This has preliminarily demonstrated that the presented strategy for road feature extraction in rural areas is promising. Experiments with three typical test sites in urban areas have resulted in over 90% of the road surfaces being correctly extracted, with the misclassification rate below 10%. The correct rate for lane marking extraction is approximately 95%, and only about 10% of the other ground objects are misclassified as lane markings.

#### **5.2 Future work**

Although the proposed approach has generated satisfactory results on the testing datasets, problems still exist: for example, lane markings obstructed by vehicles may not be effectively detected. Therefore, future work will focus on the improvement of detection accuracy and precise model reconstruction. For instance, an automatic vehicle detection approach may be introduced to efficiently detect and remove vehicles from the road surface. GPS real-time kinematic positioning solutions from a probe vehicle could be appropriate for the recovery of lane markings in areas where there are large obstructions: for example, a large number of skyscrapers or trees would greatly deteriorate the extraction result in urban or forest areas. We also consider using a linear feature linking technique to connect the broken road features.


## **Hardware Implementation of a Real-Time Image Data Compression for Satellite Remote Sensing**

Albert Lin *National Space Organization Taiwan, R.O.C.* 

## **1. Introduction**


Image data compression is very important for reducing the image data volume and data rate in satellite remote sensing. This chapter describes how the image data compression hardware is implemented, using the FORMOSAT-5 Remote Sensing Instrument (RSI) as an example. FORMOSAT-5 is an optical remote sensing satellite with 2-meter panchromatic (PAN) image resolution and 4-meter multispectral (MS) image resolution, which is under development by the National Space Organization (NSPO) in Taiwan. The payload consists of one PAN band with 12,000 pixels and four MS bands with 6,000 pixels in the remote sensing instrument. The image data compression method complies with the Consultative Committee for Space Data Systems (CCSDS) standard CCSDS 122.0-B-1 (2005). The compression ratio is 1.5 for lossless compression and 3.75 or 7.5 for lossy compression. The Xilinx Virtex-5QV FPGA, XQR5VFX130, is used to achieve near-real-time compression. Parallel and concurrent handling strategies are used to achieve high-performance computing in the process.
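As a quick sanity check, the stated compression ratios translate directly into a bits-per-pixel budget for the 12-bit source pixels; the snippet below is illustrative arithmetic only.

```python
# Bits per compressed pixel implied by the stated ratios (12-bit source pixels)
source_bits = 12
for ratio in (1.5, 3.75, 7.5):
    print(f"CR {ratio:>4}: {source_bits / ratio:.1f} bits/pixel")
# CR  1.5: 8.0 bits/pixel
# CR 3.75: 3.2 bits/pixel
# CR  7.5: 1.6 bits/pixel
```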

## **2. Image compression methodology**

The CCSDS Recommended Standard for Image Data Compression is intended to be suitable for spacecraft use. The algorithm complexity and memory buffer requirements are sufficiently low for hardware implementation, and it supports a strip-based input format for push-broom imaging. The compressor consists of two functional blocks, the Discrete Wavelet Transform (DWT) and the Bit Plane Encoder (BPE). The image compression methodology is described in the following sections.

## **2.1 Discrete wavelet transform**

The CCSDS Recommendation supports two choices of DWT: an integer DWT (IDWT) and a floating point DWT (FDWT). The integer DWT requires only integer arithmetic, is capable of providing lossless compression, and has lower implementation complexity, but lower compression ratio. The floating point DWT provides improved compression effectiveness, but requires floating point calculations and cannot provide lossless compression.

The DWT stage performs three levels of two-dimensional (2-d) wavelet decomposition and generates 10 subbands as illustrated in Fig. 1. The low-pass IDWT is given by Equation (1) and the high-pass IDWT by Equation (2). The low-pass FDWT is given by Equation (3) and the high-pass FDWT by Equation (4), with j = 0, 1, …, 11999 for the PAN band and j = 0, 1, …, 5999 for the MS bands in the FORMOSAT-5 case.

Fig. 1. Three-Level 2-d DWT Decomposition of an Image

$$\mathbf{C}\_{j} = \frac{1}{64}\mathbf{x}\_{2j-4} - \frac{1}{8}\mathbf{x}\_{2j-2} + \frac{1}{4}\mathbf{x}\_{2j-1} + \frac{23}{32}\mathbf{x}\_{2j} + \frac{1}{4}\mathbf{x}\_{2j+1} - \frac{1}{8}\mathbf{x}\_{2j+2} + \frac{1}{64}\mathbf{x}\_{2j+4} \tag{1}$$

$$D\_j = \frac{1}{16} \mathbf{x}\_{2j-2} - \frac{9}{16} \mathbf{x}\_{2j} + \mathbf{x}\_{2j+1} - \frac{9}{16} \mathbf{x}\_{2j+2} + \frac{1}{16} \mathbf{x}\_{2j+4} \tag{2}$$

$$\mathbf{C}\_{j} = \sum\_{n=-4}^{4} h\_{n} X\_{2j+1+n}; \qquad \qquad j = 0, 1, \ldots, 11999 \tag{3}$$

$$D\_j = \sum\_{n=-3}^{3} g\_n X\_{2j+1+n}; \qquad \qquad j = 0, 1, \dots, 11999 \tag{4}$$

For the FDWT, the coefficients in Equations (3) and (4) are listed in Table 1. The coefficients used in FORMOSAT-5 are slightly different from those defined in CCSDS 122.0-B-1: only 24 bits, instead of 32 bits, are used for these coefficients in FORMOSAT-5 to save FPGA multiplexer resources.
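For illustration, a minimal Python sketch of the 1-D integer DWT filters of Equations (1) and (2) is given below. The symmetric edge extension used for samples that fall outside the image line is an assumption of this sketch; CCSDS 122.0-B-1 defines its own boundary handling.

```python
import numpy as np

def idwt_1d(x):
    """One level of the 1-D integer DWT of Equations (1) and (2).

    x : one image line of pixel values.
    Returns the low-pass output C and the high-pass output D.
    Samples outside the line are obtained by symmetric extension here
    (an assumption; the CCSDS boundary handling may differ).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    xe = np.pad(x, 4, mode="symmetric")      # 4 guard samples on each side
    s = lambda i: xe[i + 4]                  # index into the extended line

    C = np.empty(n // 2)
    D = np.empty(n // 2)
    for j in range(n // 2):
        C[j] = (1/64 * s(2*j - 4) - 1/8 * s(2*j - 2) + 1/4 * s(2*j - 1)
                + 23/32 * s(2*j) + 1/4 * s(2*j + 1) - 1/8 * s(2*j + 2)
                + 1/64 * s(2*j + 4))                          # Equation (1)
        D[j] = (1/16 * s(2*j - 2) - 9/16 * s(2*j) + s(2*j + 1)
                - 9/16 * s(2*j + 2) + 1/16 * s(2*j + 4))      # Equation (2)
    return C, D

# Example: one 12-bit PAN line of 12,000 pixels
line = np.random.randint(0, 4096, 12000)
C, D = idwt_1d(line)   # 6,000 low-pass and 6,000 high-pass coefficients
```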


| i | Low Pass Filter, hi (CCSDS) | High Pass Filter, gi (CCSDS) | Low Pass Filter, hi (FORMOSAT-5) | High Pass Filter, gi (FORMOSAT-5) |
|---|---|---|---|---|
| 0 | 0.852698679009 | -0.788485616406 | 0.852698564529 | -0.788485646247 |
| ±1 | 0.377402855613 | 0.418092273222 | 0.377402901649 | 0.418092250823 |
| ±2 | -0.110624404418 | 0.040689417609 | -0.110624432563 | 0.040689468383 |
| ±3 | -0.023849465020 | -0.064538882629 | -0.023849487304 | -0.064538883647 |
| ±4 | 0.037828455507 | | 0.037828445434 | |

Table 1. Coefficients of floating point DWT

#### **2.2 Bit plane encoder**

After DWT processing, the Bit Plane Encoder handles the DWT coefficients for data compression. The Bit Plane Encoder encodes a segment of the image from the most significant bit (MSB) to the least significant bit (LSB). The BPE encoding uses fewer bits to represent the image data in order to achieve the compression ratio. In CCSDS 122.0-B-1, the maximum number of bytes in the compressed segment can be defined to limit the data volume. A quality limit can be defined to constrain the amount of DWT coefficient information to be encoded.

The BPE performs DC- and AC-part data encoding following the flow shown in Fig. 2. In the DC-part encoding, the maximum AC value of each block is first computed. A scheme is then used to determine how many bits are needed for "DC\_MAX\_Depth" and "AC\_MAX\_Depth" in this segment. In addition, the optimized DC and AC encoding type and value of the W/8 blocks are determined. Finally, the DC-part data and the W/8 AC\_MAX data are encoded and the bit stream is transmitted to the next stage. W is the pixel count per image line, e.g. W is 12,000 for the PAN image and W is 6,000 for the MS images in FORMOSAT-5.

The AC-part data encoding consists of five stages. Data encoding and bit output proceed block by block in each stage. An entropy coding scheme is used for the data encoding. Stage 0 processes the DC 3rd-part data. Stage 1 processes the Parent-part coefficients in each block. Stage 2 processes the Children-part coefficients in each block. Stage 3 processes the Grand-Children-part coefficients in each block. Stage 4 simply concatenates the data left over from stages 1, 2 and 3. After the segment header is added, the compressed image data are complete.

Fig. 2. BPE Encoding Flow

## **3. Hardware implementation**

## **3.1 Architecture description**

The image flow of the Remote Sensing Instrument in FORMOSAT-5 is shown in Fig. 3. Behind the telescope, there is one CMOS sensor module inside the Focal Plane Assembly (FPA) to take the images. The CMOS sensor module can be accessed by two FPA electronics. The output data stream is sent to the Image Data Pre-processing (IDP) module in the RSI EU for data re-ordering. The resultant data are then sent to the Image Data Compression (IDC) module for data compression. The compressed data, with a format header, are stored in the Mass Memory (MM) modules under the control of the Memory Controller (MC) module. When the satellite flies over the ground station, the image files can be retrieved and transmitted to the ground station.

Fig. 3. Image Flow of Remote Sensing Instrument

## **3.2 Design and implementation**

The image data input interfaces between the functional modules are shown in Fig. 4. The serial image data from the FPA are re-ordered in the IDP so that the image data are output in correct pixel order. The image data are then transferred to the IDC in parallel on a 12-bit data bus with a lower transmission clock rate. One channel of PAN data and four channels of MS data are compressed individually in the IDC. The compressed PAN and MS data are stored individually in image files under the control of the MC module.

Fig. 4. Image Data Signal Interfaces between Functional Modules

## **3.3 Hardware design**

The image data rate between each stage is shown in Fig. 5. The PAN sensor output is divided into 8 channels of 80 Mbps each to accommodate the high data rate. The channel rate for each MS band is 40 Mbps. This parallel handling architecture increases the image data handling speed.

The PAN and MS image data compression boards are shown in Fig. 6 a) and b). The architecture block diagram of the PAN channel in the IDC is illustrated in Fig. 7; the MS channels are similar. The space-grade Xilinx FPGA, XQR5VFX130, is used for the image compression processing. The major characteristics of the XQR5VFX130 are 130,000 logic cells, 298 blocks of 36 Kbit RAM, 320 enhanced DSP slices, and a 700 Krad total dose rating. The PROM part for FPGA programming is the XQR17V16, which has a 16 Mbit memory size with 50 Krad total dose capability. One XQR5VFX130 FPGA is used for PAN data compression.

Fig. 5. Image Data Rate between each stage

Fig. 6. a) PAN Compression Circuit Board; b) MS Compression Circuit Board

Fig. 7. Architecture Block Diagram of the PAN Channel in IDC


Two XQR5VFX130 FPGAs are used for the compression of the four MS bands. The external memories, 24 chips of 256K x 32 SRAM, are used as data buffers during the compression process.

### **3.4 DWT process**

The DWT flows of the three levels are illustrated in Figs. 8a, 8b and 8c. RAM memory banks are used for buffer storage. In the first level, LL1, LH1, HL1 and HH1 are generated. The LL1 subband is then passed to the level-2 DWT process to generate LL2, LH2, HL2 and HH2, and LL2 is passed to the level-3 DWT process to generate LL3, LH3, HL3 and HH3. The LL3 subband contains most of the information of the original image. These subbands are stored in temporary buffers for the BPE process.
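The subband structure of Fig. 1 can be sketched as follows. A simple average/difference filter pair stands in for the Equation (1)-(2) filters (an assumption of this sketch); only the three-level LL recursion and the resulting 10 subbands are the point of the example.

```python
import numpy as np

def split_1d(a, axis):
    """Split an array into low/high halves along one axis with a simple
    average/difference pair (a stand-in for the Equation (1)-(2) filters)."""
    a = a.astype(float)
    idx_even = np.arange(0, a.shape[axis], 2)
    idx_odd = np.arange(1, a.shape[axis], 2)
    even = np.take(a, idx_even, axis=axis)
    odd = np.take(a, idx_odd, axis=axis)
    return (even + odd) / 2, (even - odd) / 2          # low, high

def dwt2(img):
    """One level of 2-D decomposition into LL, HL, LH, HH."""
    L, H = split_1d(img, axis=1)                        # filter along rows
    LL, LH = split_1d(L, axis=0)                        # then along columns
    HL, HH = split_1d(H, axis=0)
    return LL, HL, LH, HH

# Three levels: the LL output of each level feeds the next, as in Fig. 1
img = np.random.randint(0, 4096, (1024, 1024))
subbands = {}
ll = img
for level in (1, 2, 3):
    ll, hl, lh, hh = dwt2(ll)
    subbands[f"HL{level}"], subbands[f"LH{level}"], subbands[f"HH{level}"] = hl, lh, hh
subbands["LL3"] = ll
print(sorted(subbands))   # 10 subbands: HH1..HH3, HL1..HL3, LH1..LH3, LL3
```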

Fig. 8a. DWT Flow (1)

Fig. 8b. DWT Flow (2)

Fig. 8c. DWT Flow (3)

## **3.5 BPE process**


The BPE module is the unit that actually performs the data compression. When the DWT stage signals that one section of data is complete and saved in the buffer, the BPE retrieves the wavelet-domain data from the buffer and uses a different compression scheme for each DWT sub-section. According to the various compression ratio requirements, the BPE performs data truncation or appends zero fill bits. After the necessary header information is added, the compressed data are sent to the mass memory word by word for storage.

The compression data format is listed in Table 2. Within a segment, BitDepthDC is defined as the bit number of the maximum value of all DC coefficients, and BitDepthAC as the bit number of the maximum value of all AC coefficients. The amount of quantization q' of the DC coefficients is determined by the dynamic range of the AC and DC coefficients in the segment, as given in Table 3. The DC quantization factor q is defined as q = max(q', BitShift(LL3)). The value of q indicates the number of least significant bits in each DC coefficient that are not encoded in the quantized DC coefficient values. The number of bits needed to represent each quantized DC coefficient is N = max{BitDepthDC − q, 1}. For example, suppose one segment has BitDepthDC = 16 and BitDepthAC = 4. According to Table 3, the DC quantization amount is q' = 16 − 10 = 6. Then the DC quantization factor q is 6 and N = 16 − 6 = 10. So bits 15 to 6 of each DC coefficient are encoded using the coded quantization method, bits 5 and 4 are concatenated directly at the end of the coded quantized DC coefficients of the segment, and finally bits 3 to 0 are encoded in the AC stage-0 phase. The detailed coding algorithm is described in CCSDS 122.0-B-1 (2005).

| Compressed segment structure (in order) |
|---|
| Segment Header |
| Initial coding of DC coefficients |
| Coded AC coefficient bit depths |
| Coded bit plane b = BitDepthAC − 1 |
| Coded bit plane b = BitDepthAC − 2 |
| … |
| Coded bit plane b = 0 |

Table 2. Compression Data Format

| DC and AC dynamic range | q' value | Remark |
|---|---|---|
| BitDepthDC ≤ 3 | q' = 0 | DC dynamic range is very small; no quantization is performed |
| BitDepthDC − (1 + ⌈BitDepthAC/2⌉) ≤ 1 and BitDepthDC > 3 | q' = BitDepthDC − 3 | DC dynamic range is close to half the AC dynamic range |
| BitDepthDC − (1 + ⌈BitDepthAC/2⌉) > 10 and BitDepthDC > 3 | q' = BitDepthDC − 10 | DC dynamic range is much higher than half the AC dynamic range |
| Otherwise | q' = 1 + ⌈BitDepthAC/2⌉ | DC dynamic range is moderately higher than half the AC dynamic range |

Table 3. DC Coefficient Quantization
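A small sketch of the DC quantization selection described above follows. The ceiling in 1 + ⌈BitDepthAC/2⌉ and the default BitShift(LL3) = 0 are assumptions of this sketch; CCSDS 122.0-B-1 is the normative reference.

```python
import math

def dc_quantization(bit_depth_dc, bit_depth_ac, bitshift_ll3=0):
    """Select q', q and N as described in the text (Table 3).

    The use of ceil(BitDepthAC/2) and bitshift_ll3=0 are assumptions of
    this sketch, not a normative restatement of the standard.
    """
    half_ac = 1 + math.ceil(bit_depth_ac / 2)
    if bit_depth_dc <= 3:
        q_prime = 0                          # DC dynamic range very small
    elif bit_depth_dc - half_ac <= 1:
        q_prime = bit_depth_dc - 3
    elif bit_depth_dc - half_ac > 10:
        q_prime = bit_depth_dc - 10
    else:
        q_prime = half_ac
    q = max(q_prime, bitshift_ll3)           # q = max(q', BitShift(LL3))
    n = max(bit_depth_dc - q, 1)             # bits per quantized DC coefficient
    return q_prime, q, n

# Worked example from the text: BitDepthDC = 16, BitDepthAC = 4
print(dc_quantization(16, 4))   # (6, 6, 10)
```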

The AC-part data constitute the major portion of the image (63/64), so the AC-part coding dominates the overall compression performance. The CCSDS standard adopts a bit-plane encoding concept: the most important bits of each AC subsection are encoded first, then less important bits, until the specified segment byte limit is reached or bit 0 of each data segment has been encoded. If necessary, zero bits are appended to reach the segment byte limit.

In order to achieve good compression efficiency, the CCSDS standard specifies an entropy symbol mapping scheme for the AC Parent, Children and Grand-Children data. The basic concept of entropy coding is to use a shorter bit pattern to represent the more frequently repeated bit patterns.

In the CCSDS standard, a "gaggle" consists of a set of 16 consecutive blocks within a segment. There are two running phases in our design of the entropy coding scheme: a pre-running phase and a normal running phase. The pre-running phase determines the 2-bit, 3-bit and 4-bit entropy values for each gaggle on each bit plane. The normal running phase uses the entropy table to map the final coded bit string. The detailed coding algorithm is described in CCSDS 122.0-B-1 (2005). The IDC implementation block diagram is shown in Fig. 9.
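As a simple illustration of the gaggle partitioning, the sketch below groups the blocks of a segment into gaggles of 16 consecutive blocks, using the 1500-block PAN strip segment mentioned in Section 4; the handling of the last, shorter gaggle is an assumption of this sketch.

```python
def gaggles(num_blocks, gaggle_size=16):
    """Partition the blocks of a segment into gaggles of 16 consecutive blocks."""
    return [list(range(start, min(start + gaggle_size, num_blocks)))
            for start in range(0, num_blocks, gaggle_size)]

pan_gaggles = gaggles(1500)        # one 8-line PAN strip segment (1500 blocks)
print(len(pan_gaggles))            # 94 gaggles
print(len(pan_gaggles[-1]))        # the last gaggle holds the remaining 12 blocks
```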

## **3.6 FPGA design optimization**

Some design techniques are used to save the limited multiplier and memory resources in the FPGA chip. In Equations (1) and (2), nine multipliers are needed for the low-pass filter and seven multipliers for the high-pass filter, so in total 3 x 2 x (9+7) = 96 multipliers are needed for the three levels of horizontal and vertical, low-pass and high-pass filtering. By using multiplexers, adders and a time-sharing algorithm in our IDC design, as shown in Figs. 10 and 11, only three multipliers are needed for the low-pass filter and two for the high-pass filter. In other words, 3 x 2 x (3+2) = 30 multipliers in total are needed for the three-level, two-dimensional FDWT architecture, i.e. 66 multipliers are saved.
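The multiplier counts quoted above can be checked directly:

```python
# Multiplier count with and without the resource-sharing scheme described above
layers, orientations = 3, 2                   # 3 DWT levels, horizontal + vertical
direct = layers * orientations * (9 + 7)      # 9-tap low pass + 7-tap high pass
shared = layers * orientations * (3 + 2)      # 3 + 2 multipliers after time sharing
print(direct, shared, direct - shared)        # 96 30 66
```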


Fig. 9. IDC Implementation Block Diagram

Fig. 10. Approach of 9 Taps Low Pass Filter in IDC


Fig. 11. Approach of 7 Taps High Pass Filter in IDC

Fig. 12. DWT Timing Relation between Three Layers

The timing relation chart of the three DWT levels is shown in Fig. 12. "W" is the original source image width (pixels/line), which is 12,000 for PAN and 6,000 for MS in FORMOSAT-5. The source clock is 45 MHz for PAN and 11.25 MHz for MS. In Layer 1, the LH1, HL1 and HH1 data are generated every two source clocks with a data size of W/2 words. In Layer 2, the LH2, HL2 and HH2 data are generated every four source clocks with a data size of W/4 words. In Layer 3, the LL3, LH3, HL3 and HH3 data are generated every eight source clocks with a data size of W/8 words. The data in the different layers are generated in an interleaved manner to achieve high throughput for real-time data processing.

The buffer size needed to handle the image compression is Width × Length for the frame-based method, whereas the strip-based method needs only a fixed buffer of size Width × 138. For 8 minutes of FORMOSAT-5 PAN imaging data, the buffer size for the frame-based method would be about 200,000 times the buffer size for the strip-based method. It is therefore very important to use the strip-based method to save memory size, cost and handling time in satellite applications, and even for ground image handling. The total required memory can be reduced as shown in Table 4, which saves cost and reduces the power consumption of the memory chips.


| | CCSDS 120.1-G-1 | FORMOSAT-5 Approach |
|---|---|---|
| Low Pass Filter | [2 x (9 x W/2^n)] x 32 bits | [2 x (5 x W/2^n)] x 32 bits |
| High Pass Filter | [2 x (7 x W/2^n)] x 32 bits | [2 x (4 x W/2^n)] x 32 bits |

Where: W is pixels per line (12000); n is layer number (1~3)

Table 4. Memory Size in FDWT Implementation
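For a rough feel of the strip-based buffer size, the snippet below evaluates Width × 138 for the PAN channel, assuming one 32-bit word per buffered sample as in Table 4; the exact word width used in the implementation is an assumption here.

```python
# Fixed strip-based buffer for one PAN channel: Width x 138 samples
width, strip_lines, bits_per_word = 12000, 138, 32   # 32-bit words assumed, as in Table 4
strip_buffer_bits = width * strip_lines * bits_per_word
print(round(strip_buffer_bits / 8 / 2**20, 1), "MiB")   # about 6.3 MiB, independent of scene length
# A frame-based buffer instead grows as Width x Length with the number of imaged lines.
```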

The Xilinx Virtex-5QV FPGA static power is 2.49761 watts estimated by Xilinx XPower Analyzer tool. Since the throughput is 40.4 Msamples/sec for PAN, the power consumption of the compression FPGA is about 0.06 Watt/Msamples/sec. The total power consumption of the PAN compression board is about 5 watts, including SRAM and IO circuit, i.e. equivalent to 0.124 Watt/Msamples/sec.
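The power-per-throughput figures quoted above follow directly from the numbers in the text:

```python
# Power-per-throughput figures quoted in the text
fpga_power_w, board_power_w = 2.49761, 5.0
throughput_msps = 40.4                               # PAN throughput in Msamples/sec
print(round(fpga_power_w / throughput_msps, 3))      # ~0.062 W/Msample/s (quoted as 0.06)
print(round(board_power_w / throughput_msps, 3))     # ~0.124 W/Msample/s
```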

There are some benefits to using a space-grade FPGA chip rather than an ASIC. The space-grade FPGA has good anti-radiation capability, and the line pixel number and clock rate can be reconfigured. Some comparisons of data compression chips are given in Table 5.


| Features | FORMOSAT-5 RSI EU IDC | CAMBR DWT+BPE IC [Winterrowd 2009] | ANALOG DEVICES ADV202 |
|---|---|---|---|
| Chip Type | Xilinx Space Grade FPGA | ASIC | ASIC |
| Compression Algorithm | CCSDS 122.0 | CCSDS 122.0 | JPEG2000 |
| Line Width (Pixels) | 12000 | 8192 | 4096 |
| Bits Per Pixel | 12 | 16 | 8, 10, 12, 14, 16 |
| Input Data Rate | 480 Mbps | 320 Mbps | 780 Mbps |
| Radiation (Total Dose, Si) | 700 Krad | >= 50 Krad | Commercial |
| Power Consumption (Watt/Msamples/sec) | 0.06 | 0.17 | 0.05 |

Table 5. Data Compression Chip Comparison

## **4. Image quality verification**

The 12-bit test images from the CCSDS official website have been tested, and results similar to those in the CCSDS report were obtained. In order to consider a more practical case, a North Vancouver image taken by the FORMOSAT-2 satellite on 2009/12/9 is adopted. The compression ratios are set to 1.5, 3.75 and 7.5. The Peak Signal to Noise Ratio (PSNR) is used as the performance index.

$$PSNR \equiv 20\log\_{10}\frac{2^B - 1}{\sqrt{MSE}}(dB),\tag{5}$$

where B denotes the bit depth and the Mean Squared Error (MSE) is given by

$$MSE = \frac{1}{w \cdot h} \sum\_{i=1}^{w} \sum\_{j=1}^{h} \left(\mathbf{x}\_{i,j} - \hat{\mathbf{x}}\_{i,j}\right)^2 \tag{6}$$

where $x_{i,j}$ is a pixel of the original image, $\hat{x}_{i,j}$ is the corresponding pixel of the decoded image, $w$ is the width of the image and $h$ is the height of the image.
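A direct Python transcription of Equations (5) and (6), with a synthetic 12-bit image used as a stand-in for real test data:

```python
import numpy as np

def psnr(original, decoded, bit_depth=12):
    """PSNR of Equations (5)-(6) for B-bit images (B = 12 for FORMOSAT-5)."""
    original = np.asarray(original, dtype=float)
    decoded = np.asarray(decoded, dtype=float)
    mse = np.mean((original - decoded) ** 2)                    # Equation (6)
    if mse == 0:
        return float("inf")                                     # lossless case
    return 20 * np.log10((2 ** bit_depth - 1) / np.sqrt(mse))   # Equation (5)

# Example with a synthetic 12-bit image and a slightly perturbed copy
img = np.random.randint(0, 4096, (512, 512))
rec = np.clip(img + np.random.randint(-2, 3, img.shape), 0, 4095)
print(round(psnr(img, rec), 1), "dB")
```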

In our verification, one 8-line strip-based segment is adopted, with 1500 blocks for PAN and 375 blocks for MS. The average PSNR is calculated with Matlab® software. The test results are listed in Table 6.


| Compression Ratio | Method* | Panchromatic Band | Red Band | Green Band | Blue Band | Infrared Band |
|---|---|---|---|---|---|---|
| CR = 1.5 | IDWT | Lossless | Lossless | Lossless | Lossless | 73.1 |
| CR = 1.5 | FDWT | 51.1 | 51.1 | 51.1 | 51.1 | 51 |
| CR = 3.75 | IDWT | 47.3 | 47.8 | 44.3 | 45.3 | 41.1 |
| CR = 3.75 | FDWT | 47.7 | 48 | 45 | 45.8 | 41.7 |
| CR = 7.5 | IDWT | 43.1 | 41.8 | 37.6 | 38.1 | 38.5 |
| CR = 7.5 | FDWT | 43.6 | 42.2 | 38 | 38.5 | 35.1 |

\* IDWT: Integer Discrete Wavelet Transform; FDWT: Floating Point Discrete Wavelet Transform

Table 6. Image PSNR under Various Compressions (PSNR in dB)

When the IDWT is used with compression ratio 1.5, the PSNR is very large, indicating near-lossless compression, except for the infrared band. When the FDWT is used with compression ratio 7.5, the PSNR may drop to 35 dB, which is worse than the average PSNR of 56.77 dB obtained with the six 12-bit CCSDS test images. This is mainly because the North Vancouver image shown in Fig. 13 is much more complicated than the standard CCSDS test images.

To use satellite images as data input to the real compression hardware, a simulated Focal Plane Assembly (FPA) is under development, as illustrated in Fig. 14. A satellite image taken by FORMOSAT-2 is expanded from 8 bits to 12 bits per pixel by adding a random value in the 4 least significant bits, so as to simulate a FORMOSAT-5 image. The test image can be downloaded from a personal computer to the image sensor simulator, which replaces the real image sensor array in the FPA. The test image can then be transmitted out by the FPA simulator like real push-broom image data. The test image is compressed by the hardware and then decompressed by software to check the hardware compression performance on simulated satellite images.
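The 8-bit to 12-bit expansion described above can be sketched as follows; the random-number generator and image sizes are placeholders, not the actual simulator implementation.

```python
import numpy as np

def expand_8_to_12(img8, seed=None):
    """Expand an 8-bit image to 12 bits by appending 4 random LSBs,
    as done for the FORMOSAT-2 test images described above."""
    rng = np.random.default_rng(seed)
    lsb = rng.integers(0, 16, size=img8.shape, dtype=np.uint16)   # random 4-bit values
    return (img8.astype(np.uint16) << 4) | lsb

img8 = np.random.default_rng(0).integers(0, 256, (1024, 1024), dtype=np.uint8)
img12 = expand_8_to_12(img8, seed=1)
print(img12.min(), img12.max())   # values now span the 12-bit range
```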

Fig. 13. North Vancouver Image Taken by FORMOSAT-2 satellite

Fig. 14. Architecture of Image Compression Verification on Hardware

For a quick check of the hardware function, a test image of 1024 x 1024 pixels with 12-bit resolution has been downloaded to a prototype board. The test image is compressed by the hardware and then decompressed by software. The two images are shown in Fig. 15. The PSNR is 82.8 dB for compression ratio 1.5, 56.9 dB for compression ratio 3.75, and 49.2 dB for compression ratio 7.5.

Fig. 15. Test Image before Compression (left) and Test Image after Decompression (right)


## **5. Conclusion**

This chapter has described the implementation of the CCSDS-recommended image data compression. Parallel processing, time sharing and computation purely in hardware in the FPGA chip achieve high-performance computing. The FPGA-based image data compression module has been developed to provide sufficient compression ratios with the required image quality for the FORMOSAT-5 mission. The performance has been verified with the standard CCSDS 122.0 test images and with FORMOSAT-2 images. The technology can be used in similar image data compression applications in space, and the compression throughput can be further increased as FPGA technology improves. The main advantage of this technique is that it allows real-time image compression through efficient hardware implementation with low power consumption, which makes it especially suitable for satellite remote sensing.

## **6. Acknowledgment**

The work is supported by the National Space Organization (NSPO) in Taiwan under the FORMOSAT-5 project. The author gratefully acknowledges the following partners for their contributions: Dr. C. F. Change and Miss Cynthia Liu of NSPO for image algorithm development and image quality verification, CMOS Sensor Inc. for the IDP module, Camels Vision Technologies Inc. for the MC and MM modules, and Chung-Shan Institute of Science & Technology (CSIST) for the whole EU, in particular Dr. Mao-Chin Lin and Mr. Li-Rong Ran for the IDC module.

## **7. References**

CCSDS, "CCSDS 122.0 released 12-bits images", http://cwe.ccsds.org/sls/docs/sls-dc, (2007)

CCSDS, "Image Data Compression. Recommendation for Space Data System Standards", CCSDS 122.0-B-1, Blue Book, Issue 1, Washington, D.C., USA: CCSDS, (November 2005)

CCSDS, "Image Data Compression. Report Concerning Space Data Systems Standards", CCSDS 120.1-G-1, Green Book, Issue 1, Washington, D.C., USA: CCSDS, (June 2007)

Wang, Hongqiang, "CCSDS Image Data Compression C source codes", http://hyperspectral.unl.edu/, University of Nebraska-Lincoln, (Sept 2008)

Winterrowd, Paul, et al., "A 320 Mbps Flexible Image Data Compressor for Space Applications", IEEEAC paper #1311, 2009

## **Progress Research on Wireless Communication Systems for Underground Mine Sensors**

Larbi Talbi1, Ismail Ben Mabrouk1 and Mourad Nedil2 *1Université du Québec en Outaouais 2Université du Québec en Abitibi-Témiscamingue Canada* 

## **1. Introduction**


After a recent series of unfortunate underground mining disasters, the vital importance of communications for underground mining has been underlined once more. Establishing reliable communication is a very difficult task in underground mining due to the extreme environmental conditions. Nevertheless, wireless sensors are considered to be promising candidates as communication devices for the underground mine environment. Hence, they can be useful for several applications in the mining industry, such as miners' tracking, prevention of fatal accidents between men and vehicles, warning signals when a miner enters an unsafe area, monitoring of underground gases, message communication, etc.

Despite its potential advantages, the realization of wireless sensors is challenging and several open research problems exist. In fact, underground communication is one of the few fields where the environment has a significant and direct impact on the communication performance. Furthermore, underground mines are very dynamic environments: as mines expand, the area to be covered expands accordingly.

In a mine, communication requires complete coverage inside the mine galleries, high system reliability and high transmission rates for fast data throughput. It is extremely important for information to be conveyed to and gathered from every point of the mine, for both safety and productivity reasons. In order to meet these needs, the communications industry has looked to Ultra-Wide-band (UWB) for wireless sensors. There have been numerous research results in the literature indicating that UWB is one of the enabling technologies for sensor network applications [1, 2, 3, 4, 5, 6]. UWB provides a good combination of high performance with low complexity for WSN applications [7, 8, 9, 10].

Since UWB has excellent spatial resolution, it can be advantageously applied in the field of localization and tracking [11, 12, 13]. In addition to UWB technology, multiple antenna systems have drawn great interest in the wireless community. Multiple antenna systems employ multiple antennas at the transmitter, the receiver, or both. By using the antennas in a smart fashion, it may be possible to achieve array gain or diversity gain when multiple antennas are located at either the transmitter or receiver link end. When multiple antennas are present at both link ends, however, the achievable data rate can potentially be increased linearly with the minimum of the numbers of antennas at the two link ends.


In a sensor network, nodes are generally densely deployed. They do not compete with each other but collaborate to perform a common task. Consider a situation where multiple nodes sense the same object and feed the measurements to a remote data fusion center (relay station). Since nodes are spatially clustered, it is natural to let them cooperate as multiple inputs in transmission and receiving, for the ultimate objective to save energy. In [14], Cui, Goldsmith and Bahai investigated the energy efficiency of MIMO and cooperative MIMO techniques in sensor networks. They mainly consider using MIMO for diversity gain, which improves the quality of the link path.

This chapter will study the application of UWB and MIMO techniques in wireless sensor networks. Hence, a channel characterization of the wireless underground channel is essential for the proliferation of communication protocols for wireless sensor network.

## **2. UWB channel characterization**

## **2.1 Description of the underground mining environment**

The measurements were performed in various galleries of a former gold mine, at a 70 m underground level. The environment mainly consists of very rough walls; the floor is not flat and contains some puddles of water. The dimension of the mine corridors varies between 2.5 m and 3 m in width and approximately 3 m in height. The measurements were taken in both line of sight (LOS) and non line of sight (NLOS) scenarios. Figure 1 illustrates photography of the underground gallery and the measurement arrangement.

Fig. 1. Photography of the Underground Gallery and the Measurement Arrangement.

## **2.2 Measurement campaign**

The transmitter antenna was always located in a fixed position, while the receiver antenna was moved along the gallery over 49 grid points. As shown in figure 2, the grid was arranged as 7X7 points with 5 cm spacing between each adjacent point. The 5 centimetres corresponds to half a wavelength of the lowest frequency component, ensuring uncorrelated small scale fading. During all measurements, the heights of the transmitting and receiving antennas were maintained at 1.7 m in the same horizontal level, and the channel was kept stationary by ensuring there was no movement in the surrounding environment.

Fig. 2. Overview of the Measurement Setup


The UWB measurements were performed in frequency domain using the frequency channel sounding technique based on S21 parameter obtained with a network analyzer. In fact, the system measurement setup consists of E8363B network analyzer (PNA) and two different kinds of antennas, with directional and omnidirectional radiation patterns, respectively. There were no amplifiers used during the measurements because the distance between the transmitter and the receiver was just 10 meters. The transmitting port of the PNA swept 7000 discrete frequencies ranging from 3 GHz to 10 GHz uniformly distributed over the bandwidth, and the receiving port measured the magnitude and the phase of each frequency component. Figure 3 shows a typical complex channel transfer function (CTF) measured with the Network Analyzer.

Fig. 3. Channel Transfer Function Measured with the Agilent E8363B Network Analyzer


The frequency span of 1 MHz is chosen small enough so that diffraction coefficients, dielectric constants, etc., can be considered constant within the bandwidth of 7 GHz [15]. At each distance between the transmitter and the receiver, the channel transfer function was measured 30 times, to reduce the effects of random noise on the measurements, and then stored in a computer hard drive via a GPIB interface. The 7 GHz bandwidth gives a theoretical time resolution of 142.9 ps (in practice, due to the use of windowing the time resolution is estimated to be 2/bandwidth) and the sweeping time of the network analyzer is decreased to validate the quasi- static assumption of the channel. The frequency resolution of 1 MHz gives maximum delay range of 1 μs.
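
The relations between these sweep parameters can be checked numerically. The short sketch below is a minimal Python illustration that only restates the figures quoted above (bandwidth, frequency step, theoretical and windowed time resolution, and the maximum unambiguous delay range).

```python
# Illustrative check of the frequency-domain sounding parameters quoted above.
n_points = 7000                      # swept frequency points
f_start, f_stop = 3e9, 10e9          # sweep limits in Hz
bandwidth = f_stop - f_start         # 7 GHz
delta_f = bandwidth / (n_points - 1) # ~1 MHz frequency resolution

time_resolution = 1.0 / bandwidth    # ~142.9 ps theoretical resolution
windowed_resolution = 2.0 / bandwidth  # ~285.7 ps once windowing is applied
max_delay_range = 1.0 / delta_f      # ~1 us unambiguous delay range

print(f"frequency step      : {delta_f / 1e6:.3f} MHz")
print(f"time resolution     : {time_resolution * 1e12:.1f} ps")
print(f"windowed resolution : {windowed_resolution * 1e12:.1f} ps")
print(f"max delay range     : {max_delay_range * 1e6:.2f} us")
```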

Before the measurements, the calibration of the setup was done to reduce the influence of unwanted RF cable effects. Table 1 lists the setup parameters.

| Parameters | Values |
|---|---|
| Bandwidth | 7 GHz |
| Center Frequency | 6.5 GHz |
| Frequency Sweeping Points | 7000 |
| Frequency Resolution | 1 MHz |
| Time Resolution | 286 ps |
| Maximum Delay Range | 1000 ns |
| Sweep Average | 30 |
| Tx-Rx Antennas Height | 1.7 m |

Table 1. Measurement System Parameters



Since the measurements are performed in frequency domain, the inverse Fourier transform (IFT) was applied to the measured complex transfer function using Kaiser-Bessel window in order to obtain the channel impulse response. The Kaiser window is designed as FIR filter with parameter β=6 to reduce the side lobes of the transformation.
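
A minimal sketch of this post-processing step is given below. It assumes the measured transfer function is available as a complex NumPy array; the Kaiser parameter β = 6 follows the text, and the synthetic two-path response is purely illustrative.

```python
import numpy as np

def ctf_to_cir(H, beta=6.0):
    """Convert a measured channel transfer function (frequency domain)
    into a channel impulse response using a Kaiser-Bessel window."""
    H = np.asarray(H, dtype=complex)
    window = np.kaiser(len(H), beta)   # beta = 6 reduces the transform side lobes
    return np.fft.ifft(H * window)     # complex impulse response h[n]

# Example with synthetic data: a two-path channel over 7000 frequency points.
freqs = np.linspace(3e9, 10e9, 7000)
H = np.exp(-2j * np.pi * freqs * 20e-9) + 0.5 * np.exp(-2j * np.pi * freqs * 45e-9)
h = ctf_to_cir(H)
pdp_db = 10 * np.log10(np.abs(h) ** 2 + 1e-20)   # power delay profile in dB
```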

#### **2.3 Measurements results and analysis**

The large scale measurements are performed to determine the propagation distance-power law in the underground environment. The average path loss in dB for arbitrary transmitterreceiver separation distance d can be represented as:

$$PL\_{average}\left(d\right) = \frac{1}{M}\,\frac{1}{N} \sum\_{i=1}^{M} \sum\_{j=1}^{N} \left| H\_{i}\left(f\_{j}, d\right) \right|^2 \tag{1}$$

where H_i(f_j, d) is the measured complex frequency response at frequency f_j during sweep i, N represents the number of data points measured during a sweep of 7000 discrete frequencies ranging from 3 GHz to 10 GHz, and M represents the number of sweeps that have been averaged.
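
As an illustration of Eq. (1), the sketch below averages the squared magnitude of the measured transfer function over sweeps and frequency points. It assumes the M x N measurements for one distance are stored in a single complex array, and that the path loss in dB is taken as the negative decibel value of this averaged gain (an assumption made explicit here, since the chapter uses it implicitly when fitting Eq. (2)).

```python
import numpy as np

def average_path_gain(H):
    """Average |H_i(f_j, d)|^2 over M sweeps and N frequency points, as in Eq. (1).
    H is a complex array of shape (M, N) for one Tx-Rx separation d."""
    return np.mean(np.abs(H) ** 2)          # (1/M)(1/N) * double sum

def average_path_loss_db(H):
    """Path loss in dB corresponding to the averaged power gain."""
    return -10.0 * np.log10(average_path_gain(H))
```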

According to the measured channel transfer function and the data fitting using the linear least squares regression, the computations of different transmitter-receiver antennas combination have shown that the path loss PL (d) in dB at any location in the gallery can be written as a random log-normal distribution by :

$$PL\_{d\mathbb{B}}\left(d\right) = PL\_{d\mathbb{B}}\left(d\_0\right) + 10.n.\log\_{10}\left(\frac{d}{d\_0}\right) + X\_{\sigma} \tag{2}$$

where PL(d_0) is the path loss at the reference distance d_0, set to 1 m, n is the path loss exponent and X_σ is a zero-mean Gaussian distributed random variable in dB with standard deviation σ.
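
A minimal sketch of the log-distance model of Eq. (2) is shown below. The reference path loss of 45 dB at d_0 = 1 m is an assumed value used only for illustration, while the exponent and standard deviation are taken from the LOS omni-omni entry of Table 2.

```python
import numpy as np

def log_distance_path_loss(d, pl_d0_db, n, sigma_db=0.0, rng=None):
    """Eq. (2): PL(d) = PL(d0) + 10 n log10(d/d0) + X_sigma, with d0 = 1 m."""
    d = np.asarray(d, dtype=float)
    shadowing = 0.0
    if sigma_db > 0:
        rng = rng or np.random.default_rng()
        shadowing = rng.normal(0.0, sigma_db, size=d.shape)  # zero-mean Gaussian X_sigma
    return pl_d0_db + 10.0 * n * np.log10(d / 1.0) + shadowing

# Example: LOS, omni-omni combination (n = 2.11, sigma = 0.89 dB from Table 2),
# assuming a reference path loss of 45 dB at 1 m for illustration only.
distances = np.arange(1, 11)
pl = log_distance_path_loss(distances, pl_d0_db=45.0, n=2.11, sigma_db=0.89)
```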

#### **2.3.1 LOS scenario**


#### **2.3.1.1 Path loss model**

The measurements of UWB propagations channel in line of sight case were made between 1 m and 10 m with intervals of 1 m. Figure 4 illustrates the gallery layout and the measurements Tx-Rx arrangements under LOS and Figure 5 shows the results of path loss as function of distance for the three antennas combinations: directional - directional, directional-omni and omni-omni.

Fig. 4. Gallery Layout and Measurement Setup in LOS

Fig. 5. Path Loss vs. T-R Separation Distance in LOS

As listed in Table 2, the path loss exponent n in the LOS scenario is equal to 1.99, 2.01 and 2.11 for the directional-omni, directional-directional, and omni-omni antenna combinations respectively. It can be noted that the path loss exponent for all these combinations is close to the free space path loss exponent n = 2, with the smallest path loss fluctuation for the directional-omni antenna combination, and the standard deviation σ_dB of the Gaussian random variable is smaller for the directional antenna combinations in the LOS environment. The path loss exponent values observed in [16] [17] for indoor UWB propagation are lower than the results obtained here for underground UWB propagation. In an indoor environment, such as a corridor or a hallway clear of obstacles, the results may show a lower path loss exponent due to multipath signal addition, whereas in the mine gallery the walls are uneven, scattering the signal and thus giving results in closer agreement with the free-space path loss exponent, due mainly to the LOS component reaching the antenna.


| LOS | Omni-Omni | Direct-Direct | Direct-Omni |
|---|---|---|---|
| n | 2.11 | 2.01 | 1.99 |
| σ_dB | 0.89 | 0.13 | 0.32 |

Table 2. Summary of Path Loss Exponents n and Standard Deviations σ_dB in LOS.

#### **2.3.1.2 RMS delay spread**

A statistical characterization of the channel impulse response is a useful process for describing the rapid fluctuations of the amplitude, phase, and multipath propagation delays of the UWB signal. The number of multipath components in an underground environment is larger due to the reflection and scattering from the ground and surrounding rough surfaces. Figure 6 shows a typical power delay profile (PDP) measured with the omni-omni antenna in the LOS environment.

Fig. 6. Typical underground Power Delay Profile in LOS

In order to compare the multipath channels of the different antenna combinations, the mean excess delay and RMS delay spread are evaluated using the equations below [18]. The RMS delay spread is the square root of the second central moment of the power delay profile, given by:


$$\tau\_{rms} = \sqrt{\overline{\tau^{2}} - \left(\overline{\tau}\right)^{2}} \tag{3}$$


$$\overline{\tau} = \frac{\sum a\_k^2 \cdot \tau\_k}{\sum a\_k^2} = \frac{\sum P\left(\tau\_k\right) \cdot \tau\_k}{\sum\_k P\left(\tau\_k\right)}\tag{4}$$

$$\overline{\tau^2} = \frac{\sum\_{k} a\_k^2 \cdot \tau\_k^2}{\sum\_{k} a\_k^2} = \frac{\sum\_{k} P(\tau\_k) \cdot \tau\_k^2}{\sum\_{k} P(\tau\_k)}\tag{5}$$

Where a_k, P(τ_k) and τ_k are the gain, power and delay of the kth path respectively. From (3), (4) and (5) we have calculated the RMS delay spread for each antenna combination by using predefined thresholds. A threshold of 40 dB below the strongest path was chosen to avoid the effect of noise on the statistics of multipath arrival times. Fig. 7 shows the effects of antenna directivity on the RMS delay spread computed from the cumulative distribution function in the LOS scenario.
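
The sketch below illustrates how Eqs. (3)-(5) can be evaluated for a measured power delay profile, including the 40 dB threshold below the strongest path; the two-path profile in the example is synthetic and used only for illustration.

```python
import numpy as np

def rms_delay_spread(delays_s, powers_linear, threshold_db=40.0):
    """Eqs. (3)-(5): mean excess delay and RMS delay spread of a power delay
    profile, keeping only paths within `threshold_db` of the strongest one."""
    tau = np.asarray(delays_s, dtype=float)
    p = np.asarray(powers_linear, dtype=float)
    keep = p >= p.max() * 10 ** (-threshold_db / 10.0)   # noise threshold
    tau, p = tau[keep], p[keep]
    mean_tau = np.sum(p * tau) / np.sum(p)                # Eq. (4)
    mean_tau2 = np.sum(p * tau ** 2) / np.sum(p)          # Eq. (5)
    return mean_tau, np.sqrt(mean_tau2 - mean_tau ** 2)   # Eq. (3)

# Toy two-path profile: 0 dB at 10 ns and -6 dB at 30 ns.
tau_bar, tau_rms = rms_delay_spread([10e-9, 30e-9], [1.0, 0.25])
```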

According to figure 7, we can observe that for 50% of all locations, the directional-directional combination offers the best τ_rms result with 2 ns, whereas the directional-omni and the omni-omni combinations introduce 7.7 ns and 9.5 ns of τ_rms respectively. Hence, the former combination reduces τ_rms by 7.5 ns in comparison with the latter one. The effect of the directional antenna in the underground LOS environment is similar to the results reported for indoor channels [19] [20].

Fig. 7. Cumulative Distribution Function of RMS delay spread in LOS

### **2.3.2 NLOS scenario**

#### **2.3.2.1 Path loss model**

The measurements of UWB propagation in non line of sight were made between 4 m and 10 m with intervals of 1 m. Figure 8 illustrates the gallery layout and the measurement arrangement in NLOS.

Fig. 8. Gallery Layout and Measurement Setup in NLOS

The results of path loss as function of distance for directional - directional and omni - omni antennas combinations are shown in Figure 9.

Fig. 9. Path Loss vs. T-R Separation Distance in NLOS

As listed in Table 3, the path loss exponent with directional antennas is about twice that of the omnidirectional antennas.


| NLOS | Omni-Omni | Direct-Direct |
|---|---|---|
| n | 3.00 | 6.16 |
| σ_dB | 0.66 | 1.47 |

Table 3. Summary of Path Loss Exponents n and Standard Deviations σ_dB in NLOS

## **2.3.2.2 RMS delay spread**


In NLOS scenario, the UWB signal reaches the receiver through reflections, scattering, and diffractions. Figure 10 shows that a typical power delay profile (PDP) measured with Omni-Omni antenna in NLOS environment consists of components from multiple reflected, scattered, and diffracted propagation paths.

Figure 11 shows that the use of directional antennas, for 50% of all locations in the NLOS scenario, can reduce τ_rms by 13 ns compared to omnidirectional antennas.


Fig. 10. Path Loss vs. T-R separation distance in NLOS

Fig. 11. Cumulative Distribution Function of RMS delay spread in NLOS

#### **3. MIMO channel characterization at 2.4 GHz**

#### **3.1 Description of the underground environment**

Measurements were conducted in a gallery located at a 40-m deep underground level.

In this gallery, the floor is uneven with bumps and several ditches. In addition, the walls are not aligned. Dimensions vary almost randomly throughout the gallery, although the latter is supposed to have a width of about 4 to 5 m. The gallery also has several branches of different sizes at various locations. The humidity is very high, drops of water fall from everywhere and big pools of water cover the ground. The temperature is stable at 6 to 15 °C throughout the year. A photograph of this underground gallery is shown in figure 12.

Fig. 12. Photography of the mine gallery

## **3.2 Measurement setup**


The MIMO antenna system consists of a set of patch antennas, developed in our laboratory, which have been used for transmission and reception of the RF signal at 2.4 GHz. Measurement campaigns under LOS and NLOS scenarios were performed in the frequency domain using the frequency channel sounding technique based on measuring the S21 parameter with a network analyzer (Agilent E8363B). In fact, the system measurement setup, as shown in figure 13, consists of a network analyzer (PNA), a 2X2 MIMO antenna set, two switches, one power amplifier for the transmitting signal and one low noise amplifier for the receiving signal. Both amplifiers have a gain of 30 dB.

For the Line-of-Sight (LOS) scenario, the transmitter remained fixed at Tx1, while the receiver changed its position along the gallery, from 1 meter up to 25 meters away from the transmitter. For NLOS, the transmitter remained fixed at Tx2 and the Tx-Rx separation varied from 6 m up to 25 m. Figure 14 shows a photograph of the receiver location and a map of the underground gallery.


Fig. 13. Measurement setup

Fig. 14. The underground gallery plan

#### **3.3 Measurement results**

## **3.3.1 RMS delay (**����**)**


The RMS delay spread roughly characterizes the multipath propagation in the delay domain. The RMS delay spread is the square root of the second central moment of the averaged power and it is defined as:

$$\tau\_{rms} = \sqrt{\overline{\tau^{2}} - \left(\overline{\tau}\right)^{2}} = \sqrt{\frac{\sum\_{i} P\_{i}\,\tau\_{i}^{2}}{\sum\_{i} P\_{i}} - \left(\frac{\sum\_{i} P\_{i}\,\tau\_{i}}{\sum\_{i} P\_{i}}\right)^{2}} \tag{6}$$

where $\overline{\tau}$ is the mean excess delay, $\overline{\tau^2}$ is its second moment, and P_i is the received power (in linear units) at the corresponding arrival time τ_i. A threshold of 10 dB is applied to all power delay profiles in order to guarantee the elimination of the noise.

The RMS delay spread has been computed for each impulse response of all the gallery measurements using the 2X2 MIMO system under LOS and NLOS scenarios, and plotted as a function of the Tx-Rx separation distance in figure 15.

Fig. 15. RMS delay spread as a function of the distance

For the considered underground gallery, the profile seen in figure 15 is not monotonically increasing as might be expected. Results thus show propagation behavior that is specific to these underground environments. This is likely due to scattering on the rough sidewalls' surface, which exhibits a difference of 25 cm between the maximum and minimum surface variation. Moreover, the RMS delay for the MIMO system in the NLOS scenario is higher than that of the MIMO LOS case by about 5 ns due to the wall attenuation. Table 4 summarizes the RMS values for LOS and NLOS locations.


| RMS (ns) | MIMO LOS | MIMO NLOS |
|---|---|---|
| Minimum / Maximum | 0.44 / 2.64 | 2.7815 / 10.292 |
| Mean / Standard deviation (σ) | 1.33 / 0.68 | 5.6081 / 2.0750 |

Table 4. Summary of the RMS delay spread for measurements corresponding to LOS and NLOS galleries

#### **3.3.2 Path loss**

Path loss in the channel is normally distributed in decibel (dB) with a linearly increasing mean and is modeled as:

$$PL\_{dB}(d\_0) = \overline{PL\_{dB}} \text{ ( $d\_0$ )} + 10 \text{adlog ( $\frac{d}{d\_0}$ )} + \text{ $\chi$ } \tag{7}$$

where PL_dB(d_0) is the mean path loss at the reference distance d_0, 10αlog(d/d_0) is the mean path loss referenced to d_0, and X is a zero mean Gaussian random variable expressed in dB. Path loss as a function of distance is shown in figure 16 and figure 17 for the LOS and NLOS galleries respectively. The mean path loss at d_0 and the path loss exponent α were determined through least squares regression analysis [21]. The difference between this fit and the measured data is represented by the Gaussian random variable X. Table 5 lists the values obtained for α and σ_X (the standard deviation of X).
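
A minimal sketch of this regression step is given below: α and PL(d_0) are fitted by linear least squares on 10 log10(d/d_0), and σ_X is estimated from the residuals. The synthetic data and the 40 dB reference level are assumptions used only to exercise the fit.

```python
import numpy as np

def fit_path_loss_exponent(distances_m, pl_db, d0=1.0):
    """Least-squares fit of Eq. (7): returns (PL(d0), alpha, sigma_X)."""
    x = 10.0 * np.log10(np.asarray(distances_m, dtype=float) / d0)
    y = np.asarray(pl_db, dtype=float)
    alpha, pl_d0 = np.polyfit(x, y, 1)          # slope = alpha, intercept = PL(d0)
    sigma_x = np.std(y - (pl_d0 + alpha * x))   # std. dev. of the fit residuals X
    return pl_d0, alpha, sigma_x

# Illustration with synthetic LOS-like data (alpha close to the 1.73 of Table 5),
# assuming a 40 dB reference path loss at 1 m for demonstration only.
d = np.arange(1, 26)
pl_measured = 40.0 + 10 * 1.73 * np.log10(d) + np.random.normal(0, 1.29, d.size)
pl0, alpha, sigma = fit_path_loss_exponent(d, pl_measured)
```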

Fig. 16. Average Path versus distance in LOS scenario

Fig. 17. Average Path versus distance in NLOS scenario


| | MIMO LOS | MIMO NLOS |
|---|---|---|
| α | 1.73 | 3.03 |
| σ_X | 1.29 | 2.75 |

Table 5. Path Loss exponent α and standard deviation σ_X of X

From the results shown in Table 5, the NLOS scenario has a path loss exponent greater than 2 and also a larger σ_X value compared with the LOS scenario. For the LOS case, the exponent α = 1.73 is smaller than the free space exponent α = 2; the reason is the collection of all multipath components, so that a higher power is received than from the direct signal alone in free space.

#### **3.3.3 Capacity**


If we consider a system composed of m transmitting antennas and n receiving antennas, the maximum capacity of a memoryless MIMO narrowband channel, expressed in bits/s/Hz, with a uniform power allocation constraint and in the presence of additive white Gaussian noise, is given by Foschini et al. [22]:

$$\mathbf{C} = \log\_2 \det \left( \mathbf{I}\_{\mathrm{m}} + \sigma . \mathbf{H} \mathbf{H}^{\mathrm{H}} \right) \tag{8}$$

where σ is the average signal to noise ratio per receiving antenna, I_m denotes the identity matrix of size *m*, the superscript H represents the Hermitian conjugate of the matrix, and det(X) means the determinant of a matrix X. To clearly point out the MIMO system performance for the LOS and NLOS cases, the ergodic capacity is calculated for a fixed transmitted power, and the SNR at the receiver is determined by the path loss. In this case, the capacity includes both effects related to received power and spatial richness. The relationship between the channel capacity C and the distance d_Tx-Rx based on equation (8) is shown in figure 18. One can see that the NLOS case suffers from its higher path-loss exponent, which is due to the directional radiation pattern of the MIMO patch antenna, resulting in a lower capacity compared to the LOS case by about 3 bit/s/Hz.
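
The ergodic-capacity computation of Eq. (8) can be sketched as below. Following the equation as printed, the SNR term is not normalized by the number of transmit antennas, and the 2x2 Rayleigh channel and the 10 dB SNR are assumed values used only for illustration.

```python
import numpy as np

def mimo_capacity_bps_hz(H, snr_linear):
    """Eq. (8): C = log2 det(I_m + snr * H H^H) for an m x n channel matrix H."""
    H = np.asarray(H, dtype=complex)
    m = H.shape[0]
    gram = H @ H.conj().T                    # H H^H (Hermitian, m x m)
    sign, logdet = np.linalg.slogdet(np.eye(m) + snr_linear * gram)
    return logdet / np.log(2.0)              # convert natural log to base 2

# 2x2 example: i.i.d. Rayleigh channel at 10 dB SNR (illustrative only).
rng = np.random.default_rng(0)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
C = mimo_capacity_bps_hz(H, snr_linear=10 ** (10 / 10))
```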

Fig. 18. Channel capacity for LOS and NLOS scenarios

## **4. Conclusion**

This study deals with several aspects of the UWB and MIMO propagation channels and their deployment for wireless sensors. Successful design and deployment of these techniques require detailed channel characterization. Measurement campaigns, made at two different depth levels in a former gold mine under LOS and NLOS scenarios, have been analyzed to obtain the relevant statistical parameters of the channel.

Although MIMO systems can offer high capacity through the multipath propagation channel, they have some drawbacks such as complexity, power consumption and the size limitation of the wireless sensor. However, UWB has several advantages compared to narrowband systems. The wide bandwidth (typically 500 MHz or more) gives UWB excellent immunity to interference from narrowband systems and from multipath effects. Another important benefit of UWB is its high data rate. Additionally, UWB offers significant advantages with respect to robustness, energy consumption and location accuracy.

Nevertheless, UWB technology for wireless networks is not all about advantages. One of the main difficulties of UWB communication is its low transmission power, so information can only travel a short distance compared to 2.4 GHz systems, which can reach longer distances. Moreover, UWB in the microwave range does not offer a high resistance to shadowing, but this problem can be mitigated in sensor networks by appropriate routing and possibly collaborative communications.

## **5. References**

A. A. M. Saleh and R. A. Valenzuela, "A Statistical Model for Indoor Multipath Propagation," IEEE J. Select. Areas Commun., vol. SAC-5, pp. 128-137, Feb. 1987.

A. F. Molisch, B. Kannan, C. C. Chong, S. Emami, A. Karedal, J. Kunisch, H. Schantz, U. Schuster and K. Siwiak, "IEEE 802.15.4a Channel Model – Final Report", IEEE 802.15-04-0662-00-004a, San Antonio, TX, USA, Nov. 2004.

A. J. Goldsmith, S. Cui and A. Bahai, "Energy-efficiency of MIMO and cooperative MIMO in sensor networks", IEEE Journal on Selected Areas in Communications, 22(6), August 2004.

A. Muqaibel, A. Safaai-Jazi, A. Attiya, B. Woerner and S. Riad, "Path-Loss and time dispersion parameters for indoor UWB propagation", IEEE Transactions on Wireless Communications, Vol. 5, Issue 3, March 2006, pp. 550-559.

Arslan A, Chen AN and Benedetto MG (2006) Ultra-wideband wireless communication. Wiley Interscience, Hoboken, New Jersey.

Arslan H and Benedetto MGD (2005) Introduction to UWB. Book chapter in Ultra Wideband Wireless Communications (ed. Arslan H), John Wiley & Sons, USA.

Chehri A and Fortier P (2006a) Frequency domain analysis of UWB channel propagation in underground mines. Proceedings of the IEEE 64th Vehicular Technology Conference, Montreal, Canada, 25–28 September 2006, pp. 1–5.

Chehri A, Fortier P and Tardif PM (2006a) Deployment of ad-hoc sensor networks in underground mines. Proceedings of the Conference on Wireless and Optical Communication, and Wireless Sensor Network, Alberta, Canada, 3–4 July 2006, pp. 13–19.

Choi JD and Stark WE (2002) Performance of ultra-wideband communications with suboptimal receivers in multipath channels. IEEE Journal on Selected Areas in Communications, pp. 1754–1766.

F. Granelli, H. Zhang, X. Zhou and S. Maranò, "Research Advances in Cognitive Ultra Wide Band Radio and Their Application to Sensor Networks," Mobile Networks and Applications, Vol. 11, pp. 487-499, 2006.

G. J. Foschini and J. Gans, "On Limits of Wireless Communications in a Fading Environment when Using Multiple Antennas", Wireless Personal Communications, vol. 6, no. 3, pp. 315-335, March 1996.

J. Li and T. Talty, "Channel Characterization for Ultra-Wideband Intra-Vehicle Sensor Networks," Military Communications Conference (MILCOM), pp. 1-5, 2006.

L. Stoica, A. Rabbachin, H. O. Repo, T. S. Tiuraniemi and I. Oppermann, "An Ultrawideband System Architecture for Tag Based Wireless Sensor Networks," IEEE Transactions on Vehicular Technology, Vol. 54, pp. 1632-1645, 2005.

L. Yuheng, L. Chao, Y. He, J. Wu and Z. Xiong, "A Perimeter Intrusion Detection System Using Dual-Mode Wireless Sensor Networks," Second International Conference on Communications and Networking in China, pp. 861-865, 2007.

M. Chamchoy, W. Doungdeun and S. Promwong, "Measurement and modeling of UWB path loss for single-band and multi-band propagation channel", IEEE International Symposium on Communications and Information Technology (ISCIT 2005), vol. 2, 12-14 Oct. 2005, pp. 991-995.

Molisch AF (2005) Ultra wideband propagation channels - theory, measurement, and modeling. IEEE Transactions on Vehicular Technology, pp. 1528–1545.

Molisch, A. F.; Cassioli, D.; Chong, C.-C.; Emami, S.; Fort, A.; Kannan, B.; Karedal, J.; Kunisch, J.; Schantz, H. G.; Siwiak, K.; Win, M. Z., "A Comprehensive Standardized Model for Ultrawideband Propagation Channels", IEEE Transactions on Antennas and Propagation, Volume 54, Issue 11, Part 1, Nov. 2006, pp. 3151-3166.

Nedil M, Denidni TA, Djaiz A and Habib AM (2008) A new ultra-wideband beamforming for wireless communications in underground mines. Progress in Electromagnetics Research, 4: 1–21.

R. S. Thoma, O. Hirsch, J. Sachs and R. Zetik, "UWB Sensor Networks for Position Location and Imaging of Objects and Environments," The Second European Conference on Antennas and Propagation (EuCAP), pp. 1-9, 2007.

S. Ghassemzadeh, L. Greenstein, T. Sveinsson, A. Kavcic and V. Tarokh, "UWB indoor path loss model for residential and commercial environments," in Proc. IEEE Veh. Technol. Conf. (VTC 2003-Fall), Orlando, FL, USA, pp. 629-633, Sept. 2003.

T. S. Rappaport, Wireless Communications: Principles & Practice, Upper Saddle River, NJ, Prentice Hall PTR, 1996.

X. Huang, E. Dutkiewicz, R. Gandia and D. Lowe, "Ultra-Wideband Technology for Video Surveillance Sensor Networks," IEEE International Conference on Industrial Informatics, pp. 1012-1017, 2006.

## **Cold Gas Propulsion System – An Ideal Choice for Remote Sensing Small Satellites**

Assad Anis

*NED University of Engineering and Technology Pakistan* 

## **1. Introduction**


Cold gas propulsion systems play an ideal role when considering small satellites for a wide range of earth orbit and even interplanetary missions. These systems have been used quite frequently in small satellites since the 1960s. They have proven to be the most suitable and successful low thrust space propulsion for LEO maneuvers, due to their low complexity, efficient use of propellant that presents no contamination and no thermal emission, and their low cost and power consumption. The major benefits obtained from this system are low budget, mass, and volume. The system mainly consists of a propellant tank, solenoid valves, thrusters, tubing and fittings (fig. 1). The propellant tank stores the fuel required for attitude control of the satellite during its operation in orbit. The fuel used in cold gas systems is compressed gas. Thrusters provide a sufficient amount of force to provide stabilization in pitch, yaw and roll movement of the satellite. From the design point of view, three components of cold gas propulsion systems play an important role, i.e. mission design, propellant tank and cold gas thrusters. These components are discussed in detail in section 3. Selection of a suitable propellant for cold gas systems is as important as the above three components; this is discussed in section 2 of this chapter. Section 4 describes the case study of the cold gas propulsion system which is practically implemented in Pakistan's first prototype remote sensing satellite PRSS.

Fig. 1. Schematic of cold gas propulsion system


## **2. Cold gas propellants**

Table 1 shows typical performance values for selected cold gas propellants. Nitrogen is most commonly used as a cold gas propellant, and it is preferred for its storage density, performance, and lack of contamination concerns. As shown in the table below, hydrogen and helium have greater specific impulse compared to other propellants, but have a low molecular weight. This quality causes an increased tank volume and weight, and ultimately an increase in system weight. Carbon dioxide can be a good choice, but due to its toxic nature, it is not considered for cold gas systems.

Another good alternative propellant could be ammonia, which stores in its liquid form to reduce tank volume. Its specific impulse is higher than nitrogen or other propellants and reduces concerns of leakage, although it also necessitates a lower mass flow rate. Despite the benefits, ammonia is not suitable for this system as one alternative to decrease the system size and weight includes pressurizing the satellite and allowing the entire structure to act as a propellant tank, as previously mentioned. In this system, the ammonia could cause damage to electrical components.


| Propellant | Molecular Weight (kg/kmole) | Density (g/cm³) | Specific Thrust (s), Theoretical | Specific Thrust (s), Measured |
|---|---|---|---|---|
| Hydrogen | 2.0 | 0.02 | 296 | 272 |
| Helium | 4.0 | 0.04 | 179 | 165 |
| Nitrogen | 28.0 | 0.28 | 80 | 73 |
| Ammonia | 17.0 | Liquid | 105 | 96 |
| Carbon dioxide | 44.0 | Liquid | 67 | 61 |

Table 1. Cold Gas Propellant Performances

## **3. Cold gas propulsion system design**

## **3.1 Mission design**

In order to design a cold gas propulsion system for a specific space mission, it is important first to find out the Δ*V* requirements for the maneuvers listed in table 2. Table 2 gives information about all the operations performed for spacecraft attitude and orbit control. However, cold gas systems are used only for attitude control and for orbit maintenance and maneuvering (table 3).

The Tsiolkowski equation and its corollaries are used to convert these velocity change requirements into propellant requirements.

$$
\Delta V = \mathcal{g}\_c I\_{sp} \ln \left( \frac{\mathcal{W}\_i}{\mathcal{W}\_f} \right) \tag{1}
$$

$$\mathcal{W}\_f = \mathcal{W}\_i \left[ \mathbf{1} - \exp\left(-\frac{\Delta V}{\mathcal{g}\_c I\_{sp}}\right) \right] \tag{2}$$


| Task | Description |
|---|---|
| **Mission Design** (translational velocity change) | |
| Orbit changes / plane changes | Convert one orbit to another |
| Orbit trim | Remove launch vehicle errors |
| Stationkeeping | Maintain constellation position |
| Repositioning | Change constellation position |
| **Attitude Control** (rotational velocity change) | |
| Thrust vector control | Remove vector errors |
| Attitude control | Maintain an attitude |
| Attitude changes | Change attitudes |
| Reaction wheel unloading | Remove stored momentum |
| Maneuvering | Repositioning the spacecraft axes |

Table 2. Spacecraft Propulsion Functions

| Propulsion Technology | Typical Steady-State Isp (s) |
|---|---|
| Cold Gas | 30-70 |
| Solid | 280-300 |
| Liquid Monopropellant | 220-240 |
| Liquid Bipropellant | 305-310 |
| Dual Mode | 313-322 |
| Hybrid | 250-340 |
| Electric | 300-3,000 |

Table 3. Principal Options for Spacecraft Propulsion Systems

$$\mathcal{W}\_p = \mathcal{W}\_f \left[ \exp\left(\frac{\Delta V}{\mathcal{g}\_c I\_{sp}}\right) - 1 \right] \tag{3}$$

In the case of cold gas propulsion systems, the pressure, mass, volume and temperature of the propellant are interconnected by the general gas equation.

$$PV = mRT\tag{4}$$
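
A minimal sketch of how Eqs. (1)-(4) are used together is given below: Eq. (3) converts a velocity-change requirement into a propellant mass, and Eq. (4) sizes the storage volume. The satellite mass, ΔV, storage pressure and temperature are assumed values chosen only for illustration; the nitrogen Isp of 73 s is the measured value from Table 1.

```python
import math

G0 = 9.80665   # m/s^2, standard gravity (g_c)

def propellant_mass(w_final_kg, delta_v_ms, isp_s):
    """Eq. (3): propellant needed to give the final mass w_f a velocity change delta_v."""
    return w_final_kg * (math.exp(delta_v_ms / (G0 * isp_s)) - 1.0)

def tank_volume_m3(gas_mass_kg, pressure_pa, temperature_k, r_specific=296.8):
    """Eq. (4), PV = mRT, solved for V (default R is for nitrogen, J/(kg*K))."""
    return gas_mass_kg * r_specific * temperature_k / pressure_pa

# Assumed illustrative numbers: a 50 kg satellite, 20 m/s of delta-V,
# nitrogen at Isp = 73 s, stored at 200 bar and 293 K.
m_prop = propellant_mass(50.0, 20.0, 73.0)        # ~1.4 kg of nitrogen
volume = tank_volume_m3(m_prop, 200e5, 293.0)     # ~0.006 m^3, i.e. ~6 litres
```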

#### **3.2 Tank design**

Satellite propellant tanks used in cold gas propulsion systems are either spherical or cylindrical in shape. Tank weights are a byproduct of the structural design of the tanks. The load in the walls of the spherical pressure vessels is pressure times the area as shown in figure 2. The force *PA* tending to separate the tanks is given as,


$$PA = P\pi r^2 \tag{5}$$

Fig. 2. Spherical Tank Stress

Stress is calculated as,

$$\text{stress} = \sigma = \frac{load}{area} = \frac{P\pi r^2}{2\pi rt} = \frac{Pr}{2t} \tag{6}$$

The thickness of the tank is calculated more accurately by including the joint efficiency *e*, and is given as follows,

$$t = \frac{P \times r}{2\sigma e - 0.2P} \tag{7}$$

In the case of a cylindrical pressure vessel, the hoop stress is twice that in a spherical pressure vessel, while the longitudinal stress remains the same as in a spherical pressure vessel. To determine the hoop stress $\sigma_h$, a cut is made along the longitudinal axis and a small slice is constructed, as illustrated in figure 3.

Fig. 3. Cylindrical Pressure Vessel Stresses

The equation may be written as,

$$
2\,\sigma_h\, t\, d_x = p \cdot 2r\, d_x
$$

$$
\sigma_h = \frac{pr}{t} \tag{8}
$$
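
A minimal sketch of the tank sizing relations in eqs. (7) and (8). The pressure, radius, allowable stress and joint efficiency below are purely illustrative assumptions, not values from this chapter.

```python
def sphere_wall_thickness(p, r, sigma_allow, e=1.0):
    """Minimum spherical shell wall thickness including joint efficiency, eq. (7)."""
    return p * r / (2.0 * sigma_allow * e - 0.2 * p)

def cylinder_hoop_stress(p, r, t):
    """Hoop stress in a thin-walled cylindrical pressure vessel, eq. (8)."""
    return p * r / t

# Illustrative numbers only:
p = 200e5        # design (burst) pressure, Pa
r = 0.15         # internal radius, m
sigma = 240e6    # assumed allowable stress, Pa
t = sphere_wall_thickness(p, r, sigma, e=0.85)
print(f"spherical wall thickness ~ {t * 1000:.1f} mm")
print(f"hoop stress of a cylinder with the same t ~ {cylinder_hoop_stress(p, r, t) / 1e6:.0f} MPa")
```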

#### **3.3 Thrusters design**


Thrusters are convergent-divergent nozzles (fig. 4) that provide the desired amount of thrust to perform maneuvers in space. The nozzle is shaped such that high-pressure, low-velocity gas enters the nozzle and is compressed as it approaches the smallest-diameter (throat) section, where the gas velocity increases to exactly the speed of sound.

Fig. 4. Convergent-Divergent Nozzle

Thrust is generated by momentum exchange between the exhaust and the spacecraft and by the pressure imbalance at the nozzle exit. According to Newton's second law, the momentum thrust is given as

$$F = \dot{m}V\_e \tag{9}$$

or, in terms of the propellant weight flow rate,

$$F = \frac{\dot{w}_p}{g_c} V_e \tag{10}$$

The pressure thrust due to the pressure imbalance at the nozzle exit is

$$F = P_e A_e \tag{11}$$

In the case of satellites, the thrusters are designed for infinite expansion, i.e. for vacuum conditions where the ambient pressure is taken as zero. The total thrust equation for infinite expansion is given as,

$$F = A_t P_c \gamma \left[ \left( \frac{2}{\gamma - 1} \right) \left( \frac{2}{\gamma + 1} \right)^{\frac{\gamma + 1}{\gamma - 1}} \left( 1 - \left( \frac{P_e}{P_c} \right)^{\frac{\gamma - 1}{\gamma}} \right) \right]^{\frac{1}{2}} + P_e A_e \tag{12}$$
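
The following short sketch evaluates the vacuum thrust as the sum of the momentum term of eq. (9) and the exit-pressure term of eq. (11). The mass flow rate, exit velocity, exit pressure and exit area are assumed illustrative values for a roughly 1 N class thruster, not figures from this chapter.

```python
def vacuum_thrust(mdot, v_e, p_e, a_e):
    """Total vacuum thrust: momentum thrust (eq. 9) plus pressure thrust (eq. 11)."""
    return mdot * v_e + p_e * a_e

# Assumed illustrative values:
F = vacuum_thrust(mdot=1.3e-3, v_e=700.0, p_e=1500.0, a_e=5.0e-5)
print(f"thrust ~ {F:.2f} N")
```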

The area ratio and the pressure ratio are given as,


$$\frac{A_e}{A_t} = \frac{1}{M_e} \left[ \left( \frac{2}{\gamma + 1} \right) \left( 1 + \frac{\gamma - 1}{2} M_e^{\,2} \right) \right]^{\frac{\gamma + 1}{2(\gamma - 1)}} \tag{13}$$

$$\frac{P_c}{P_e} = \left( 1 + \frac{\gamma - 1}{2} M_e^{\,2} \right)^{\frac{\gamma}{\gamma - 1}} \tag{14}$$

The specific impulse (*Isp*) for cold gases ranges from 30-75 seconds and may be calculated as,

$$I_{sp} = \frac{C^{*}}{g_c}\, \gamma \left[ \left( \frac{2}{\gamma - 1} \right) \left( \frac{2}{\gamma + 1} \right)^{\frac{\gamma + 1}{\gamma - 1}} \left( 1 - \left( \frac{P_e}{P_c} \right)^{\frac{\gamma - 1}{\gamma}} \right) \right]^{\frac{1}{2}} \tag{15}$$

The pressure at the throat can be calculated by the following formula

$$\frac{P\_t}{P\_c} = \left(1 + \frac{\gamma - 1}{2}\right)^{-\frac{\gamma}{\gamma - 1}}\tag{16}$$

The characteristic velocity ($C^{*}$) can be calculated by the following formula

$$C^\* = \frac{a\_0}{\gamma \left(\frac{2}{\gamma + 1}\right)^{\frac{\gamma + 1}{2(\gamma - 1)}}} \tag{17}$$

The exit velocity is given as

$$V_e = \sqrt{\frac{2\gamma R T_c}{\gamma - 1} \left[ 1 - \left( \frac{P_e}{P_c} \right)^{\frac{\gamma - 1}{\gamma}} \right]} \tag{18}$$

The above equations are used in the design of a cold gas thruster.
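
To show how eqs. (13)-(18) fit together, the sketch below evaluates them for a nitrogen cold gas thruster. The exit Mach number, chamber temperature and other inputs are assumptions chosen purely for illustration, not design values from this chapter.

```python
import math

def area_ratio(gamma, M_e):
    """Eq. (13): nozzle expansion ratio A_e/A_t for a given exit Mach number."""
    term = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M_e**2)
    return term ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / M_e

def pressure_ratio(gamma, M_e):
    """Eq. (14): chamber-to-exit pressure ratio P_c/P_e."""
    return (1.0 + 0.5 * (gamma - 1.0) * M_e**2) ** (gamma / (gamma - 1.0))

def c_star(gamma, a0):
    """Eq. (17): characteristic velocity from the chamber sonic velocity a0."""
    return a0 / (gamma * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))))

def specific_impulse(gamma, cstar, pe_over_pc, g_c=9.81):
    """Eq. (15): ideal vacuum specific impulse."""
    term = (2.0 / (gamma - 1.0)) \
        * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (gamma - 1.0)) \
        * (1.0 - pe_over_pc ** ((gamma - 1.0) / gamma))
    return (cstar / g_c) * gamma * math.sqrt(term)

def exit_velocity(gamma, R, T_c, pe_over_pc):
    """Eq. (18): nozzle exit velocity."""
    return math.sqrt(2.0 * gamma * R * T_c / (gamma - 1.0)
                     * (1.0 - pe_over_pc ** ((gamma - 1.0) / gamma)))

# Assumed inputs: nitrogen (gamma = 1.4, R = 296.8 J/kg/K) at a 293 K chamber
# temperature, with an exit Mach number of 5 chosen purely for illustration.
gamma, R, T_c = 1.4, 296.8, 293.0
a0 = math.sqrt(gamma * R * T_c)
M_e = 5.0
pr = pressure_ratio(gamma, M_e)
cs = c_star(gamma, a0)
print(f"A_e/A_t ~ {area_ratio(gamma, M_e):.0f}, P_c/P_e ~ {pr:.0f}")
print(f"C* ~ {cs:.0f} m/s, Isp ~ {specific_impulse(gamma, cs, 1.0 / pr):.0f} s, "
      f"V_e ~ {exit_velocity(gamma, R, T_c, 1.0 / pr):.0f} m/s")
```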

## **4. Case study**

The author has personally led and guided the Satellite Research and Development Centre research team of the Pakistan Space and Upper Atmosphere Research Commission in the design and development of the cold gas propulsion system of the prototype of Pakistan's first remote sensing satellite (PRSS). The Satellite Research and Development Centre, Karachi (SRDC-K) has produced an inexpensive and modular system for small-satellite applications. The cold gas propulsion system resulting from this effort is unique in several ways. It utilizes a simple tank storage system in which the entire system operates at an optimum design in-line pressure.

In order to minimize power consumption, the thrusters are operated by solenoid valves that require an electric pulse to open and close. Between the pulses the thruster is magnetically latched in either the open or closed position as required. This dramatically reduces the power required by the thruster valves while maintaining the option for a small impulse bit. Flow rate sensors are used in the system in order to avoid any failure, i.e. complete pressure loss while a valve remains in the open position. The system uses eight thrusters of 1 N each, operating at an inlet pressure of 8 bars. By integrating these thrusters into the spacecraft body, pitch, yaw and roll control as well as $\Delta V$ maneuvers can be accomplished. The choice of a suitable propellant also plays an important role in designing cold gas systems. Compressed nitrogen gas offers a very good combination of storage density and specific impulse compared with other available cold gas propellants, whereas the use of hydrogen or helium requires a much larger tank mass because of their low gas density. Since the propellant is simple pressurized nitrogen, a variety of suitable tank materials can be selected. The tank designed and developed for this mission is an Aluminum 6061 spherical tank which stores 2 kg of gaseous nitrogen. The whole system is thoroughly tested before mounting on the honeycomb PRSS structure.

## **4.1 Introduction to PRSS**


PRSS is a prototype satellite and is not intended for flight. The purpose of this work is to design, develop and test a small satellite on the ground so that the experience can be utilized in the near future on engineering-qualified and flight models. The CAD model of PRSS is shown in fig. 5. The model was developed in PRO/E Wildfire 2.0 software.

Fig. 5. CAD model of PRSS

The satellite mainly consists of:

- 1 Telescope
- CCD Camera, Optics and Electronics
- 3 Reaction wheels
- 3 Gyros
- 3 Torque Rods
- 3 Digital Sun Sensors (DSS)
- RF systems & Antennas
- On Board Computer Electronics
- Power System
- 4 Solar Panels on each side of the cube
- Honeycomb Aluminum Structure
- Cold Gas Propulsion System, which includes 8 thrusters, 1 propellant tank and regulators and fittings


All the above mentioned systems have been integrated successfully on an Al 6061 honeycomb structure which is cubical in shape with dimensions of 1 m × 1 m × 1.2 m. All subsystems have been designed for a 3-year satellite life. PRSS has an overall weight of 100 kg and therefore falls into the category of small satellites.

## **4.2 System design**

Cold gas propulsion systems use thrusters which utilize the smallest rocket technology available today. These systems are well known for their low complexity, although they are characterized by low specific impulse. They are the cheapest, simplest and most reliable propulsion systems available for orbit maintenance, maneuvering and attitude control. Cold gas propulsion systems are designed for use as a satellite maneuvering and control system where only a limited lifetime is required. Their specific impulse ranges from 30 seconds to 70 seconds, depending on the type of propellant used. They usually consist of a pressurized gas tank, control valves, regulators, filters and a nozzle. The nozzle can be a bell, conical, or tube nozzle. SRDC-K uses a standard conical nozzle with a 16° half-angle and a nozzle area ratio of 50:1. A schematic of the cold gas thruster system used by PRSS is shown below in fig. 6. The system weight is mainly determined by the pressure in the thrust chamber: increased chamber pressure results in increased propellant tank and piping masses, therefore an optimum pressure must be used so that the system weight can be minimized. Nitrogen is stored at 100 bar pressure in the propellant tank. Fill and drain valves facilitate filling and venting nitrogen from the system. The eight thrusters are connected to the solenoid valves and the propellant tank with PTFE tubing, which can carry a pressure of more than 20 bars. The in-line and thruster operating pressure is 8 bars. The system also contains pressure transducers before and after the pressure regulator to sense the tank pressure and the in-line pressure respectively.

Fig. 6. Cold Gas Propulsion System for PRSS

## **4.3 Propellant tank design, development and testing**

The propellant tank, shown in fig. 7, is a standard spherical pressure vessel. It has been designed and built by SRDC-K, with the detailed analysis also performed at SRDC-K. In order to reduce costs, the tank is welded from two hemispherical aluminum parts.

Fig. 7. Propellant Tank



Each hemisphere has a wall thickness of 4.2 mm and a factor of safety of 1.5 is used, giving a minimum theoretical burst pressure of 200 bars. The ASME Section VIII pressure vessel code is used for designing the spherical propellant tank. Table 4 presents the calculated values of the tank design parameters. Titanium could have been another choice for the propellant tank; the space grade of titanium is Ti-6Al-4V. The weight of the propellant tank would have been lower with titanium than with aluminum, but the cost of manufacturing a titanium tank is much higher.
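
As a rough cross-check of the figures quoted above and in Table 4, the sketch below applies the general gas equation (4) to the stored nitrogen load. The storage temperature is an assumption, since the chapter does not state it.

```python
R_N2 = 296.8    # specific gas constant of nitrogen, J/(kg*K)
m_gas = 2.0     # stored nitrogen mass, kg (from the case study)
P_op = 100e5    # operating pressure, Pa (from Table 4)

for T in (273.0, 293.0):            # assumed storage temperatures, K
    V = m_gas * R_N2 * T / P_op     # eq. (4) rearranged for volume
    print(f"T = {T:.0f} K -> required volume ~ {V:.4f} m^3")
# Both values come out close to the 0.016 m^3 tank volume listed in Table 4.
```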


Table 4. Propellant Tank Design Parameters

| Parameter | Designed Value |
|---|---|
| Propellant | N2 gas |
| Tank volume | 0.016 m3 |
| Operating pressure | 100 bars |
| Proof pressure | 150 bars |
| Burst pressure | 200 bars |
| Thickness of tank shell | 0.0042 m |

The tank design analyses included stress analysis of the tank shell. This approach used assumptions, computer tools, test data and experimental data which are commonly utilized on the majority of pressure vessels for successful design, fabrication, testing and qualification. The following factors have been taken into consideration in performing the stress analysis of the tank shell:

- Temperature environment
- Material properties
- Volumetric properties
- Mass properties of fluid
- Fluids used by the tank
- Mass properties of the tank shell material
- External loads
- Size of girth weld
- Resonant frequency
- Tank boundary conditions
- Residual stress in girth weld
- Load reaction points
- Design safety factors

The validation of the tank shell design has been done by stress analysis, and the resonant frequencies have also been obtained. The propellant tank is subjected to the following sequence of acceptance tests:

- Preliminary visual examination
- Ambient proof pressure test
- External leakage test
- Penetrant inspection
- Radiographic inspection
- Mass measurement
- Cleanliness verification
- Final examination

The ambient hydrostatic proof pressure test is conducted at 130 +20/-0 bars with a pressure hold period of 300 seconds. After the acceptance tests, radiographic inspection of the girth weld and penetrant inspection of the entire external surface are conducted to verify that the tank was not damaged during acceptance testing. All units successfully passed acceptance testing. After the conclusion of acceptance testing, one propellant tank was subjected to the following sequence of qualification tests prior to delivery:

- Proof pressure cycling test
- MEOP pressure cycling test
- External leakage test
- Radiographic inspection
- Penetrant inspection
- Burst pressure test
- Visual inspection
- Data review


The propellant tank assembly has successfully completed all acceptance and qualification level testing. The tank meets or exceeds all requirements and provides a low-cost solution for the spacecraft.

After successful testing, the propellant tank is mounted on the PRSS structure as shown in fig. 8.

Fig. 8. Installation of Propellant Tank on PRSS Structure

## **4.4 Thrusters design, development and testing**

This system uses eight thrusters of 1 N each (fig. 9a), mounted on PRSS as shown in fig. 10. These thrusters have been designed and developed for infinite expansion, i.e. for vacuum conditions where the atmospheric pressure is zero. An area ratio of 50 has been used, while the chamber pressure is 8 bars. The characteristic velocity has been calculated as 433.71 m/s, which results in an Isp of 73 seconds. Assuming a nozzle efficiency of 98%, the nozzle cone half-angle has been calculated as 16°. The thrusters have been developed using stainless steel; the use of stainless steel eliminates the potential for reactions between the propellant and the thruster and also outgassing concerns. The test bench developed at the S/P/T laboratory, shown in fig. 9b, is capable of testing cold gas thrusters from 1 to 5 N. The system consists of an aluminum plate which is mounted on a ball bearing. The thruster is connected to the plate and fitted with a solenoid valve through SS 316 tubing.

Fig. 9. (a) Cold Gas Thruster; (b) Test Bench
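
The quoted thruster figures (area ratio 50, C* = 433.71 m/s, Isp = 73 s) can be back-checked with eqs. (13)-(17) in the short sketch below. The nitrogen properties and a chamber temperature of about 297 K are assumptions, and the 98% nozzle efficiency quoted above is applied at the end.

```python
import math

gamma, R, T_c, g_c = 1.4, 296.8, 297.0, 9.81   # assumed nitrogen properties

def expansion_ratio(M):
    """Eq. (13): A_e/A_t as a function of exit Mach number."""
    t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M * M)
    return t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / M

# Solve eq. (13) for the exit Mach number that gives A_e/A_t = 50, by bisection.
lo, hi = 1.01, 20.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if expansion_ratio(mid) < 50.0:
        lo = mid
    else:
        hi = mid
M_e = 0.5 * (lo + hi)

pc_over_pe = (1.0 + 0.5 * (gamma - 1.0) * M_e * M_e) ** (gamma / (gamma - 1.0))           # eq. (14)
a0 = math.sqrt(gamma * R * T_c)
c_star = a0 / (gamma * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))))  # eq. (17)
term = (2.0 / (gamma - 1.0)) * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (gamma - 1.0)) \
       * (1.0 - (1.0 / pc_over_pe) ** ((gamma - 1.0) / gamma))
isp_ideal = (c_star / g_c) * gamma * math.sqrt(term)                                      # eq. (15)
print(f"M_e ~ {M_e:.2f}, C* ~ {c_star:.1f} m/s")
print(f"ideal Isp ~ {isp_ideal:.1f} s, with 98% nozzle efficiency ~ {0.98 * isp_ideal:.1f} s")
```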

The system uses a FUTEK load cell, which is a force sensor, to measure the force produced by the thruster. A pressure data logger and transducer are also connected to the system to measure the pressure during testing.

## **4.5 Propulsion system integration on PRSS structure**


Fig. 10. PRSS Structure

All components of the propulsion system have been successfully integrated with the PRSS structure, as shown in fig. 10. The structure has been assembled from Al 6061 honeycomb panels with the help of end attachments and inserts. The inserts are designed and developed according to ESA standards, and the end attachments are made of AU4G. The thrusters are mounted on each panel with the help of inserts and titanium bolts; titanium bolts are used for their high strength and light weight. Four thrusters are mounted on the right face of the structure and four on the left side, with a pair of thrusters in the middle of each panel for pitch stabilization. The propellant tank is mounted on the inner side of the top panel with the help of inserts and titanium bolts.


## **4.6 Thrust control mechanism**

The control panel for the thrust system has been designed in LabVIEW software, as shown in fig. 11, to test the system at ground level. This controller monitors the position of the satellite as well as the pressure of the propellant tank and solenoid valves from the pressure transducers present in the system. It also observes the impulse bit of the system and the temperature of the propellant tank. The firing time of the thrusters is adjustable from the panel, and ground tests of the propulsion system can be monitored by this control system. The test results are listed in table 5.

Fig. 11. Thrust Control Panel for PRSS Propulsion System


Table 5. Minimum Impulse Bit

| Thruster # | Opening Coil Response @ 8 bars, 24 VDC (msec) | Minimum Impulse Bit @ 8 bars (msec) | Opening Coil Response @ 10 bars, 24 VDC (msec) | Minimum Impulse Bit @ 10 bars (msec) |
|---|---|---|---|---|
| 1 | 2.90 | 6.15 | 3.20 | 6.20 |
| 2 | 2.95 | 6.15 | 3.30 | 6.60 |
| 3 | 2.85 | 6.40 | 3.50 | 6.10 |
| 4 | 2.80 | 6.65 | 2.90 | 5.80 |
| 5 | 2.77 | 6.25 | 3.25 | 6.02 |
| 6 | 2.86 | 6.30 | 3.10 | 6.72 |
| 7 | 2.90 | 6.25 | 3.10 | 6.56 |
| 8 | 2.80 | 6.40 | 3.25 | 6.53 |
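
The impulse delivered during these minimum pulses can be roughly estimated by multiplying the pulse width by the nominal 1 N thrust. The short sketch below does this for one of the minimum impulse bit columns of Table 5; it assumes the thrust is constant over the pulse, which is a simplification.

```python
thrust = 1.0                                                        # nominal thrust, N
min_pulse_ms = [6.15, 6.15, 6.40, 6.65, 6.25, 6.30, 6.25, 6.40]     # minimum pulse widths from Table 5
impulse_bits = [thrust * t_ms / 1000.0 for t_ms in min_pulse_ms]    # impulse bit, N*s
for i, ib in enumerate(impulse_bits, start=1):
    print(f"Thruster {i}: ~ {ib * 1000:.2f} mN*s")
```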

## **5. Conclusion**

In conclusion, this work results in a reduction of the size, mass, power consumption, and cost of the system. The use of titanium bolts, aluminum inserts, an aluminum tank, and PTFE tubing reduces the mass by 35% and ultimately lowers the cost. Electric solenoid valves reduce the power consumption by 40%. The main purpose of this work is to document the potential of a low-power cold gas propulsion system adequately, to allow the engineers and designers of small satellites to consider it as a practical propulsion system option.

## **6. Abbreviations**

- $\Delta V$ Velocity increase of the vehicle, m/s
- $g_c$ Gravitational constant, 9.8 m/s²
- $I_{sp}$ Specific impulse, s
- $W_i$ Initial vehicle weight, Kg
- $W_f$ Final vehicle weight, Kg
- $W_p$ Propellant weight required to produce the given $\Delta V$, Kg
- $P$ Pressure of the gas, bars
- $V$ Volume of the gas, m³
- $m$ Mass of the gas, Kg
- $R$ General gas constant, KJ/Kg·K
- $T$ Temperature of the gas, K
- $A$ Area, m²
- $r$ Internal radius of the tank, m
- $t$ Thickness of the tank wall, m
- $e$ Joint efficiency
- $\sigma$ Allowable stress, MPa
- $\sigma_h$ Hoop stress
- $d_x$ Length of an element in a cylindrical pressure vessel, m
- $\dot{m}$ Mass flow rate of the propellant, Kg/s
- $\dot{w}_p$ Weight flow rate of the propellant, N/s
- $V_e$ Exit velocity, m/s
- $\gamma$ Specific heat ratio
- $A_e$ Exit area, mm²
- $M_e$ Exit Mach number
- $P_e$ Exit pressure, bars
- $P_c$ Chamber pressure in the nozzle, bars
- $P_t$ Pressure at the throat, bars
- $a_0$ Sonic velocity of the gas, m/s
- $T_c$ Chamber temperature, K
- $C^{*}$ Characteristic velocity, m/s



## **7. References**

Assad Anis, "Design and development of cold gas propulsion systems for Pakistan Remote Sensing Satellite Systems", 978-1-4244-3300-1, pp. 49-53, IEEE, 2008.

Charles D. Brown, Spacecraft Propulsion, AIAA Education Series.

DuPont, Suva 134a, Material Safety Data Sheet, October 2006.

European Cooperation for Space Standardization (ECSS), ECSS-E-32-02A.

Guide Book for the Design of ASME Section VIII Pressure Vessels, Third Edition.

Handbook of Bolts and Bolted Joints, edited by John H. Bickford and Sayed Nassar.

Micci, Michael M. and Andrew D. Ketsdever (Eds.), Micropropulsion for Small Spacecraft, Volume 187, Reston, Virginia: American Institute of Aeronautics and Astronautics, Inc., 2000.

NASA-STD-5003, Fracture Control Requirements for Payloads Using the Space Shuttle, 7 October 1996.

Wertz, James R. and Wiley J. Larson (Eds.), Space Mission Analysis and Design, Third Edition, ISBN 1-881883-10-8, El Segundo, California: Microcosm Press, 1999.

Zakirov V., Sweeting M., Erichsen P. and Lawrence T., "Specifics of small satellite propulsion: Part 1", 15th AIAA Conference on Small Satellites, 2001.
