**2. Digital signal transmission and detection**

A schematic of a generic digital communication system, shown in Figure 1, consists of several blocks. Though all blocks are important for reliable communication, our focus here will be on three blocks: the modulator, the channel and the demodulator, as these suffice to establish the principles of digital transmission and detection. Digital signals for transmission over a given medium are prepared from the outputs of an information source, which may produce signals in either analog or digital form. An analog information-bearing waveform is sampled at an appropriate sampling rate and then encoded into a digital signal. The encoded signal is generally called a baseband signal, and the information resides in the signal amplitude: binary if two levels are used, or M-ary if more than two levels are used. The digital signals are further processed with source encoding and error control coding before being converted into waveforms that suit the transmission medium.

Fig. 1. A Schematic of a Generic Digital Communication System

#### **2.1 Principles of signal detection and decision hypothesis**

Let us now deliberate on the fundamentals of the digital transmission and reception processes. Consider a signal consisting of a string of binary symbols that assume values of 1s and 0s occurring every *T* seconds<sup>1</sup>. The transmitted signal passes through an ideal Gaussian noise

<sup>1</sup> It is common practice to represent 1 and 0 by voltages *+A* and *-A* respectively. Other representations are also possible. *T* is the symbol duration.

channel having a bandwidth at least equal to the signal bandwidth. The signal received at the receiver is a replica of the transmitted signal scaled by the propagation loss and accompanied by additive white Gaussian noise (AWGN) having a two-sided power spectral density $\frac{N\_0}{2}$. The random additive noise imparts uncertainty to the received signal. Since the receiver does not possess *a priori* knowledge of which particular signal was transmitted, it has to decide, after observing the receiver output at the end of every signaling interval, which particular signal was transmitted. It is easy to see that the receiver outputs a number (corresponding to the output voltage or current) that fluctuates randomly around the mean value that would be observed in the absence of noise. The decision device samples the receiver output every signaling interval and compares it with an appropriately chosen threshold $\gamma$, called the decision threshold. If the received signal sample value exceeds $\gamma$, it decides that a 1 was received; otherwise a reception of a 0 is declared. The decision hypothesis is then written as:

$$v(T) \overset{H\_1}{\underset{H\_0}{\gtrless}} \gamma \tag{1}$$

i.e. when the output voltage is greater than $\gamma$, hypothesis $H\_1$ (a 1 was transmitted) is declared true; otherwise hypothesis $H\_0$ is chosen. The receiver makes a decision error when it declares a 0 (or 1) although the actual transmitted symbol was 1 (or 0). This is the fundamental principle on which the receiver decides after observing the received signal<sup>2</sup>. The selection of the decision threshold $\gamma$ is based on the transmission probabilities of the different symbols; it is chosen midway between the average received voltages of the symbols when their transmission probabilities are equal<sup>3</sup>. In the case of multilevel signaling, the signaling waveforms may attain more than two levels. For example, four-level signaling symbols are used to represent combinations of two binary symbols, i.e. 00, 10, 11, and 01. A collection of two binary symbols results in one of $2^2$ level symbols with duration twice that of the binary signaling symbol, thereby halving the transmission bandwidth.
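The threshold rule above is easy to exercise numerically. The following is a minimal Monte Carlo sketch, assuming Python with NumPy, the ±A convention of the footnote, equiprobable symbols, and arbitrarily chosen values of A and the noise level:

```python
import numpy as np

rng = np.random.default_rng(0)

# Map bits to the antipodal voltages +A / -A (the footnote's convention).
A = 1.0
bits = rng.integers(0, 2, 100_000)
tx = np.where(bits == 1, A, -A)

# AWGN channel: the noise standard deviation sigma sets the error rate.
sigma = 0.5
rx = tx + rng.normal(0.0, sigma, tx.size)

# Decision device: compare each sample with the threshold gamma; for
# equiprobable symbols gamma sits midway between +A and -A, i.e. at 0.
gamma = 0.0
decisions = (rx > gamma).astype(int)

error_rate = float(np.mean(decisions != bits))
print(f"empirical bit error rate: {error_rate:.4f}")
```

With these numbers the ratio A/σ is 2, so the empirical rate hovers near Q(2) ≈ 0.023; moving γ off the midpoint while the symbols remain equiprobable only increases the error rate.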

#### **3. Digital receiver and optimum detection**

The previous section introduced decision hypotheses. In this section, we determine receiver structures that result in a minimum error probability. It is intuitively obvious that to achieve minimum error probability, the signal to noise ratio at the receiver output must be maximized. Besides maximizing the output signal to noise ratio, the receiver must take decisions at the end of each signalling period, thus time synchronization with the transmitted signal must be maintained. It is therefore assumed that accurate symbol timing is available to the receiver through a timing recovery block, which forms an integral part of the receiver. Let the receiver block have frequency transfer function *H(f)* and impulse response *h(t)*. The basic receiver structure is shown in Figure 2. The receiver block consists of a processor whose structure, chosen to maximize the output signal to noise ratio at the end of each symbol time *(T)*, can be determined as follows. The input to the receiver block is the signal *s(t)*

<sup>2</sup> This principle is known as *a posteriori* decision making.

<sup>3</sup> In case of unequal transmission probabilities, the decision threshold moves away from the symbols with higher transmission probability.

#### Fig. 2. The Basic Digital Receiver Structure

contaminated by additive white Gaussian noise (AWGN), *n(t)*, having a two-sided power spectral density $\frac{N\_0}{2}$. The output of the block at *t* = *T* is given by

$$\begin{split} v\_o(T) &= \int\_{-\infty}^{T} [s(\tau) + n(\tau)]\, h(T-\tau)\, d\tau \\ &= v\_T + n\_T \end{split} \tag{2}$$

It is mathematically more convenient to write (2) in the frequency domain as

$$v\_o(T) = \int\_{-\infty}^{\infty} S(f)H(f)\, df + n\_T \tag{3}$$

The signal power is given by $\left|\int\_{-\infty}^{\infty} S(f)H(f)\,df\right|^2$ and the output noise power is $\int\_{-\infty}^{\infty} \frac{N\_0}{2}|H(f)|^2\,df$. To maximize the signal to noise ratio we use Schwarz's inequality, i.e.

$$\frac{\left|\int\_{-\infty}^{\infty} S(f)H(f)\,df\right|^{2}}{\int\_{-\infty}^{\infty} \frac{N\_0}{2} |H(f)|^{2}\,df} \le \frac{\int\_{-\infty}^{\infty} |S(f)|^{2}\,df \int\_{-\infty}^{\infty} |H(f)|^{2}\,df}{\int\_{-\infty}^{\infty} \frac{N\_0}{2} |H(f)|^{2}\,df} = \frac{2\mathcal{E}}{N\_0}, \tag{4}$$

The equality, representing maximum signal to noise ratio, holds when $H^{\*}(f)$<sup>4</sup> is equal to $kS(f)$, which means that $H^{\*}(f)$ is aligned with a scaled version of $S(f)$. The amplitude scaling represents the gain of the filter, which without loss of generality can be taken to be unity. When this condition is used in (4) and its inverse Fourier transform is taken, a time domain relation between the signaling waveform and the receiver impulse response is obtained. The resulting relationship leads to the concept known as the matched filter receiver. It turns out that $h(t) = k s^{\*}(T - t)$, which means that when the receiver impulse response equals the complex conjugate of the time-reflected signalling waveform, the output signal to noise ratio is maximized. Using this condition in (3), the maximum output signal to noise ratio equals $\frac{2\mathcal{E}}{N\_0}$. It is important to note that achieving maximum signal to noise ratio at the decision device input does not require preservation of the transmitted signal waveform, since the maximum signal to noise ratio equals the ratio of signal energy to noise power spectral density. This result distinguishes digital communication from analog communication, where the latter requires that the waveform of the transmitted signal be reproduced exactly at the receiver output. Another version of the matched filter receiver is obtained when the optimum condition is substituted in (2) to obtain:

$$v\_o(T) = \int\_{-\infty}^{T} [s(\tau) + n(\tau)]\, s^{\*}(\tau)\, d\tau \tag{5}$$

<sup>4</sup> $H^{\*}(f)$ is the complex conjugate of $H(f)$.

Equation (5) gives an alternate receiver structure, which also delivers optimum decisions. This correlation receiver, as it is called, is shown in Figure 3. The two optimum structures defined above can be used for any signaling format.
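Both structures can be sketched numerically: in a discrete-time model (Python/NumPy assumed; the pulse shape, noise level and mismatched filter below are arbitrary assumptions), the matched filter attains the output SNR bound 2E/N₀ while any other filter falls short, and sampling the convolution output at t = T yields exactly the correlator's inner product:

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete-time model: a unit-energy pulse s, so E = 1 and the
# theoretical maximum output SNR is 2E/N0.
n = 64
s = np.ones(n) / np.sqrt(n)
E = float(np.sum(s**2))
N0 = 0.1
sigma2 = N0 / 2                       # variance of each sampled AWGN component

def output_snr(h, trials=20_000):
    # Receiver output at t = T is the inner product of h with s + noise.
    signal_part = np.dot(h, s) ** 2
    noise = rng.normal(0.0, np.sqrt(sigma2), (trials, n)) @ h
    return signal_part / np.var(noise)

snr_matched = output_snr(s)                                 # h(t) = s(T - t)
snr_mismatched = output_snr(2 * np.linspace(0, 1, n) * s)   # arbitrary other filter

# Matched filtering and correlation give the same decision statistic:
r = s + rng.normal(0.0, np.sqrt(sigma2), n)        # one received waveform
matched_sample = np.convolve(r, s[::-1])[n - 1]    # convolve with s(T - t), sample at T
correlator_out = np.dot(r, s)                      # integrate r(t) s(t) over the symbol

print(f"matched SNR ~ {snr_matched:.1f} (bound {2 * E / N0:.1f}), "
      f"mismatched SNR ~ {snr_mismatched:.1f}")
print(matched_sample, correlator_out)
```

The two printed decision statistics agree to floating-point precision, which is why the matched filter and the correlation receiver of Figure 3 are interchangeable.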

An important question that remains relates to the measure of digital communication performance. The probability of making an incorrect decision (probability of error), *Pe*, is a universally accepted performance measure of digital communication. Consider a long string of binary data symbols consisting of 1s and 0s, contaminated by AWGN, received by the receiver. The input to the decision device is then a random process with mean equal to $v\_{s1}$ or $v\_{s0}$ depending on whether the transmitted symbol was 1 or 0, and its distribution is Gaussian because of the presence of AWGN. The conditional error probability is then determined by finding the area under the probability density curve from $-\infty$ to $\gamma$ when a 1 is transmitted and from $\gamma$ to $\infty$ when a 0 is transmitted, as shown in Figure 4. The error probability is then given by

Fig. 3. The Digital Correlation Receiver

$$P(e) = P(1)P(e|1) + P(0)P(e|0) \tag{6}$$

where P(e|1) and P(e|0) are respectively given as

$$P(e|1) = \int\_{-\infty}^{\gamma} \frac{1}{\sqrt{2\pi}\sigma} \exp\left(-\frac{(x - v\_{s1})^2}{2\sigma^2}\right) dx \tag{7}$$

$$P(e|0) = \int\_{\gamma}^{\infty} \frac{1}{\sqrt{2\pi}\sigma} \exp\left(-\frac{(x - v\_{s0})^2}{2\sigma^2}\right) dx$$

Equations (7) are called likelihoods conditioned on a particular transmitted signal. It is clear that for M-ary signaling there will be *M* such likelihoods, and the average probability of error is obtained by de-conditioning them. The integrals in (7) do not have a closed form solution; their numerical values are tabulated in the form of the **Q(.)** or complementary error function *erfc(.)*. The **Q(.)** function is defined as

$$Q(\mathbf{z}) = \frac{1}{\sqrt{2\pi}} \int\_{\mathbf{z}}^{\infty} \exp(-\frac{\mathbf{x}^2}{2}) d\mathbf{x} \tag{8}$$

The *erfc(.)* is related to *erf(.)* and **Q(.)** as

$$\begin{aligned} \text{erf}(z) &= \frac{2}{\sqrt{\pi}} \int\_0^z \exp(-t^2) dt = 1 - 2Q(\sqrt{2}\,z)\\ \text{erfc}(z) &= 1 - \text{erf}(z) = \frac{2}{\sqrt{\pi}} \int\_z^\infty \exp(-t^2) dt = 2Q(\sqrt{2}\,z) \end{aligned} \tag{9}$$
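These relations map directly onto standard library functions; a minimal sketch in Python, where `math.erfc` plays the role of erfc(.):

```python
import math

def Q(z: float) -> float:
    """Gaussian tail probability via the identity Q(z) = erfc(z / sqrt(2)) / 2."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Spot checks: Q(0) = 1/2, and the identity erfc(z) = 2 Q(sqrt(2) z).
print(Q(0.0))
z = 1.3
print(math.erfc(z), 2.0 * Q(math.sqrt(2.0) * z))
```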

For binary bipolar transmission, the error probability is obtained as

$$P\_e = Q\left(\sqrt{\frac{2\mathcal{E}\_b}{N\_0}}\right) \tag{10}$$

In the case of unipolar transmission, the error probability is $Q\left(\sqrt{\frac{\mathcal{E}\_b}{N\_0}}\right)$; the performance is 3 dB inferior to that of polar signaling.
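The 3 dB gap between polar and unipolar signaling can be checked by simulation; a sketch assuming Python/NumPy, equal average bit energy for the two formats, and an arbitrarily chosen Eb/N0 of about 3 dB:

```python
import math
import numpy as np

rng = np.random.default_rng(3)

def Q(z):
    return 0.5 * math.erfc(z / math.sqrt(2.0))

Eb, N0 = 1.0, 0.5                      # Eb/N0 = 2, i.e. about 3 dB
sigma = math.sqrt(N0 / 2)              # noise std at the decision point
bits = rng.integers(0, 2, 200_000)

# Polar (bipolar): +sqrt(Eb) / -sqrt(Eb), threshold at 0.
tx = np.where(bits == 1, 1.0, -1.0) * math.sqrt(Eb)
rx = tx + rng.normal(0.0, sigma, bits.size)
ber_bipolar = float(np.mean((rx > 0) != (bits == 1)))

# Unipolar (on-off): 0 or A with the same average energy -> A = sqrt(2 Eb),
# threshold midway at A / 2.
A = math.sqrt(2 * Eb)
tx_u = np.where(bits == 1, A, 0.0)
rx_u = tx_u + rng.normal(0.0, sigma, bits.size)
ber_unipolar = float(np.mean((rx_u > A / 2) != (bits == 1)))

print(ber_bipolar, Q(math.sqrt(2 * Eb / N0)))   # polar vs theory
print(ber_unipolar, Q(math.sqrt(Eb / N0)))      # unipolar vs theory
```

Both empirical rates track their Q-function predictions, with the unipolar rate markedly higher at the same average bit energy.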

Fig. 4. The Error Likelihood for Binary Digital Communications

#### **4. Bandpass signaling**

So far, transmission of binary baseband signals has been considered. Under certain circumstances, the chosen transmission medium does not support baseband transmission<sup>5</sup>. The signal centered at a carrier is given by:

$$\mathbf{x}(t) = \mathfrak{Re}[s\_l(t) \exp(j2\pi f\_c t)]\tag{11}$$

where $s\_l(t)$ is the low-pass equivalent of the bandpass signal and $f\_c$ is the center frequency<sup>6</sup>. In a similar manner, the low-pass equivalents of the bandpass noise and the system transfer function can be obtained, and all analytical work can be done using low-pass equivalent representations. The bandpass signal is then obtained by multiplying the low-pass signal by $\exp(j2\pi f\_c t)$ and retaining the real part while discarding the imaginary part. Baseband pulse amplitude communication is limited to multilevel communication where a collection of *k* bits is represented by $M = 2^k$ voltage levels. In the case of modulated signals, many formats to represent signals exist. The collection of data may be represented by *M* amplitude levels, frequencies, or phases, as well as their combinations.

Furthermore, several orthogonal signals may be employed to construct signals that are represented in a multi-dimensional space instead of a single dimension space as is in the case of baseband multilevel signaling. For example, orthogonal M-ary frequency modulation (OMFSK) requires *M* dimensional space to represent signals. However, in the case of M-ary

<sup>5</sup>In many applications, the signal propagation media require that the spectrum of the baseband signal is translated (modulated) to a certain center frequency that falls within the passband of the medium.

<sup>6</sup>The low-pass equivalent is obtained by translating the center frequency from $f\_c$ to zero, i.e. the resulting signal is now centered at zero frequency instead of $f\_c$ and is complex. The removal of the carrier frequency facilitates analysis and simulations, the latter being an essential part of communication system design.

phase shift keying (MPSK), a two dimensional space is sufficient. The signal design now pertains to locating the signal points in the signal space on the basis of transmission requiring minimum signal power (or energy) and ease of detection. Though the signals are located in a multi-dimensional space, the principle of estimating the error probability in the presence of Gaussian noise described earlier remains unaltered.

#### **4.1 Error performance of binary signalling**

For binary bandpass signalling, the general form of the transmitted signal is given as:

$$s(t) = A(t)\cos(2\pi f\_0 t + \phi(t))\tag{12}$$

The principles established to evaluate the BER for baseband signalling can be directly applied to bandpass signalling. For example, for the different signalling formats, the transmit signal may be written as

$$\begin{aligned} \text{For} \quad & \text{BASK, } s\_i(t) = \sqrt{\frac{2\mathcal{E}\_i(t)}{T}} \cos(2\pi f\_0 t + \phi), \ i = 0, 1,\ 0 \le t \le T, \\ \text{For} \quad & \text{BPSK, } s\_i(t) = \sqrt{\frac{2\mathcal{E}}{T}} \cos(2\pi f\_0 t + \phi\_i(t)), \ i = 0, 1,\ 0 \le t \le T, \\ \text{For} \quad & \text{BFSK, } s\_i(t) = \sqrt{\frac{2\mathcal{E}}{T}} \cos(2\pi f\_i t + \phi(t)), \ i = 0, 1,\ 0 \le t \le T, \end{aligned} \tag{13}$$

To calculate the probability of error, the output for coherent detection of BPSK is given by $\pm\sqrt{\frac{2\mathcal{E}}{T}} \cos(2\pi f\_0 t)$ for $0 \le t \le T$ plus Gaussian noise. The threshold is located at 0, for which the error probability is written as $Q\left(\sqrt{\frac{2\mathcal{E}}{N\_0}}\right)$. It is interesting to note that the error probability may also be written in terms of the Euclidean distance $d$ between the signal points in the signal space, i.e. $Q\left(\sqrt{\frac{d^2}{2N\_0}}\right)$. Following a similar procedure, the BER for BASK and BFSK are given as:

$$\begin{aligned} P\_2 &= Q\left(\sqrt{\frac{\mathcal{E}}{N\_0}}\right), \text{for coherent amplitude detection} \\ P\_2 &= \frac{1}{2} \exp\left(-\frac{\mathcal{E}}{2N\_0}\right), \text{for non-coherent frequency detection} \end{aligned} \tag{14}$$

On inspection, we note that bipolar BASK is identical to BPSK; therefore its BER performance is also the same. We can extend BPSK to quadrature phase shift keying (QPSK), where two quadrature carriers are modulated by information signals. It can be shown that the bit error probability for QPSK is identical to that of BPSK [Sklar (1988)]. Figure 5 shows a comparison of the BER of several binary transmission schemes.
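A waveform-level sketch of coherent BPSK detection, assuming Python/NumPy; the sampling rate, carrier frequency, symbol duration and noise level are all arbitrary choices:

```python
import math
import numpy as np

rng = np.random.default_rng(4)

# s_i(t) = sqrt(2E/T) cos(2*pi*f0*t + phi_i) with phi_i in {0, pi}:
# antipodal carriers, detected coherently by correlating with the reference.
fs, f0, T, E = 64.0, 8.0, 1.0, 1.0
t = np.arange(0.0, T, 1.0 / fs)
carrier = math.sqrt(2 * E / T) * np.cos(2 * math.pi * f0 * t)

bits = rng.integers(0, 2, 5_000)
signs = np.where(bits == 1, 1.0, -1.0)
sigma = 4.0                              # per-sample noise std (sets the SNR)
r = signs[:, None] * carrier + rng.normal(0.0, sigma, (bits.size, t.size))

z = r @ carrier / fs                     # correlator: approximates the integral over [0, T]
ber = float(np.mean((z > 0) != (bits == 1)))
print(f"BPSK empirical BER: {ber:.4f}")
```

With these particular numbers the decision-point SNR works out so that the theoretical error rate is Q(2) ≈ 0.023; halving σ pushes the rate down sharply, mirroring the waterfall curves of Figure 5.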

#### **4.2 Error performance of M-ary signalling**

The procedures for finding error probabilities of binary bandpass signalling can easily be extended to M-ary signalling. This section derives symbol error probabilities for several M-ary bandpass signaling formats. In general, the M-ary bandpass signals are represented as:

$$s\_i(t) = A\_i \sqrt{\frac{2}{T}} \mathbf{g}(t) \cos(\omega\_i t) \tag{15}$$

#### Fig. 5. Error Performance of Different Transmission Formats

The pulse amplitude modulated (PAM) signal may be represented as:

$$\begin{split} s\_{\mathfrak{m}}(t) &= \mathfrak{Re}[A\_{\mathfrak{m}}\mathfrak{g}(t)e^{j2\pi f\_{\mathfrak{c}}t}] \\ &= A\_{\mathfrak{m}}\mathfrak{g}(t)\cos(2\pi f\_{\mathfrak{c}}t); \mathfrak{m} = 1,2,\ldots,M, \mathbf{0} \leq t \leq T \end{split} \tag{16}$$

where $A\_m = (2m - 1 - M)d$, $m = 1, 2, \ldots, M$ are the amplitude levels and $2d$ is the difference between two adjacent levels. These signals have energies given by,

$$\begin{aligned} \mathcal{E}\_m &= \int\_0^T s\_m^2(t)\,dt\\ &= \int\_0^T A\_m^2 g^2(t)\cos^2(2\pi f\_c t)\,dt\\ &= \frac{1}{2}A\_m^2 \mathcal{E}\_g \end{aligned} \tag{17}$$

where $\mathcal{E}\_g = \int\_0^T g^2(t)\,dt$ is the energy of the pulse $g(t)$.

In general, we can write the transmit signals as $s\_m(t) = s\_m f(t)$<sup>7</sup>, where

$$f(t) = \sqrt{\frac{2}{\mathcal{E}\_g}}\, g(t) \cos 2\pi f\_c t \tag{18}$$

$$s\_m = A\_m \sqrt{\frac{\mathcal{E}\_g}{2}}$$

The error performance is directly related to the Euclidean distance between any two signal points, which is given by

$$d\_{mn}^{(e)} = \sqrt{(s\_m - s\_n)^2} = d\sqrt{2\mathcal{E}\_g}\,|m - n| \tag{19}$$

<sup>7</sup>*f(t)* is called the basis function, a unit measure of length along an axis of the signal space.

and the minimum distance, which dominates the worst-case error performance and is therefore of particular importance, equals $d\sqrt{2\mathcal{E}\_g}$. The error probability for pulse amplitude modulation (PAM) is written as:

$$P\_M = \left(\frac{M-1}{M}\right) P\_r \left( |r - s\_m| \ge d\sqrt{\frac{\mathcal{E}\_g}{2}} \right) \tag{20}$$

$$= 2\left(\frac{M-1}{M}\right) Q\left(d\sqrt{\frac{\mathcal{E}\_g}{N\_0}}\right)$$

The multi-level phase shift keying (MPSK) modulated signal is written as

$$\begin{split} s\_{\mathfrak{m}}(t) &= \mathfrak{Re}\left[\mathfrak{g}(t) \exp\left(j\frac{2\pi(m-1)}{M}\right) \exp(j2\pi f\_{\mathfrak{c}}t)\right], m = 1, 2, \dots, M, 0 \le t \le T \\ &= \mathfrak{g}(t) \cos\left(2\pi f\_{\mathfrak{c}}t + \frac{2\pi(m-1)}{M}\right) \\ &= \mathfrak{g}(t) \cos\left[\frac{2\pi(m-1)}{M}\right] \cos(2\pi f\_{\mathfrak{c}}t) - \mathfrak{g}(t) \sin\left[\frac{2\pi(m-1)}{M}\right] \sin(2\pi f\_{\mathfrak{c}}t) \end{split} \tag{21}$$

where M is the number of phases and each phase represents one symbol of $k = \log\_2 M$ bits. The Euclidean distance between the signal points is

$$d\_{mn}^{(e)} = \sqrt{2\mathcal{E}\_s\left[1 - \cos\left(\frac{2\pi(m-n)}{M}\right)\right]} \tag{22}$$

with minimum error distance of

$$d\_{\min}^{(e)} = \sqrt{2\mathcal{E}\_s\left[1 - \cos\left(\frac{2\pi}{M}\right)\right]} \tag{23}$$

The expression for MPSK error performance is given by [Proakis & Salehi (2008)]

$$P\_b = \frac{2}{\log\_2 M} Q\left(\sin\frac{\pi}{M}\sqrt{\frac{2\mathcal{E}\_b \log\_2 M}{N\_0}}\right) \tag{24}$$
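The bit error expression above is easy to evaluate numerically; the sketch below (Python assumed; the Eb/N0 value is arbitrary) tabulates it for growing M, showing the SNR penalty of packing more phases into the constellation:

```python
import math

def Q(z):
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def mpsk_bit_error(M: int, ebn0_db: float) -> float:
    """Approximate MPSK bit error probability (assumes Gray mapping)."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    k = math.log2(M)
    return (2.0 / k) * Q(math.sin(math.pi / M) * math.sqrt(2.0 * ebn0 * k))

for M in (4, 8, 16, 32):
    print(M, mpsk_bit_error(M, 10.0))
```

At a fixed Eb/N0 the error probability climbs steeply with M, because sin(π/M) shrinks the effective distance faster than the log₂M energy pooling can compensate.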

The PAM and MPSK signals can be combined to produce different signal constellations, which are known as MAPSK. Figure 6 shows examples of M-ary amplitude-PSK constellations [Thomas et.al (1974)]. QAM may be considered a variation of MAPSK. The multilevel frequency shift keying (MFSK) signal is represented by

$$\begin{split} s\_{\mathfrak{m}}(t) &= \mathfrak{Re}[s\_{l\mathfrak{m}}(t) \exp(j2\pi f\_{\mathfrak{c}}t)], \mathfrak{m} = 1, 2, \dots, M, 0 \leq t \leq T \\ &= \sqrt{\frac{2\mathcal{E}}{T}} \cos(2\pi f\_{\mathfrak{c}}t + 2\pi m \Delta f t) \end{split} \tag{25}$$

This signaling format uses orthogonal frequencies and it can be related to orthogonal M-ary frequency shift keying (OMFSK) or Multi-carrier modulation (MC) and Orthogonal Frequency Division Multiplexing (OFDM). For equal energy orthogonal signals, the optimum detector consists of a bank of correlators that deliver cross correlation between the received vector r

Fig. 6. Examples of M-ary Amplitude-PSK constellations [Thomas et.al (1974)]

and each of the M possible transmitted signals $s\_m$; the decision device at the receiver selects the largest output and declares the corresponding signal as the received one.

$$\mathcal{C}(s\_m, r) = \sum\_{k=1}^{M} r\_k s\_{mk}, m = 1, 2, \dots, M \tag{26}$$

To evaluate the probability of error, suppose $s\_1$ is transmitted. The received vector consists of the outputs $\{r\_m\}$, of which only the first correlator output is due to signal plus noise while all others deliver outputs due to noise alone. The probability of correct decision is given by

$$P\_c = \int\_{-\infty}^{\infty} P(n\_2 < r\_1, n\_3 < r\_1, n\_4 < r\_1, \dots, n\_M < r\_1 | r\_1) dr\_1 \tag{27}$$

and the probability of error $P\_M = 1 - P\_c$ is given by:


$$P\_M = \frac{1}{\sqrt{2\pi}} \int\_{-\infty}^{\infty} \left[ 1 - \left( \frac{1}{\sqrt{2\pi}} \int\_{-\infty}^{y} e^{-\frac{x^2}{2}} dx \right)^{M-1} \right] \exp\left( -\frac{1}{2} \left( y - \sqrt{\frac{2\mathcal{E}\_s}{N\_0}} \right)^2 \right) dy \tag{28}$$
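This integral has no closed form but yields readily to quadrature; a sketch (Python/NumPy assumed; the grid limits and resolution are arbitrary) that also sanity-checks the M = 2 case against the known binary orthogonal result Q(√(Es/N0)):

```python
import math
import numpy as np

def mfsk_symbol_error(M: int, esn0: float, n: int = 4001) -> float:
    """Numerically integrate the MFSK symbol error expression."""
    y = np.linspace(-10.0, 12.0 + math.sqrt(2.0 * esn0), n)
    dy = float(y[1] - y[0])
    # Gaussian CDF at each grid point, via the error function.
    phi = np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in y])
    integrand = (1.0 - phi ** (M - 1)) * np.exp(-0.5 * (y - math.sqrt(2.0 * esn0)) ** 2)
    return float(integrand.sum() * dy / math.sqrt(2.0 * math.pi))

# Sanity check: for M = 2 this reduces to the binary orthogonal result.
p2 = mfsk_symbol_error(2, 4.0)
q2 = 0.5 * math.erfc(math.sqrt(4.0 / 2.0))   # Q(sqrt(Es/N0)) = Q(2) via erfc
print(p2, q2)
```

Increasing M at fixed Es/N0 raises the symbol error probability, since each additional correlator branch adds another chance for noise alone to exceed the signal branch.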

In the foregoing sections, it was assumed that the receiver is equipped with phase recovery and the demodulation process is coherent. Several techniques to recover phase have been reported; the reader is referred to [Franks (1983)], [Biguesh & Gersham (2006)], and [Bjornson & Ottersten (2010)]. However, in the presence of fast channel fading, it may not be feasible to recover the absolute carrier phase. In this case, differential phase encoding and detection is used. The technique works well provided the carrier phase does not change over a duration of two adjacent symbols. Figure 7 compares the performance of M-ary PSK, MQAM and M-ary FSK for several values of M.

Fig. 7. Comparison of performance of M-ary transmission systems

#### **4.3 Performance evaluation in the presence of ISI**

So far, we considered ideal AWGN channels without any bandwidth restriction, contaminated only by AWGN. The majority of channels found in real life have additional impairments, which range from bandwidth restriction to multipath propagation, interference, and flat or frequency selective fading. When the channel bandwidth is equal to the signal transmission bandwidth, the transmission rate can be increased by transmitting signals with controlled ISI. The ISI can be controlled with the use of partial response pulses; duo-binary or modified duo-binary signaling pulses are good examples. For signal detection, two options are available. In the first, the receiver estimates the ISI and subtracts it from the detected signal before taking a decision. The second is to pre-code the data prior to transmission. As far as data detection is concerned, sub-optimum symbol-by-symbol detection or optimum maximum likelihood estimation (MLE) are possible. The performance is usually given in terms of an upper bound on the symbol error probability. For MPAM transmissions with controlled ISI, the performance is given by [Proakis & Salehi (2008)] as:

$$P\_e < 2\left(1 - \frac{1}{M^2}\right) Q\left(\sqrt{\left(\frac{\pi}{4}\right)^2 \frac{6}{M^2 - 1} \frac{\mathcal{E}}{N\_0}}\right) \tag{29}$$

Due to the considerable complexity of MLE, sub-optimum techniques of mitigating channel-induced ISI are used; one such technique, called equalization, is particularly effective. A number of structures and algorithms have been proposed in the literature, see for example [Falconer et.al (2002)], [Proakis & Salehi (2008)].

#### **4.4 Digital communication over fading channels**

Time variability of the signal amplitude, known as fading and accruing due to terminal mobility, is an important characteristic of wireless channels. The fading is sometimes accompanied by shadowing, whose effect is gradual but lasts longer. The probability of error under fading or time varying conditions is obtained by first finding the conditional probability of error at a certain value of the signal to noise ratio ($\gamma$) and then averaging it over the signal variability statistics (fading and shadowing). A number of models have been used

to statistically describe fading and shadowing; Rayleigh is the most commonly used to describe the fading signal envelope (amplitude), while the signal to noise ratio then follows an exponential distribution. These distributions are given as:

$$\begin{aligned} \text{Rayleigh distribution, } p\_R(r) &= \frac{r}{\sigma^2} \exp\left(-\frac{r^2}{2\sigma^2}\right) \\\\ \text{Ricean distribution, } p\_R(r) &= \frac{r}{\sigma^2} \exp\left(-\frac{r^2 + \mathcal{Q}^2}{2\sigma^2}\right) I\_0\left(\frac{r\mathcal{Q}}{\sigma^2}\right) \end{aligned} \tag{30}$$

where $\mathcal{Q}\cos(2\pi f\_c t)$ is the line of sight component accompanying the scattered components (waves) having a mean power $\sigma^2$. $I\_0(\cdot)$ is the modified Bessel function of the first kind with index zero. The Ricean distribution is generally defined in terms of the Rice parameter *K*, the ratio of the direct (line of sight or specular) component power to the scattered signal power, i.e. $K = \frac{\mathcal{Q}^2}{2\sigma^2}$. A larger value of *K* implies a lower fading depth. The Rayleigh and Ricean distributions can be explained by the underlying physics of radio wave propagation. Recently, a more generalized fading model in the form of the Nakagami distribution has been used, since it emulates the Rayleigh, Ricean and log-normal distributions and fits measurement results better. The reason this model fits the results better lies in its two degrees of freedom. The Nakagami probability density function is given by:

$$p\_R(r) = \frac{2}{\Gamma(m)}\left(\frac{m}{\Omega}\right)^m r^{2m-1} \exp\left(-\frac{m r^2}{\Omega}\right) \tag{31}$$

where the mean $E[r]$ and the parameter $\Omega = E[r^2]$ are defined as:

$$E[r] = \overline{r} = \frac{\Gamma(m + \frac{1}{2})}{\Gamma(m)} \sqrt{\frac{\Omega}{m}} \tag{32}$$

$$\Omega = m\left(\overline{r}\,\frac{\Gamma(m)}{\Gamma(m+\frac{1}{2})}\right)^2 \tag{33}$$

and $\Gamma(m)$ is the Gamma function defined as

$$
\Gamma(m) = \int\_0^\infty t^{m-1} e^{-t} dt,\ m > 0\tag{34}
$$

The slow fading is modeled by the log-normal distribution, which is defined as:

$$p\_R(r) = \begin{cases} \frac{1}{\sqrt{2\pi}\,\sigma (r-b)} \exp\left(-\frac{(\ln(r-b)-a)^2}{2\sigma^2}\right), & r > b\\ 0, & r \le b \end{cases} \tag{35}$$

This is valid for $\sigma > 0$, $-\infty < a, b < \infty$. The above concept is now applied to a Rayleigh distributed signal envelope. For example, the bit error probability for binary DPSK at a signal to noise ratio of $\gamma$, given in [Sheikh (2004)], is averaged over the statistics of $\gamma$ as:

$$P\_{\rm DPSK-Fading} = \int\_0^\infty \frac{1}{2} \exp(-\gamma)\, \frac{1}{\Gamma} \exp\left(-\frac{\gamma}{\Gamma}\right) d\gamma \tag{36}$$

This results in

$$P\_2 = \frac{1}{2(1+\Gamma)}\tag{37}$$

For a non-coherent system, the probability of error for orthogonal FSK is

$$P\_2 = \frac{1}{2 + \Gamma} \tag{38}$$

When the signal to noise ratio is large, the probabilities of error for different binary systems are inversely proportional to the average signal to noise ratio and are in general given by $\frac{1}{k\Gamma}$, where *k* = 4, 2, 2 and 1 for coherent PSK, orthogonal FSK, DPSK and non-coherent orthogonal FSK respectively.
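The averaging step behind these results can be reproduced by Monte Carlo: draw the instantaneous SNR from an exponential distribution and average the conditional DPSK error probability (1/2)exp(−γ) over it (Python/NumPy assumed; the average SNR value is arbitrary):

```python
import numpy as np

def dpsk_fading_ber(avg_snr: float, trials: int = 400_000) -> float:
    """Average the conditional DPSK BER (1/2)exp(-gamma) over an exponentially
    distributed instantaneous SNR gamma (Rayleigh fading)."""
    rng = np.random.default_rng(5)
    gamma = rng.exponential(avg_snr, trials)
    return float(np.mean(0.5 * np.exp(-gamma)))

G = 10.0
print(dpsk_fading_ber(G), 1.0 / (2 * (1 + G)))   # Monte Carlo vs closed form
```

At an average SNR of 10, both numbers sit near 1/22 ≈ 0.045, in stark contrast to the exponentially small DPSK error of an unfaded channel at the same SNR, which is precisely the penalty fading imposes.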

#### **4.4.1 Multilevel signaling in presence of fading**

The probability of symbol error for M-ary modulation schemes operating in the presence of Rayleigh fading can be found using the standard procedure, but the computation is quite intense [Simon & Alouni (2000)] as it involves integration of powers of Q-functions. A simpler way out is to seek tight bounds over the fading statistics. For details, the reader is referred to [Proakis & Salehi (2008)]. Some analytical results are still possible. For example, the symbol error probability for MPSK operating over Nakagami fading channels is given by:

$$P\_s(e) = \left(\frac{M-1}{M}\right) \left\{ 1 - \sqrt{\frac{\sin^2(\frac{\pi}{M})\Gamma\_s}{1 + \sin^2(\frac{\pi}{M})\Gamma\_s}} \left(\frac{M}{(M-1)\pi}\right) \left[\frac{\pi}{2} + \tan^{-1}\left(\sqrt{\frac{\sin^2(\frac{\pi}{M})\Gamma\_s}{1 + \sin^2(\frac{\pi}{M})\Gamma\_s}}\, \cot\left(\frac{\pi}{M}\right)\right)\right] \right\}$$

Finding the expression for the symbol error probability for MFSK operating over a Nakagami fading channel is quite involved, and finding a tight upper bound for the expression under integration is preferred. Reference [Simon & Alouni (2000)] gives an upper bound on the symbol error probability. In summary, the bit (binary signaling) or symbol (M-ary signaling) error probabilities are in general found by first finding the error probability conditioned on a chosen signal to noise ratio $\gamma$, and then de-conditioning it by integrating over the statistics of $\gamma$.

#### **4.5 Reliable communication over impaired channels**

The previous sections laid the foundations for evaluating the performance of digital transmission systems operating over noisy, static and fading channels. In this section, recognizing the relatively poor performance in the presence of channel impairments, additional signal processing is introduced to improve the performance. In this regard, the use of diversity is very effective. Besides fading, the communication channel may also exhibit frequency selectivity, which accrues as a result of non-uniform amplitude or non-linear phase response of the channel over the transmission bandwidth. These two important impairments result in considerable performance degradation of digital transmission systems. With the increasing use of the Internet, reliable broadband communication is desirable, but the use of wider transmission bandwidths for higher transmission rates brings channel frequency selectivity into the picture. In these circumstances, the (bit or symbol) error probability evaluation becomes complex; many attempts to find a closed form solution have not succeeded, and expressions for error probability bounds are usually derived instead.

#### **4.6 Diversity techniques for time varying fading channels**

Channel fading significantly degrades the performance of digital communication systems. Diversity reception implies the use of more than one signal replica with the aim of mitigating the effect of fading and achieving a higher average signal to noise ratio. For example, if the probability of a signal replica fading below a threshold is *p*, then the probability of *N* independently fading signal replicas simultaneously falling below the same threshold will be $p^N$. Fortunately, a signal passing through mutually independently fading paths results in multiple signal replicas. Independently fading signal replicas can be created in several ways, e.g. by transmitting the same signal at several frequencies separated by more than the channel's coherence bandwidth, or by receiving the signal at spatially separated locations so that the received signals have negligible correlation. The receiver may select the signal replica having the highest signal to noise ratio (selection diversity) or combine the replicas (diversity combining) to achieve a higher signal to noise ratio. The process of obtaining signal replicas in this manner is called explicit diversity.
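The p^N argument can be checked directly by simulation (Python/NumPy assumed; with a Rayleigh envelope the branch powers are exponentially distributed, and the threshold and branch count below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(6)

# N independently fading branches; an outage occurs only when every branch
# falls below the threshold simultaneously.
N, trials = 3, 200_000
avg_power = 1.0
threshold = 0.3 * avg_power

powers = rng.exponential(avg_power, (trials, N))   # squared Rayleigh envelopes

p_single = float(np.mean(powers[:, 0] < threshold))
p_all = float(np.mean(np.all(powers < threshold, axis=1)))

print(p_single, p_single ** N, p_all)              # p, p^N, empirical joint outage
```

The empirical joint outage matches p^N closely, which is the essential leverage of diversity: each independent branch multiplies the deep-fade probability by another factor of p.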

Besides explicit (antenna, frequency, polarization, etc.) diversity, implicit diversity is another form of diversity, which is realized by transmitting signals occupying a bandwidth much wider than the channel coherence bandwidth, $W\_T \gg W\_c$. Spread spectrum signals are good examples where implicit diversity is realized. In these systems, the receiver resolves the multipath components of the received signal, i.e. replicas of the transmitted signal formed by propagation through the multipath channel. The number of paths can be large, but usually three to four are used in a typical system. The time resolution of the multipath components is $\frac{1}{W\_T}$, where $W\_T$ is the transmission bandwidth. A channel having a delay spread of $T\_m$ seconds will resolve $T\_m W\_T$ paths. The receiver that resolves the channel paths is called a Rake receiver.

#### **4.6.1 Performance with diversity in the presence of Rayleigh fading**

We assume that there are *L* diversity branches or paths, each carrying the same information signal. The signal amplitude on each branch is assumed to fade slowly but independently with Rayleigh statistics, and the signal on each branch is accompanied by AWGN with the same power spectral density. The low-pass equivalent received signals on the *L* channels can be expressed in the form:

$$r\_{lk}(t) = \alpha\_k e^{j\phi\_k} s\_{km}(t) + z\_k(t), \; k = 1, 2, \dots, L, \; m = 1, 2 \tag{39}$$

where $\alpha\_k \exp(j\phi\_k)$ represents the *k*th channel gain and phase shift, $s\_{km}(t)$ denotes the *m*th signal transmitted on the *k*th channel, and $z\_k(t)$ denotes the additive white Gaussian noise on the *k*th channel. All signals in the set $\{s\_{km}(t)\}$ have the same energy, and all channels the same noise variance. The optimum demodulator for the signal received from the *k*th channel consists of two matched filters, one having the impulse response

$$b\_{\rm k1}(t) = s\_{\rm k1}^\*(T - t) \tag{40}$$

and the other having the response

$$b\_{\mathbf{k}2}(t) = \mathbf{s}\_{\mathbf{k}2}^{\*}(T - t) \tag{41}$$

In the case of binary transmission, $s\_{k1}(t) = -s\_{k2}(t)$. Thus, binary PSK signalling requires a single matched filter in each branch. The matched filters are followed by a combiner that

uses a certain algorithm to maximize the output SNR. The best performance is achieved when the output of each matched filter is multiplied by the corresponding complex conjugate channel gain $\alpha\_k \exp(-j\phi\_k)$. Thus, branches with stronger signals contribute more than weaker branches. After these complex-valued operations, two sums are formed: one consists of the real parts of the weighted matched filter outputs corresponding to a transmitted 1, the other consists of the real parts of the outputs corresponding to a transmitted 0. This optimum combiner is called the maximal ratio combiner. The block diagram in Figure 8 shows a generic structure of a diversity receiver for a binary digital communication system. To demonstrate the concept, the performance of PSK with Lth order diversity is evaluated. The output of the maximal ratio combiner can be expressed as a single decision variable in the form:

Fig. 8. General Structure of Diversity Receiver

$$\begin{split} \mathcal{U} &= \mathfrak{Re}\left(2\mathcal{E}\sum\_{k=1}^{L} \alpha\_k^2 + \sum\_{k=1}^{L} \alpha\_k N\_k\right) \\ &= 2\mathcal{E}\sum\_{k=1}^{L} \alpha\_k^2 + \sum\_{k=1}^{L} \alpha\_k N\_{kr} \end{split} \tag{42}$$

where $N\_{kr}$ denotes the real part of the complex-valued Gaussian noise variable

$$N\_k = e^{j\phi\_k} \int\_0^T z\_k(t)s\_k^\*(t)dt\tag{43}$$

The probability of error for a fixed set of attenuation factors $\{\alpha\_k\}$ is obtained first and is then averaged over the joint probability density function of the $\{\alpha\_k\}$.
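
The maximal ratio combining operation described above can be sketched numerically. The following is a minimal Monte-Carlo check, assuming for illustration $\mathcal{E} = N\_0 = 1$; the branch gains and sample count are arbitrary choices of ours, not values from the text:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(2)
alphas = np.array([1.0, 0.6, 0.3])   # fixed branch gains (illustrative)
n = 500_000

# Real matched-filter outputs for a transmitted "1": mean 2*E*alpha_k,
# noise variance 2*N0*E per branch (here E = N0 = 1).
y = 2.0 * alphas + rng.normal(0.0, sqrt(2.0), size=(n, alphas.size))

# Maximal ratio combining: weight each branch by its own gain and sum,
# as in the combiner decision variable of Eq. (42).
U = (alphas * y).sum(axis=1)
ber = (U < 0).mean()

# Conditional error probability Q(sqrt(2*gamma_b)), Eq. (46), with
# gamma_b = (E/N0) * sum(alpha_k^2).
gamma_b = (alphas**2).sum()
analytic = 0.5 * erfc(sqrt(2.0 * gamma_b) / sqrt(2.0))
print(ber, analytic)
```

The simulated error rate matches the conditional error probability for this fixed gain set; averaging over random gains is what the following subsection does analytically.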

#### **4.6.2 Probability of error for PSK with diversity**

For a fixed set of $\{\alpha\_k\}$, the decision variable $\mathcal{U}$ is Gaussian with mean

$$E[\mathcal{U}] = 2\mathcal{E} \sum\_{k=1}^{L} \alpha\_{k}^{2} \tag{44}$$

and variance

$$
\sigma\_{ll}^2 = 2\text{N}\_{\text{o}}\mathcal{E} \sum\_{k=1}^{L} \alpha\_k^2 \tag{45}
$$

For these values of the mean and variance, the probability that U is less than zero is simply

$$P\_2(\gamma) = Q\left(\sqrt{2\gamma\_b}\right) \tag{46}$$

where $\gamma\_b$ is the SNR per bit. The combiner output signal to noise ratio is given as:

$$\gamma = \frac{\mathcal{E}}{N\_0} \sum\_{k=1}^{L} \alpha\_k^2 = \sum\_{k=1}^{L} \gamma\_k \tag{47}$$

where $\gamma\_k = \frac{\mathcal{E}}{N\_0}\alpha\_k^2$ is the instantaneous SNR on the kth channel. In order to proceed further, we need the probability density function of $\gamma$, $p(\gamma)$. For *L* = 1, $\gamma = \gamma\_1$ has a chi-square probability density function. The characteristic function of $\gamma\_1$ is given by

$$\Psi\_{\gamma\_1}(jv) = E[e^{jv\gamma\_1}] = \frac{1}{1 - jv\Gamma\_1}\tag{48}$$

where $\Gamma\_1$ is the average signal to noise ratio of channel 1, which is assumed to be identical for all channels, i.e.

$$
\Gamma = \Gamma\_k = \frac{\mathcal{E}}{N\_0} E[\mathfrak{a}\_k^2],
\tag{49}
$$

and independent of *k*. The fadings on the channels, i.e. the $\alpha\_k$, are mutually independent, hence the characteristic function of the sum of the L $\gamma\_k$ is simply

$$\Psi\_{\gamma}(jv) = E[e^{jv\gamma}] = \frac{1}{(1 - jv\Gamma)^L} \tag{50}$$

The probability density function $p(\gamma)$ is

$$p(\gamma) = \frac{1}{(L-1)!\,\Gamma^L} \gamma^{L-1} \exp(-\frac{\gamma}{\Gamma}) \tag{51}$$
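
The Gamma density (51) can be checked by simulation: the combined SNR is the sum of L independent exponentially distributed branch SNRs (each chi-square with two degrees of freedom). A minimal sketch, with arbitrary illustrative values for L and Γ:

```python
import numpy as np

rng = np.random.default_rng(0)
L, Gamma, n = 4, 10.0, 200_000

# Each branch SNR gamma_k is exponentially distributed with mean Gamma;
# the maximal-ratio-combiner output SNR is their sum, Eq. (47).
gamma = rng.exponential(scale=Gamma, size=(n, L)).sum(axis=1)

# Eq. (51) is a Gamma(L, Gamma) density, so E[gamma] = L*Gamma and
# Var[gamma] = L*Gamma^2.
print(gamma.mean())   # ≈ 40
print(gamma.var())    # ≈ 400
```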

Now, the conditional probability is de-conditioned by averaging $P\_2(\gamma)$ over the channel fading statistics. For *L* branch binary PSK,

$$\begin{split} P\_2 &= \int\_0^\infty P\_2(\gamma) p(\gamma) \, d\gamma \\ &= [\frac{1}{2}(1-\mu)]^L \sum\_{k=0}^{L-1} \binom{L-1+k}{k} [\frac{1}{2}(1+\mu)]^k \end{split} \tag{52}$$

where, by definition

$$
\mu = \sqrt{\frac{\Gamma}{1 + \Gamma}}\tag{53}
$$

When the average SNR per channel, $\Gamma$, satisfies the condition $\Gamma \gg 1$, (52) is approximated by

$$P\_2 \approx (\frac{1}{4\Gamma})^L \binom{2L-1}{L} \tag{54}$$

for sufficiently large $\Gamma$ (greater than about 10 dB). Equation (54) shows that with diversity the error probability decreases inversely with the Lth power of the SNR. This procedure is applied to evaluate the performance of several transmission formats.
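
Equations (52)-(54) are easy to evaluate numerically. A short sketch (the function names are our own) comparing the exact average error probability with its high-SNR approximation:

```python
from math import comb, sqrt

def pe_psk_diversity(Gamma, L):
    """Exact average BER of binary PSK with L-branch maximal ratio
    combining in Rayleigh fading, Eq. (52)."""
    mu = sqrt(Gamma / (1.0 + Gamma))
    return (0.5 * (1 - mu))**L * sum(
        comb(L - 1 + k, k) * (0.5 * (1 + mu))**k for k in range(L))

def pe_psk_asymptote(Gamma, L):
    """High-SNR approximation, Eq. (54)."""
    return (1.0 / (4 * Gamma))**L * comb(2 * L - 1, L)

# At 20 dB average branch SNR the two agree closely, and the error
# probability falls off as Gamma^(-L).
for L in (1, 2, 4):
    print(L, pe_psk_diversity(100.0, L), pe_psk_asymptote(100.0, L))
```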

#### **4.6.3 Error performance of orthogonal FSK with diversity**

Consider now the coherently orthogonal detected FSK for which the two decision variables at the maximal ratio combiner output may be expressed as:

$$\mathcal{U}\_1 = \mathfrak{Re}[2\mathcal{E}\sum\_{k=1}^L \alpha\_k^2 + \sum\_{k=1}^L \alpha\_k N\_{k1}]$$

$$\mathcal{U}\_2 = \mathfrak{Re}[\sum\_{k=1}^L \alpha\_k N\_{k2}] \tag{55}$$

where it is assumed that signal $s\_{k1}(t)$ was transmitted, and $N\_{k1}$ and $N\_{k2}$ are the two sets of noise components at the outputs of the matched filters. The probability of error is simply the probability that $\mathcal{U}\_2 > \mathcal{U}\_1$. When the $\{\alpha\_k\}$ are fixed, the conditional probability of error for binary FSK is given by:

$$P\_2(\gamma) = Q(\sqrt{\gamma})\tag{56}$$

This is then averaged over the fading statistics. The results for PSK still apply if we replace $\Gamma$ by $\frac{1}{2}\Gamma$. Thus, the probability of error for coherently demodulated orthogonal FSK is obtained if the parameter $\mu$ is redefined as

$$
\mu = \sqrt{\frac{\Gamma}{2 + \Gamma}}.\tag{57}
$$

For large values of $\Gamma$, the performance $P\_2$ can be approximated as:

$$P\_2 \approx (\frac{1}{2\Gamma})^L \binom{2L-1}{L} \tag{58}$$

There is a 3 dB difference between the performances of PSK and coherent orthogonal FSK. This difference is identical to the performance difference in the absence of fading. The above result applies when the signal phase is recovered or estimated accurately. However, in the presence of fast channel fading, accurate recovery of the phase becomes difficult and deployment of coherent PSK may not remain feasible; under these circumstances differential phase shift keying (DPSK) or non-coherent FSK is used. In the case of DPSK, the information signal is pre-coded prior to transmission.

#### **4.6.4 Error performance for DPSK with diversity**

In the case of DPSK, it is assumed that the channel parameters $\alpha\_k \exp(-j\phi\_k)$ do not change over two adjacent symbol periods. The combiner for binary DPSK yields the decision variable

$$\mathcal{U} = \mathfrak{Re}[\sum\_{k=1}^{L} (2\mathcal{E}\alpha\_{k}e^{j\phi\_{k}} + N\_{k1})(2\mathcal{E}\alpha\_{k}e^{j\phi\_{k}} + N\_{k2})^\*] \tag{59}$$

where $N\_{k1}$ and $N\_{k2}$ denote the received noise components at the matched filter outputs over two consecutive signaling intervals. The probability of error is simply the probability that $\mathcal{U} < 0$. The conditional error probability is:

$$P\_2(\gamma) = (\frac{1}{2})^{2L-1} e^{-\gamma} \sum\_{k=0}^{L-1} b\_k \gamma^k \tag{60}$$

where $b\_k$ is given by

$$b\_k = \frac{1}{k!} \sum\_{n=0}^{L-1-k} \binom{2L-1}{n} \tag{61}$$

Averaging $P\_2(\gamma)$ over the fading statistics $p(\gamma)$ yields:

$$P\_2 = \frac{1}{2^{2L-1}(L-1)!(1+\Gamma)^L} \sum\_{k=0}^{L-1} b\_k (L-1+k)! (\frac{\Gamma}{1+\Gamma})^k \tag{62}$$

For $\Gamma \gg 1$, the error probability for binary DPSK with diversity is approximated by:

$$P\_2 \approx (\frac{1}{2\Gamma})^L \binom{2L-1}{L} \tag{63}$$
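
The DPSK results (61)-(63) can be evaluated numerically as well; a sketch (helper names are our own) comparing the exact expression with the high-SNR approximation:

```python
from math import comb, factorial

def pe_dpsk_diversity(Gamma, L):
    """Average BER of binary DPSK with L-branch diversity: Eq. (62)
    with the b_k coefficients of Eq. (61)."""
    b = [sum(comb(2 * L - 1, n) for n in range(L - k)) / factorial(k)
         for k in range(L)]
    r = Gamma / (1.0 + Gamma)
    s = sum(b[k] * factorial(L - 1 + k) * r**k for k in range(L))
    return s / (2**(2 * L - 1) * factorial(L - 1) * (1 + Gamma)**L)

def pe_dpsk_asymptote(Gamma, L):
    """High-SNR approximation, Eq. (63)."""
    return (1.0 / (2 * Gamma))**L * comb(2 * L - 1, L)

for L in (1, 2, 4):
    print(L, pe_dpsk_diversity(100.0, L), pe_dpsk_asymptote(100.0, L))
```

For L = 1 the exact expression collapses to the familiar single-channel result $1/(2(1+\Gamma))$, a useful sanity check.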

#### **4.6.5 Error performance of non-coherent orthogonal FSK with diversity**

In the case of non-coherent detection, it is assumed that the channel parameters $\{\alpha\_k e^{-j\phi\_k}\}$ do not change over a signaling period. The combiner for the multichannel signals is a square-law combiner, and its output consists of the two decision variables

$$\mathcal{U}\_1 = \sum\_{k=1}^{L} |2\mathcal{E}\alpha\_k e^{-j\phi\_k} + N\_{k1}|^2$$

$$\mathcal{U}\_2 = \sum\_{k=1}^{L} |N\_{k2}|^2 \tag{64}$$

where $\mathcal{U}\_1$ is assumed to contain the signal. The probability of error is $P(\mathcal{U}\_2 > \mathcal{U}\_1)$. Note that the conditional probability of error for DPSK given in (60) applies to square-law combined FSK when $\gamma$ is replaced by $\frac{1}{2}\gamma$. The probability of error given in (52) also applies to square-law combined FSK with the parameter $\mu$ defined as

$$
\mu = \frac{\Gamma}{2 + \Gamma} \tag{65}
$$

An alternative approach is more direct: the probability density functions $p(\mathcal{U}\_1)$ and $p(\mathcal{U}\_2)$ are used in the expression for the probability of error. Since the complex random variables $\{\alpha\_k e^{j\phi\_k}\}$, $\{N\_{k1}\}$ and $\{N\_{k2}\}$ are zero-mean Gaussian distributed, the decision variables $\mathcal{U}\_1$ and $\mathcal{U}\_2$ are distributed according to a chi-square probability distribution with 2L degrees of freedom, that is

$$p(\mathcal{U}\_1) = \frac{1}{(2\sigma\_1^2)^L (L-1)!} \mathcal{U}\_1^{L-1} \exp\left(-\frac{\mathcal{U}\_1}{2\sigma\_1^2}\right). \tag{66}$$

where

$$
\sigma\_1^2 = \frac{1}{2} E(|2\mathcal{E}\alpha\_k e^{j\phi\_k} + N\_{k1}|^2) = 2\mathcal{E}N\_0(1+\Gamma) \tag{67}
$$

Similarly,

$$p(\mathcal{U}\_2) = \frac{1}{(2\sigma\_2^2)^L (L-1)!} \mathcal{U}\_2^{L-1} \exp\left(-\frac{\mathcal{U}\_2}{2\sigma\_2^2}\right) \tag{68}$$

where $\sigma\_2^2 = 2\mathcal{E}N\_0$. The probability of error is just the probability that $\mathcal{U}\_2 > \mathcal{U}\_1$. We get the same result as in **(52)** for $\mu$ defined in **(65)**. The probability of error can be simplified for $\Gamma \gg 1$; in this case the error probability is given by

$$P\_2 = (\frac{1}{\Gamma})^L \binom{2L-1}{L} \tag{69}$$
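
The square-law combined noncoherent FSK error probability follows from (52) with $\mu$ taken from (65). A brief numerical sketch (the function name is our own) checking it against the high-SNR approximation (69):

```python
from math import comb

def pe_ncfsk_diversity(Gamma, L):
    """Square-law combined noncoherent orthogonal FSK in Rayleigh
    fading: Eq. (52) evaluated with mu = Gamma/(2+Gamma), Eq. (65)."""
    mu = Gamma / (2.0 + Gamma)
    return (0.5 * (1 - mu))**L * sum(
        comb(L - 1 + k, k) * (0.5 * (1 + mu))**k for k in range(L))

def pe_ncfsk_asymptote(Gamma, L):
    """High-SNR approximation, Eq. (69)."""
    return (1.0 / Gamma)**L * comb(2 * L - 1, L)

for L in (1, 2, 4):
    print(L, pe_ncfsk_diversity(100.0, L), pe_ncfsk_asymptote(100.0, L))
```

For L = 1 this reduces to $1/(2+\Gamma)$, the classical single-channel noncoherent FSK result.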

#### **4.6.6 Performance of multilevel signaling with diversity**

The general results for the probability of a symbol error in M-ary PSK and DPSK are given as [Proakis & Salehi (2008)]:

$$\begin{aligned} \mathbf{P}\_e &= \frac{(-1)^{L-1}(1-\mu^2)^L}{\pi(L-1)!} \times \\ \left(\frac{\partial^{L-1}}{\partial b^{L-1}} \frac{1}{b-\mu^2} \left[\frac{\pi}{M}(M-1) - \frac{\mu \sin(\pi/M)}{\sqrt{b-\mu^2 \cos^2(\pi/M)}} \cot^{-1} \frac{-\mu \cos(\pi/M)}{\sqrt{b-\mu^2 \cos^2(\pi/M)}}\right] \right)\_{b=1} \text{ (70)} \end{aligned}$$

where for coherent PSK

$$
\mu = \sqrt{\frac{\Gamma}{1 + \Gamma}} \tag{71}
$$

and for DPSK

$$
\mu = \frac{\Gamma}{1 + \Gamma} \tag{72}
$$

where $\Gamma$ is the average received SNR per channel. Figure 9 shows the comparative performance of different diversity orders for MDPSK as an example. The error performance is seen to improve with an increase in the number of diversity branches, while increasing *M* results in degradation relative to the binary case.

Fig. 9. Error Performance of MDPSK in Presence of Fading

#### **4.7 Spread spectrum communications**

Spread spectrum signals are characterized by their bandwidth, which is much wider than the information bandwidth, i.e. $W \gg R$, where $W$ is the transmission bandwidth and *R* is the information symbol rate. The bandwidth expansion ratio, $B\_e = \frac{W}{R}$, is also called the processing gain and is given as:

$$G\_{p} = \frac{\text{transmission bandwidth}}{\text{post-correlation bandwidth}} \tag{73}$$

The spectrum of a periodic spreading sequence of period $M\Delta$ consists of components at frequencies that are integer multiples of $\frac{1}{M\Delta}$, i.e.:

$$G\_{\rm s}(f) = \frac{1}{M\Delta}G(f) \sum\_{k=-\infty}^{\infty} \delta(f - \frac{k}{M\Delta})\tag{74}$$

The reciprocal of the processing gain is the ratio of the information rate to the transmission bandwidth:

$$\frac{1/T\_b}{W} = \frac{1}{WT\_b} = \frac{T\_c}{T\_b} = \frac{1}{L\_c} \tag{75}$$

where $T\_b$ is the bit duration, $T\_c$ is the chip duration and $L\_c = T\_b/T\_c$ is the number of chips per bit, so that $G\_p = L\_c$.

The information signal bandwidth expansion is achieved by superimposing it<sup>8</sup> on a pseudo-random sequence, which occupies a bandwidth much wider than the information signal bandwidth. The selection of the sequence code determines the degree of privacy: the longer the code, the better. The bandwidth expansion has another advantage, as it spreads the information bearing signal's energy over a wide bandwidth, effectively burying the information bearing signal beneath the noise so that the resulting signal to noise ratio becomes very low. This process is sometimes called covert signaling, a necessary requirement in defence communication. This property is also known as Low Probability of Intercept (LPI). The pseudo-random sequence is used as a code key at the receiver to recover the information signal embedded in the spread spectrum signal. Figure 10 shows an example of a typical spread spectrum system. At the receiver, multiplication of the sequence with the received signal collapses the expanded bandwidth to that of the information signal, which is then recovered by low-pass filtering the correlator output. Any interference present in the received signal is spread instead, with the result that the desired signal to noise ratio is increased. This is illustrated in Figure 11. In addition, signals arriving via different paths can be resolved with the use of delayed versions of the spreading code. Most of the discussion above applies to direct sequence spread spectrum systems, but there are several other types. Frequency hopping is another important form of spread spectrum in which the message spectrum is randomly translated to discrete frequencies spread over a wide bandwidth. The randomly hopped signal spectrum produced at the transmitter is hopped back to the information signal bandwidth at the receiver.
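
The spreading and despreading operations described above can be illustrated with a toy baseband simulation; the chip count, noise level and random sequence here are arbitrary illustrative choices of ours, not parameters of any particular system:

```python
import numpy as np

rng = np.random.default_rng(1)
Lc = 31                                  # chips per bit (processing gain)
bits = rng.integers(0, 2, size=64)       # information bits
pn = rng.choice([-1.0, 1.0], size=Lc)    # one-period pseudo-random chip sequence

# Spreading: each antipodal data symbol multiplies the whole chip sequence.
symbols = 2.0 * bits - 1.0
tx = np.repeat(symbols, Lc) * np.tile(pn, bits.size)

# Channel: additive white Gaussian noise at the chip rate.
rx = tx + 0.5 * rng.standard_normal(tx.size)

# Despreading: correlate each bit interval with the same PN sequence;
# the expanded bandwidth collapses back to the information bandwidth.
corr = (rx.reshape(-1, Lc) * pn).sum(axis=1)
decided = (corr > 0).astype(int)
print((decided == bits).all())
```

The per-bit correlation has mean $\pm L\_c$ while the noise contribution grows only as $\sqrt{L\_c}$, which is the processing gain at work.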

#### **4.7.1 CDMA system architectures**

A modulator and a demodulator are the main parts of a spread spectrum system. The modulator is a multiplier of the signal and the spreading sequence, followed by an up-converter to the transmission frequency. A channel encoder and decoder similar to those used in a typical digital communication system are employed. The pseudo-random sequences at the transmitter and the receiver are time synchronized. Synchronization, code acquisition,

<sup>8</sup> The process of imposition is equivalent to multiplication of information signal by the spreading sequence.

Fig. 10. A QPSK Spread Spectrum Transmitter System [Proakis & Salehi (2008)]

Fig. 11. Concept of Processing Gain and Interference Rejection

and code tracking are also essential parts of a spread spectrum system. Figure 12 shows several alternatives to place CDMA system blocks in the receiver. The channel may introduce interference, which could be narrow band or wide-band, pulsed or continuous.

#### **4.8 Error performance of the decoder**

Here we follow the approach of [Proakis & Salehi (2008)], where the coded sequence is mapped into a binary PSK signal. The transmitted equivalent low-pass signal corresponding to the ith coded bit is

$$g\_i(t) = p\_i(t)c\_i(t) = (2b\_i - 1)(2c\_i - 1)g(t - iT\_c)\tag{76}$$

Fig. 12. Receiver architectures for Spread Spectrum System (from [Proakis & Salehi (2008)])

and the received low pass equivalent signal for the ith code element is

$$r\_i(t) = p\_i(t)c\_i(t) + z(t), \; iT\_c \le t \le (i+1)T\_c \tag{77}$$

$$=(2b\_i - 1)(2c\_i - 1)g(t - iT\_c) + z(t)\tag{78}$$

where $z(t)$ represents the interference or jamming signal that corrupts the information bearing signal. The interference is assumed to be a stationary random process with zero mean. In case $z(t)$ is a sample function of a complex-valued Gaussian process, the optimum demodulator is implemented as either a filter matched to the waveform $g(t)$ or a correlator. In the matched filter realization, the sampled output of the matched filter is multiplied by $2b\_i - 1$, which is obtained from the PN generator at the demodulator, provided the PN generator is properly synchronized. Since $(2b\_i - 1)^2 = 1$ for $b\_i = 0$ or $1$, the effect of the PN sequence on the received coded bit is removed. The decoder operates on the quantized output, denoted by $y\_j$, $1 \le j \le N$. A decoder that employs soft decision decoding computes the correlation metrics:

$$CM\_{i} = \sum\_{j=1}^{N} (2c\_{ij} - 1)y\_{j}, \; i = 1, 2, \dots, 2^{k} \tag{79}$$

where $c\_{ij}$ denotes the jth bit of the ith codeword. The correlation metric for the all-zeros codeword is

$$CM\_1 = 2n\mathcal{E}\_c + \sum\_{j=1}^N (2c\_{1j} - 1)(2b\_j - 1)v\_j = 2n\mathcal{E}\_c - \sum\_{j=1}^N (2b\_j - 1)v\_j \tag{80}$$

where $v\_j$, $1 \le j \le N$, is the additive noise term corrupting the jth coded bit and $\mathcal{E}\_c$ is the chip energy. The noise term is defined as

$$v\_j = \Re \{ \int\_0^{T\_c} g^\*(t) z(t + (j - 1)T\_c) dt \}, \; j = 1, 2, \dots, N \tag{81}$$

The correlation metric for code word $C\_m$ of weight $w\_m$ is

$$CM\_{m} = 2\mathcal{E}\_c(n - 2w\_m) + \sum\_{j=1}^{N} (2c\_{mj} - 1)(2b\_j - 1)v\_j \tag{82}$$

Now we need to determine the probability that $CM\_m > CM\_1$. The difference between $CM\_1$ and $CM\_m$ is

$$D = CM\_1 - CM\_m = 4\mathcal{E}\_c w\_m - 2\sum\_{j=1}^N c\_{mj}(2b\_j - 1)v\_j \tag{83}$$

Since the code word $C\_m$ has weight $w\_m$, there are $w\_m$ non-zero components in the summation of noise terms contained in (83). It can be assumed that the minimum distance of the code is sufficiently large, in which case we can invoke the central limit theorem for the summation of noise components. This is valid for PN spread spectrum signals with a bandwidth expansion of 10 or more<sup>9</sup>. Thus, the summation of noise components is modeled as a Gaussian random variable. Since $E[(2b\_j - 1)] = 0$ and $E(v\_j) = 0$, the mean of the second term in (83) is also zero. The variance is

$$\sigma\_m^2 = 4\sum\_{j=1}^{N} \sum\_{i=1}^{N} c\_{mi} c\_{mj} E[(2b\_j - 1)(2b\_i - 1)] E[v\_i v\_j] \tag{84}$$

The sequence of binary digits from the PN generator are assumed to be uncorrelated, hence

$$E[(2b\_j - 1)(2b\_i - 1)] = \delta\_{ij} \tag{85}$$

and

$$
\sigma\_m^2 = 4w\_m \mathbb{E}[v^2] \tag{86}
$$

where $E[v^2]$ is the second moment of any element in the set $\{v\_j\}$. The second moment is evaluated to yield

$$E[v^2] = \int\_0^{T\_c} \int\_0^{T\_c} g^\*(t)g(\tau)\phi\_{zz}(t-\tau)d\tau\, dt = \int\_{-\infty}^\infty |G(f)|^2 \Phi\_{zz}(f) df \tag{87}$$

where $\phi\_{zz}(\tau) = \frac{1}{2} E[z^\*(t)z(t+\tau)]$ is the autocorrelation function and $\Phi\_{zz}(f)$ is the power spectral density of the interference $z(t)$. When the interference is spectrally flat within the bandwidth occupied by the transmitted signal (the bandpass channel occupies a bandwidth of $2W$, its low-pass equivalent $W$), i.e. $\Phi\_{zz}(f) = J\_0$ for $|f| \le W$, using (87) in (86) gives $E[v^2] = 2\mathcal{E}\_c J\_0$, hence the variance of the interference term in (86) becomes

$$
\sigma\_m^2 = 8\mathcal{E}\_c J\_0 w\_m \tag{88}
$$

In this case the probability that D < 0 is

$$P\_2(m) = Q\left(\sqrt{\frac{2\mathcal{E}\_c w\_m}{J\_0}}\right)\tag{89}$$

The energy per coded chip $\mathcal{E}\_c$ is related to the energy per information bit $\mathcal{E}\_b$ as

$$
\mathcal{E}\_c = \frac{k}{n} \mathcal{E}\_b = \mathcal{R}\_c \mathcal{E}\_b \tag{90}
$$

Substituting from (90) in (89), we get

$$P\_2(m) = Q\left(\sqrt{\frac{2\mathcal{E}\_b}{J\_0}R\_c w\_m}\right) = Q(\sqrt{2\gamma\_b R\_c w\_m})\tag{91}$$

<sup>9</sup> A minimum spreading gain of 10 is the internationally accepted value.

where $\gamma\_b = \mathcal{E}\_b/J\_0$ is the SNR per information bit. Finally, the code word error probability may be upper bounded by the union bound as

$$P\_M \le \sum\_{m=2}^M Q(\sqrt{2\gamma\_b R\_c w\_m})\tag{92}$$

where $M = 2^k$. The multiplication of the interference by the signal from the PN generator spreads the interference over the signal bandwidth $W$, and the narrow-band integration following the multiplication sees only a fraction, approximately $R/W$, of the total interference. Thus the performance of the DS spread spectrum system is enhanced by the processing gain.<sup>10</sup>
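
As a numerical illustration of the union bound (92), the sketch below uses the weight distribution of the (7,4) Hamming code purely as a convenient example code; the rate and SNR values are arbitrary choices of ours:

```python
from math import comb, erfc, sqrt

def Q(x):
    # Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * erfc(x / sqrt(2.0))

# Weight distribution of the (7,4) Hamming code over its 15 nonzero
# codewords: {codeword weight: multiplicity}. Illustrative choice only.
weights = {3: 7, 4: 7, 7: 1}
Rc = 4 / 7          # code rate k/n
gamma_b = 10.0      # SNR per information bit, E_b/J_0

# Union bound of Eq. (92): sum Q(sqrt(2*gamma_b*Rc*w_m)) over all
# nonzero codewords m = 2, ..., M.
PM = sum(mult * Q(sqrt(2 * gamma_b * Rc * w)) for w, mult in weights.items())
print(PM)
```

The bound is dominated by the minimum-weight codewords, which is why the minimum distance of the code controls the coding gain.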

#### **4.8.1 Rake receiver**

The rake receiver is a special type of receiver that separates the arriving signal paths, which have certain delays between them, and combines them in phase to construct the output signal. The combining processes are similar to those used in diversity, discussed in section 4.6. By benefitting from the diversity gain resulting from combining, the output signal to noise ratio is improved. Several approaches are used in combining: for example, only those signals that exceed a certain threshold are selected for combining, or the principles of MRC or EGC are used. A block diagram of a typical rake receiver, shown in Figure 13, illustrates how the signals on the receiver fingers combine and result in a higher output signal to noise ratio. Figure 14 shows the rake combining concept.

Fig. 13. Structure of 3-finger Rake receiver

#### **4.8.2 Performance of RAKE receiver**

The performance of the rake receiver may be evaluated by making a number of assumptions similar to those used in diversity, e.g. the receiver has *L* fingers, the signals on each finger are slowly and independently varying, and the channel state $c\_k(t)$ is perfectly estimated. The signals are spread using pseudo-random sequences which are considered uncorrelated (noise

<sup>10</sup> Ideally the RF bandwidth should be $2(W + R)$ if the low-pass equivalent bandwidth is $W+R$, but since $W \gg R$, the approximate bandwidth is just $2W$.

Fig. 14. Combining Concept of Rake Receiver

like) and have the correlation property as

$$\int\_0^T r(t)s\_{lm}^\*(t-\frac{k}{W})dt \approx 0, \; k \neq m, \; i = 1,2\tag{93}$$

When the transmitted signal is s11 *(t),* the received signal is

$$r\_l(t) = \sum\_{n=1}^{L} c\_n s\_{l1}(t - \frac{n}{W}) + z(t), \; 0 \le t \le T \tag{94}$$

The decision variable, *Um,* may be rewritten as

$$\mathcal{U}\_{m} = \mathfrak{Re}[\sum\_{k=1}^{L} c\_k^\*(t) \int\_0^T r(t) s\_{lm}^\*(t - \frac{k}{W}) dt], \; m = 1, 2 \tag{95}$$

When the above property is satisfied, the decision variable for antipodal binary signalling is written as

$$\mathcal{U}\_1 = \mathfrak{Re}[2\mathcal{E}\sum\_{k=1}^{L} \alpha\_k^2 + \sum\_{k=1}^{L} \alpha\_k N\_k] \tag{96}$$

where $\alpha\_k = |c\_k|$ and

$$N\_k = e^{j\phi\_k} \int\_0^T z(t)s\_l^\*(t - \frac{k}{W})dt\tag{97}$$

This is identical to the case of maximal ratio combining. When all mean tap gains $\alpha\_k$ are equal, it reduces to the case we discussed under maximal ratio combining. However, in the case of the Rake receiver the $\alpha\_k$ are not always equal; thus, a new expression for the error probability is needed. Under the condition that the mean square values of the $\{\alpha\_k\}$ are distinct, we may write

$$P\_2(\gamma) = Q(\sqrt{\gamma(1-\rho\_r)})\tag{98}$$

where $\rho\_r = -1$ for antipodal signals and $\rho\_r = 0$ for orthogonal signals, and

$$\gamma = \frac{\mathcal{E}}{N\_o} \sum\_{k=1}^{L} \alpha\_k^2 = \sum\_{k=1}^{L} \gamma\_k \tag{99}$$

Each *'Yk* is distributed according to chi-square distribution with two degrees of freedom, i.e.

$$p(\gamma\_k) = \frac{1}{\Gamma\_k} e^{-\frac{\gamma\_k}{\Gamma\_k}} \tag{100}$$

where $\Gamma\_k$ is the average SNR of the *kth* path, defined as

$$
\Gamma\_k = \frac{\mathcal{E}}{N\_0} \overline{a\_k^2} \tag{101}
$$

Since $\gamma$ is the sum of L statistically independent components $\{\gamma\_k\}$, it can be shown, by following a procedure similar to the one used for diversity, that the pdf of $\gamma$ is

$$p(\gamma) = \sum\_{k=1}^{L} \left(\prod\_{\substack{i=1 \\ i \neq k}}^{L} \frac{\Gamma\_k}{\Gamma\_k - \Gamma\_i}\right) \frac{1}{\Gamma\_k}\, e^{-\frac{\gamma}{\Gamma\_k}} \tag{102}$$

The error probability for binary CDMA signaling is then

$$P\_2 = \frac{1}{2} \sum\_{k=1}^L \left(\prod\_{\substack{i=1 \\ i \neq k}}^{L} \frac{\Gamma\_k}{\Gamma\_k - \Gamma\_i}\right) \left[ 1 - \sqrt{\frac{\Gamma\_k (1 - \rho\_r)}{2 + \Gamma\_k (1 - \rho\_r)}} \right] \tag{103}$$

When $\Gamma\_k \gg 1$, the error probability is approximated as

$$P\_2 \approx \binom{2L-1}{L} \prod\_{k=1}^{L} \frac{1}{2\Gamma\_k(1-\rho\_r)}\tag{104}$$
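
Equation (103) can be evaluated directly for distinct path SNRs. The sketch below assumes, for illustration only, an exponentially decaying multipath intensity profile with each path 3 dB weaker than the previous; the helper name and SNR values are our own:

```python
from math import comb, sqrt

def pe_rake(Gammas, rho=-1.0):
    """Eq. (103): error probability of binary signalling with a Rake
    receiver whose fingers have distinct average SNRs Gammas;
    rho = -1 for antipodal signals, 0 for orthogonal signals."""
    L = len(Gammas)
    total = 0.0
    for k in range(L):
        pik = 1.0
        for i in range(L):
            if i != k:
                pik *= Gammas[k] / (Gammas[k] - Gammas[i])
        g = Gammas[k] * (1.0 - rho)
        total += pik * (1.0 - sqrt(g / (2.0 + g)))
    return 0.5 * total

# Assumed multipath intensity profile: 200, 100, 50 (each path 3 dB
# weaker than the previous, illustrative values only).
Gammas = [200.0, 100.0, 50.0]
exact = pe_rake(Gammas)

# High-SNR approximation, Eq. (104), with (1 - rho) = 2 for antipodal.
approx = comb(2 * len(Gammas) - 1, len(Gammas))
for g in Gammas:
    approx /= 2.0 * g * 2.0
print(exact, approx)
```

For a single finger with antipodal signalling, the expression collapses to the flat-fading PSK result $\frac{1}{2}(1 - \sqrt{\Gamma/(1+\Gamma)})$, which serves as a sanity check.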

#### **5. High speed communication over time selective fading channels**

The past decade saw a tremendous increase in the use of the Internet over wire-line systems, and its use has now migrated to wireless communication. Wireless channels, being impaired as well as bandlimited, place an upper bound on the transmission rate. The limitation on transmission rate restricts the type of services that can be offered over these channels. Current research is motivated by the desire to increase the transmission speed over wireless systems. Several possibilities exist to achieve this goal. One possibility is to undo the channel induced impairments over the transmission bandwidth in order to make the channel appear as close to ideal as possible. The performance of digital systems can also be enhanced with the use of channel equalization. In [Lo et.al. (1991)], a combination of diversity and channel equalization has been shown to be a powerful technique. Figure 15 shows a schematic of a typical channel equalizer. Other techniques used for high speed transmission over frequency selective channels are spread spectrum communications, multi-carrier communications, and orthogonal frequency division multiplexing transmission.

Interference mitigation is another effective tool to counter the effect of the channel. In this regard, serial (SIC), parallel (PIC) and hybrid (HIC) interference cancellation techniques are significant. These systems were proposed as suboptimum alternatives to multiuser detectors (MUD), which have excellent immunity against near-far interference but whose practical implementation has been found to be prohibitively complex. Two approaches combine in OFDM, where the channel induced delay spread is handled by a cyclic prefix in the transmission frame and the data rate is slowed down by using a number of orthogonal carriers in parallel. The slower transmission rate over each carrier is more immune to channel delay spread [Cimini (1985)]. The other important concept developed for high speed transmission is the multi-input multi-output (MIMO) system. The total channel capacity is increased by creating a number of preferably non-interacting paths between the transmitter and the receiver. The channel capacity increases with the number of paths and decreases with increasing mutual correlation between them. In the context of wireless communications, the availability of channel state information and the characteristics of the environment influence the channel capacity. This concept is likely to become the mainstay of high capacity wireless systems [Paulraj et.al (2004)].


Fig. 15. Schematic of Equalizer
