### 3. Introducing the PSM parameters

One of the methods we have proposed is based on experiments performed by Xiang-kun Kong, Shao-bin Liu, Hai-feng Zhang, Bo-rui Bian, Hai-ming Li et al. [8], in which three plasma layers are joined and alternated: a magnetized layer in the core and two unmagnetized layers at the extremes of the device. When this plasma sandwich is subjected to an external electric potential, it is observed that, for a range of values of the external potential, the refraction index is negative [15, 19]. When we analyzed those experiments, we concluded that, for this range of the electric potential, the plasma sandwich breaks the confinement of the evanescent waves, as occurs in a left-handed material, and we proposed a model, named the plasma sandwich model (PSM), for the behavior of the propagation media. Depending on the particular conditions of the propagation media, that is, on the values of the plasma sandwich parameters, and for particular conditions of the external electric potential, the propagation media may behave like the plasma sandwich and acquire a negative refraction index. In this section, we introduce the PSM parameters and find the resonant frequencies for a specific problem, underlining that resonant frequencies can be used only to associate an interval of frequencies of a real signal with a device, such as an antenna, and not with a single frequency emitted by it; this is because resonant waves are released evanescent waves that vanish at the source sites and are not, strictly speaking, information carriers. The frequency bands we can build from the resonant frequencies can be considered convenient highways for the transit of information. Every kernel depends on the response of the media under circumstances that can vary over different time intervals. Accordingly, we present an example that is very easy to work through but in which the particular behavior of the signal used to obtain it is not relevant. Next, we find the resonant frequencies for an academic example.
First, we choose an appropriate discrete kernel for convenience; in this particular kernel, we do not take into account the three components of the electromagnetic field (usually represented by the indices n and m). Instead, we propose a system constituted by two emitting antennas. One possible kernel may be written [1, 3–7]:

$$\mathbf{K}^{(+)}(\nu) = \begin{pmatrix} \dfrac{\sin\left[(\nu - \overline{\nu}_p)\delta\right]}{(\nu - \overline{\nu}_p)\delta} & -i\,\dfrac{\cos\left[(\nu - \overline{\nu}_p)\delta\right]}{(\nu - \overline{\nu}_p)\delta} \\[2ex] i\,\dfrac{\cos\left[(\nu - \overline{\nu}_p)\delta\right]}{(\nu - \overline{\nu}_p)\delta} & \dfrac{\sin\left[(\nu - \overline{\nu}_p)\delta\right]}{(\nu - \overline{\nu}_p)\delta} \end{pmatrix} \tag{5}$$

In kernel (5), we have introduced the plasma sandwich model (PSM) parameter $\delta$, which is defined as:

$$
\delta = \overline{\kappa}\,\overline{d}_M \tag{6}
$$


Optimum Efficiency on Broadcasting Communications DOI: http://dx.doi.org/10.5772/intechopen.84954


Definition (6) involves the physical meaning of the wave number $\overline{\kappa}$ of an incident beam that interacts with the magnetic and electric fields in such a way that the whole kernel is the one expressed in Eq. (5); $\overline{d}_M$ is the average thickness of the magnetized plasma layer that generates this interaction; the parameter $\overline{\nu}_p$ is the average value of the plasma frequency in the magnetized plasma layer, which can be written in terms of the local electron concentration in the layer as:

$$\overline{\nu}_p = \frac{1}{2\pi}\left(\frac{N e^2}{m \varepsilon_0}\right)^{1/2} \tag{7}$$

In this definition, $N$ is the electron concentration, $e$ is the electronic charge, $m$ is the electron mass, and $\varepsilon_0$ is the permittivity of vacuum.
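As a numeric sanity check of definition (7), the sketch below evaluates the plasma frequency for an illustrative electron concentration; the value of $N$ used is an assumption for the example, not a figure taken from the text:

```python
import math

# Physical constants (SI units)
E_CHARGE = 1.602176634e-19   # electronic charge e [C]
E_MASS = 9.1093837015e-31    # electron mass m [kg]
EPS0 = 8.8541878128e-12      # vacuum permittivity [F/m]

def plasma_frequency(n_e: float) -> float:
    """Average plasma frequency of Eq. (7), in Hz, for electron concentration n_e [1/m^3]."""
    return (1.0 / (2.0 * math.pi)) * math.sqrt(n_e * E_CHARGE**2 / (E_MASS * EPS0))

# Assumed concentration of 1e18 electrons per cubic meter
nu_p = plasma_frequency(1e18)
print(f"plasma frequency ~ {nu_p:.3e} Hz")  # ~ 8.98e9 Hz
```

This reproduces the familiar rule of thumb that the plasma frequency in hertz is about $8.98\sqrt{N}$ for $N$ in m⁻³.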

It is possible to note that any change in the parameter values gives different broadcasting conditions [5]. The PSM suggests that there is not a single stationary set of iterated layers but a bunch of sets evolving in time and, in consequence, with different effects for each frequency. We must remember that the equation to solve is Eq. (3), where:

$$\mathbf{K}^{n(\circ)}_{m}\left(\mathbf{r}',\mathbf{r};\omega\right) = \begin{cases} 0 & \text{if } \mathbf{r}' = \mathbf{r} \\ \mathbf{U}^{nm}\left(\mathbf{r}'\right)\mathbf{G}^{nm(\circ)}_{\omega}\left(\mathbf{r}',\mathbf{r}\right) & \text{if } \mathbf{r}' \neq \mathbf{r} \end{cases} \tag{8}$$


The two remaining ubiquitous conditions to achieve resonance are the vanishing of Fredholm's determinant for Eq. (4) and that Fredholm's eigenvalue λ equals 1 [6, 11, 22, 23]. These two conditions give us the expected resonant frequencies for the system constituted by two antennas, dependent on the PSM parameters. Now, we must remember that resonances have a special behavior that can be represented by a complex frequency:

$$
\nu = \mathrm{K} - i\Lambda \tag{9}
$$

The transformation of the evanescent waves into traveling ones is due precisely to the imaginary part $\Lambda$. In addition, the relation between $\nu$ and the wave number $\kappa$ is:

$$
\kappa = \sqrt{\mu \varepsilon}\,\nu \tag{10}
$$

Substituting expressions (9) and (10) into Eq. (3), we can write the resonance condition as:

$$
\det \begin{pmatrix} \mathcal{M} & \mathcal{N} \\ \mathcal{N} & -\mathcal{M} \end{pmatrix} = 0 \tag{11}
$$

The abbreviated components of the matrix in (11) are, explicitly:

$$\mathcal{M} = \frac{\rho_p \sin\left(\rho_p\right)\cosh\left(\gamma_p\right) + \gamma_p \sinh\left(\gamma_p\right)\cos\left(\rho_p\right)}{\lambda_p} \tag{12}$$

and

$$\mathcal{N} = \frac{\rho_p \cos\left(\rho_p\right)\sinh\left(\gamma_p\right) - \gamma_p \sin\left(\rho_p\right)\cosh\left(\gamma_p\right)}{\lambda_p} \tag{13}$$

Telecommunication Systems – Principles and Applications of Wireless-Optical Technologies

In Eqs. (12) and (13), we have used the following definitions:

$$
\sigma_M = \overline{d}_M \sqrt{\mu \varepsilon} \tag{14}
$$

$$
\rho_p = \sigma_M \left(\mathrm{K}^2 - \Lambda^2 - \overline{\nu}_p \mathrm{K}\right) \tag{15}
$$

$$
\gamma_p = \sigma_M \Lambda \left(\overline{\nu}_p - 2\mathrm{K}\right) \tag{16}
$$

$$
\lambda_p = \rho_p^2 + \gamma_p^2 \tag{17}
$$
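Under one consistent reading of Eqs. (9)–(17), with $\nu = \mathrm{K} - i\Lambda$ and $z = \rho_p + i\gamma_p$, the components $\mathcal{M}$ and $\mathcal{N}$ are just the real and imaginary parts of $\sin(z)/z$. The sketch below (Python, with arbitrary illustrative values for $\sigma_M$, $\overline{\nu}_p$, K, and Λ) checks that identity numerically:

```python
import cmath
import math

def components(sigma_M: float, nu_p: float, K: float, Lam: float):
    """Return (M, N) from Eqs. (12)-(17) for complex frequency nu = K - i*Lam."""
    rho_p = sigma_M * (K**2 - Lam**2 - nu_p * K)    # Eq. (15)
    gam_p = sigma_M * Lam * (nu_p - 2.0 * K)        # Eq. (16)
    lam_p = rho_p**2 + gam_p**2                     # Eq. (17)
    M = (rho_p * math.sin(rho_p) * math.cosh(gam_p)
         + gam_p * math.sinh(gam_p) * math.cos(rho_p)) / lam_p  # Eq. (12)
    N = (rho_p * math.cos(rho_p) * math.sinh(gam_p)
         - gam_p * math.sin(rho_p) * math.cosh(gam_p)) / lam_p  # Eq. (13)
    return M, N

# Cross-check against sin(z)/z with z = rho_p + i*gamma_p (illustrative values)
sigma_M, nu_p, K, Lam = 1.3, 0.7, 0.9, 0.4
rho_p = sigma_M * (K**2 - Lam**2 - nu_p * K)
gam_p = sigma_M * Lam * (nu_p - 2.0 * K)
z = complex(rho_p, gam_p)
w = cmath.sin(z) / z
M, N = components(sigma_M, nu_p, K, Lam)
print(abs(w.real - M), abs(w.imag - N))  # both ~ 0
```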

To have an image of the solutions of Eq. (11) (see Figure 1), we can take K and Λ, the real and imaginary parts of $\nu$, as the plot variables and fix the value of the plasma frequency, so we have the following image:

We obtain for the particular conditions:

$$\mathbf{K} = \Lambda \tag{18}$$


$$
\overline{\nu}_p = 10^6\ \mathrm{Hz} \tag{19}
$$

The solutions (resonances):

$$
\nu_1 = 1.009 \times 10^6\ \mathrm{Hz} \tag{20}
$$

$$\nu_2 = -985.99\ \mathrm{Hz} \tag{21}$$

In this case, only $\nu_1$ is properly a resonance; $\nu_2$ has no physical meaning but maintains its orthogonality properties.
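A resonance search of this kind can be sketched numerically: since the determinant of the matrix in Eq. (11) is $-(\mathcal{M}^2 + \mathcal{N}^2)$, one can evaluate it along the line K = Λ of Eq. (18) and look for minima of its magnitude. The sketch below uses assumed parameter values $\sigma_M = 1$ and $\overline{\nu}_p = 1$, not the values behind Eqs. (19)–(21):

```python
import math

def det_eq11(sigma_M, nu_p, K, Lam):
    """Determinant of the matrix in Eq. (11): det [[M, N], [N, -M]] = -(M^2 + N^2)."""
    rho_p = sigma_M * (K**2 - Lam**2 - nu_p * K)   # Eq. (15)
    gam_p = sigma_M * Lam * (nu_p - 2.0 * K)       # Eq. (16)
    lam_p = rho_p**2 + gam_p**2                    # Eq. (17)
    M = (rho_p * math.sin(rho_p) * math.cosh(gam_p)
         + gam_p * math.sinh(gam_p) * math.cos(rho_p)) / lam_p
    N = (rho_p * math.cos(rho_p) * math.sinh(gam_p)
         - gam_p * math.sin(rho_p) * math.cosh(gam_p)) / lam_p
    return -(M**2 + N**2)

# Scan along K = Lam (Eq. (18)) with the assumed parameters
best = min((abs(det_eq11(1.0, 1.0, k, k)), k)
           for k in [0.05 * i for i in range(1, 200)])
print(f"|det| minimized near K = Lam = {best[1]:.2f}")
```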

### 4. Communication theory measurement of information loss

Because we now have a wide vision of the loss of information, and we know that this loss is the reason the images are not perfect, we can use the results of Shannon, Nyquist, Wiener, Hartley, Hopf [25–29], and other authors who have formulated a measure of the loss of information in communication systems. We support our mathematical results on related works [6, 11, 24, 26, 28], which give a solid theoretical frame to our present and future papers. Indeed, because the capacity of a channel and entropy are very close concepts, we can use some of the results cited above to answer the problem for TRT and LHM.

Basically, we recall two theorems:

Theorem I.

If the signal and noise are independent and the received signal is the sum of the transmitted signal and the noise, then the rate of transmission is:

$$R = H(y) - H(n) \tag{22}$$

This means that the rate of transmission is the entropy of the received signal less the entropy of the noise. The channel capacity is:

$$C = \underset{P(x)}{\mathrm{Max}}\left[H(y) - H(n)\right] \tag{23}$$

Theorem II.

The capacity of a channel of band Θ perturbed by white thermal noise of power N, when the average transmitter power is limited to P, is given by:

$$C = \Theta \log\left(\frac{P+N}{N}\right) \tag{24}$$

In this expression, P is the average power of the transmitted signal and N is the average noise power.
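As a worked instance of Eq. (24), the sketch below uses assumed figures (a 1 MHz band and an average signal-to-noise ratio of 100, i.e. 20 dB); the logarithm is taken base 2 so the capacity comes out in bits per second:

```python
import math

def capacity(theta_hz: float, p_avg: float, n_avg: float) -> float:
    """Channel capacity of Eq. (24) in bits/s (logarithm taken base 2)."""
    return theta_hz * math.log2((p_avg + n_avg) / n_avg)

# Assumed figures: 1 MHz band, P/N = 100
c = capacity(1e6, 100.0, 1.0)
print(f"C = {c/1e6:.2f} Mbit/s")  # ~ 6.66 Mbit/s
```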

From these two theorems, we make our proposal for a channel where we have lost information in three ways. That is, we have limitations on the maximum frequency Θ (band), the presence of different classes of noise, and a limited time


T available for a time-reversal process. Then, defining a joint average for the power, Q(n, T), the channel capacity is:

$$C\_T = \Theta \log \left( \frac{P + Q(n, T)}{Q(n, T)} \right) \tag{25}$$

This remains equal to zero when P = 0. The very significant feature of this proposal is the explicit dependence on T, in both the joint average power and the channel capacity, as opposed to the conventional treatment of the signal time duration, which is considered as a limit process that tends to infinity. This is a consequence of the explicit form of the Fourier transform of the time-reversed Green function, which changes with a factor dependent on T, so even if we are not forced to do so, we can think of T as a parameter that defines the channel. We can think of an arbitrary channel but, when we use it to reverse any signal in time, we follow a different process depending on the time we decide to fit. Then, we can label the channel with each T as a different one and, of course, with a different capacity from those corresponding to other values of T. Because of the arguments expressed previously in this work, we can use this measure to the same extent on LHM, ATR, and TRT. For a related discussion of the equivalence of the time-reversal methods and the employment of left-handed materials, see ref. [30]; for the use of time reversal on antennas, see also ref. [16].
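The two properties claimed for Eq. (25) — that $C_T$ vanishes when the transmitter power P is zero, and that each recording time T labels a channel with its own capacity — can be checked directly. In the sketch below, the joint average power Q(n, T) is an assumed toy model (noise power growing linearly with recording time), not a form taken from the text:

```python
import math

def q_joint(n_avg: float, t_rec: float) -> float:
    """Toy joint average power Q(n, T): assumed to grow linearly with recording time T."""
    return n_avg * (1.0 + t_rec)

def c_t(theta: float, p_avg: float, n_avg: float, t_rec: float) -> float:
    """Channel capacity of Eq. (25), labeled by the time-reversal parameter T."""
    q = q_joint(n_avg, t_rec)
    return theta * math.log2((p_avg + q) / q)

print(c_t(1e6, 0.0, 1.0, 2.0))   # P = 0  ->  capacity is exactly 0
print(c_t(1e6, 10.0, 1.0, 1.0))  # same band, one value of T ...
print(c_t(1e6, 10.0, 1.0, 4.0))  # ... another T, a different capacity
```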

### 5. An academic example

In order to give an insight into information measurement applied to TR, let us propose that our system behaves like a filter. So, in this particular example, we have no loss if we select t < T. We also propose that we have a signal like [12]:

$$\frac{\sin\left(2\pi\Theta t\right)}{2\pi\Theta t}\tag{26}$$

And suppose that, instead of the incoming signal of Eq. (26), we have another like [10]:

$$\frac{1}{2} \frac{\sin^2(\pi \Theta t)}{\left(\pi \Theta t\right)^2} \tag{27}$$

The input function Eq. (26) is a sample of a more general function generated by the sum of a series of shifted functions

$$a \frac{\sin\left(2\pi\Theta t\right)}{2\pi\Theta t} \tag{28}$$

where a, the amplitude of the sample, is no greater than $\sqrt{S}$ (S is the peak allowed transmitter power).
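The role of the shifted functions in Eq. (28) can be seen numerically: each term $a\,\sin(2\pi\Theta t)/(2\pi\Theta t)$, shifted to $t_k = k/(2\Theta)$, equals its sample amplitude at its own sample instant and vanishes at every other one, which is what lets the samples be carried independently. A sketch, with an assumed band Θ:

```python
import math

def shifted_sinc(t: float, k: int, theta: float) -> float:
    """k-th shifted interpolation function of Eq. (28), centered at t_k = k / (2*Theta)."""
    x = 2.0 * math.pi * theta * (t - k / (2.0 * theta))
    return 1.0 if x == 0.0 else math.sin(x) / x

theta = 1.0e3  # assumed band of 1 kHz
# Value at its own sample instant vs. a neighboring sample instant
print(shifted_sinc(3 / (2 * theta), 3, theta))  # 1.0
print(shifted_sinc(4 / (2 * theta), 3, theta))  # ~ 0
```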

The channel capacity would be approximately [23] (provided that S/N is small):

$$C\_T = \Theta \log \left( \frac{\mathbb{S} + Q(n, T)}{Q(n, T)} \right) \tag{29}$$

In the time-reversal process, we have shown that for each Fourier component we should add a complex exponential factor dependent on T. But we know now that the tool is the same and that only the numerical value of the channel capacity $C_T$ changes. We see how, in practice, the time-reversal parameter T appears explicitly, but also that when we cut the time duration of the reversed signal, it is possible to consider the cuts as an additive contribution to Q(n, T). But the form of Eq. (25) suggests a generalized measure of a blended or mixed channel capacity for channels sharing the same band Θ and differing only by the recording times:

$$C_{T_1, T_2, \cdots, T_n} = \Theta \log \left( \frac{S + Q(n, T_1, T_2, \cdots, T_n)}{Q(n, T_1, T_2, \cdots, T_n)} \right) \tag{30}$$

The fact that we are using the same band but different cutting limits also suggests that we can design an appropriate filter that can distinguish between signals according to the recording time; that is, we can superpose signals with the same frequency range but with different recording times. In a previous work, we sketched a filter, but now we give a better-defined device, so we propose (see Figure 2), as a hint to obtain the filter, the following steps for both the transmitter and the receiver:
