1. Introduction

As one of the emerging optical wireless communication techniques, visible light communication (VLC) has recently drawn considerable attention from both academia and industry [1–3]. Compared with traditional radio frequency (RF) wireless communication, VLC has many advantages: freedom from hazardous electromagnetic radiation, no licensing requirements, low-cost front ends, a large spectrum bandwidth (as shown in Figure 1) and a large channel capacity. In VLC, illumination and communication are implemented simultaneously. Moreover, the transmitted optical signal is non-negative. Therefore, the theory and analysis developed for traditional RF wireless communication are not directly applicable to VLC.

© 2017 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Up to now, research on VLC can be divided into two categories: demo system design and theoretical analysis. As research continues, a variety of demo platforms have arisen. Table 1 shows the development of VLC demo systems. As can be seen in Table 1, the transmit rate of VLC systems has increased from several Mbit/s to several Gbit/s over the last decade, which indicates that VLC has attractive development prospects. Specifically, the transmit rates of the early demo systems were low, but the transmit distances were long and the data were processed in real time. With the development of communication techniques, more and more VLC testbeds with high transmit rates have been developed successfully, but real-time processing has become very hard. Therefore, more advanced processing techniques are needed for VLC.

Figure 1. The electromagnetic spectrum.

144 Optical Fiber and Wireless Communications

| Time (year) | Research & development group | Transmit rate (bit/s) | Transmit distance (m) | Data processing mode |
|---|---|---|---|---|
| 2000 | Keio University, Japan | 10 M | 5 | √ |
| 2002 | Keio University, Japan | 87 M | 1.65 | √ |
| 2008 | Taiyo Yuden Co., Ltd, Japan | 100 M | 0.2 | √ |
| 2009 | University of Oxford, UK, et al. | 100 M | 0.1 | √ |
|  | Jinan University, China | 4 M | 2.5 | √ |
| 2011 | Heinrich Hertz Institute, Germany | 803 M | 0.12 | √ |
| 2012 | Kinki University, Japan | 614 M | — | √ |
| 2013 | University of Strathclyde, UK | 1.5 G | — | √ |
|  | Southeast University, China | 480 M | 3 | √ |
| 2014 | Fudan University, China | 3.25 G | — | √ |
| 2015 | Pukyong National University, Korea | 3 G | 2.15 | √ |
|  | Heinrich Hertz Institute, Germany | 125 M | 5 | √ |
|  | National Chiao Tung University, Taiwan | 1.1 G | 0.23 | √ |
|  | Sant'Anna School of Advanced Studies, Italy | 3.4 G | 0.3 | √ |
|  | National Chiao Tung University, Taiwan | 3.22 G | 0.25 | √ |
|  | The University of Edinburgh, UK | 10 G | — | √ |
|  | The PLA Information Engineering University, China | 50 G | — | √ |

Table 1. The development of the VLC demo systems.

In the aspect of theoretical analysis, much work has been done on VLC. In Ref. [4], the channel capacity for VLC using inverse source coding is investigated. However, the theoretical expression of the capacity is not presented. Under the non-negative and average optical intensity constraints, the closed expression of capacity bounds is derived in Ref. [5]. Based on Ref. [5], a tight upper bound on the capacity is derived in Ref. [6]. By adding a peak optical intensity constraint, tight capacity bounds are further derived in Ref. [7]. In Ref. [8], the capacity bounds for multiple-input-multiple-output VLC are derived. In Ref. [9], the capacity and outage probability for the parallel optical wireless channels are analysed. Furthermore, low signal-tonoise ratio (SNR) capacity for the parallel optical wireless channels is obtained in Ref. [10]. It should be noted that the noises in Refs. [4–10] are all assumed to be independent with the input signal. This assumption is reasonable if the ambient light is strong or if the receiver suffers from intensive thermal noise. However, in practical VLC systems, typical illumination scenarios offer very high SNR [11, 12]. For high power, this assumption neglects a fundamental issue of VLC: due to the random nature of photon emission in the light emitting diode (LED), the strength of noise depends on the signal itself [13]. Up to now, the performance of the VLC with input-dependent noise has not been discussed completely.

In this chapter, we consider a VLC system with input-dependent Gaussian noise and investigate the fundamental performance of the VLC system. The main contributions of this chapter are given as follows:


The remainder of this chapter is organized as follows. The system model is described in Section 2. Section 3 presents the exact expression and the lower bound of the mutual information. In Section 4, the theoretical expression of the BER is derived. Numerical results are given in Section 5 before conclusions are drawn in Section 6.

#### 2. System model

Consider a point-to-point VLC system, as shown in Figure 2. At the transmitter, an LED is employed as the lighting source, which performs the electrical-to-optical conversion. Then, the optical signal is propagated through the VLC channel. At the receiver, a PIN photodiode (PD) is used to perform the optical-to-electrical conversion. To amplify the derived electrical signal, a high impedance amplifier is employed. In this chapter, the main noise sources include thermal noise, shot noise and amplifier noise. The thermal noise and the amplifier noise are independent of the signal, and each of the two noise sources can be well modelled by Gaussian distribution [14]. Although its distribution can also be assumed to be Gaussian, the strength of the shot noise depends on the signal itself. Mathematically, the received electrical signal Y at the receiver can be written as [13]

$$Y = rGX + \sqrt{rGX}\,Z\_1 + Z\_0 \tag{1}$$

where $r$ denotes the optoelectronic conversion factor of the PD, $Z\_0 \sim N(0, \sigma^2)$ denotes the input-independent Gaussian noise and $Z\_1 \sim N(0, \varsigma^2\sigma^2)$ denotes the input-dependent Gaussian noise, where $\varsigma^2 \geq 0$ denotes the ratio of the input-dependent noise variance to the input-independent noise variance. $Z\_0$ and $Z\_1$ are independent of each other.

In Eq. (1), G denotes the channel gain between the LED and the PD, which can be expressed as [15]

$$G = \frac{(m+1)A}{2\pi d^2} \cos^m(\phi)\, T(\psi)\, g(\psi) \cos(\psi) \tag{2}$$

where $m$ denotes the order of the Lambertian emission, $A$ is the physical area of the PD and $d$, $\phi$ and $\psi$ are the distance, the angle of irradiance and the angle of incidence from the LED to the PD, respectively. $T(\psi)$ is the gain of an optical filter and $g(\psi)$ is the gain of an optical concentrator.
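As a quick illustration, Eq. (2) can be evaluated directly. The sketch below is mine, not the chapter's; the Lambertian order, PD area and geometry are hypothetical example values, and unit filter/concentrator gains are assumed.

```python
import math

def lambertian_gain(m, area, d, phi, psi, T=1.0, g=1.0):
    """Channel gain G of Eq. (2) for a Lambertian LED and a PD.

    m    : order of the Lambertian emission
    area : physical area A of the PD (m^2)
    d    : LED-to-PD distance (m)
    phi  : angle of irradiance (rad)
    psi  : angle of incidence (rad)
    T, g : optical filter / concentrator gains (assumed 1 here)
    """
    return ((m + 1) * area / (2 * math.pi * d ** 2)
            * math.cos(phi) ** m * T * g * math.cos(psi))

# Example: LED directly above the PD (phi = psi = 0) at 2 m.
G = lambertian_gain(m=1, area=1e-4, d=2.0, phi=0.0, psi=0.0)
```

Doubling the distance reduces the gain by a factor of four, reflecting the $1/d^2$ term in Eq. (2).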

Note that the channel gain in Eq. (2) is a constant when the positions of the LED and the PD are given. Moreover, $r$ in Eq. (1) is a constant for a fixed PD. Without loss of generality, the values of both $G$ and $r$ are set to one. Therefore, Eq. (1) can be simplified as [16]

$$Y = X + \underbrace{\sqrt{X}Z\_1 + Z\_0}\_{\triangleq Z}.\tag{3}$$

Figure 2. The point-to-point VLC system.

Fundamental Analysis for Visible Light Communication with Input‐Dependent Noise http://dx.doi.org/10.5772/68019 147

In VLC, information is transmitted by modulating the instantaneous optical intensity [17], and thus, X should be non-negative, that is,

$$X \geq 0.\tag{4}$$

Due to the eye and skin safety regulations, the peak optical intensity of the LED is limited [17], that is,

$$X \leq A \tag{5}$$

where A is the peak optical intensity of the LED.

Considering the illumination requirement in VLC, the average optical intensity cannot vary arbitrarily; it is set according to the user's requirement (dimming target) [18]. Therefore, the average optical intensity constraint is given by

$$E(X) = \xi P \tag{6}$$

where $E(\cdot)$ denotes the expectation operator and $\xi \in (0, 1]$ denotes the dimming target. $P \leq A$ is the nominal optical intensity of the LED.

#### 3. Mutual information analysis

Mutual information is an important performance indicator for wireless communication systems. In this section, the exact expression of the mutual information and a closed-form expression of a lower bound on the mutual information for VLC will be derived.

#### 3.1. Exact expression of mutual information

Assume that N-ary intensity modulation is employed. Let X ∈{x1, x2, ⋯, xN} be the optical intensity symbol drawn from the equiprobable modulation constellation, that is,

$$\Pr(X = x\_i) = \frac{1}{N}.\tag{7}$$

According to Eq. (3), the conditional probability density function (PDF) of $Y$ given $X = x\_i$ can be written as [19]

$$f\_{Y|X}(y|x\_i) = \frac{1}{\sqrt{2\pi(1+x\_i\varsigma^2)}\,\sigma} \exp\left(-\frac{(y-x\_i)^2}{2(1+x\_i\varsigma^2)\sigma^2}\right). \tag{8}$$

Furthermore, the PDF of Y can be expressed as

$$f\_{Y}(y) = \sum\_{i=1}^{N} \Pr(X = x\_{i}) f\_{Y|X}(y|x\_{i}) = \frac{1}{N} \sum\_{i=1}^{N} \frac{1}{\sqrt{2\pi (1 + x\_{i}\varsigma^{2})}\,\sigma} \exp\left(-\frac{(y - x\_{i})^{2}}{2(1 + x\_{i}\varsigma^{2})\sigma^{2}}\right). \tag{9}$$
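The conditional PDF in Eq. (8) and the mixture in Eq. (9) are straightforward to evaluate numerically. A minimal sketch (function names are mine, not the chapter's; `vr` stands for $\varsigma^2$):

```python
import math

def pdf_y_given_x(y, x, sigma, vr):
    """Eq. (8): Y | X = x is Gaussian with mean x and variance
    (1 + x*vr)*sigma^2, where vr is the input-dependent/independent
    noise variance ratio (varsigma^2)."""
    var = (1.0 + x * vr) * sigma ** 2
    return math.exp(-(y - x) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def pdf_y(y, symbols, sigma, vr):
    """Eq. (9): equiprobable mixture of the N conditional PDFs."""
    return sum(pdf_y_given_x(y, x, sigma, vr) for x in symbols) / len(symbols)
```

As a sanity check, the mixture in Eq. (9) integrates to one over the real line.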

The mutual information between X and Y is given by

$$\begin{aligned}
I(X;Y) &= H(X) - H(X|Y)\\
&= \sum\_{i=1}^{N} \frac{1}{N}\log\_2 N - \sum\_{i=1}^{N}\int\_{-\infty}^{\infty} \frac{1}{N} f\_{Y|X}(y|x\_i)\log\_2\left(\frac{f\_Y(y)}{\Pr(X=x\_i) f\_{Y|X}(y|x\_i)}\right)\mathrm{d}y\\
&= \log\_2 N - \frac{1}{N}\sum\_{i=1}^{N}\underbrace{\int\_{-\infty}^{\infty}\frac{\exp\left(-\frac{(y-x\_i)^2}{2(1+x\_i\varsigma^2)\sigma^2}\right)}{\sqrt{2\pi(1+x\_i\varsigma^2)}\,\sigma}\log\_2\left[\sum\_{t=1}^{N}\frac{\sqrt{1+x\_i\varsigma^2}}{\sqrt{1+x\_t\varsigma^2}}\exp\left(\frac{(y-x\_i)^2}{2(1+x\_i\varsigma^2)\sigma^2}-\frac{(y-x\_t)^2}{2(1+x\_t\varsigma^2)\sigma^2}\right)\right]\mathrm{d}y}\_{\triangleq I\_1}
\end{aligned}\tag{10}$$

where $H(\cdot)$ denotes the entropy.

From Eq. (3), we have $Z = Y - X$. Therefore, letting $z = y - x\_i$, $I\_1$ in Eq. (10) can be further written as

$$\begin{aligned}
I\_1 &= \int\_{-\infty}^{\infty}\frac{\exp\left(-\frac{z^2}{2(1+x\_i\varsigma^2)\sigma^2}\right)}{\sqrt{2\pi(1+x\_i\varsigma^2)}\,\sigma}\log\_2\left[\sum\_{t=1}^{N}\frac{\sqrt{1+x\_i\varsigma^2}}{\sqrt{1+x\_t\varsigma^2}}\exp\left(\frac{z^2}{2(1+x\_i\varsigma^2)\sigma^2}-\frac{(z+x\_i-x\_t)^2}{2(1+x\_t\varsigma^2)\sigma^2}\right)\right]\mathrm{d}z\\
&= E\_Z\left\{\log\_2\left[\sum\_{t=1}^{N}\frac{\sqrt{1+x\_i\varsigma^2}}{\sqrt{1+x\_t\varsigma^2}}\exp\left(\frac{z^2}{2(1+x\_i\varsigma^2)\sigma^2}-\frac{(z+x\_i-x\_t)^2}{2(1+x\_t\varsigma^2)\sigma^2}\right)\right]\right\}.
\end{aligned}\tag{11}$$

Therefore, Eq. (10) can be further written as

$$I(X;Y) = \log\_2 N - \frac{1}{N} \sum\_{i=1}^{N} E\_{Z} \left\{ \log\_2 \left[ 1 + \sum\_{\substack{t=1 \\ t \neq i}}^{N} \frac{\sqrt{1 + x\_i \varsigma^2}}{\sqrt{1 + x\_t \varsigma^2}} \exp \left( \frac{z^2}{2(1 + x\_i \varsigma^2)\sigma^2} - \frac{(z + x\_i - x\_t)^2}{2(1 + x\_t \varsigma^2)\sigma^2} \right) \right] \right\}. \tag{12}$$

Remark 1: Let the average SNR be $\gamma = \xi P/[(1 + \xi P\varsigma^2)\sigma^2]$. Because $\xi$, $P$ and $\varsigma$ are non-negative and finite, $\gamma \to \infty$ (or $0$) is equivalent to $\sigma^2 \to 0$ (or $\infty$). Apparently, $I(X;Y)$ in Eq. (12) is a monotonically increasing function of $\gamma$. Therefore, we have

$$\lim\_{\gamma\to\infty} I(X;Y) = \log\_2 N \tag{13}$$

which indicates that the maximum value of $I(X;Y)$ is $\log\_2 N$.
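Although Eq. (12) has no closed form, the expectation over $Z$ is straightforward to estimate by Monte Carlo. The following sketch is mine (the helper name and the test constellations are assumptions, not from the chapter); it samples $z \sim N(0, (1+x\_i\varsigma^2)\sigma^2)$ for each symbol and averages the bracketed logarithm:

```python
import math
import random

def mutual_information_mc(symbols, sigma, vr, trials=20000, seed=1):
    """Monte Carlo estimate of I(X;Y) in Eq. (12) for an equiprobable
    constellation; vr stands for varsigma^2."""
    rng = random.Random(seed)
    N = len(symbols)
    info = math.log2(N)
    for xi in symbols:
        vi = (1.0 + xi * vr) * sigma ** 2   # conditional noise variance given x_i
        acc = 0.0
        for _ in range(trials):
            z = rng.gauss(0.0, math.sqrt(vi))
            s = 1.0                          # the t = i term of the inner sum
            for xt in symbols:
                if xt == xi:
                    continue
                vt = (1.0 + xt * vr) * sigma ** 2
                s += math.sqrt((1.0 + xi * vr) / (1.0 + xt * vr)) * math.exp(
                    z ** 2 / (2.0 * vi) - (z + xi - xt) ** 2 / (2.0 * vt))
            acc += math.log2(s)
        info -= acc / (N * trials)
    return info
```

Consistent with Remark 1, the estimate approaches $\log\_2 N$ at high SNR and $0$ at low SNR.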

Moreover, we have


$$\lim\_{\gamma \to 0} I(\mathbf{X}; \mathbf{Y}) = \log\_2 \mathbf{N} - \frac{1}{N} \sum\_{i=1}^{N} \log\_2 \left( 1 + \sum\_{\substack{t=1\\t \neq i}}^{N} \frac{\sqrt{1 + \mathbf{x}\_i \varsigma^2}}{\sqrt{1 + \mathbf{x}\_t \varsigma^2}} \right) \tag{14}$$

Remark 2: When $\varsigma = 0$, Eq. (3) reduces to $Y = X + Z\_0$. Therefore, the mutual information can be simplified as

$$I(X;Y)|\_{\varsigma=0} = \log\_2 N - \frac{1}{N} \sum\_{i=1}^{N} E\_{Z} \left\{ \log\_2 \left[ 1 + \sum\_{\substack{t=1\\t \neq i}}^{N} \exp \left( \frac{z^2 - (z + x\_i - x\_t)^2}{2\sigma^2} \right) \right] \right\}. \tag{15}$$

#### 3.2. Lower bound on mutual information

It should be noted that it is very hard to derive a closed-form expression of Eq. (12). In this subsection, a lower bound on the mutual information will be derived.

To facilitate the description, Eq. (12) can be further expressed as

$$I(X;Y) = \log\_2 N - \frac{1}{N}\sum\_{i=1}^{N}\underbrace{E\_Z\left\{\log\_2\left[\exp\left(\frac{z^2}{2(1+x\_i\varsigma^2)\sigma^2}\right)\right]\right\}}\_{I\_2} - \frac{1}{N}\sum\_{i=1}^{N}\underbrace{E\_Z\left\{\log\_2\left[\sum\_{t=1}^{N}\frac{\sqrt{1+x\_i\varsigma^2}}{\sqrt{1+x\_t\varsigma^2}}\exp\left(-\frac{(z+x\_i-x\_t)^2}{2(1+x\_t\varsigma^2)\sigma^2}\right)\right]\right\}}\_{I\_3}. \tag{16}$$

For $I\_2$ in Eq. (16), we have

$$\begin{aligned} I\_{2} &= \frac{\log\_{2}(e)}{2(1+x\_{i}\varsigma^{2})\sigma^{2}} \int\_{-\infty}^{+\infty} z^{2}\, \frac{\exp\left(-\frac{z^{2}}{2(1+x\_{i}\varsigma^{2})\sigma^{2}}\right)}{\sqrt{2\pi(1+x\_{i}\varsigma^{2})}\,\sigma}\, \mathrm{d}z \\ &= \frac{\log\_{2}(e)}{2(1+x\_{i}\varsigma^{2})\sigma^{2}}(1+x\_{i}\varsigma^{2})\sigma^{2} \\ &= \frac{1}{2}\log\_{2}(e). \end{aligned} \tag{17}$$

Using Jensen's inequality for concave functions, an upper bound on $I\_3$ in Eq. (16) can be written as

$$\begin{aligned}
I\_3 &= E\_Z\left\{\log\_2\left[\sum\_{t=1}^{N}\frac{\sqrt{1+x\_i\varsigma^2}}{\sqrt{1+x\_t\varsigma^2}}\exp\left(-\frac{(z+x\_i-x\_t)^2}{2(1+x\_t\varsigma^2)\sigma^2}\right)\right]\right\}\\
&\leq \log\_2\left(\sum\_{t=1}^{N}\frac{\sqrt{1+x\_i\varsigma^2}}{\sqrt{1+x\_t\varsigma^2}}E\_Z\left\{\exp\left(-\frac{(z+x\_i-x\_t)^2}{2(1+x\_t\varsigma^2)\sigma^2}\right)\right\}\right)\\
&= \log\_2\left(\sum\_{t=1}^{N}\frac{\sqrt{1+x\_i\varsigma^2}}{\sqrt{1+x\_t\varsigma^2}}\int\_{-\infty}^{+\infty}\frac{\exp\left(-\frac{(z+x\_i-x\_t)^2+z^2}{2(1+x\_t\varsigma^2)\sigma^2}\right)}{\sqrt{2\pi(1+x\_t\varsigma^2)\sigma^2}}\,\mathrm{d}z\right)\\
&= \log\_2\left(\sum\_{t=1}^{N}\frac{\sqrt{1+x\_i\varsigma^2}}{\sqrt{2(1+x\_t\varsigma^2)}}\exp\left(-\frac{(x\_i-x\_t)^2}{4(1+x\_t\varsigma^2)\sigma^2}\right)\right).
\end{aligned}\tag{18}$$

Substituting Eqs. (17) and (18) into Eq. (16), a lower bound on $I(X;Y)$ can be derived as

$$\begin{aligned} I\_{\mathrm{Low}}(X;Y) = \log\_2 N &- \frac{1}{2}\log\_2(e) + \frac{1}{2}\\ &- \frac{1}{N}\sum\_{i=1}^{N}\log\_2\left(1 + \sum\_{\substack{t=1\\t\neq i}}^{N}\frac{\sqrt{1+x\_i\varsigma^2}}{\sqrt{1+x\_t\varsigma^2}}\exp\left(-\frac{(x\_i-x\_t)^2}{4(1+x\_t\varsigma^2)\sigma^2}\right)\right).\end{aligned}\tag{19}$$

Remark 3: Obviously, $I\_{\mathrm{Low}}(X;Y)$ in Eq. (19) is a monotonically increasing function of $\gamma$. Therefore, we have

$$\lim\_{\gamma \to \infty} I\_{\text{Low}}(X; Y) = \log\_2 N - \frac{1}{2} \log\_2(e) + \frac{1}{2} \tag{20}$$

which indicates that the maximum value of $I\_{\mathrm{Low}}(X;Y)$ is $\log\_2 N - \log\_2(e)/2 + 1/2$. Moreover, we have

$$\lim\_{\gamma \to 0} I\_{\text{Low}}(\mathbf{X}; Y) = \log\_2 N - \frac{1}{2} \log\_2(e) + \frac{1}{2} - \frac{1}{N} \sum\_{i=1}^{N} \log\_2 \left( 1 + \sum\_{\substack{t=1 \\ t \neq i}}^{N} \frac{\sqrt{1 + \mathbf{x}\_i \boldsymbol{\varsigma}^2}}{\sqrt{1 + \mathbf{x}\_t \boldsymbol{\varsigma}^2}} \right) \tag{21}$$

Remark 4: According to Eqs. (13) and (20), we have

$$\lim\_{\gamma \to \infty} I(\mathbf{X}; \mathbf{Y}) - \lim\_{\gamma \to \infty} I\_{\text{Low}}(\mathbf{X}; \mathbf{Y}) = \frac{1}{2} [\log\_2(e) - 1]. \tag{22}$$

Similarly, from Eqs. (14) and (21), we have

$$\lim\_{\gamma \to 0} I(\mathbf{X}; \mathbf{Y}) - \lim\_{\gamma \to 0} I\_{\text{Low}}(\mathbf{X}; \mathbf{Y}) = \frac{1}{2} [\log\_2(e) - 1]. \tag{23}$$

From Eqs. (22) and (23), it can be concluded that a constant performance gap of $[\log\_2(e) - 1]/2$ exists between $I(X;Y)$ and $I\_{\mathrm{Low}}(X;Y)$ in the low and high SNR regions.
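Unlike Eq. (12), the bound in Eq. (19) is closed form, so the high-SNR limit of Eq. (20) and the constant gap of Remark 4 can be checked numerically. A direct transcription (helper name mine; `vr` stands for $\varsigma^2$, and the binary constellation below is only an example):

```python
import math

def mi_lower_bound(symbols, sigma, vr):
    """Closed-form lower bound I_Low(X;Y) of Eq. (19)."""
    N = len(symbols)
    total = 0.0
    for xi in symbols:
        s = 1.0                              # the t = i term contributes 1
        for xt in symbols:
            if xt == xi:
                continue
            s += (math.sqrt((1.0 + xi * vr) / (1.0 + xt * vr))
                  * math.exp(-(xi - xt) ** 2 / (4.0 * (1.0 + xt * vr) * sigma ** 2)))
        total += math.log2(s)
    return math.log2(N) - 0.5 * math.log2(math.e) + 0.5 - total / N

# High SNR (Eq. 20): the bound saturates at log2 N - log2(e)/2 + 1/2,
# i.e. [log2(e) - 1]/2 (about 0.22 bits) below the true maximum log2 N.
high_snr_limit = mi_lower_bound([0.0, 1.0], sigma=1e-6, vr=1.0)
```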

Remark 5: When $\varsigma = 0$, $I\_{\mathrm{Low}}(X;Y)$ can be simplified as

$$\begin{aligned} I\_{\mathrm{Low}}(X;Y)\big|\_{\varsigma=0} = \log\_2 N &- \frac{1}{2}\log\_2(e) + \frac{1}{2}\\ &- \frac{1}{N}\sum\_{i=1}^{N}\log\_2\left(1 + \sum\_{\substack{t=1\\t\neq i}}^{N}\exp\left(-\frac{(x\_i-x\_t)^2}{4\sigma^2}\right)\right).\end{aligned}\tag{24}$$

#### 4. BER analysis

In this section, the BER of VLC with input-dependent noise is analysed. To facilitate the analysis, on-off keying (OOK) is employed as the modulation scheme. Suppose that the transmitted optical signal is drawn equiprobably from the OOK constellation and that $2\xi P \leq A$ always holds; then we have

$$X \in \{0, 2\xi P\}. \tag{25}$$

Therefore, the BER for the VLC with OOK can be written as

$$BER = \Pr(\text{off})\Pr(\text{on}|\text{off}) + \Pr(\text{on})\Pr(\text{off}|\text{on}) \tag{26}$$

where $\Pr(\mathrm{on})$ and $\Pr(\mathrm{off})$ are the probabilities of sending "on" and "off" bits, respectively. Because the transmitted symbols are drawn equiprobably, $\Pr(\mathrm{on}) = \Pr(\mathrm{off}) = 0.5$. $\Pr(\mathrm{on}|\mathrm{off})$ and $\Pr(\mathrm{off}|\mathrm{on})$ are the conditional bit error probabilities when the transmitted bit is "off" and "on," respectively.

According to Eq. (8), $\Pr(\mathrm{off}|\mathrm{on})$ can be written as

$$\begin{aligned} \Pr(\text{off}|\text{on}) &= \Pr(y < \xi P|\text{on}) \\ &= \int\_{-\infty}^{\xi P} \frac{1}{\sqrt{2\pi (1 + 2\xi P \varsigma^2)}\,\sigma} e^{-\frac{(y - 2\xi P)^2}{2(1 + 2\xi P \varsigma^2) \sigma^2}} dy \\ &= \mathcal{Q} \left( \frac{\xi P}{\sqrt{1 + 2\xi P \varsigma^2}\,\sigma} \right) \end{aligned} \tag{27}$$

where $\mathcal{Q}(x)$ is the Gaussian Q-function.

Moreover, $\Pr(\mathrm{on}|\mathrm{off})$ can be similarly written as

$$\begin{aligned} \Pr(\text{on}|\text{off}) &= \Pr(y > \xi P|\text{off}) \\ &= \int\_{\xi P}^{\infty} \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{y^2}{2\sigma^2}} dy \\ &= \mathcal{Q}\left(\frac{\xi P}{\sigma}\right). \end{aligned} \tag{28}$$

Therefore, the BER can be finally written as

$$BER = \frac{1}{2} \left[ \mathcal{Q} \left( \frac{\xi P}{\sqrt{1 + 2\xi P \varsigma^2}\,\sigma} \right) + \mathcal{Q} \left( \frac{\xi P}{\sigma} \right) \right]. \tag{29}$$

Remark 6: Let the average SNR be $\gamma = \xi P/[(1 + \xi P\varsigma^2)\sigma^2]$. Because $\xi$, $P$ and $\varsigma$ are non-negative and finite, $\gamma \to \infty$ (or $0$) is equivalent to $\sigma^2 \to 0$ (or $\infty$). Apparently, the BER in Eq. (29) is a monotonically decreasing function of $\gamma$. Therefore, we have

$$\lim\_{\gamma \to \infty} BER = 0\tag{30}$$

$$\lim\_{\gamma \to 0} BER = \frac{1}{2} \tag{31}$$

This indicates that the minimum BER and the maximum BER are 0 and 0.5, respectively.

Remark 7: When $\varsigma = 0$, the BER can be simplified as

$$BER|\_{\varsigma=0} = \mathcal{Q}\left(\frac{\xi P}{\sigma}\right). \tag{32}$$
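Eq. (29) is directly computable once the Q-function is available. The sketch below (function names are mine) implements it via the complementary error function and reproduces the limits in Remark 6:

```python
import math

def q_func(x):
    """Gaussian Q-function: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ook_ber(xi, P, sigma, vr):
    """Theoretical OOK BER of Eq. (29); vr stands for varsigma^2.

    xi    : dimming target (xi in (0, 1])
    P     : nominal optical intensity
    sigma : input-independent noise standard deviation
    """
    a = xi * P  # average optical intensity; the 'on' level is 2*a
    return 0.5 * (q_func(a / (math.sqrt(1.0 + 2.0 * a * vr) * sigma))
                  + q_func(a / sigma))
```

With $\varsigma = 0$ the two Q terms coincide and Eq. (29) collapses to Eq. (32).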

#### 5. Numerical results

In this section, representative numerical results are presented to verify the derived theoretical expressions for the mutual information, its lower bound, and the BER.

#### 5.1. Results of mutual information

Figure 3 shows the mutual information (i.e., $I(X;Y)$ in Eq. (12)) and its lower bound (i.e., $I_{\mathrm{Low}}(X;Y)$ in Eq. (19)) versus the SNR for different modulation orders $N$. In the simulation, without loss of generality, $\xi$, $P$ and $\varsigma$ are set to one. As can be seen in Figure 3, $I(X;Y)$ and $I_{\mathrm{Low}}(X;Y)$ are monotonically increasing functions of the SNR. Moreover, both increase with $N$. It can also be found that the maximum value of $I(X;Y)$ is $\log_2 N$, and the maximum value of $I_{\mathrm{Low}}(X;Y)$ is $\log_2 N - \log_2(e)/2 + 1/2$, which coincides with Remark 1. Furthermore, the gap between $I(X;Y)$ and $I_{\mathrm{Low}}(X;Y)$ is $(\log_2 e - 1)/2$ bits in the low and high SNR regions, which coincides with Remark 4.

Figure 4 shows the mutual information (i.e., $I(X;Y)$ in Eq. (12)) and its lower bound (i.e., $I_{\mathrm{Low}}(X;Y)$ in Eq. (19)) versus the dimming target $\xi$ for different $\varsigma$. In the simulation, $P$ is set to one, $\gamma = 20\,\mathrm{dB}$ and $N = 4$. As can be seen, when $\varsigma = 1$ and $\varsigma = 10$, $I(X;Y)$ and $I_{\mathrm{Low}}(X;Y)$ increase with $\xi$, while they do not change with $\xi$ when $\varsigma = 0$. Moreover, $I(X;Y)$ and $I_{\mathrm{Low}}(X;Y)$ are both monotonically increasing functions of $\varsigma$.

Figure 3. Mutual information and its lower bound versus SNR with different N.


Figure 4. Mutual information and its lower bound versus dimming target ξ with different ς.

Figure 5. BER versus ς with different ξ.

Figure 6. BER versus SNR with different ς.

#### 5.2. Results of BER

Figure 5 shows the BER versus $\varsigma$ for different dimming targets $\xi$. In the simulation, both $P$ and $\sigma$ are set to one. It can be seen that the best BER performance is achieved when $\varsigma = 0$, which indicates that a system with only input-independent noise outperforms one with input-dependent noise. Moreover, the BER performance degrades as $\varsigma$ increases. Furthermore, the BER decreases as $\xi$ increases, which indicates that the system performance improves. In addition, the theoretical results show close agreement with the Monte-Carlo simulation results, which verifies the correctness of the derived theoretical expression of the BER.
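A Monte-Carlo comparison of this kind can be reproduced with a short simulation. The sketch below (assumed helper names, not the chapter's code) draws equiprobable on/off bits from the channel model behind Eqs. (27)–(28) — an "on" bit is received as $\mathcal{N}(2\xi P, (1+2\xi P\varsigma^2)\sigma^2)$, an "off" bit as $\mathcal{N}(0, \sigma^2)$, with detection threshold $\xi P$ — and compares the empirical error rate against Eq. (29):

```python
import math
import random

def gaussian_q(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_theory(xi, P, varsigma, sigma):
    """Closed-form BER of Eq. (29)."""
    return 0.5 * (gaussian_q(xi * P / (math.sqrt(1.0 + 2.0 * xi * P * varsigma ** 2) * sigma))
                  + gaussian_q(xi * P / sigma))

def ber_monte_carlo(xi, P, varsigma, sigma, n_bits=200_000, seed=7):
    """Empirical OOK BER under the input-dependent Gaussian noise model."""
    rng = random.Random(seed)
    std_on = math.sqrt(1.0 + 2.0 * xi * P * varsigma ** 2) * sigma
    errors = 0
    for _ in range(n_bits):
        if rng.random() < 0.5:
            # "on" bit: mean 2*xi*P, noise variance inflated by the signal
            errors += rng.gauss(2.0 * xi * P, std_on) < xi * P
        else:
            # "off" bit: mean 0, input-independent noise only
            errors += rng.gauss(0.0, sigma) > xi * P
    return errors / n_bits
```

With $2\times 10^5$ bits the empirical BER typically agrees with Eq. (29) to within a few times $10^{-3}$, mirroring the close agreement reported for Figure 5.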

Figure 6 shows the BER versus the SNR for different $\varsigma$. It can be observed that the BER decreases as the SNR increases, since a larger SNR yields better detection performance. Moreover, in the low SNR region the curve with $\varsigma = 10$ achieves the best BER performance, whereas in the high SNR region the curve with $\varsigma = 0$ performs best. Once again, the gap between the theoretical and simulation results is small enough to be ignored, which verifies the accuracy of the derived theoretical expression of the BER.
