**3. General theorems**

First, consider the case where the random vectors {S_n}_{n≥1} are formed as growing sums of independent random variables. Namely, let X_1, X_2, … be independent ℝ^r-valued random vectors, and for n ∈ ℕ let

$$S_n = X_1 + \dots + X_n.$$

Consider a sequence of positive integer-valued random variables {N_n}_{n≥1} such that for each n ≥ 1 the random variable N_n is independent of the sequence {S_k}_{k≥1}. Let {b_n}_{n≥1} be an infinitely increasing sequence of positive numbers such that

$$
\mathcal{L}\left(\frac{S_n}{\sqrt{b_n}}\right) \Rightarrow \Phi_\Sigma \tag{7}
$$

as n → ∞, where Σ is some positive definite matrix.

Let f g *dn <sup>n</sup>*≥<sup>1</sup> be an infinitely increasing sequence of positive numbers. As *Zn* take the scalar normalized random vector

$$Z\_n = \frac{\mathcal{S}\_{N\_n}}{\sqrt{d\_n}}.$$

Theorem 2. *Let N_n → ∞ in probability as n → ∞. Assume that the random variables X_1, X_2, … satisfy condition (6) with an asymptotic covariance matrix Σ. Then a distribution F such that*

$$L(Z\_n) \Rightarrow F \qquad (n \to \infty),\tag{8}$$


Pð Þ *Nn* ¼ *k* P

exists if and only if there exists a distribution function V(x) satisfying the conditions

$$\text{i. } V(x) = 0 \text{ for } x < 0;$$

ii. *for any* A ∈ ℬ(ℝ^r),

$$F(A) = \mathrm{E}\Phi_{U\Sigma}(A) = \int_0^\infty \Phi_{u\Sigma}(A)\,dV(u);$$

$$\text{iii.}\,P(b\_{N\_n} < d\_n \mathbf{x}) \Rightarrow V(\mathbf{x}), n \to \infty.$$

Proof. *The "if" part*. We will essentially exploit Theorem 1. For each n ≥ 1, set a_n = c_n = 0, B_n = √b_n·I_r, D_n = √d_n·I_r. For the convenience of notation, introduce a random variable U with the distribution function V(x). Note that the conditions of the theorem guarantee the tightness of the sequence of random variables

$$\|D_n^{-1}B_{N_n}\| = \sqrt{\frac{b_{N_n}}{d_n}}, \quad n = 1, 2, \dots$$

implied by its weak convergence to the random variable √U. Further, in the case under consideration we have W_n(g) = √(b_{N_n}/d_n)·g, g ∈ ℝ^r. Therefore, the condition b_{N_n}/d_n ⇒ U implies W_n(g) ⇒ √U·g for all g ∈ ℝ^r. Condition (7) means that in the case under consideration H = Φ_Σ. Hence, by Theorem 1, Z_n ⇒ √U·Y, where Y is a random element with the distribution Φ_Σ independent of the random variable U. It is easy to see that the distribution of the random element √U·Y coincides with EΦ_{UΣ}(·), where the matrix Σ satisfies (7).
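The last claim follows by conditioning on U: since Y is independent of U and, for each fixed u ≥ 0, the vector √u·Y has the distribution Φ_{uΣ}, we have

$$\mathrm{P}\left(\sqrt{U}\,Y \in A\right) = \int_0^\infty \mathrm{P}\left(\sqrt{u}\,Y \in A\right) dV(u) = \int_0^\infty \Phi_{u\Sigma}(A)\,dV(u) = \mathrm{E}\Phi_{U\Sigma}(A), \quad A \in \mathcal{B}(\mathbb{R}^r).$$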

*The "only if" part*. Let condition (8) hold. Make sure that the sequence {∥D_n^{-1}B_{N_n}∥}_{n≥1} is tight. Let Y be a random element with the distribution Φ_Σ. There exist δ > 0 and R > 0 such that

$$\mathbb{P}(\|Y\| > R) > \delta. \tag{9}$$

For *R* specified above and an arbitrary *x* > 0, we have

$$\begin{split} \mathrm{P}(\|Z_n\| > x) \ge{} & \mathrm{P}\left(\left\|\frac{S_{N_n}}{\sqrt{d_n}}\right\| > x;\ \left\|\frac{S_{N_n}}{\sqrt{b_{N_n}}}\right\| > R\right) = \\ ={} & \mathrm{P}\left(\sqrt{\frac{b_{N_n}}{d_n}} > x\cdot\left\|\frac{S_{N_n}}{\sqrt{b_{N_n}}}\right\|^{-1};\ \left\|\frac{S_{N_n}}{\sqrt{b_{N_n}}}\right\| > R\right) \ge \mathrm{P}\left(\sqrt{\frac{b_{N_n}}{d_n}} > \frac{x}{R};\ \left\|\frac{S_{N_n}}{\sqrt{b_{N_n}}}\right\| > R\right) = \\ ={} & \sum_{k=1}^{\infty} \mathrm{P}(N_n = k)\,\mathrm{P}\left(\sqrt{\frac{b_k}{d_n}} > \frac{x}{R};\ \left\|\frac{S_k}{\sqrt{b_k}}\right\| > R\right) = \sum_{k=1}^{\infty} \mathrm{P}(N_n = k)\,\mathrm{P}\left(\sqrt{\frac{b_k}{d_n}} > \frac{x}{R}\right)\mathrm{P}\left(\left\|\frac{S_k}{\sqrt{b_k}}\right\| > R\right) \end{split} \tag{10}$$

*From Asymptotic Normality to Heavy-Tailedness via Limit Theorems for Random Sums… DOI: http://dx.doi.org/10.5772/intechopen.89659*

(the last equality holds since any constant is independent of any random variable). Since by (7) the convergence S_k/√b_k ⇒ Y takes place as k → ∞, it follows from (9) that there exists a number k_0 = k_0(R, δ) such that

$$\mathrm{P}\left(\left\|\frac{S_k}{\sqrt{b_k}}\right\| > R\right) > \delta/2$$

for all *k* > *k*0. Therefore, continuing (10) we obtain

$$\begin{split} \mathrm{P}(\|Z_n\| > x) \ge{} & \frac{\delta}{2} \sum_{k=k_0+1}^{\infty} \mathrm{P}(N_n = k)\,\mathrm{P}\left(\sqrt{\frac{b_k}{d_n}} > \frac{x}{R}\right) = \\ ={} & \frac{\delta}{2}\left[\mathrm{P}\left(\sqrt{\frac{b_{N_n}}{d_n}} > \frac{x}{R}\right) - \sum_{k=1}^{k_0} \mathrm{P}(N_n = k)\,\mathrm{P}\left(\sqrt{\frac{b_k}{d_n}} > \frac{x}{R}\right)\right] \ge \frac{\delta}{2}\left[\mathrm{P}\left(\sqrt{\frac{b_{N_n}}{d_n}} > \frac{x}{R}\right) - \mathrm{P}(N_n \le k_0)\right]. \end{split}$$

Hence,


*Probability, Combinatorics and Control*


$$\mathrm{P}\left(\sqrt{\frac{b_{N_n}}{d_n}} > \frac{x}{R}\right) \le \frac{2}{\delta}\,\mathrm{P}(\|Z_n\| > x) + \mathrm{P}(N_n \le k_0). \tag{11}$$

From the condition N_n → ∞ in probability as n → ∞, it follows that for any ϵ > 0 there exists an n_0 = n_0(ϵ) such that P(N_n ≤ k_0) < ϵ for all n ≥ n_0. Therefore, with the account of the tightness of the sequence {Z_n}_{n≥1} that follows from its weak convergence to the random element Z with L(Z) = F implied by (8), relation (11) implies

$$\lim_{x \to \infty} \sup_{n \ge n_0(\epsilon)} \mathrm{P}\left(\sqrt{\frac{b_{N_n}}{d_n}} > \frac{x}{R}\right) \le \epsilon, \tag{12}$$

whatever ϵ > 0 is. Now assume that the sequence

$$\|D_n^{-1}B_{N_n}\| = \sqrt{\frac{b_{N_n}}{d_n}}, \quad n = 1, 2, \dots$$

is not tight. In that case, there exist an α > 0, a sequence 𝒩 of natural numbers, and a sequence {x_n}_{n∈𝒩} of real numbers satisfying the conditions x_n ↑ ∞ (n → ∞, n ∈ 𝒩) and

$$\mathrm{P}\left(\sqrt{\frac{b_{N_n}}{d_n}} > x_n\right) > \alpha, \quad n \in \mathcal{N}. \tag{13}$$

But, according to (12), for any ϵ > 0 there exist M = M(ϵ) and n_0 = n_0(ϵ) such that

$$\sup_{n \ge n_0(\epsilon)} \mathrm{P}\left(\sqrt{\frac{b_{N_n}}{d_n}} > M(\epsilon)\right) \le 2\epsilon. \tag{14}$$

Choose ϵ < α/2, where α is the number from (13). Then, for all n ∈ 𝒩 large enough, in accordance with (13), the inequality opposite to (14) must hold. The obtained contradiction, by the Prokhorov theorem, proves the tightness of the sequence {∥D_n^{-1}B_{N_n}∥}_{n≥1} or, which in this case is the same, of the sequence {b_{N_n}/d_n}_{n≥1}.

Introduce the set 𝒲(Z) containing all nonnegative random variables U such that P(Z ∈ A) = EΦ_{UΣ}(A) for any A ∈ ℬ(ℝ^r). Let L(·, ·) be any probability metric that metrizes weak convergence in the space of random variables or, which is the same in this context, in the space of distribution functions, say, the Lévy metric or the smoothed Kolmogorov distance. If X_1 and X_2 are random variables with the distribution functions F_1 and F_2, respectively, then we identify L(X_1, X_2) and L(F_1, F_2). Show that there exists a sequence of random variables {U_n}_{n≥1}, U_n ∈ 𝒲(Z), such that

$$L\left(\frac{b\_{N\_n}}{d\_n}, U\_n\right) \to 0 \; (n \to \infty). \tag{15}$$


Denote

$$\beta\_n = \inf \left\{ L\left(\frac{b\_{N\_n}}{d\_n}, U\right) \, : \, U \in \mathcal{W}(Z) \right\}.$$

Prove that β_n → 0 as n → ∞. Assume the contrary. In that case, β_n ≥ δ for some δ > 0 and all n from some subsequence 𝒩 of natural numbers. Choose a subsequence 𝒩_1 ⊆ 𝒩 so that the sequence {b_{N_n}/d_n}_{n∈𝒩_1} weakly converges to a random variable U (this is possible due to the tightness of the family {b_{N_n}/d_n}_{n≥1} established above). But then W_n(g) ⇒ √U·g (n → ∞, n ∈ 𝒩_1) for any g ∈ ℝ^r. Applying Theorem 1 to n ∈ 𝒩_1 with condition (7) playing the role of condition (3), we make sure that U ∈ 𝒲(Z), since condition (8) provides the coincidence of the limits of all weakly convergent subsequences. So, we arrive at a contradiction to the assumption that β_n ≥ δ for all n ∈ 𝒩_1. Hence, β_n → 0 as n → ∞.

For any n = 1, 2, …, choose a random variable U_n from 𝒲(Z) satisfying the condition

$$L\left(\frac{b\_{N\_n}}{d\_n}, U\_n\right) \le \beta\_n + \frac{1}{n}.$$

This sequence obviously satisfies condition (15). Now consider the structure of the set 𝒲(Z). This set contains all the random variables defining the family of special mixtures of multivariate normal laws considered in Lemma 1, according to which this family is identifiable. So, whatever the random element Z is, the set 𝒲(Z) contains at most one element. Therefore, condition (15) is actually equivalent to

$$\frac{b\_{N\_n}}{d\_n} \Rightarrow U \qquad (n \to \infty),$$

that is, to condition (iii) of the theorem. The theorem is proved.
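Theorem 2 can be illustrated numerically. The sketch below is a minimal Monte Carlo check, not taken from the text: all concrete choices (standard normal summands, b_n = d_n = n, a geometric index N_n with mean n) are assumptions made for this example. With them, b_{N_n}/d_n = N_n/n ⇒ U with U standard exponential, so condition (iii) holds with V the Exp(1) distribution function, and the limit F is the normal scale mixture EΦ_{UΣ}, which in one dimension is the Laplace law with variance 1.

```python
import numpy as np

# Illustrative check of Theorem 2 (all parameters are assumptions for this
# sketch): X_i i.i.d. N(0,1), b_n = d_n = n, N_n geometric with mean n.
rng = np.random.default_rng(0)

n = 200      # index of Z_n
m = 20_000   # Monte Carlo replications

# geometric N_n with success probability 1/n has mean n,
# and N_n/n converges weakly to a standard exponential U
N = rng.geometric(1.0 / n, size=m)

# given N_n, the sum S_{N_n} of standard normal terms is N(0, N_n),
# so Z_n = S_{N_n}/sqrt(n) can be sampled directly as sqrt(N_n/n) * G
Z = rng.standard_normal(m) * np.sqrt(N / n)

# the Laplace limit has variance 1 and excess kurtosis 3;
# a normal limit would give excess kurtosis 0
var = float(np.mean(Z**2))
kurt = float(np.mean(Z**4) / np.mean(Z**2) ** 2 - 3.0)
print(round(var, 2), round(kurt, 2))
```

The sample excess kurtosis should land near the Laplace value 3 rather than the normal value 0: random summation has turned an asymptotically normal sum into a heavy-tailed one, which is exactly the effect described by the mixture in condition (ii).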

Corollary 1. *Under the conditions of Theorem 2, the non-randomly normalized random sums S_{N_n}/√d_n are asymptotically normal with some covariance matrix Σ′ if and only if there exists a number c > 0 such that*

$$\frac{b\_{N\_n}}{d\_n} \Rightarrow c \quad (n \to \infty).$$

Moreover, in this case, Σ′ = cΣ.

This statement immediately follows from Theorem 2 with the account of Lemma 1.
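For contrast, here is a hypothetical case covered by Corollary 1 (again, all concrete choices are illustrative assumptions): keep b_n = d_n = n but let N_n be Binomial(2n, 1/2), so that b_{N_n}/d_n = N_n/n ⇒ c = 1 and no heavy tails appear.

```python
import numpy as np

# Illustrative check of Corollary 1: N_n/n concentrates at c = 1, so
# S_{N_n}/sqrt(n) stays asymptotically normal with variance c*Sigma = 1.
rng = np.random.default_rng(1)

n, m = 200, 20_000
N = rng.binomial(2 * n, 0.5, size=m)         # E[N_n] = n, Var(N_n/n) -> 0
Z = rng.standard_normal(m) * np.sqrt(N / n)  # S_{N_n}/sqrt(n) given N_n

# a normal limit has excess kurtosis 0
excess_kurt = float(np.mean(Z**4) / np.mean(Z**2) ** 2 - 3.0)
print(round(excess_kurt, 2))
```

Here the excess kurtosis should stay near 0, matching the normal limit with Σ′ = cΣ for c = 1.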


Now consider a formally more general setting.


Let *N*1, *N*2, … and *W*1,*W*2, … be random variables and random vectors, respectively, such that for each *n*≥1 the random variable *Nn* takes only natural values and is independent of the sequence *W*1,*W*2, … . Let

$$T\_n = T\_n(W\_1, \dots, W\_n) = (T\_{n,1}(W\_1, \dots, W\_n), \dots, T\_{n,r}(W\_1, \dots, W\_n))$$

be a statistic taking values in ℝ^r, r ≥ 1. For each n ≥ 1 define a random vector (random element) T_{N_n} by setting

$$T_{N_n}(\omega) = T_{N_n(\omega)}\left(W_1(\omega), \dots, W_{N_n(\omega)}(\omega)\right)$$

for every elementary outcome *ω*∈ Ω.

We shall say that a statistic *Tn* is *asymptotically normal* with the asymptotic covariance matrix Σ if there exists a non-random *r*-dimensional vector *t* such that

$$
\mathcal{L}\left(\sqrt{n}(T\_n - t)\right) \Rightarrow \Phi\_\Sigma \quad (n \to \infty). \tag{16}
$$

Examples of asymptotically normal statistics are well known. Under certain conditions, the property of asymptotic normality is inherent in maximum likelihood estimators, sample moments, sample quantiles, etc.

Our nearest aim is to describe the asymptotic behavior of the random elements T_{N_n}, that is, of statistics constructed from samples with random sizes N_n.

Again let {d_n}_{n≥1} be an infinitely increasing sequence of positive numbers. Now set

$$Z\_n = \sqrt{d\_n}(T\_{N\_n} - t).$$

Theorem 3. *Let N_n → ∞ in probability as n → ∞. Assume that the statistic T_n is asymptotically normal in the sense of (16) with an asymptotic covariance matrix Σ. Then a distribution F such that*

$$
\mathcal{L}(Z\_n) \Rightarrow F \qquad (n \to \infty),
$$

exists if and only if there exists a distribution function V(x) satisfying the conditions

(i) V(x) = 0 *for* x < 0; (ii) *for any* A ∈ ℬ(ℝ^r),

$$F(A) = \int_0^\infty \Phi_{u^{-1}\Sigma}(A)\,dV(u);$$

$$(\text{iii)} \text{ P}(N\_n < d\_n \mathfrak{x}) \Rightarrow V(\mathfrak{x}), n \to \infty.$$

The proof of Theorem 3 relies on Theorem 1, with (16) playing the role of (3), and on Lemma 1, and it differs from the proof of Theorem 2 only in that b_{N_n}/d_n is replaced by d_n/N_n.

Corollary 2. *Under the conditions of Theorem 3, the statistic T_{N_n} is asymptotically normal with some covariance matrix Σ′ if and only if there exists a number c > 0 such that*

$$\frac{N\_n}{d\_n} \Rightarrow c \quad (n \to \infty).$$

Moreover, in this case, Σ′ = c⁻¹Σ.

This statement immediately follows from Theorem 3 with the account of Lemma 1.
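As a closing illustration of Theorem 3 (with hypothetical parameters, not taken from the text): take T_n to be the sample mean of i.i.d. N(0,1) observations, so that (16) holds with t = 0 and Σ = 1, set d_n = n, and let N_n be geometric with mean n. Then N_n/d_n ⇒ U with U standard exponential, and by condition (ii) the limit of Z_n = √d_n(T_{N_n} − t) is the scale mixture ∫Φ_{u⁻¹Σ}dV(u), which in this case is Student's t distribution with 2 degrees of freedom.

```python
import numpy as np

# Illustrative check of Theorem 3: sample mean with a geometric sample size.
rng = np.random.default_rng(2)

n, m = 200, 20_000
N = rng.geometric(1.0 / n, size=m)   # N_n/n => U ~ Exp(1)

# given N_n, the sample mean of N(0,1) data is N(0, 1/N_n), so
# Z_n = sqrt(n) * T_{N_n} can be sampled directly as sqrt(n/N_n) * G
Z = rng.standard_normal(m) * np.sqrt(n / N)

# for Student's t with 2 d.f., P(|Z| > 4) = 1 - 4/sqrt(18), about 0.057,
# while for the standard normal law P(|Z| > 4) is only about 6e-5
tail_freq = float(np.mean(np.abs(Z) > 4.0))
print(round(tail_freq, 3))
```

The observed tail frequency is roughly three orders of magnitude above the normal one: this is the heavy-tailedness announced in the chapter title, while Corollary 2 shows that normality survives only when N_n/d_n degenerates to a constant c, in which case Σ′ = c⁻¹Σ.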
