A Study on the Comparison of the Effectiveness of the Jackknife Method in the Biased Estimators

Nilgün Yıldız

#### Abstract

In this study, we propose an alternative biased estimator. In the linear regression model, multicollinearity can lead to ill-conditioned design matrices and thus to the inadequacy of the ordinary least squares (OLS) estimator. Scientists have therefore developed alternative estimation techniques that eradicate the instability in the estimates; several biased estimators have been proposed, such as the Stein estimator, the ordinary ridge regression (ORR) estimator, and the principal components regression (PCR) estimator. Liu developed the Liu estimator (LE) by combining the Stein estimator with the ORR estimator. Since both the ORR and the LE depend on the OLS estimator, multicollinearity affects them both, and they may therefore give misleading information in the presence of multicollinearity. To overcome this problem, Liu introduced a new estimator based on the biasing parameters k and d, and the authors worked on developing an estimator that would retain the valuable characteristics of this Liu-type estimator (LTE) but have a smaller bias. We propose a modified jackknifed Liu-type estimator (MJLTE), created by combining the ideas underlying both the LTE and the jackknifed Liu-type estimator (JLTE). Under the mean square error matrix criterion, the MJLTE is superior to both the LTE and the JLTE. Finally, a real data example and a Monte Carlo simulation are given to illustrate the theoretical results.

Keywords: jackknifed estimators, jackknifed Liu-type estimator, multicollinearity, MSE, Liu-type estimator

#### 1. Introduction

Regression analysis seeks answers to questions such as: Is there a relationship between the dependent and independent variables? If there is a relationship, how strong is it? What form does the relationship between the variables take? Can future values of the variables be predicted, and how should they be estimated? What is the effect of a particular variable, or group of variables, on another variable or variables when certain conditions are controlled? Linear regression is a very important and popular method in statistics. According to the Web of Science, the number of publications about linear regression between 2014 and 2018 is given in Figure 1.

According to Figure 1, the number of studies conducted in 2014 is 12,381, while the number of studies conducted in 2018 is 13,137.

The number of publications about linear regression by document types is given in Figure 2.

The most common type of document about linear regression is the article. This is followed by proceedings papers, reviews, and editorial material.

The number of publications about linear regression by research area is given in Figure 3.

The most widely published area related to linear regression is engineering, followed by mathematics, computer science, environmental sciences, ecology and other scientific fields.

The number of publications about linear regression by countries is given in Figure 4.

The countries with the most publications on linear regression are the USA, China, England, Germany, Canada, and Australia, respectively.

Figure 1. Number of publications published between 2014 and 2018.

Figure 2. Number of publications by document types.

Figure 3. Number of publications by research area.

Figure 4. Number of publications by countries.


In regression analysis, the most commonly used method for estimating coefficients is ordinary least squares (OLS). We consider the multiple linear regression model given as

$$y = X\beta + \varepsilon \tag{1}$$

where $y$ is an $(n \times 1)$ observable random vector, $X$ is an $(n \times p)$ matrix of non-stochastic (independent) variables of rank $p$, $\beta$ is a $(p \times 1)$ vector of unknown parameters associated with $X$, and $\varepsilon$ is an $(n \times 1)$ vector of error terms with

$$E(\varepsilon) = 0, \qquad \text{Cov}(\varepsilon) = \sigma^2 I \tag{2}$$

In regression analysis, there are several methods to estimate the unknown parameters. The most frequently used is the ordinary least squares (OLS) method. Apart from this method, there are three other general estimation approaches: maximum likelihood, generalized least squares, and the best linear unbiased estimator (BLUE) [1].

The use of the once very popular OLS estimator has become limited because of multicollinearity, which makes the estimates unstable and inflates the variances of the regression coefficients.

Multicollinearity can be defined as a linear (or close to linear) relationship among the independent variables. In regression analysis, multicollinearity leads to the following problems:


• In the case of multicollinearity, the linear regression coefficients are uncertain and the standard errors of these coefficients are infinite.

• Multicollinearity increases the variance and covariance of the OLS regression coefficients.

• The value of the model R² is high, but none of the independent variables is significant compared to the partial t test.

• The direction of the related independent variables' relations with the dependent variable may contradict the theoretical and empirical expectations.

• If independent variables are interrelated, some of them may need to be removed from the model. But which variables should be removed? Removing the wrong variable from the model will result in a model misspecification error; on the other hand, there are no simple rules for deciding which variables to include in or exclude from the model.

Methods for dealing with multicollinearity include collecting additional data, respecifying the model (for example, replacing two related variables by their sum, taken as a single variable), and using biased estimators. This chapter provides information on biased estimators used as alternatives to OLS. In the literature, many researchers have developed biased regression estimators [2, 3].

An example of such a biased estimator is the ordinary ridge regression (ORR) estimator introduced by Hoerl and Kennard [4]:

$$\hat{\beta}_k = (X'X + kI)^{-1}X'y, \qquad k \ge 0 \tag{3}$$


where k is a biasing parameter. In later years, researchers combined various estimators to obtain better results. For example, Baye and Parker [5] introduced the (r − k) class estimator, which combines the ORR and principal components regression (PCR) estimators, and showed that the (r − k) class estimator is superior to the PCR estimator under the scalar mean square error (SMSE) criterion.

The Liu estimator (LE) was developed by Liu [6] by combining the Stein [7] estimator with the ORR estimator:

$$\hat{\beta}_d = (X'X + I)^{-1}\left(X'y + d\hat{\beta}\right), \qquad 0 \le d \le 1 \tag{4}$$

Since both the ORR estimator and the LE depend on the OLS estimator, multicollinearity affects them both, and they may therefore give misleading information in its presence. To overcome this problem, Liu [8] introduced a new estimator based on the two biasing parameters k and d:

$$\hat{\beta}_{LTE} = (X'X + kI)^{-1}\left(X'y - d\hat{\beta}\right), \qquad k \ge 0, \quad -\infty \le d \le +\infty \tag{5}$$
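To make (3)–(5) concrete, the following minimal Python sketch computes the OLS, ORR, LE, and LTE coefficient vectors directly from their closed forms. The synthetic data and all parameter values are our own illustrative assumptions, not part of the chapter's study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ill-conditioned design: the first two columns are nearly collinear.
# Sample size, coefficients, and noise level are assumptions for illustration.
n = 50
x1 = rng.normal(size=n)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.5, size=n)

p = X.shape[1]
I = np.eye(p)

def ols(X, y):
    # beta_hat = (X'X)^{-1} X'y
    return np.linalg.solve(X.T @ X, X.T @ y)

def orr(X, y, k):
    # Eq. (3): beta_k = (X'X + kI)^{-1} X'y, k >= 0
    return np.linalg.solve(X.T @ X + k * I, X.T @ y)

def le(X, y, d):
    # Eq. (4): beta_d = (X'X + I)^{-1} (X'y + d * beta_hat), 0 <= d <= 1
    return np.linalg.solve(X.T @ X + I, X.T @ y + d * ols(X, y))

def lte(X, y, k, d):
    # Eq. (5): beta_LTE = (X'X + kI)^{-1} (X'y - d * beta_hat)
    return np.linalg.solve(X.T @ X + k * I, X.T @ y - d * ols(X, y))

print(ols(X, y), orr(X, y, 0.5), le(X, y, 0.5), lte(X, y, 0.5, 0.1))
```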

Next, the authors worked on developing an estimator that would retain the valuable characteristics of the Liu-type estimator (LTE) but have a smaller bias. In 1956, Quenouille [9] suggested that it is possible to reduce bias by applying a jackknife procedure to a biased estimator.

The jackknife procedure processes the experimental data to obtain statistical estimators of the unknown parameters: a specific function of the estimator is recomputed on truncated (leave-one-out) samples. The advantage of the jackknife procedure is that it yields an estimator with a small bias that still retains the beneficial large-sample properties. In this chapter, we apply the jackknife technique to the LTE. Further, we establish the mean squared error superiority of the proposed estimator over both the LTE and the jackknifed Liu-type estimator (JLTE).

The chapter is organized as follows: the model, the LTE, and the JLTE are described in Section 2. The proposed new estimator is introduced in Section 3. The superiority of the new estimator vis-a-vis the LTE and the JLTE is studied, and the performance of the modified jackknife Liu-type estimator (MJLTE) is compared to that of the JLTE, in Section 4. Sections 5 and 6 present a real data example and a simulation study to justify the superiority of the suggested estimator.


#### 2. The model


We assume that two or more regressors in $X$ are closely linearly related, so the model suffers from the multicollinearity problem. The symmetric matrix $S = X'X$ has an eigenvalue–eigenvector decomposition of the form $S = T\Lambda T'$, where $T$ is an orthogonal matrix and $\Lambda$ is a (real) diagonal matrix. The diagonal elements of $\Lambda$ are the eigenvalues of $S$, and the column vectors of $T$ are the eigenvectors of $S$. The orthogonal version of the standard multiple linear regression model is

$$y = XTT'\beta + \varepsilon = Z\gamma + \varepsilon \tag{6}$$

where $Z = XT$, $\gamma = T'\beta$, and $Z'Z = \Lambda$. The ordinary least squares estimator (OLSE) of $\gamma$ is given by

$$\hat{\gamma} = (Z'Z)^{-1}Z'y = \Lambda^{-1}Z'y \tag{7}$$
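As an illustration of (6)–(7), the canonical quantities can be computed with a few lines of Python; this is a generic sketch assuming only some design matrix `X` and response `y`:

```python
import numpy as np

def canonical_form(X, y):
    """Return T, the eigenvalues of S = X'X, Z = XT, and the OLSE of gamma, Eq. (7)."""
    S = X.T @ X
    lam, T = np.linalg.eigh(S)      # S = T diag(lam) T', T orthogonal
    Z = X @ T                       # Z'Z = diag(lam) = Lambda
    gamma_hat = (Z.T @ y) / lam     # Eq. (7): gamma_hat = Lambda^{-1} Z'y
    return T, lam, Z, gamma_hat

# Coefficients transform back to the original scale via beta = T @ gamma.
```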

Liu [8] proposed a new biased estimator for $\gamma$, called the Liu-type estimator (LTE), defined as

$$\hat{\gamma}_{LTE}(k,d) = (\Lambda + kI)^{-1}\left(Z'y - d\hat{\gamma}\right), \qquad k \ge 0, \quad -\infty \le d \le +\infty \tag{8}$$

$$\hat{\gamma}_{LTE}(k,d) = (\Lambda + kI)^{-1}\left(Z'y - d\Lambda^{-1}Z'y\right) = \left[I - (k+d)(\Lambda + kI)^{-1}\right]\hat{\gamma} = F(k,d)\,\hat{\gamma} \tag{9}$$

where

$$F(k,d) = (\Lambda + kI)^{-1}(\Lambda - dI) \tag{10}$$

The LTE has bias vector

$$\text{Bias}(\hat{\gamma}_{LTE}) = \left[F(k,d) - I\right]\gamma \tag{11}$$

and covariance matrix

$$\text{Cov}(\hat{\gamma}_{LTE}) = \sigma^2 F(k,d)\Lambda^{-1}F(k,d)' \tag{12}$$

By using Hinkley [10], Singh et al. [11], Nyquist [12], and Batah et al. [13], we can propose the jackknifed form of $\hat{\gamma}_{LTE}$. Quenouille [9] and Tukey [14] introduced the jackknife technique to reduce bias. Hinkley [10] stated that, with few exceptions, the jackknife had been applied to balanced models. After some algebraic manipulations, the corresponding jackknife estimator is obtained by deleting the $i$th observation $(z_i', y_i)$:

$$\begin{aligned}
(A - z_iz_i')^{-1} &= A^{-1} + \frac{A^{-1}z_iz_i'A^{-1}}{1 - z_i'A^{-1}z_i} \\
\hat{\gamma}_{LTE_{-i}}(k,d) &= (A - z_iz_i')^{-1}(Z'y - z_iy_i) = \left(A^{-1} + \frac{A^{-1}z_iz_i'A^{-1}}{1 - z_i'A^{-1}z_i}\right)(Z'y - z_iy_i) \\
&= \hat{\gamma}_{LTE}(k,d) + \frac{A^{-1}z_iz_i'}{1 - z_i'A^{-1}z_i}\hat{\gamma}_{LTE}(k,d) - A^{-1}z_iy_i\left(1 + \frac{z_i'A^{-1}z_i}{1 - z_i'A^{-1}z_i}\right) \\
&= \hat{\gamma}_{LTE}(k,d) - \frac{A^{-1}z_i\left(y_i - z_i'\hat{\gamma}_{LTE}(k,d)\right)}{1 - z_i'A^{-1}z_i} \\
&= \hat{\gamma}_{LTE}(k,d) - \frac{A^{-1}z_ie_i}{1 - w_i}
\end{aligned} \tag{13}$$


where $z_i'$ is the $i$th row of $Z$, $e_i = y_i - z_i'\hat{\gamma}_{LTE}(k,d)$ is the Liu-type residual, $w_i = z_i'A^{-1}z_i$ is the distance factor, and $A^{-1} = (\Lambda + kI)^{-1}(I - d\Lambda^{-1}) = F(k,d)\Lambda^{-1}$. In view of the non-zero values of $w_i$, which reflect the lack of balance in the model, we use the weighted jackknife procedure. Thus, the weighted pseudo-values are defined as

$$Q_i = \hat{\gamma}_{LTE}(k,d) + n(1 - w_i)\left(\hat{\gamma}_{LTE}(k,d) - \hat{\gamma}_{LTE_{-i}}(k,d)\right)$$

The weighted jackknifed estimator of $\gamma$ is then obtained as

$$\hat{\gamma}_{JLTE}(k,d) = \frac{1}{n}\sum_{i=1}^{n} Q_i = \hat{\gamma}_{LTE}(k,d) + A^{-1}\sum_{i=1}^{n} z_ie_i \tag{14}$$

$$\sum_{i=1}^{n} z_ie_i = \sum_{i=1}^{n} z_i\left(y_i - z_i'\hat{\gamma}_{LTE}(k,d)\right) = \left(I - \Lambda A^{-1}\right)Z'y$$

so that

$$\hat{\gamma}_{JLTE}(k,d) = \hat{\gamma}_{LTE}(k,d) + A^{-1}\left(Z'y - \Lambda A^{-1}Z'y\right) = \left(2I - A^{-1}\Lambda\right)\hat{\gamma}_{LTE}(k,d) \tag{15}$$

However, since $I - A^{-1}\Lambda = I - (\Lambda + kI)^{-1}(\Lambda - dI) = I - F(k,d)$, we obtain

$$\hat{\gamma}_{JLTE}(k,d) = \left(2I - F(k,d)\right)\hat{\gamma}_{LTE}(k,d) \tag{16}$$

From (9) we have

$$\hat{\gamma}_{JLTE}(k,d) = \left(2I - F(k,d)\right)F(k,d)\,\hat{\gamma} \tag{17}$$

$$\text{Bias}\left(\hat{\gamma}_{JLTE}(k,d)\right) = -\left(I - F(k,d)\right)^2\gamma \tag{18}$$

The covariance matrix of the JLTE is

$$\text{Cov}\left(\hat{\gamma}_{JLTE}(k,d)\right) = \sigma^2\left(2I - F(k,d)\right)F(k,d)\Lambda^{-1}F(k,d)'\left(2I - F(k,d)\right)' \tag{19}$$

The MSEMs of the JLTE and the LTE are

$$\begin{aligned} \text{MSEM}\left(\hat{\gamma}_{JLTE}(k,d)\right) &= \text{Cov}\left(\hat{\gamma}_{JLTE}(k,d)\right) + \text{Bias}\left(\hat{\gamma}_{JLTE}(k,d)\right)\text{Bias}\left(\hat{\gamma}_{JLTE}(k,d)\right)' \\ &= \sigma^2\left(2I - F(k,d)\right)F(k,d)\Lambda^{-1}F(k,d)'\left(2I - F(k,d)\right)' \\ &\quad + \left(I - F(k,d)\right)^2\gamma\gamma'\left[\left(I - F(k,d)\right)^2\right]' \end{aligned} \tag{20}$$

$$\text{MSEM}\left(\hat{\gamma}_{LTE}(k,d)\right) = \sigma^2 F(k,d)\Lambda^{-1}F(k,d)' + \left(F(k,d) - I\right)\gamma\gamma'\left(F(k,d) - I\right)' \tag{21}$$
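Since $\Lambda$ and $F(k,d)$ are diagonal, the estimators and MSEMs above reduce to elementwise operations in the canonical coordinates. The following sketch implements (9), (10), (17), (20), and (21); note that $\gamma$ and $\sigma^2$ are known only in simulation settings, so the MSEM functions are intended for such experiments:

```python
import numpy as np

def F_diag(lam, k, d):
    # Eq. (10): diagonal of F(k,d) = (Lambda + kI)^{-1} (Lambda - dI)
    return (lam - d) / (lam + k)

def lte_canonical(gamma_hat, lam, k, d):
    # Eq. (9): gamma_LTE(k,d) = F(k,d) gamma_hat
    return F_diag(lam, k, d) * gamma_hat

def jlte_canonical(gamma_hat, lam, k, d):
    # Eq. (17): gamma_JLTE(k,d) = (2I - F(k,d)) F(k,d) gamma_hat
    f = F_diag(lam, k, d)
    return (2 - f) * f * gamma_hat

def msem_lte(lam, k, d, gamma, sigma2):
    # Eq. (21): sigma^2 F Lam^{-1} F' + (F - I) gamma gamma' (F - I)'
    f = F_diag(lam, k, d)
    bias = (f - 1) * gamma
    return sigma2 * np.diag(f**2 / lam) + np.outer(bias, bias)

def msem_jlte(lam, k, d, gamma, sigma2):
    # Eq. (20): covariance of (2I - F)F gamma_hat plus squared bias -(I - F)^2 gamma
    f = F_diag(lam, k, d)
    g = (2 - f) * f
    bias = -(1 - f) ** 2 * gamma
    return sigma2 * np.diag(g**2 / lam) + np.outer(bias, bias)
```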

#### 3. Our novel MJLTE estimator

In this section, following Yıldız [15], we propose a new estimator for $\gamma$. The proposed estimator is designated the modified jackknifed Liu-type estimator (MJLTE) and is denoted by $\hat{\gamma}_{MJLTE}(k,d)$:


$$\hat{\gamma}_{MJLTE}(k,d) = \left[I - (k+d)^2(\Lambda + kI)^{-2}\right]\left[I - (k+d)(\Lambda + kI)^{-1}\right]\hat{\gamma} \tag{22}$$

It may be noted that the proposed estimator MJLTE in (22) is obtained as in the case of JLTE but by plugging in the LTE instead of the OLSE. The expressions for bias, covariance and mean squared error matrix (MSEM) of ^γMJLTEð Þ k; d are obtained as

$$\text{Bias}\left(\hat{\gamma}_{MJLTE}(k,d)\right) = -(k+d)(\Lambda + kI)^{-1}W(\Lambda + kI)^{-1}\gamma \tag{23}$$

$$\text{Cov}\left(\hat{\gamma}_{MJLTE}(k,d)\right) = \sigma^2\,\Phi\Lambda^{-1}\Phi' \tag{24}$$

$$\begin{aligned} \text{MSEM}\left(\hat{\gamma}_{MJLTE}(k,d)\right) &= \sigma^2\,\Phi\Lambda^{-1}\Phi' + (k+d)^2(\Lambda + kI)^{-1}W(\Lambda + kI)^{-1} \\ &\quad \gamma\gamma'\left[(\Lambda + kI)^{-1}W(\Lambda + kI)^{-1}\right]' \end{aligned} \tag{25}$$

where $W = I + (k+d)(\Lambda + kI)^{-1} - (k+d)^2(\Lambda + kI)^{-2} = I + F(k,d) - F(k,d)^2$ and $\Phi = \left(2I - F(k,d)\right)F(k,d)^2$.
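Continuing the sketch from Section 2, the MJLTE of (22) and its MSEM are likewise elementwise in the canonical coordinates; in this illustration the bias is computed directly from the definition (22) rather than from (23):

```python
import numpy as np

def mjlte_canonical(gamma_hat, lam, k, d):
    # Eq. (22): [I - (k+d)^2 (Lam+kI)^{-2}] [I - (k+d)(Lam+kI)^{-1}] gamma_hat
    b = (k + d) / (lam + k)
    return (1 - b**2) * (1 - b) * gamma_hat

def msem_mjlte(lam, k, d, gamma, sigma2):
    # Cov = sigma^2 Phi Lam^{-1} Phi' with Phi = (2I - F)F^2, Eq. (24);
    # bias taken directly from Eq. (22): E[gamma_MJLTE] - gamma = (Phi - I) gamma.
    f = (lam - d) / (lam + k)
    phi = (2 - f) * f**2
    bias = (phi - 1) * gamma
    return sigma2 * np.diag(phi**2 / lam) + np.outer(bias, bias)
```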

#### 4. Properties of the MJLTE

One of the most prominent features of our novel MJLTE estimator is that, under some conditions, its bias is smaller than that of the LTE estimator from which it originates.

Theorem 4.1. Under the model (1) with the assumptions (2), the inequality $\left\|\text{Bias}\left(\hat{\gamma}_{MJLTE}(k,d)\right)\right\|^2 < \left\|\text{Bias}\left(\hat{\gamma}_{LTE}(k,d)\right)\right\|^2$ holds true for $d > 0$ and $k > d$.

Proof. From (11) and (23), we can obtain that

$$\left\|\text{Bias}\left(\hat{\gamma}_{LTE}(k,d)\right)\right\|^2 - \left\|\text{Bias}\left(\hat{\gamma}_{MJLTE}(k,d)\right)\right\|^2 = (k+d)^2\,\gamma'(\Lambda + kI)^{-2}\left[(\Lambda + kI)^2 - W^2\right](\Lambda + kI)^{-2}\gamma \ge 0$$

It is obvious that the difference is greater than 0, because it consists of the product of the squares in the expression above. Thus, the proof is completed.

Corollary 4.1. The absolute value of the bias of the $i$th component of the MJLTE is smaller than that of the LTE, namely $\left|\text{Bias}\left(\hat{\gamma}_{MJLTE}(k,d)\right)_i\right| < \left|\text{Bias}\left(\hat{\gamma}_{LTE}(k,d)\right)_i\right|$.

Theorem 4.2. The MJLTE has a smaller variance than the LTE.

Proof. From (12) and (24), it can be shown that

$$\text{Cov}\left(\hat{\gamma}_{LTE}(k,d)\right) - \text{Cov}\left(\hat{\gamma}_{MJLTE}(k,d)\right) = \sigma^2 H$$

where

$$H = F(k,d)\Lambda^{-1}F(k,d)' - \Phi\Lambda^{-1}\Phi'$$

is a diagonal matrix whose $i$th element

$$h_{ii} = \frac{\left[(\lambda_i + k)^4 - (\lambda_i + 2k + d)^2(\lambda_i - d)^2\right](\lambda_i - d)^2}{\lambda_i(\lambda_i + k)^6}$$

is a positive number. Thus we conclude that H is a positive definite matrix. This completes the proof.
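As a numerical sanity check on Theorem 4.2 (with arbitrary illustrative values of $\lambda_i$, $k$, and $d$ satisfying $k > d > 0$, not data from the chapter), one can verify that the closed form $h_{ii}$ equals the covariance difference implied by (12) and (24):

```python
import numpy as np

lam = np.array([1.47, 3.77, 4.52, 15.33])   # illustrative eigenvalues (assumed)
k, d, sigma2 = 0.5, 0.3, 1.0                # assumed values with k > d > 0

f = (lam - d) / (lam + k)                   # diagonal of F(k,d)
phi = (2 - f) * f**2                        # diagonal of Phi = (2I - F)F^2

# Per-component covariance difference from (12) and (24): sigma^2 (f^2 - phi^2) / lam
cov_diff = sigma2 * (f**2 - phi**2) / lam

# Closed form h_ii from the proof of Theorem 4.2
h = ((lam + k)**4 - (lam + 2*k + d)**2 * (lam - d)**2) * (lam - d)**2 \
    / (lam * (lam + k)**6)

assert np.allclose(cov_diff, sigma2 * h)
print(np.all(h > 0))   # True here: the covariance difference is positive definite
```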

Next, we prove a necessary and sufficient condition for the MJLTE to outperform the LTE under the MSEM criterion. The proof requires the following lemma.

Lemma 4.1. Let $M$ be a positive definite matrix, namely $M > 0$, and let $\alpha$ be some vector; then $M - \alpha\alpha' \ge 0$ if and only if $\alpha'M^{-1}\alpha \le 1$.

Proof. See Farebrother [16].

Theorem 4.3. The MJLTE is superior to the LTE in the MSEM sense; that is, $\Delta_1 = \text{MSEM}\left(\hat{\gamma}_{LTE}(k,d)\right) - \text{MSEM}\left(\hat{\gamma}_{MJLTE}(k,d)\right)$ is a nonnegative definite matrix if and only if the inequality

$$\gamma'\left[L^{-1}\left(\sigma^2 H + F^*(k,d)\gamma\gamma'F^*(k,d)'\right)L^{-1'}\right]^{-1}\gamma \le 1 \tag{26}$$

is satisfied, where $L = F^*(k,d)W$, $F^*(k,d) = F(k,d) - I$, and $W = I + F(k,d) - F(k,d)^2$.

Proof. Considering the difference, from (21) and (25) we have


$$\begin{aligned} \Delta_1 &= \text{MSEM}\left(\hat{\gamma}_{LTE}(k,d)\right) - \text{MSEM}\left(\hat{\gamma}_{MJLTE}(k,d)\right) \\ &= \sigma^2 H + F^*(k,d)\gamma\gamma'F^*(k,d)' - L\gamma\gamma'L' \end{aligned} \tag{27}$$

where $H = F(k,d)\Lambda^{-1}F(k,d)' - \Phi\Lambda^{-1}\Phi'$ is the diagonal matrix from the proof of Theorem 4.2.

$W = I + F(k,d) - F(k,d)^2$ is a positive definite matrix, and we have seen from Theorem 4.2 that $H$ is a positive definite matrix. Therefore, the difference $\Delta_1$ is nonnegative definite if and only if $L^{-1}\Delta_1 L^{-1'}$ is nonnegative definite. The matrix $L^{-1}\Delta_1 L^{-1'}$ can be written as

$$L^{-1}\Delta_1 L^{-1'} = L^{-1}\left(\sigma^2 H + F^*(k,d)\gamma\gamma'F^*(k,d)'\right)L^{-1'} - \gamma\gamma' \tag{28}$$

Since the matrix $\sigma^2 H + F^*(k,d)\gamma\gamma'F^*(k,d)'$ is symmetric and positive definite, using Lemma 4.1 we may conclude that $L^{-1}\Delta_1 L^{-1'}$ is nonnegative definite if and only if the inequality

$$\gamma'\left[L^{-1}\left(\sigma^2 H + F^*(k,d)\gamma\gamma'F^*(k,d)'\right)L^{-1'}\right]^{-1}\gamma \le 1$$

is satisfied.

#### 4.1 Comparison between the JLTE and the MJLTE

Here, we show that the MJLTE outperforms the JLTE in terms of the sampling variance.

Theorem 4.4. The MJLTE has a smaller variance than the JLTE for $d > 0$ and $k > d$.

Proof. From (19) and (24) it can be written as

$$\begin{aligned} \text{Cov}\left(\hat{\gamma}_{JLTE}(k,d)\right) &= \sigma^2\left(2I - F(k,d)\right)F(k,d)\Lambda^{-1}F(k,d)'\left(2I - F(k,d)\right)' \\ &= \sigma^2\,VU\Lambda^{-1}U'V' \end{aligned} \tag{29}$$

and


$$\text{Cov}\left(\hat{\gamma}_{MJLTE}(k,d)\right) = \sigma^2\,\Phi\Lambda^{-1}\Phi' = \sigma^2\,VUV\Lambda^{-1}V'U'V' \tag{30}$$

where $V = F(k,d)$ and $U = 2I - F(k,d)$, so that $\Phi = VUV$. It can be shown that

$$\text{Cov}\left(\hat{\gamma}_{JLTE}(k,d)\right) - \text{Cov}\left(\hat{\gamma}_{MJLTE}(k,d)\right) = \sigma^2\,\Sigma \tag{31}$$

where $\Sigma = VU\left(\Lambda^{-1} - V\Lambda^{-1}V'\right)U'V'$; $\Sigma$ is a diagonal matrix. Then the $i$th diagonal element of $\text{Cov}\left(\hat{\gamma}_{JLTE}(k,d)\right) - \text{Cov}\left(\hat{\gamma}_{MJLTE}(k,d)\right)$ is

$$\frac{\sigma^2\,(\lambda_i + 2k + d)^2(\lambda_i - d)^2(k + d)(2\lambda_i + k - d)}{\lambda_i(\lambda_i + k)^6}$$

Hence $\text{Cov}\left(\hat{\gamma}_{JLTE}(k,d)\right) - \text{Cov}\left(\hat{\gamma}_{MJLTE}(k,d)\right) > 0$, which completes the proof. In the following theorem, we obtain a necessary and sufficient condition for the MJLTE to outperform the JLTE in terms of the matrix mean square error. The proof of the theorem is similar to that of Theorem 4.3.

Theorem 4.5. $\Delta_2 = \text{MSEM}\left(\hat{\gamma}_{JLTE}(k,d)\right) - \text{MSEM}\left(\hat{\gamma}_{MJLTE}(k,d)\right)$ is a nonnegative definite matrix if and only if the inequality

$$\gamma'\left[L^{-1}\left(\sigma^2\Sigma + F^*(k,d)^2\gamma\gamma'F^*(k,d)^{2'}\right)L^{-1'}\right]^{-1}\gamma \le 1 \tag{32}$$

is satisfied.

Proof. From (20) and (25) we have

$$\begin{aligned} \Delta_2 &= \text{MSEM}\left(\hat{\gamma}_{JLTE}(k,d)\right) - \text{MSEM}\left(\hat{\gamma}_{MJLTE}(k,d)\right) \\ &= \sigma^2\Sigma + F^*(k,d)^2\gamma\gamma'F^*(k,d)^{2'} - F^*(k,d)W\gamma\gamma'W'F^*(k,d)' \end{aligned}$$

We have seen from Theorem 4.4 that $\Sigma$ is a positive definite matrix. Therefore, the difference $\Delta_2$ is nonnegative definite if and only if $L^{-1}\Delta_2 L^{-1'}$ is nonnegative definite. The matrix $L^{-1}\Delta_2 L^{-1'}$ can be written as

$$L^{-1}\Delta_2 L^{-1'} = L^{-1}\left(\sigma^2\Sigma + F^*(k,d)^2\gamma\gamma'F^*(k,d)^{2'}\right)L^{-1'} - \gamma\gamma'$$

Since the matrix $\sigma^2\Sigma + F^*(k,d)^2\gamma\gamma'F^*(k,d)^{2'}$ is symmetric and positive definite, using Lemma 4.1 we may conclude that $L^{-1}\Delta_2 L^{-1'}$ is nonnegative definite if and only if the inequality

$$\gamma'\left[L^{-1}\left(\sigma^2\Sigma + F^*(k,d)^2\gamma\gamma'F^*(k,d)^{2'}\right)L^{-1'}\right]^{-1}\gamma \le 1$$

is satisfied. This completes the proof. Theorems 4.1–4.5 showed that, under the stated conditions, the proposed estimator is superior to both the LTE and the JLTE. Accordingly, the MJLTE can be preferred over the LTE and the JLTE.

#### 5. Numerical example

To motivate the problem of estimation in the linear regression model, we consider the hedonic prices of housing attributes. The data consists of 92 detached homes in the Ottawa area sold during 1987 (see Yatchew [17]).

Let y be the sale price (sp) of the house and X be a 92 × 9 observation matrix consisting of the variables frplc (dummy for fireplace(s)), grge (dummy for garage), lux (dummy for luxury appointment), avginc (average neighborhood income), dhwy (distance to highway), lotarea (area of lot), nrbed (number of bedrooms), and usespc (usable space). The data are given in Table 1.

The eigenvalues of the 9 × 9 matrix $X'X$ are $\lambda_1 = 1.47$, $\lambda_2 = 3.77$, $\lambda_3 = 4.52$, $\lambda_4 = 15.33$, $\lambda_5 = 18.57$, $\lambda_6 = 20.97$, $\lambda_7 = 41.79$, $\lambda_8 = 271.15$, and $\lambda_9 = 239153.68$.

If we use the spectral norm, then the corresponding measure of conditioning of $X$ is the condition number $\kappa(X) = \sqrt{\lambda_{\max}(X'X)/\lambda_{\min}(X'X)}$, where $\kappa(\cdot) \in [1, \infty)$. We obtained $\kappa(X) = 403.27$, which is large, so $X$ may be considered ill-conditioned.
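A short sketch reproducing this conditioning measure from the reported eigenvalues (the full 92 × 9 design matrix is not reproduced here):

```python
import numpy as np

def condition_number(X):
    # kappa(X) = sqrt(lambda_max(X'X) / lambda_min(X'X)) under the spectral norm
    lam = np.linalg.eigvalsh(X.T @ X)
    return np.sqrt(lam.max() / lam.min())

# Using the eigenvalues reported above:
lam = np.array([1.47, 3.77, 4.52, 15.33, 18.57, 20.97, 41.79, 271.15, 239153.68])
print(np.sqrt(lam.max() / lam.min()))   # ~403, consistent with kappa(X) = 403.27
```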

In this case, the regression coefficients become insignificant, and it is therefore hard to make valid inferences or predictions using the OLS method. The LTE can be used to overcome many of the difficulties associated with the OLS estimates. With $\hat{\beta} = (X'X)^{-1}X'y$ and biasing parameters $k$ and $d$, the use of $\hat{\beta}_{LTE} = (X'X + kI)^{-1}(X'y - d\hat{\beta})$, $k > 0$, $-\infty < d < +\infty$, has become conventional, and the LTE estimator is used in the following example. The original model was transformed into the canonical form $y = Z\gamma + \varepsilon$ shown in (6). The estimators $\hat{\gamma}_{LTE}$, $\hat{\gamma}_{JLTE}$, and $\hat{\gamma}_{MJLTE}$ were computed for $d = 0.10, 0.30, 0.70, 1$ and $k = 0.30, 0.50, 0.70, 1$, and the estimated coefficients were then transformed back to the original variable scale. The scalar MSE values (SMSE = trace(MSEM)) of the estimators for the individual values of $d$ and $k$ are shown in Tables 2–5. The effects of different values of $d$ on the MSE can be seen in Figures 5–8, which clearly show that the proposed estimator (MJLTE) has smaller estimated MSE values than the LTE and the JLTE.

We observed that, for all values of d, SMSE(MJLTE) assumed smaller values than both SMSE(JLTE) and SMSE(LTE). The SMSE values of all estimators are affected by increasing values of k; however, the estimator least affected by these changes is the proposed MJLTE. Compared to the other two estimators, the SMSE values of the MJLTE gave the best results for both small and large values of k and d.

#### 6. A simulation study

We illustrate the behavior of the proposed estimator by a Monte Carlo simulation designed to evaluate the performances of the LTE, JLTE, and MJLTE when the regressors are highly intercorrelated. Following Liu [8] and Kibria [18], the explanatory variables and the response variable are generated by using the following equations.
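The generating scheme in Kibria [18] is commonly written as $x_{ij} = (1 - \rho^2)^{1/2} z_{ij} + \rho z_{i,p+1}$, $i = 1, \dots, n$, $j = 1, \dots, p$, where the $z_{ij}$ are independent standard normal pseudo-random numbers and $\rho$ is chosen so that any two explanatory variables have correlation $\rho^2$; the response then follows the linear model (1). The sketch below implements this assumed scheme (the sample size, $\rho$, coefficients, and noise level are illustrative choices, not the chapter's settings):

```python
import numpy as np

def make_data(n, p, rho, beta, sigma, rng):
    """Generate highly intercorrelated regressors (scheme assumed from Kibria [18])."""
    z = rng.standard_normal((n, p + 1))
    X = np.sqrt(1.0 - rho**2) * z[:, :p] + rho * z[:, [p]]   # corr(x_i, x_j) ~ rho^2
    y = X @ beta + sigma * rng.standard_normal(n)
    return X, y

rng = np.random.default_rng(1)
X, y = make_data(n=100, p=4, rho=0.99, beta=np.ones(4), sigma=1.0, rng=rng)
```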

#### Table 1.
Data set: 92 observations on sellprix (sale price) and the variables fireplac, garage, luxbath, avginc, crowdist, ncrosdst, disthwy, lotarea, nrbed, usespace, south, west and nsouth.


$$x_{ij} = \left(1 - \gamma^2\right)^{1/2} z_{ij} + \gamma z_{ip}, \qquad y_i = \left(1 - \gamma^2\right)^{1/2} z_i + \gamma z_{ip}, \qquad i = 1, 2, \ldots, n, \; j = 1, 2, \ldots, p,$$

where $z_{ij}$ is an independent standard normal pseudo-random number and $p$ is specified so that the correlation between any two explanatory variables is given by $\gamma^2$. In this study, we used $\gamma = 0.90, 0.95, 0.99$ to investigate the effects of different degrees of collinearity, with sample sizes $n = 20, 50$ and $100$, while four different combinations of $(k, d)$ were taken: (0.8, 0.5), (1, 0.7), (1.5, 0.9) and (2, 1). The standard deviations considered in the simulation study are $\sigma = 0.1, 1.0, 10$. For each choice of $\gamma$, $\sigma^2$ and $n$, the experiment was replicated 1000 times by generating new error terms. The average SMSE was computed using the following formula:

$$\mathrm{SMSE}\left(\hat{\beta}\right) = \frac{1}{1000} \sum_{j=1}^{1000} \left(\beta_j - \beta\right)'\left(\beta_j - \beta\right).$$
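The generating scheme and the averaged SMSE above can be sketched in Python as follows. This is a minimal illustration of the design under stated assumptions, not the author's original simulation code: only the LTE is shown, the regressors are held fixed while new error terms are drawn in each replication (as described above), and the response is generated as $y = X\beta + \sigma\varepsilon$:

```python
import numpy as np

def generate_regressors(n, p, gamma, rng):
    """x_ij = (1 - gamma^2)^{1/2} z_ij + gamma z_ip; pairwise correlation gamma^2."""
    Z = rng.standard_normal((n, p + 1))
    return np.sqrt(1.0 - gamma**2) * Z[:, :p] + gamma * Z[:, [p]]

def average_smse_lte(beta, n=20, gamma=0.99, sigma=1.0, k=0.8, d=0.5, reps=1000):
    """Average SMSE over `reps` replications with fresh error terms each time."""
    rng = np.random.default_rng(1)
    p = beta.shape[0]
    X = generate_regressors(n, p, gamma, rng)
    XtX = X.T @ X
    total = 0.0
    for _ in range(reps):
        y = X @ beta + sigma * rng.standard_normal(n)
        beta_ols = np.linalg.solve(XtX, X.T @ y)
        beta_lte = np.linalg.solve(XtX + k * np.eye(p), X.T @ y + d * beta_ols)
        total += (beta_lte - beta) @ (beta_lte - beta)  # (beta_j - beta)'(beta_j - beta)
    return total / reps

print(average_smse_lte(beta=np.ones(4)))
```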

Let us consider the LTE, JLTE and MJLTE and compute their respective estimated MSE values at the different levels of multicollinearity. The results are reported in Tables 2–5.

#### Table 2.
The estimated MSE values of LTE, JLTE and MJLTE, k = 0.30.

|            | d = 0.10 | d = 0.30  | d = 0.70   | d = 1     |
|------------|----------|-----------|------------|-----------|
| MSE(LTE)   | 810.4511 | 1037.6454 | 1900.68971 | 2905.5467 |
| MSE(JLTE)  | 733.5563 | 729.5050  | 977.1382   | 1688.9649 |
| MSE(MJLTE) | 631.2267 | 669.0754  | 967.7905   | 1289.1137 |

#### Table 3.
The estimated MSE values of LTE, JLTE and MJLTE, k = 0.50.

|            | d = 0.10 | d = 0.30  | d = 0.70  | d = 1     |
|------------|----------|-----------|-----------|-----------|
| MSE(LTE)   | 957.6623 | 1245.7243 | 2157.4693 | 3134.9466 |
| MSE(JLTE)  | 725.6311 | 752.2125  | 1102.8970 | 1872.6471 |
| MSE(MJLTE) | 608.2459 | 656.6023  | 892.2214  | 1115.3394 |

#### Table 4.
The estimated MSE values of LTE, JLTE and MJLTE, k = 0.70.

|            | d = 0.10  | d = 0.30  | d = 0.70  | d = 1     |
|------------|-----------|-----------|-----------|-----------|
| MSE(LTE)   | 1133.2567 | 1459.7360 | 2393.9079 | 2042.7127 |
| MSE(JLTE)  | 734.5155  | 795.8311  | 1234.9720 | 3340.5986 |
| MSE(MJLTE) | 587.0096  | 633.9972  | 815.8143  | 973.5845  |

#### Table 5.
The estimated MSE values of LTE, JLTE and MJLTE, k = 1.

|            | d = 0.10 | d = 0.30  | d = 0.70  | d = 1     |
|------------|----------|-----------|-----------|-----------|
| MSE(LTE)   | 1415.148 | 1774.1222 | 1774.1222 | 3613.8006 |
| MSE(JLTE)  | 779.0405 | 891.0250  | 891.0250  | 2274.5162 |
| MSE(MJLTE) | 551.0494 | 588.8484  | 588.8484  | 807.4456  |

#### Figure 5.
Various MSE of the proposed estimator compared to others for different values of d when k = 0.30.

#### Figure 6.
Various MSE of the proposed estimator compared to others for different values of d when k = 0.50.

#### Figure 7.

Various MSE of the proposed estimator compared to others for different values of d when k = 0.70.


#### Figure 8.

Various MSE of the proposed estimator compared to others for different values of d when k = 1.

According to the simulation results shown in Tables 4 and 5, for LTE, JLTE and MJLTE there was a general increase in the estimated MSE values with increasing levels of multicollinearity. Moreover, an increasing level of multicollinearity also led to an increase in the MSE of the estimators for fixed d and k.

In Table 4, the MSE values of the estimators corresponding to different values of d are given for k = 0.70. For all values of d, the smallest MSE value belongs to the MJLTE estimator; according to the MSE criterion, MJLTE is the estimator least affected by multicollinearity.

In Table 5, the MSE values of the estimators corresponding to different values of d are given for k = 1. Again, for all values of d, the smallest MSE value belongs to the MJLTE estimator, and MJLTE remains the least affected by multicollinearity according to the MSE criterion.

We can see that MJLTE is much better than the competing estimators when the explanatory variables are severely collinear. Moreover, in terms of the MSE criterion, the MJLTE has smaller estimated MSE values than the LTE and JLTE in all cases considered.


#### 7. Conclusion

In this paper, we combined the LTE and JLTE to introduce a new estimator, which we call the MJLTE. Combining the ideas underlying the LTE and JLTE enabled us to create a new estimator for the regression coefficients of a linear regression model affected by multicollinearity. Moreover, the use of the jackknife procedure enabled us to produce an estimator with a smaller bias. We compared our MJLTE with its originators, the LTE and JLTE, in terms of MSEM and found that the MJLTE has a smaller variance than both. Thus, the MJLTE is superior to both the LTE and JLTE under certain conditions.


## Author details

Nilgün Yıldız


The Department of Mathematics, Faculty of Arts and Sciences, Marmara University, Kuyubaşı, Istanbul, Turkey

\*Address all correspondence to: ncelebiyil@gmail.com

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## References

[1] Rao CR, Toutenburg H. Linear Models: Least Squares and Alternatives. New York: Springer; 1995

[2] Montgomery DC, Peck EA, Vining GG. Introduction to Linear Regression Analysis. New York: Wiley; 2006

[3] Chatterjee S, Hadi AS. Regression Analysis by Example. New York: Wiley; 2006

[4] Hoerl A, Kennard R. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics. 1970;12:55-67

[5] Parker DF, Baye MR. Combining ridge and principal component regression: A money demand illustration. Communications in Statistics: Theory and Methods. 1984;13(2):197-205

[6] Liu K. A new class of biased estimate in linear regression. Communications in Statistics: Theory and Methods. 1993;22(2):393-402

[7] Stein C. Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. In: Neyman J, editor. Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability. 1956. pp. 197-206

[8] Liu K. Using Liu-type estimator to combat collinearity. Communications in Statistics: Theory and Methods. 2003;32(5):1009-1020

[9] Quenouille MH. Notes on bias in estimation. Biometrika. 1956;43:353-360

[10] Hinkley DV. Jackknifing in unbalanced situations. Technometrics. 1977;19:285-292

[11] Singh B, Chaubey YP, Dwivedi TD. An almost unbiased ridge estimator. Sankhya. 1986;48:342-346

[12] Nyquist H. Applications of the jackknife procedure in ridge regression. Computational Statistics & Data Analysis. 1988;6:177-183

[13] Batah F, Ramanathan TK, Gore SD. The efficiency of modified jackknife and ridge type regression estimators: A comparison. Surveys in Mathematics and its Applications. 2008;3:111-122

[14] Tukey JW. Bias and confidence in not quite large samples (abstract). Annals of Mathematical Statistics. 1958;29:614

[15] Yıldız N. On the performance of the jackknifed Liu-type estimator in the linear regression model. Communications in Statistics: Theory and Methods. 2018;47(9):2278-2290

[16] Farebrother RW. Further results on the mean square error of ridge regression. Journal of the Royal Statistical Society, Series B. 1976;38:248-250

[17] Yatchew A. Semiparametric Regression for the Applied Econometrician. Cambridge: Cambridge University Press; 2003

[18] Kibria BMG. Performance of some new ridge regression estimators. Communications in Statistics: Simulation and Computation. 2003;32:2389-2413

