**4. General linear hypothesis**

In this section, we consider testing the general linear hypothesis

$$H_g: \mathbf{C}\boldsymbol{\Theta}\mathbf{D} = \mathbf{O},\tag{4.1}$$

against alternatives *K<sub>g</sub>*: **C***Θ***D** ≠ **O** under the multivariate linear model given by (2.1), where **C** is a given *c* × *k* matrix with rank *c* and **D** is a given *p* × *d* matrix with rank *d*. When **C** = (**O** **I**<sub>*k*−*j*</sub>) and **D** = **I**<sub>*p*</sub>, the hypothesis *H<sub>g</sub>* becomes *H*: *Θ*<sub>2</sub> = **O**.

For the derivation of the LR test of (4.1), we can use the following conventional approach: if **U** = **YD**, then the rows of **U** are independent and normally distributed with the identical covariance matrix **D**′*Σ***D**, and

$$\mathrm{E}(\mathbf{U}) = \mathbf{X}\boldsymbol{\Xi},\tag{4.2}$$

where **Ξ** = **ΘD**. The hypothesis (4.1) is expressed as

$$H_g: \mathbf{C}\boldsymbol{\Xi} = \mathbf{O}.\tag{4.3}$$

Applying the general theory for testing *H<sub>g</sub>* under (2.1), we have the LRC *λ*:

$$\lambda^{2/n} = \Lambda = \frac{|\mathbf{S}_e|}{|\mathbf{S}_e + \mathbf{S}_h|},\tag{4.4}$$

where

$$\mathbf{S}_e = \mathbf{U}'(\mathbf{I}_n - \mathbf{P}_{\mathbf{X}})\mathbf{U} = \mathbf{D}'\mathbf{Y}'(\mathbf{I}_n - \mathbf{P}_{\mathbf{X}})\mathbf{Y}\mathbf{D},$$

and

$$\begin{aligned} \mathbf{S}_h &= (\mathbf{C}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{U})'(\mathbf{C}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{C}')^{-1}\mathbf{C}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{U} \\ &= (\mathbf{C}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{Y}\mathbf{D})'(\mathbf{C}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{C}')^{-1}\mathbf{C}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{Y}\mathbf{D}. \end{aligned}$$
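As a numerical illustration (not from the source), the following NumPy sketch computes **S**<sub>*e*</sub>, **S**<sub>*h*</sub>, and the criterion (4.4); the sizes *n*, *p*, *k* and the choices of **C** and **D** are illustrative assumptions, not values fixed by the text.

```python
# Minimal sketch of the LR criterion (4.4); all sizes and the matrices C, D
# are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 50, 4, 3                    # observations, responses, regressors
X = rng.standard_normal((n, k))
Theta = np.zeros((k, p))              # true coefficients satisfy H_g
Y = X @ Theta + rng.standard_normal((n, p))

C = np.hstack([np.zeros((1, k - 1)), np.eye(1)])   # c = 1: last row of Theta
D = np.eye(p)                                      # d = p: all responses

XtX_inv = np.linalg.inv(X.T @ X)
P_X = X @ XtX_inv @ X.T                            # projector onto R[X]
U = Y @ D

S_e = U.T @ (np.eye(n) - P_X) @ U                  # D'Y'(I_n - P_X)YD
T = C @ XtX_inv @ X.T @ U                          # C(X'X)^{-1}X'U
S_h = T.T @ np.linalg.inv(C @ XtX_inv @ C.T) @ T   # hypothesis SSP matrix

Lam = np.linalg.det(S_e) / np.linalg.det(S_e + S_h)   # Lambda of (4.4)
print(Lam)
```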

**Theorem 4.1** *The statistic Λ in* (4.4) *is an LR statistic for testing* (4.1) *under* (2.1). *Further, under H<sub>g</sub>, Λ* ∼ *Λ<sub>d</sub>*(*c*, *n* − *k*).
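Theorem 4.1 lends itself to a Monte Carlo check. The hedged sketch below (settings illustrative) uses the standard fact that *Λ<sub>d</sub>*(*c*, *m*) is distributed as a product of *d* independent Beta((*m* − *i* + 1)/2, *c*/2) variables, *i* = 1, …, *d*; the simulated LR statistics and the Beta products should agree in distribution.

```python
# Monte Carlo check of Theorem 4.1: under H_g, Lambda ~ Lambda_d(c, n - k).
import numpy as np

rng = np.random.default_rng(1)
n, p, k, c = 50, 4, 3, 1              # D = I_p here, so d = p

def lr_statistic():
    X = rng.standard_normal((n, k))
    Y = rng.standard_normal((n, p))   # Theta = O, so H_g holds
    C = np.hstack([np.zeros((c, k - c)), np.eye(c)])
    XtX_inv = np.linalg.inv(X.T @ X)
    P_X = X @ XtX_inv @ X.T
    S_e = Y.T @ (np.eye(n) - P_X) @ Y
    T = C @ XtX_inv @ X.T @ Y
    S_h = T.T @ np.linalg.inv(C @ XtX_inv @ C.T) @ T
    return np.linalg.det(S_e) / np.linalg.det(S_e + S_h)

sims = np.array([lr_statistic() for _ in range(2000)])
m = n - k                             # error degrees of freedom
betas = np.prod([rng.beta((m - i + 1) / 2, c / 2, 2000)
                 for i in range(1, p + 1)], axis=0)
print(np.quantile(sims, [0.25, 0.5, 0.75]))   # should be close to ...
print(np.quantile(betas, [0.25, 0.5, 0.75]))  # ... these quantiles
```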

*Proof.* Let **G** = (**G**<sub>1</sub> **G**<sub>2</sub>) be a *p* × *p* matrix such that **G**<sub>1</sub> = **D**, **G**<sub>1</sub>′**G**<sub>2</sub> = **O**, and |**G**| ≠ 0. Consider the transformation from **Y** to (**U** **V**) = **Y**(**G**<sub>1</sub> **G**<sub>2</sub>).

Then the rows of (**U V**) are independently normal with the same covariance matrix

$$\boldsymbol{\Psi} = \mathbf{G}'\boldsymbol{\Sigma}\mathbf{G} = \begin{pmatrix} \boldsymbol{\Psi}_{11} & \boldsymbol{\Psi}_{12} \\ \boldsymbol{\Psi}_{21} & \boldsymbol{\Psi}_{22} \end{pmatrix}, \quad \boldsymbol{\Psi}_{12}: d \times (p - d),$$

and


$$\begin{aligned} \mathrm{E}[(\mathbf{U}\ \mathbf{V})] &= \mathbf{X}\boldsymbol{\Theta}(\mathbf{G}_1\ \mathbf{G}_2) \\ &= \mathbf{X}(\boldsymbol{\Xi}\ \boldsymbol{\Delta}), \quad \boldsymbol{\Xi} = \boldsymbol{\Theta}\mathbf{G}_1,\ \boldsymbol{\Delta} = \boldsymbol{\Theta}\mathbf{G}_2. \end{aligned}$$

The conditional distribution of **V** given **U** is normal: the rows of **V** given **U** are independently normal with the common covariance matrix *Ψ*<sub>22⋅1</sub> = *Ψ*<sub>22</sub> − *Ψ*<sub>21</sub>*Ψ*<sub>11</sub><sup>−1</sup>*Ψ*<sub>12</sub>, and

$$\begin{aligned} \mathrm{E}(\mathbf{V} \mid \mathbf{U}) &= \mathbf{X}\boldsymbol{\Delta} + (\mathbf{U} - \mathbf{X}\boldsymbol{\Xi})\boldsymbol{\Gamma} \\ &= \mathbf{X}\boldsymbol{\Delta}^* + \mathbf{U}\boldsymbol{\Gamma}, \end{aligned}$$

where **Δ**\* = **Δ** − **ΞΓ** and **Γ** = **Ψ**<sub>11</sub><sup>−1</sup>**Ψ**<sub>12</sub>. The parameters (**Δ**\*, **Γ**, **Ψ**<sub>22⋅1</sub>) are unrestricted under both *H<sub>g</sub>* and *K<sub>g</sub>*, so the maximized likelihood of **V** given **U** does not depend on the hypothesis. Therefore, an LR statistic is obtained from the marginal distribution of **U**, which implies the required results. □
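In schematic form (the factor names *L*<sub>1</sub> and *L*<sub>2</sub> are our shorthand, not the source's), the argument is that the likelihood factors as

$$\max_{H_g} L = \Big[\max_{\mathbf{C}\boldsymbol{\Xi} = \mathbf{O}} L_1(\boldsymbol{\Xi}, \boldsymbol{\Psi}_{11}; \mathbf{U})\Big]\Big[\max L_2(\boldsymbol{\Delta}^*, \boldsymbol{\Gamma}, \boldsymbol{\Psi}_{22\cdot 1}; \mathbf{V} \mid \mathbf{U})\Big],$$

and analogously without the constraint under *K<sub>g</sub>*; the second factor is common to both hypotheses and cancels in the ratio, so *λ* reduces to the LR statistic of the marginal model (4.2) with (4.3), which is (4.4).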

**5. Additional information tests for response variables**

We consider a multivariate regression model with an intercept term *x*<sub>0</sub> and *k* explanatory variables *x*<sub>1</sub>, …, *x*<sub>*k*</sub> as follows:

$$\mathbf{Y} = \mathbf{1}_n\boldsymbol{\theta}' + \mathbf{X}\boldsymbol{\Theta} + \mathbf{E},\tag{5.1}$$

where **Y** and **X** are the observation matrices on *y* = (*y*<sub>1</sub>, …, *y*<sub>*p*</sub>)′ and *x* = (*x*<sub>1</sub>, …, *x*<sub>*k*</sub>)′. We assume that the error matrix **E** has the same properties as in (2.1), and that rank((**1**<sub>*n*</sub> **X**)) = *k* + 1. Our interest is to test a hypothesis *H*<sub>2⋅1</sub> of no additional information of *y*<sub>2</sub> = (*y*<sub>*q*+1</sub>, …, *y*<sub>*p*</sub>)′ in the presence of *y*<sub>1</sub> = (*y*<sub>1</sub>, …, *y*<sub>*q*</sub>)′.

Along the partition of *y* into (*y*<sub>1</sub>′, *y*<sub>2</sub>′)′, let **Y**, *θ*, *Θ*, and *Σ* be partitioned as

$$\mathbf{Y} = (\mathbf{Y}_1\ \mathbf{Y}_2), \quad \boldsymbol{\Theta} = (\boldsymbol{\Theta}_1\ \boldsymbol{\Theta}_2), \quad \boldsymbol{\theta} = \begin{pmatrix} \boldsymbol{\theta}_1 \\ \boldsymbol{\theta}_2 \end{pmatrix}, \quad \boldsymbol{\Sigma} = \begin{pmatrix} \boldsymbol{\Sigma}_{11} & \boldsymbol{\Sigma}_{12} \\ \boldsymbol{\Sigma}_{21} & \boldsymbol{\Sigma}_{22} \end{pmatrix}.$$

The conditional distribution of **Y**<sub>2</sub> given **Y**<sub>1</sub> is normal with mean

$$\begin{aligned} \mathrm{E}(\mathbf{Y}_2 \mid \mathbf{Y}_1) &= \mathbf{1}_n\boldsymbol{\theta}_2' + \mathbf{X}\boldsymbol{\Theta}_2 + (\mathbf{Y}_1 - \mathbf{1}_n\boldsymbol{\theta}_1' - \mathbf{X}\boldsymbol{\Theta}_1)\boldsymbol{\Sigma}_{11}^{-1}\boldsymbol{\Sigma}_{12} \\ &= \mathbf{1}_n\tilde{\boldsymbol{\theta}}_{02}' + \mathbf{X}\tilde{\boldsymbol{\Theta}}_2 + \mathbf{Y}_1\boldsymbol{\Sigma}_{11}^{-1}\boldsymbol{\Sigma}_{12}, \end{aligned}\tag{5.2}$$

and the conditional covariance matrix is expressed as

$$\mathrm{Var}[\mathrm{vec}(\mathbf{Y}_2 \mid \mathbf{Y}_1)] = \boldsymbol{\Sigma}_{22\cdot 1} \otimes \mathbf{I}_n,\tag{5.3}$$

where **Σ**<sub>22⋅1</sub> = **Σ**<sub>22</sub> − **Σ**<sub>21</sub>**Σ**<sub>11</sub><sup>−1</sup>**Σ**<sub>12</sub>, and

$$\tilde{\boldsymbol{\theta}}_{02}' = \boldsymbol{\theta}_2' - \boldsymbol{\theta}_1'\boldsymbol{\Sigma}_{11}^{-1}\boldsymbol{\Sigma}_{12}, \quad \tilde{\boldsymbol{\Theta}}_2 = \boldsymbol{\Theta}_2 - \boldsymbol{\Theta}_1\boldsymbol{\Sigma}_{11}^{-1}\boldsymbol{\Sigma}_{12}.$$

Here, for an *n* × *p* matrix **Y** = (**y**<sub>(1)</sub>, …, **y**<sub>(*p*)</sub>), vec(**Y**) means the *np*-vector (**y**<sub>(1)</sub>′, …, **y**<sub>(*p*)</sub>′)′. Now we define the hypothesis *H*<sub>2⋅1</sub> as

$$H_{2\cdot 1}: \boldsymbol{\Theta}_2 = \boldsymbol{\Theta}_1\boldsymbol{\Sigma}_{11}^{-1}\boldsymbol{\Sigma}_{12} \Longleftrightarrow \tilde{\boldsymbol{\Theta}}_2 = \mathbf{O}.\tag{5.4}$$

The hypothesis *H*<sub>2⋅1</sub> means that *y*<sub>2</sub>, after removing the effects of *y*<sub>1</sub>, does not depend on *x*. In other words, the relationship between *y*<sub>2</sub> and *x* can be described through the relationship between *y*<sub>1</sub> and *x*. In this sense, *y*<sub>2</sub> is redundant in the relationship between *y* and *x*.

The LR criterion for testing the hypothesis *H*<sub>2⋅1</sub> against alternatives *K*<sub>2⋅1</sub>: **Θ̃**<sub>2</sub> ≠ **O** can be obtained through the following steps.

(D1) The density function of **Y** = (**Y**<sub>1</sub> **Y**<sub>2</sub>) can be expressed as the product of the marginal density function of **Y**<sub>1</sub> and the conditional density function of **Y**<sub>2</sub> given **Y**<sub>1</sub>. Note that the density functions of **Y**<sub>1</sub> under *H*<sub>2⋅1</sub> and *K*<sub>2⋅1</sub> are the same.

(D2) Each column of E(**Y**<sub>2</sub> | **Y**<sub>1</sub>) lies in the same space; let this space under *K*<sub>2⋅1</sub> and under *H*<sub>2⋅1</sub> be denoted by *Ω* and *ω*, respectively. Then

$$\Omega = \mathcal{R}[(\mathbf{1}_n\ \mathbf{Y}_1\ \mathbf{X})], \quad \omega = \mathcal{R}[(\mathbf{1}_n\ \mathbf{Y}_1)],$$

and dim(*Ω*) = *q* + *k* + 1, dim(*ω*) = *q* + 1.


(D3) The likelihood ratio criterion *λ* is expressed as

$$\lambda^{2/n} = \Lambda = \frac{|\mathbf{S}_{\Omega}|}{|\mathbf{S}_{\omega}|} = \frac{|\mathbf{S}_{\Omega}|}{|\mathbf{S}_{\Omega} + (\mathbf{S}_{\omega} - \mathbf{S}_{\Omega})|},$$

where **S**<sub>*Ω*</sub> = **Y**<sub>2</sub>′(**I**<sub>*n*</sub> − **P**<sub>*Ω*</sub>)**Y**<sub>2</sub> and **S**<sub>*ω*</sub> = **Y**<sub>2</sub>′(**I**<sub>*n*</sub> − **P**<sub>*ω*</sub>)**Y**<sub>2</sub>.

(D4) Note that E(**Y**<sub>2</sub> | **Y**<sub>1</sub>)′(**P**<sub>*Ω*</sub> − **P**<sub>*ω*</sub>)E(**Y**<sub>2</sub> | **Y**<sub>1</sub>) = **O** under *H*<sub>2⋅1</sub>. The conditional distribution of *Λ* given **Y**<sub>1</sub> under *H*<sub>2⋅1</sub> is *Λ*<sub>*p*−*q*</sub>(*k*, *n* − *q* − *k* − 1); since this distribution does not depend on **Y**<sub>1</sub>, the unconditional distribution of *Λ* under *H*<sub>2⋅1</sub> is also *Λ*<sub>*p*−*q*</sub>(*k*, *n* − *q* − *k* − 1).
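Steps (D1)–(D4) translate directly into computation. The following self-contained sketch (sizes and data are illustrative placeholders, not from the source) builds **P**<sub>*Ω*</sub> and **P**<sub>*ω*</sub> explicitly and forms *Λ*:

```python
# Minimal sketch of the LR criterion in (D3) via explicit projectors.
import numpy as np

rng = np.random.default_rng(2)
n, q, p, k = 60, 2, 5, 3                      # y1 is q-dim, y2 is (p-q)-dim
X = rng.standard_normal((n, k))
Y = rng.standard_normal((n, p))               # placeholder data
Y1, Y2 = Y[:, :q], Y[:, q:]
ones = np.ones((n, 1))

def proj(A):
    """Orthogonal projector onto the column space R[A]."""
    return A @ np.linalg.pinv(A)

P_Omega = proj(np.hstack([ones, Y1, X]))      # Omega = R[(1_n Y1 X)]
P_omega = proj(np.hstack([ones, Y1]))         # omega = R[(1_n Y1)]

S_Omega = Y2.T @ (np.eye(n) - P_Omega) @ Y2
S_omega = Y2.T @ (np.eye(n) - P_omega) @ Y2
Lam = np.linalg.det(S_Omega) / np.linalg.det(S_omega)
print(Lam)
```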

Note that the *Λ* statistic is defined through **Y**<sub>2</sub>′(**I**<sub>*n*</sub> − **P**<sub>*Ω*</sub>)**Y**<sub>2</sub> and **Y**<sub>2</sub>′(**P**<sub>*Ω*</sub> − **P**<sub>*ω*</sub>)**Y**<sub>2</sub>, which involve *n* × *n* matrices. We now express these statistics in terms of the SSP matrix of (*y*′, *x*′)′ defined by

$$\mathbf{S} = \sum_{i=1}^{n} \begin{pmatrix} \mathbf{y}_i - \bar{\mathbf{y}} \\ \mathbf{x}_i - \bar{\mathbf{x}} \end{pmatrix} \begin{pmatrix} \mathbf{y}_i - \bar{\mathbf{y}} \\ \mathbf{x}_i - \bar{\mathbf{x}} \end{pmatrix}' = \begin{pmatrix} \mathbf{S}_{yy} & \mathbf{S}_{yx} \\ \mathbf{S}_{xy} & \mathbf{S}_{xx} \end{pmatrix},$$

where **ȳ** and **x̄** are the sample mean vectors. Along the partition *y* = (*y*<sub>1</sub>′, *y*<sub>2</sub>′)′, we partition **S** as

$$\mathbf{S} = \begin{pmatrix} \mathbf{S}_{11} & \mathbf{S}_{12} & \mathbf{S}_{1x} \\ \mathbf{S}_{21} & \mathbf{S}_{22} & \mathbf{S}_{2x} \\ \mathbf{S}_{x1} & \mathbf{S}_{x2} & \mathbf{S}_{xx} \end{pmatrix}.$$

We can show that

$$\begin{aligned} \mathbf{S}_{\omega} &= \mathbf{S}_{22\cdot 1} = \mathbf{S}_{22} - \mathbf{S}_{21}\mathbf{S}_{11}^{-1}\mathbf{S}_{12}, \\ \mathbf{S}_{\Omega} &= \mathbf{S}_{22\cdot 1x} = \mathbf{S}_{22\cdot x} - \mathbf{S}_{21\cdot x}\mathbf{S}_{11\cdot x}^{-1}\mathbf{S}_{12\cdot x}. \end{aligned}$$

The first result is obtained by using

$$\omega = \mathcal{R}[\mathbf{1}_n] + \mathcal{R}[(\mathbf{I}_n - \mathbf{P}_{\mathbf{1}_n})\mathbf{Y}_1].$$

The second result is obtained by using

$$\begin{aligned} \Omega &= \mathcal{R}[\mathbf{1}_n] + \mathcal{R}[(\tilde{\mathbf{Y}}_1\ \tilde{\mathbf{X}})] \\ &= \mathcal{R}[\mathbf{1}_n] + \mathcal{R}[(\mathbf{I}_n - \mathbf{P}_{\mathbf{1}_n})\mathbf{X}] + \mathcal{R}[(\mathbf{I}_n - \mathbf{P}_{\tilde{\mathbf{X}}})(\mathbf{I}_n - \mathbf{P}_{\mathbf{1}_n})\mathbf{Y}_1], \end{aligned}$$

where $\tilde{\mathbf{Y}}_1 = (\mathbf{I}_n - \mathbf{P}_{\mathbf{1}_n})\mathbf{Y}_1$ and $\tilde{\mathbf{X}} = (\mathbf{I}_n - \mathbf{P}_{\mathbf{1}_n})\mathbf{X}$.
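The equivalence of the projector-based and SSP-based expressions is easy to verify numerically. In this self-contained sketch (sizes illustrative), adjusting **S** for *y*<sub>1</sub> alone reproduces **S**<sub>*ω*</sub>, and adjusting for *x* and then for *y*<sub>1</sub> reproduces **S**<sub>*Ω*</sub>, without forming any *n* × *n* projector:

```python
# Check: SSP-based S_{22.1} and S_{22.1x} equal the projector-based
# S_omega and S_Omega. Sizes and data are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n, q, p, k = 60, 2, 5, 3
X = rng.standard_normal((n, k))
Y = rng.standard_normal((n, p))
Y1, Y2 = Y[:, :q], Y[:, q:]
ones = np.ones((n, 1))
proj = lambda A: A @ np.linalg.pinv(A)
S_Omega = Y2.T @ (np.eye(n) - proj(np.hstack([ones, Y1, X]))) @ Y2
S_omega = Y2.T @ (np.eye(n) - proj(np.hstack([ones, Y1]))) @ Y2

Zc = np.hstack([Y, X]) - np.hstack([Y, X]).mean(axis=0)
S = Zc.T @ Zc                                   # SSP matrix of (y', x')'
inv = np.linalg.inv
S11, S12, S1x = S[:q, :q], S[:q, q:p], S[:q, p:]
S21, S22, S2x = S[q:p, :q], S[q:p, q:p], S[q:p, p:]
Sx1, Sx2, Sxx = S[p:, :q], S[p:, q:p], S[p:, p:]

S22_1 = S22 - S21 @ inv(S11) @ S12              # should equal S_omega
S22_x = S22 - S2x @ inv(Sxx) @ Sx2              # adjust every block for x ...
S21_x = S21 - S2x @ inv(Sxx) @ Sx1
S12_x = S12 - S1x @ inv(Sxx) @ Sx2
S11_x = S11 - S1x @ inv(Sxx) @ Sx1
S22_1x = S22_x - S21_x @ inv(S11_x) @ S12_x     # ... then for y1: S_{22.1x}

print(np.allclose(S22_1, S_omega), np.allclose(S22_1x, S_Omega))  # True True
```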

Summarizing the above results, we have the following theorem.

**Theorem 5.1** *In the multivariate regression model* (5.1), *consider testing the hypothesis H<sub>2⋅1</sub> in* (5.4) *against K<sub>2⋅1</sub>. Then the LR criterion λ is given by*

$$\lambda^{2/n} = \Lambda = \frac{|\mathbf{S}_{22\cdot 1x}|}{|\mathbf{S}_{22\cdot 1}|},$$

*whose null distribution is Λ<sub>p−q</sub>*(*k*, *n* − *q* − *k* − 1).
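As with Theorem 4.1, the null distribution in Theorem 5.1 can be checked by simulation. The hedged sketch below (illustrative settings; data generated with *Θ* = **O** and *Σ* = **I** so that *H*<sub>2⋅1</sub> holds) again uses the Beta-product representation of the Wilks distribution.

```python
# Monte Carlo check: Lambda = |S_{22.1x}|/|S_{22.1}| ~ Lambda_{p-q}(k, n-q-k-1).
import numpy as np

rng = np.random.default_rng(4)
n, q, p, k = 40, 2, 4, 3
proj = lambda A: A @ np.linalg.pinv(A)

def lam():
    X = rng.standard_normal((n, k))
    Y = rng.standard_normal((n, p))   # Theta = O, Sigma = I: H_{2.1} holds
    Y1, Y2 = Y[:, :q], Y[:, q:]
    ones = np.ones((n, 1))
    S_Om = Y2.T @ (np.eye(n) - proj(np.hstack([ones, Y1, X]))) @ Y2
    S_om = Y2.T @ (np.eye(n) - proj(np.hstack([ones, Y1]))) @ Y2
    return np.linalg.det(S_Om) / np.linalg.det(S_om)

sims = np.array([lam() for _ in range(2000)])
m, b = p - q, n - q - k - 1
betas = np.prod([rng.beta((b - i + 1) / 2, k / 2, 2000)
                 for i in range(1, m + 1)], axis=0)
print(np.quantile(sims, [0.25, 0.5, 0.75]))   # the two sets of quantiles
print(np.quantile(betas, [0.25, 0.5, 0.75]))  # should be close
```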

Note that **S**<sub>22⋅1</sub> can be decomposed as

$$\mathbf{S}_{22\cdot 1} = \mathbf{S}_{22\cdot 1x} + \mathbf{S}_{2x\cdot 1}\mathbf{S}_{xx\cdot 1}^{-1}\mathbf{S}_{x2\cdot 1}.$$

This decomposition is obtained by expressing **S**<sub>22⋅1*x*</sub> in terms of **S**<sub>22⋅1</sub>, **S**<sub>2*x*⋅1</sub>, **S**<sub>*xx*⋅1</sub>, and **S**<sub>*x*2⋅1</sub> by using the inverse formula

$$\begin{pmatrix} \mathbf{H}_{11} & \mathbf{H}_{12} \\ \mathbf{H}_{21} & \mathbf{H}_{22} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{H}_{11}^{-1} & \mathbf{O} \\ \mathbf{O} & \mathbf{O} \end{pmatrix} + \begin{pmatrix} -\mathbf{H}_{11}^{-1}\mathbf{H}_{12} \\ \mathbf{I} \end{pmatrix} \mathbf{H}_{22\cdot 1}^{-1} \begin{pmatrix} -\mathbf{H}_{21}\mathbf{H}_{11}^{-1} & \mathbf{I} \end{pmatrix},\tag{5.5}$$

where $\mathbf{H}_{22\cdot 1} = \mathbf{H}_{22} - \mathbf{H}_{21}\mathbf{H}_{11}^{-1}\mathbf{H}_{12}$.

The decomposition is expressed as

$$\mathbf{S}_{22\cdot 1} - \mathbf{S}_{22\cdot 1x} = \mathbf{S}_{2x\cdot 1}\mathbf{S}_{xx\cdot 1}^{-1}\mathbf{S}_{x2\cdot 1}.\tag{5.6}$$
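A quick numerical sanity check of (5.5), with arbitrary block sizes and a random positive definite **H** (all choices ours); (5.6) then follows by applying (5.5) to the appropriate blocks of **S**:

```python
# Verify the block-inverse identity (5.5) numerically.
import numpy as np

rng = np.random.default_rng(5)
a, b = 3, 2                                   # block sizes
M = rng.standard_normal((a + b, a + b))
H = M @ M.T                                   # random positive definite H
H11, H12 = H[:a, :a], H[:a, a:]
H21, H22 = H[a:, :a], H[a:, a:]
inv = np.linalg.inv

H22_1 = H22 - H21 @ inv(H11) @ H12            # Schur complement H_{22.1}
first = np.block([[inv(H11), np.zeros((a, b))],
                  [np.zeros((b, a)), np.zeros((b, b))]])
col = np.vstack([-inv(H11) @ H12, np.eye(b)])
row = np.hstack([-H21 @ inv(H11), np.eye(b)])
rhs = first + col @ inv(H22_1) @ row          # right-hand side of (5.5)
print(np.allclose(inv(H), rhs))               # True
```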

The result may also be obtained by the following algebraic method. We have

$$\begin{aligned} \mathbf{S}_{22\cdot 1} - \mathbf{S}_{22\cdot 1x} &= \mathbf{Y}_2'(\mathbf{P}_{\Omega} - \mathbf{P}_{\omega})\mathbf{Y}_2 \\ &= \mathbf{Y}_2'\mathbf{P}_{\omega^{\perp}\cap\Omega}\mathbf{Y}_2, \end{aligned}$$

and


$$\Omega = \mathcal{R}[\mathbf{1}_n] + \mathcal{R}[(\tilde{\mathbf{Y}}_1\ \tilde{\mathbf{X}})], \quad \omega = \mathcal{R}[\mathbf{1}_n] + \mathcal{R}[\tilde{\mathbf{Y}}_1].$$

Therefore,

$$\begin{aligned} \omega^{\perp}\cap\Omega &= \mathcal{R}[(\mathbf{I}_n - \mathbf{P}_{\mathbf{1}_n} - \mathbf{P}_{\tilde{\mathbf{Y}}_1})(\tilde{\mathbf{Y}}_1\ \tilde{\mathbf{X}})] \\ &= \mathcal{R}[(\mathbf{I}_n - \mathbf{P}_{\mathbf{1}_n} - \mathbf{P}_{\tilde{\mathbf{Y}}_1})\tilde{\mathbf{X}}], \end{aligned}$$

which gives an expression for $\mathbf{P}_{\omega^{\perp}\cap\Omega}$ by using Theorem 3.1 (1). This leads to (5.6).
