For implementation and verification of some of the theoretical results presented here, we refer interested readers to Dimitrov, Holst, Knauer & Kriegel (2009) and Dimitrov et al. (2011).

**2. Basics. Computing principal components - discrete case in R***<sup>d</sup>*

The central idea and motivation of PCA is to reduce the dimensionality of a point set by identifying *the most significant directions (principal components)*. Let *P* = {*p*<sub>1</sub>, *p*<sub>2</sub>, ..., *p<sub>n</sub>*} be a set of vectors (points) in **R**<sup>*d*</sup>, and *μ* = (*μ*<sub>1</sub>, *μ*<sub>2</sub>, ..., *μ<sub>d</sub>*) ∈ **R**<sup>*d*</sup> be the center of gravity of *P*. For 1 ≤ *k* ≤ *d*, we use $p\_{i,k}$ to denote the *k*-th coordinate of the vector $p\_i$. Given two vectors *u* and *v*, we use ⟨*u*, *v*⟩ to denote their inner product. For any unit vector *v* ∈ **R**<sup>*d*</sup>, the *variance of P in direction v* is

$$
\mathrm{var}(P, v) = \frac{1}{n} \sum\_{i=1}^{n} \langle p\_i - \mu, v \rangle^2. \tag{1}
$$

The most significant direction corresponds to the unit vector $v\_1$ such that $\mathrm{var}(P, v\_1)$ is maximum. In general, after identifying the *j* most significant directions $v\_1, \ldots, v\_j$, the (*j* + 1)-th most significant direction corresponds to the unit vector $v\_{j+1}$ such that $\mathrm{var}(P, v\_{j+1})$ is maximum among all unit vectors perpendicular to $v\_1, v\_2, \ldots, v\_j$.

It can be verified that for any unit vector *v* ∈ **R**<sup>*d*</sup>,

$$
\mathrm{var}(P, v) = \langle \Sigma v, v \rangle, \tag{2}
$$

where Σ is the *covariance matrix* of *P*. Σ is a symmetric *d* × *d* matrix whose (*i*, *j*)-th component, $\sigma\_{ij}$, 1 ≤ *i*, *j* ≤ *d*, is defined as

$$
\sigma\_{ij} = \frac{1}{n} \sum\_{k=1}^{n} (p\_{k,i} - \mu\_i)(p\_{k,j} - \mu\_j). \tag{3}
$$

The procedure of finding the most significant directions, in the sense mentioned above, can be formulated as an eigenvalue problem. If $\lambda\_1 \ge \lambda\_2 \ge \cdots \ge \lambda\_d$ are the eigenvalues of Σ, then the unit eigenvector $v\_j$ for $\lambda\_j$ is the *j*-th most significant direction. Since the matrix Σ is symmetric positive semidefinite, its eigenvectors are orthogonal, all $\lambda\_j$ are non-negative, and $\lambda\_j = \mathrm{var}(P, v\_j)$.

Computation of the eigenvalues, when *d* is not very large, can be done in *O*(*d*<sup>3</sup>) time, for example with the *Jacobi* or the *QR method* (Press et al. (1995)). Thus, the time complexity of computing the principal components of *n* points in **R**<sup>*d*</sup> is *O*(*n* + *d*<sup>3</sup>). The additive *O*(*d*<sup>3</sup>) term will be omitted throughout the paper, since we assume that *d* is fixed. For very large *d*, the problem of computing eigenvalues is non-trivial. In practice, the above-mentioned methods for computing eigenvalues converge rapidly; in theory, however, it is unclear how to bound the running time combinatorially and how to compute the eigenvalues in decreasing order. In Cheng & Y. Wang (2008) a modification of the *Power method* (Parlett (1998)) is presented, which can give a guaranteed approximation of the eigenvalues with high probability.

**3. Updating the principal components efficiently - discrete case in R***<sup>d</sup>*

In this section, we consider the problem of updating the covariance matrix Σ of a discrete point set *P* = {*p*<sub>1</sub>, *p*<sub>2</sub>, ..., *p<sub>n</sub>*} in **R**<sup>*d*</sup> when *m* points are added to or deleted from *P*. We give closed-form solutions for computing the components of the new covariance matrix Σ′, based on the already computed components of Σ. The recent result of Pébay (2008) implies the same solution for additions as the one presented in the sequel. The main result of this section is given in the following theorem.

**Theorem 3.1.** *Let P be a set of n points in* **R**<sup>*d*</sup> *with known covariance matrix* Σ*. Let P′ be a point set in* **R**<sup>*d*</sup>*, obtained by adding or deleting m points from P. The principal components of P′ can be computed in O*(*m*) *time for fixed d.*

#### *Proof.* **Adding points**

Let *P<sub>m</sub>* = {*p*<sub>*n*+1</sub>, *p*<sub>*n*+2</sub>, ..., *p*<sub>*n*+*m*</sub>} be a point set with center of gravity $\mu^m = (\mu\_1^m, \mu\_2^m, \ldots, \mu\_d^m)$. We add *P<sub>m</sub>* to *P*, obtaining a new point set *P*′. The *j*-th component, $\mu\_j'$, 1 ≤ *j* ≤ *d*, of the center of gravity $\mu' = (\mu\_1', \mu\_2', \ldots, \mu\_d')$ of *P*′ is

$$\mu\_j' = \frac{1}{n+m} \sum\_{k=1}^{n+m} p\_{k,j} = \frac{1}{n+m} \left( \sum\_{k=1}^n p\_{k,j} + \sum\_{k=n+1}^{n+m} p\_{k,j} \right) = \frac{n}{n+m} \mu\_j + \frac{m}{n+m} \mu\_j^m.$$

The (*i*, *j*)-th component, $\sigma\_{ij}'$, 1 ≤ *i*, *j* ≤ *d*, of the covariance matrix Σ′ of *P*′ is

$$\begin{split} \sigma'\_{ij} &= \frac{1}{n+m} \sum\_{k=1}^{n+m} (p\_{k,i} - \mu'\_i)(p\_{k,j} - \mu'\_j) \\ &= \frac{1}{n+m} \sum\_{k=1}^n (p\_{k,i} - \mu'\_i)(p\_{k,j} - \mu'\_j) + \frac{1}{n+m} \sum\_{k=n+1}^{n+m} (p\_{k,i} - \mu'\_i)(p\_{k,j} - \mu'\_j). \end{split}$$

$$
\sigma'\_{ij} = \sigma'\_{ij,1} + \sigma'\_{ij,2},
$$

where,

$$
\sigma'\_{ij,1} = \frac{1}{n+m} \sum\_{k=1}^{n} (p\_{k,i} - \mu'\_i)(p\_{k,j} - \mu'\_j),
\tag{4}
$$

and

$$
\sigma'\_{ij,2} = \frac{1}{n+m} \sum\_{k=n+1}^{n+m} (p\_{k,i} - \mu'\_i)(p\_{k,j} - \mu'\_j). \tag{5}
$$

Plugging in the values of $\mu\_i'$ and $\mu\_j'$ in (4), we obtain:

$$\begin{split} \sigma'\_{ij,1} &= \frac{1}{n+m} \sum\_{k=1}^{n} \left( p\_{k,i} - \frac{n}{n+m} \mu\_i - \frac{m}{n+m} \mu\_i^m \right) \left( p\_{k,j} - \frac{n}{n+m} \mu\_j - \frac{m}{n+m} \mu\_j^m \right) \\ &= \frac{1}{n+m} \sum\_{k=1}^{n} \left( p\_{k,i} - \mu\_i + \frac{m}{n+m} \mu\_i - \frac{m}{n+m} \mu\_i^m \right) \left( p\_{k,j} - \mu\_j + \frac{m}{n+m} \mu\_j - \frac{m}{n+m} \mu\_j^m \right) \\ &= \frac{1}{n+m} \sum\_{k=1}^{n} (p\_{k,i} - \mu\_i)(p\_{k,j} - \mu\_j) + \frac{1}{n+m} \sum\_{k=1}^{n} (p\_{k,i} - \mu\_i) \left( \frac{m}{n+m} \mu\_j - \frac{m}{n+m} \mu\_j^m \right) + \\ &\quad \frac{1}{n+m} \sum\_{k=1}^{n} \left( \frac{m}{n+m} \mu\_i - \frac{m}{n+m} \mu\_i^m \right) (p\_{k,j} - \mu\_j) + \\ &\quad \frac{1}{n+m} \sum\_{k=1}^{n} \left( \frac{m}{n+m} \mu\_i - \frac{m}{n+m} \mu\_i^m \right) \left( \frac{m}{n+m} \mu\_j - \frac{m}{n+m} \mu\_j^m \right). \end{split}$$

Computing and Updating Principal Components of Discrete and Continuous Point Sets

Since $\sum\_{k=1}^{n} (p\_{k,i} - \mu\_i) = 0$, $1 \le i \le d$, we have

$$
\sigma\_{ij,1}' = \frac{n}{n+m} \sigma\_{ij} + \frac{nm^2}{(n+m)^3} (\mu\_i - \mu\_i^m)(\mu\_j - \mu\_j^m). \tag{6}
$$

Plugging in the values of $\mu\_i'$ and $\mu\_j'$ in (5), we obtain:

$$\begin{split} \sigma\_{ij,2}^{\prime} &= \frac{1}{n+m} \sum\_{k=n+1}^{n+m} (p\_{k,i} - \frac{n}{n+m} \mu\_i - \frac{m}{n+m} \mu\_i^m) (p\_{k,j} - \frac{n}{n+m} \mu\_j - \frac{m}{n+m} \mu\_j^m) \\ &= \frac{1}{n+m} \sum\_{k=n+1}^{n+m} (p\_{k,i} - \mu\_i^m + \frac{n}{n+m} \mu\_i^m - \frac{n}{n+m} \mu\_i) (p\_{k,j} - \mu\_j^m + \frac{n}{n+m} \mu\_j^m - \frac{n}{n+m} \mu\_j) \\ &= \frac{1}{n+m} \sum\_{k=n+1}^{n+m} (p\_{k,i} - \mu\_i^m) (p\_{k,j} - \mu\_j^m) + \frac{1}{n+m} \sum\_{k=n+1}^{n+m} (p\_{k,i} - \mu\_i^m) \frac{n}{n+m} (\mu\_j^m - \mu\_j) + \\ &\quad \frac{1}{n+m} \sum\_{k=n+1}^{n+m} \frac{n}{n+m} (\mu\_i^m - \mu\_i) (p\_{k,j} - \mu\_j^m) + \\ &\quad \frac{1}{n+m} \sum\_{k=n+1}^{n+m} \frac{n}{n+m} (\mu\_i^m - \mu\_i) \frac{n}{n+m} (\mu\_j^m - \mu\_j). \end{split}$$

Since $\sum\_{k=n+1}^{n+m} (p\_{k,i} - \mu\_i^m) = 0$, $1 \le i \le d$, we have

$$
\sigma\_{ij,2}^{\prime} = \frac{m}{n+m} \sigma\_{ij}^{m} + \frac{n^2 m}{(n+m)^3} (\mu\_i - \mu\_i^m)(\mu\_j - \mu\_j^m),
\tag{7}
$$

where

$$
\sigma\_{ij}^m = \frac{1}{m} \sum\_{k=n+1}^{n+m} (p\_{k,i} - \mu\_i^m)(p\_{k,j} - \mu\_j^m), \quad 1 \le i, j \le d,
$$

is the (*i*, *j*)-th element of the covariance matrix Σ<sup>*m*</sup> of the point set *P<sub>m</sub>*. Finally, we have

$$
\sigma\_{\rm ij}^{\prime} = \sigma\_{\rm ij,1}^{\prime} + \sigma\_{\rm ij,2}^{\prime} = \frac{1}{n+m}(n\sigma\_{\rm ij} + m\sigma\_{\rm ij}^{m}) + \frac{nm}{(n+m)^{2}}(\mu\_{\rm i} - \mu\_{\rm i}^{m})(\mu\_{\rm j} - \mu\_{\rm j}^{m}).\tag{8}
$$

Note that $\sigma\_{ij}^m$, and therefore $\sigma\_{ij}'$, can be computed in *O*(*m*) time. Thus, for a fixed dimension *d*, the new covariance matrix Σ′ can also be computed in *O*(*m*) time.

The above derivation of the new principal components is summarized in Algorithm 3.1.
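As an aside, the closed-form update (8) is easy to exercise numerically. The following Python/NumPy sketch (our own illustration, not from the chapter; function and variable names are ours, and covariances are 1/*n*-normalized as in (3)) updates *μ* and Σ for an addition and checks the result against a direct recomputation over *P* ∪ *P<sub>m</sub>*:

```python
import numpy as np

def adding_points(n, mu, cov, Pm):
    """Closed-form update of eq. (8): mean and covariance of P ∪ Pm, given
    only n = |P|, the old mean mu, the old covariance cov, and the m added
    points Pm. Runs in O(m) time for fixed dimension d."""
    m = len(Pm)
    mu_m = Pm.mean(axis=0)                       # center of gravity of Pm
    cov_m = (Pm - mu_m).T @ (Pm - mu_m) / m      # covariance of Pm, 1/m-normalized
    diff = mu - mu_m
    mu_new = n / (n + m) * mu + m / (n + m) * mu_m
    cov_new = (n * cov + m * cov_m) / (n + m) \
              + n * m / (n + m) ** 2 * np.outer(diff, diff)
    return mu_new, cov_new

# Sanity check against a direct recomputation over the union.
rng = np.random.default_rng(0)
P, Pm = rng.normal(size=(50, 3)), rng.normal(size=(7, 3))
mu, cov = P.mean(axis=0), np.cov(P.T, bias=True)  # bias=True -> 1/n normalization
mu_new, cov_new = adding_points(len(P), mu, cov, Pm)
Q = np.vstack([P, Pm])
assert np.allclose(mu_new, Q.mean(axis=0))
assert np.allclose(cov_new, np.cov(Q.T, bias=True))
```

The new principal components are then the eigenvectors of `cov_new`, e.g. via `np.linalg.eigh`; the deletion update (13) can be implemented analogously.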

**Algorithm 3.1**: ADDINGPOINTS(*P*, *μ*, Σ, *P<sub>m</sub>*)

**Input**: point set *P*, the center of gravity *μ* of *P*, the covariance matrix Σ of *P*, point set *P<sub>m</sub>* added to *P*

**Output**: principal components of *P* ∪ *P<sub>m</sub>*

1: compute the center of gravity *μ<sup>m</sup>* of *P<sub>m</sub>*
2: compute the covariance matrix Σ<sup>*m*</sup> of *P<sub>m</sub>*
3: compute the center of gravity *μ*′ of *P* ∪ *P<sub>m</sub>*:
4: $\mu\_j' = \frac{n}{n+m}\mu\_j + \frac{m}{n+m}\mu\_j^m$, for 1 ≤ *j* ≤ *d*
5: compute the covariance matrix Σ′ of *P* ∪ *P<sub>m</sub>*:
6: $\sigma\_{ij}' = \frac{1}{n+m}(n\sigma\_{ij} + m\sigma\_{ij}^m) + \frac{nm}{(n+m)^2}(\mu\_i - \mu\_i^m)(\mu\_j - \mu\_j^m)$, for 1 ≤ *i*, *j* ≤ *d*
7: **return** the eigenvectors of Σ′

#### **Deleting points**

Let *P<sub>m</sub>* = {*p*<sub>*n*−*m*+1</sub>, *p*<sub>*n*−*m*+2</sub>, ..., *p<sub>n</sub>*} be a subset of the point set *P*, and let $\mu^m = (\mu\_1^m, \mu\_2^m, \ldots, \mu\_d^m)$ be the center of gravity of *P<sub>m</sub>*. We subtract *P<sub>m</sub>* from *P*, obtaining a new point set *P*′. The *j*-th component, $\mu\_j'$, 1 ≤ *j* ≤ *d*, of the center of gravity $\mu' = (\mu\_1', \mu\_2', \ldots, \mu\_d')$ of *P*′ is

$$\mu\_j' = \frac{1}{n-m} \sum\_{k=1}^{n-m} p\_{k,j} = \frac{1}{n-m} \left( \sum\_{k=1}^n p\_{k,j} - \sum\_{k=n-m+1}^n p\_{k,j} \right) = \frac{n}{n-m} \mu\_j - \frac{m}{n-m} \mu\_j^m.$$

The (*i*, *j*)-th component, $\sigma\_{ij}'$, 1 ≤ *i*, *j* ≤ *d*, of the covariance matrix Σ′ of *P*′ is

$$\begin{split} \sigma'\_{ij} &= \frac{1}{n-m} \sum\_{k=1}^{n-m} (p\_{k,i} - \mu'\_i)(p\_{k,j} - \mu'\_j) \\ &= \frac{1}{n-m} \sum\_{k=1}^n (p\_{k,i} - \mu'\_i)(p\_{k,j} - \mu'\_j) - \frac{1}{n-m} \sum\_{k=n-m+1}^n (p\_{k,i} - \mu'\_i)(p\_{k,j} - \mu'\_j). \end{split}$$

Let


$$
\sigma'\_{ij} = \sigma'\_{ij,1} - \sigma'\_{ij,2},
$$

where

$$
\sigma'\_{ij,1} = \frac{1}{n-m} \sum\_{k=1}^{n} (p\_{k,i} - \mu'\_i)(p\_{k,j} - \mu'\_j),
\tag{9}
$$

and

$$
\sigma'\_{ij,2} = \frac{1}{n-m} \sum\_{k=n-m+1}^{n} (p\_{k,i} - \mu'\_i)(p\_{k,j} - \mu'\_j). \tag{10}
$$

Plugging in the values of $\mu\_i'$ and $\mu\_j'$ in (9), we obtain:

$$\begin{split} \sigma'\_{ij,1} &= \frac{1}{n-m} \sum\_{k=1}^{n} \left( p\_{k,i} - \frac{n}{n-m} \mu\_i + \frac{m}{n-m} \mu\_i^m \right) \left( p\_{k,j} - \frac{n}{n-m} \mu\_j + \frac{m}{n-m} \mu\_j^m \right) \\ &= \frac{1}{n-m} \sum\_{k=1}^{n} \left( p\_{k,i} - \mu\_i - \frac{m}{n-m} \mu\_i + \frac{m}{n-m} \mu\_i^m \right) \left( p\_{k,j} - \mu\_j - \frac{m}{n-m} \mu\_j + \frac{m}{n-m} \mu\_j^m \right) \\ &= \frac{1}{n-m} \sum\_{k=1}^{n} (p\_{k,i} - \mu\_i)(p\_{k,j} - \mu\_j) - \frac{1}{n-m} \sum\_{k=1}^{n} (p\_{k,i} - \mu\_i) \left( \frac{m}{n-m} \mu\_j - \frac{m}{n-m} \mu\_j^m \right) - \\ &\quad \frac{1}{n-m} \sum\_{k=1}^{n} \left( \frac{m}{n-m} \mu\_i - \frac{m}{n-m} \mu\_i^m \right) (p\_{k,j} - \mu\_j) + \\ &\quad \frac{1}{n-m} \sum\_{k=1}^{n} \left( \frac{m}{n-m} \mu\_i - \frac{m}{n-m} \mu\_i^m \right) \left( \frac{m}{n-m} \mu\_j - \frac{m}{n-m} \mu\_j^m \right). \end{split}$$

Since $\sum\_{k=1}^{n} (p\_{k,i} - \mu\_i) = 0$, $1 \le i \le d$, we have

$$
\sigma\_{ij,1}' = \frac{n}{n-m} \sigma\_{ij} + \frac{nm^2}{(n-m)^3} (\mu\_i - \mu\_i^m)(\mu\_j - \mu\_j^m). \tag{11}
$$

Plugging in the values of $\mu\_i'$ and $\mu\_j'$ in (10), we obtain:

$$\begin{split} \sigma'\_{ij,2} &= \frac{1}{n-m} \sum\_{k=n-m+1}^{n} \left( p\_{k,i} - \frac{n}{n-m} \mu\_i + \frac{m}{n-m} \mu\_i^m \right) \left( p\_{k,j} - \frac{n}{n-m} \mu\_j + \frac{m}{n-m} \mu\_j^m \right) \\ &= \frac{1}{n-m} \sum\_{k=n-m+1}^{n} \left( p\_{k,i} - \mu\_i^m + \frac{n}{n-m} \mu\_i^m - \frac{n}{n-m} \mu\_i \right) \left( p\_{k,j} - \mu\_j^m + \frac{n}{n-m} \mu\_j^m - \frac{n}{n-m} \mu\_j \right) \\ &= \frac{1}{n-m} \sum\_{k=n-m+1}^{n} (p\_{k,i} - \mu\_i^m)(p\_{k,j} - \mu\_j^m) + \frac{1}{n-m} \sum\_{k=n-m+1}^{n} (p\_{k,i} - \mu\_i^m) \frac{n}{n-m} (\mu\_j^m - \mu\_j) + \\ &\quad \frac{1}{n-m} \sum\_{k=n-m+1}^{n} \frac{n}{n-m} (\mu\_i^m - \mu\_i)(p\_{k,j} - \mu\_j^m) + \\ &\quad \frac{1}{n-m} \sum\_{k=n-m+1}^{n} \frac{n}{n-m} (\mu\_i^m - \mu\_i) \frac{n}{n-m} (\mu\_j^m - \mu\_j). \end{split}$$

Since $\sum\_{k=n-m+1}^{n} (p\_{k,i} - \mu\_i^m) = 0$, $1 \le i \le d$, we have

$$
\sigma\_{ij,2}' = \frac{m}{n-m} \sigma\_{ij}^m + \frac{n^2 m}{(n-m)^3} (\mu\_i - \mu\_i^m)(\mu\_j - \mu\_j^m), \tag{12}
$$

where

$$
\sigma\_{ij}^m = \frac{1}{m} \sum\_{k=n-m+1}^n (p\_{k,i} - \mu\_i^m)(p\_{k,j} - \mu\_j^m), \quad 1 \le i, j \le d,
$$

is the (*i*, *j*)-th element of the covariance matrix Σ<sup>*m*</sup> of the point set *P<sub>m</sub>*.

Finally, we have

$$
\sigma\_{ij}' = \sigma\_{ij,1}' - \sigma\_{ij,2}' = \frac{1}{n-m}(n\sigma\_{ij} - m\sigma\_{ij}^m) - \frac{nm}{(n-m)^2}(\mu\_i - \mu\_i^m)(\mu\_j - \mu\_j^m). \tag{13}
$$

Note that $\sigma\_{ij}^m$, and therefore $\sigma\_{ij}'$, can be computed in *O*(*m*) time. Thus, for a fixed dimension *d*, the new covariance matrix Σ′ can also be computed in *O*(*m*) time.

As a corollary of (13), in the case when only one point, *p<sub>e</sub>*, is deleted from a point set *P*, the elements of the new covariance matrix are given by

$$
\sigma\_{ij}' = \frac{n}{n-1} \sigma\_{ij} - \frac{n}{(n-1)^2} (p\_{e,i} - \mu\_i)(p\_{e,j} - \mu\_j), \tag{14}
$$

and can be computed in *O*(1) time. A similar argument holds when only one point is added to a point set *P*; the new covariance matrix can then also be computed in constant time.

The above derivation of the new principal components is summarized in Algorithm 3.2.

**Algorithm 3.2**: DELETINGPOINTS(*P*, *μ*, Σ, *P<sub>m</sub>*)

**Input**: point set *P*, the center of gravity *μ* of *P*, the covariance matrix Σ of *P*, point set *P<sub>m</sub>* deleted from *P*

**Output**: principal components of *P* \ *P<sub>m</sub>*

1: compute the center of gravity *μ<sup>m</sup>* of *P<sub>m</sub>*
2: compute the covariance matrix Σ<sup>*m*</sup> of *P<sub>m</sub>*
3: compute the center of gravity *μ*′ of *P* \ *P<sub>m</sub>*:
4: $\mu\_j' = \frac{n}{n-m}\mu\_j - \frac{m}{n-m}\mu\_j^m$, for 1 ≤ *j* ≤ *d*
5: compute the covariance matrix Σ′ of *P* \ *P<sub>m</sub>*:
6: $\sigma\_{ij}' = \frac{1}{n-m}(n\sigma\_{ij} - m\sigma\_{ij}^m) - \frac{nm}{(n-m)^2}(\mu\_i - \mu\_i^m)(\mu\_j - \mu\_j^m)$, for 1 ≤ *i*, *j* ≤ *d*
7: **return** the eigenvectors of Σ′

Notice that, once closed-form solutions are available, one can obtain algorithms for continuous point sets similar to those presented in this chapter. We therefore omit them in the next section and in the appendix.

**4. Computing and updating the principal components efficiently - continuous case in R**<sup>3</sup>

Here, we consider the computation of the principal components of a dynamic continuous point set. We present closed-form solutions when the point set is a convex polytope or the boundary of a convex polytope in **R**<sup>2</sup> or **R**<sup>3</sup>. When the point set is the boundary of a convex polytope, we can update the new principal components in *O*(*k*) time, for both deletion and addition, under the assumption that we know the *k* facets in which the polytope changes. Under the same assumption, when the point set is a convex polytope in **R**<sup>2</sup> or **R**<sup>3</sup>, we can update the principal components in *O*(*k*) time after adding points. However, to update the principal components after deleting points from a convex polytope in **R**<sup>2</sup> or **R**<sup>3</sup> we need *O*(*n*) time. This is due to the fact that, after a deletion, the center of gravity of the old convex hull (polyhedron) could lie outside the new convex hull, and therefore a re-tetrahedralization is needed (see Subsection 4.1 and Subsection 6.2 for details). For better readability and compactness of the chapter, we present in this section only the closed-form solutions for a convex polytope in **R**<sup>3</sup>, and leave the rest of the results for the appendix.

**4.1 Continuous PCA over a convex polyhedron in R**<sup>3</sup>

Let *P* be a point set in **R**<sup>3</sup>, and let *X* be its convex hull. We assume that the boundary of *X* is triangulated (if it is not, we can triangulate it in a preprocessing step). We choose an arbitrary point *o* in the interior of *X*; for example, we can choose *o* to be the center of gravity of the boundary of *X*. Each triangle from the boundary together with *o* forms a tetrahedron. Let the number of tetrahedra formed this way be *n*. The *k*-th tetrahedron, with vertices $x\_{1,k}, x\_{2,k}, x\_{3,k}, x\_{4,k} = o$, can be represented in parametric form by $Q^k(s, t, u) = x\_{4,k} + s(x\_{1,k} - x\_{4,k}) + t(x\_{2,k} - x\_{4,k}) + u(x\_{3,k} - x\_{4,k})$, for $0 \le s, t, u \le 1$ and $s + t + u \le 1$. For 1 ≤ *i* ≤ 3, we use $x\_{i,j,k}$ to denote the *i*-th coordinate of the vertex $x\_{j,k}$ of the tetrahedron $Q^k$.
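As a small aside, the tetrahedral decomposition described in Subsection 4.1 already yields, for instance, the center of gravity of the solid polytope as a volume-weighted average of tetrahedra centroids. The following Python/NumPy sketch is our own illustration (the function name and demo are not from the chapter); it assumes `triangles` is the list of boundary triangles, each a triple of 3D vertex arrays, and `o` is any interior point:

```python
import numpy as np

def polytope_center_of_gravity(o, triangles):
    """Center of gravity of a solid convex polytope, decomposed into the
    tetrahedra formed by each boundary triangle (x1, x2, x3) together with
    an interior point o. Each tetrahedron contributes its centroid
    (x1 + x2 + x3 + o)/4, weighted by its volume |det[x1-o, x2-o, x3-o]|/6."""
    total_vol, weighted_sum = 0.0, np.zeros(3)
    for x1, x2, x3 in triangles:
        vol = abs(np.linalg.det(np.stack([x1 - o, x2 - o, x3 - o]))) / 6.0
        total_vol += vol
        weighted_sum += vol * (x1 + x2 + x3 + o) / 4.0
    return weighted_sum / total_vol

# Demo: the four faces of a tetrahedron, with o chosen as its centroid,
# must reproduce the centroid (a + b + c + d)/4.
a, b, c, d = (np.array(v, dtype=float)
              for v in ([0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]))
o = (a + b + c + d) / 4
g = polytope_center_of_gravity(o, [(a, b, c), (a, b, d), (a, c, d), (b, c, d)])
assert np.allclose(g, o)
```

The continuous covariance integrals over each tetrahedron can be accumulated in the same per-tetrahedron loop, which is the pattern behind the facet-local updates discussed above.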
