74 MATLAB – A Fundamental Tool for Scientific Computing and Engineering Applications – Volume 1

**2.3. WRIM faults**

This model of the WRIM will be used to simulate both the healthy and the faulted machine [1-2].

Despite the constant improvements in the technical design of reliable machines, different types of faults still occur. Faults can result from normal wear, poor design, poor assembly (misalignment), improper use, or a combination of these causes.

Figure 4 and Figure 5 show the fault distributions obtained by a German company on industrial systems. Figure 4 shows the faults of the low and medium power machines (50 kW to 200 kW), and Figure 5 those of the high power machines (above 200 kW).

**Figure 4.** Low and medium power induction machines faults [1-2]

**Figure 5.** High power induction machines faults [1-2]

Figure 4 shows that the most frequently encountered faults on the low and medium power induction machines are the stator faults, while Figure 5 shows that the faults due to mechanical defects give the highest percentage. The induction machine faults can be classified into four categories [2], according to the configuration of the stator and the rotor.

**2.4. Considered faults**

The considered faults concern the resistance values, which increase with the rise of their temperature. In normal operation, a variation of a resistance value compared to its nominal value (at ambient temperature, 25°C) indicates a faulted machine, due to machine overload or a coil fault [1,9]. The resistance versus temperature is expressed as:

$$R = R\_0(1 + \alpha \Delta T) \tag{15}$$

*R0* is the resistance value at *T0* = 25°C, *α* the temperature coefficient of the resistance and *ΔT* the temperature variation.
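As a quick numerical illustration of (15) — a sketch, not code from the chapter; the temperature coefficient and nominal resistance values below are assumptions (α ≈ 0.004/°C is close to that of copper windings):

```python
# Resistance-temperature model of Eq. (15): R = R0 * (1 + alpha * dT).
# alpha = 0.004 per degC is an assumed value, close to copper windings.
def resistance(R0, dT, alpha=0.004):
    """Resistance at a temperature rise dT above the 25 degC reference."""
    return R0 * (1 + alpha * dT)

R0 = 1.2                      # assumed nominal phase resistance (ohm) at 25 degC
R_hot = resistance(R0, 50.0)  # winding 50 degC above ambient
# a 50 degC rise gives a 20 % increase: 1.2 * (1 + 0.004 * 50) = 1.44 ohm
```

Monitoring schemes of this kind flag a fault when the identified resistance deviates from its nominal value by more than the variation explainable by normal heating.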

### **3. PCA methodology**

The PCA method is based on simple linear algebra. It can be used as an exploratory tool for data analysis and model design. The method relies on a transformation of the data space representation: the dimension of the new space is smaller than that of the original space. PCA is classified as a model-free method [5], and it can be considered as a full identification method for physical systems [6]. It directly provides the redundancy relations between the variables without identifying the state-space matrices of the system, a task that is often difficult to achieve.

#### **3.1. PCA method formulation**

We denote by *x(j)* = [*x1(j)* *x2(j)* … *xm(j)*] the measurements vector, where « *i* » indexes the measured variables that must be monitored (*i* = 1 to *m*) and « *j* » the sample number for each variable, *j* = 1 to *N*.

The measurements data matrix (*Xd* ∈ *RN\*m*) can be written as follows:

$$\mathbf{X}\_d = \begin{pmatrix} \mathbf{x}\_1(1) & \dots & \mathbf{x}\_m(1) \\ \dots & \dots & \dots \\ \mathbf{x}\_1(N) & \dots & \mathbf{x}\_m(N) \end{pmatrix} \tag{16}$$

The data matrix is described by a smaller new matrix, obtained by an orthogonal linear projection from the *m*-dimensional subspace onto a subspace of lower dimension *l* (*l* < *m*). The method consists in identifying the PCA model and is based on two steps [10]:

- Determination of the eigenvalues and the eigenvectors of the covariance matrix *R*.
- Determination of the structure of the model, which consists in calculating the number of components « *l* » to be retained in the PCA model.


#### **3.2. Eigenvalues and eigenvectors determination**

The first step is the data normalization: each variable must be centered and scaled to unit variance. The normalized matrix thus obtained is:

$$X = [X\_1...X\_m] \tag{17}$$

And the covariance matrix *R* is given by:

$$R = \frac{1}{N-1} \mathbf{X}^T \mathbf{X} \tag{18}$$

By decomposing *R*, (18) can be expressed as:

$$R = P\Lambda P^T \tag{19}$$

With

$$P P^T = P^T P = I\_m \tag{20}$$

Λ is the diagonal matrix of the eigenvalues of *R*, ordered in descending order of magnitude (*λ1* ≥ *λ2* ≥ … ≥ *λm*).

The eigenvectors matrix P is expressed as:

$$P = [p\_1, p\_2, \dots, p\_m] \tag{21}$$

*pi* is the orthonormal eigenvector corresponding to the eigenvalue *λi*. Then, the principal components matrix can be calculated using:

$$T = XP\tag{22}$$

*T* ∈ *RN\*m* is the principal components matrix.
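The normalization, covariance, and eigen-decomposition steps of (17)-(22) can be sketched as follows. This is an illustrative NumPy sketch, not code from the chapter (which works in MATLAB); the function and variable names are my own:

```python
import numpy as np

def pca_model(Xd):
    """Eigen-decomposition step of the PCA method: normalize the data,
    form the covariance matrix R, and compute its ordered eigenstructure."""
    N, m = Xd.shape
    # Center and scale each variable to unit variance (Eq. 17)
    X = (Xd - Xd.mean(axis=0)) / Xd.std(axis=0, ddof=1)
    # Covariance matrix (Eq. 18); X^T X / (N-1), so R is m x m
    R = (X.T @ X) / (N - 1)
    # R = P Lambda P^T (Eq. 19); reorder eigenvalues in descending order
    eigvals, P = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1]
    Lam, P = eigvals[order], P[:, order]
    # Principal components matrix T = X P (Eq. 22)
    T = X @ P
    return X, R, Lam, P, T
```

Because *P* is orthogonal (Eq. 20), `T @ P.T` recovers the normalized data matrix exactly, which is the property the model construction in the next subsection exploits.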

#### **3.3. PCA model construction**

To obtain the structure of the model, the number of components « *l* » to be retained must be determined. This step is very important for the PCA model construction. The number of components can be determined by using the following criterion:

$$\frac{\sum\_{i=1}^{l} \lambda\_i}{\sum\_{i=1}^{m} \lambda\_i} \geq thc \tag{23}$$

where « *thc* » is a user-defined threshold expressed as a percentage. The user should retain only the number of components « *l* » appearing in the left-hand side of (23): since the eigenvalues are ordered in descending order, the minimum number of components is retained while still reaching the minimum variance threshold [14].
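The selection rule (23) can be sketched as below (a NumPy sketch under assumed names; the threshold value is illustrative):

```python
import numpy as np

def select_components(eigvals, thc=0.95):
    """Smallest l such that the first l eigenvalues carry at least a
    fraction thc of the total variance (Eq. 23). thc = 0.95 is illustrative."""
    lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]  # descending order
    cumulative = np.cumsum(lam) / lam.sum()
    # first index where the cumulative variance reaches the threshold
    return int(np.searchsorted(cumulative, thc) + 1)

# e.g. eigenvalues [4, 2, 1, 0.5, 0.5]: the first two components carry
# 6/8 = 75 % of the variance, the first three 7/8 = 87.5 %
```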

By taking into account the number of components to be retained and by partitioning the principal component matrix *T*, the eigenvectors matrix *P* and the eigenvalues matrix [12], the constructed PCA model is given by:

$$T = \left[ T\_p^{N\*l} \; T\_r^{N\*(m-l)} \right] \tag{24}$$

$$P = \left[ P\_p^{m\*l} \; P\_r^{m\*(m-l)} \right] \tag{25}$$

$$
\Lambda = \begin{bmatrix}
\Lambda^{l^\*l} & \cdots & 0 \\
\vdots & \ddots & \vdots \\
0 & \cdots & \Lambda^{(m-l)(m-l)}
\end{bmatrix} \tag{26}
$$

*Tp* and *Tr* are respectively the principal and residual parts of *T*; *Pp* and *Pr* are respectively the principal and residual parts of *P*.

With this PCA model, the centered and reduced matrix X can be written as:

$$X = T\_p P\_p^T + T\_r P\_r^T \tag{27}$$

By considering:

$$X\_p = T\_p P\_p^T = \sum\_{i=1}^{l} t\_i p\_i^T \tag{28}$$

*ti* and *pi* are respectively the *i*-th columns of *T* and *P*.

$$E = T\_r P\_r^T = \sum\_{i=l+1}^{m} t\_i p\_i^T \tag{29}$$

The centered and reduced data matrix is given by:

$$X = X\_p + E \tag{30}$$

*Xp* is the principal estimated matrix and *E* the residue matrix, which represents the information loss due to the reduction of the data matrix *X*; it is the difference between the exact and the approximated representations of *X*. This matrix is associated with the lowest eigenvalues *λl+1*, …, *λm*. Therefore, the data compression preserves the best part of the information conveyed by the data.
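The decomposition (27)-(30) can be sketched as follows — an illustrative NumPy sketch under the section's notation (the function name is my own); it splits the normalized data into the principal estimate and the residue:

```python
import numpy as np

def pca_decompose(X, P, l):
    """Split the normalized data X into the principal estimate Xp and the
    residue E of Eqs. (28)-(30), keeping the first l columns of the
    eigenvector matrix P (columns ordered by descending eigenvalue)."""
    Pp, Pr = P[:, :l], P[:, l:]
    Tp, Tr = X @ Pp, X @ Pr   # principal and residual score matrices
    Xp = Tp @ Pp.T            # principal estimated matrix (Eq. 28)
    E = Tr @ Pr.T             # residue matrix (Eq. 29)
    return Xp, E
```

Since *P* is orthogonal, *Xp* + *E* reproduces *X* exactly (Eq. 30); the compression only loses little information when the discarded eigenvalues *λl+1*, …, *λm* are small, so *E* stays close to zero.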
