# **Weighted Least Squares Perturbation Theory**

*Aleksandr N. Khimich, Elena A. Nikolaevskaya and Igor A. Baranov*

## **Abstract**

The interest in the problem of weighted pseudoinverse matrices and the problem of weighted least squares (WLS) is largely due to their numerous applications. In particular, the WLS problem is used in the design and optimization of building structures, in tomography, in statistics, etc. The first part of the chapter is devoted to the sensitivity of the solution of the WLS problem with approximate initial data. The second part investigates the properties of a system of linear algebraic equations (SLAE) with approximate initial data and presents an algorithm for finding a weighted normal pseudosolution of a WLS problem with approximate initial data, an algorithm for solving a WLS problem with symmetric positive semidefinite matrices and an approximate right-hand side, and also a parallel algorithm for solving a WLS problem. The third part is devoted to the analysis of the reliability of computer solutions of the WLS problem with approximate initial data. Here, estimates of the total error of the WLS problem are presented, along with software-algorithmic approaches to improving the accuracy of computer solutions.

**Keywords:** weighted least squares problem, error estimates, weighted matrix pseudoinverse, weighted condition number, weighted singular value decomposition

## **1. Introduction**

The interest in the problem of weighted pseudoinverse matrices and the WLS problem is largely due to their numerous applications. In particular, the problem of weighted least squares is used in the design and optimization of building structures, in tomography, in statistics, etc. A number of properties of weighted pseudoinverse matrices underlie the computation of weighted normal pseudosolutions. The field of application of weighted pseudoinverse matrices and weighted normal pseudosolutions is constantly expanding.

The definition of a weighted pseudoinverse matrix with positive definite weights was first introduced by Chipman in [1]. In 1968, Milne introduced the definition of a skew pseudoinverse matrix in [2]. The works of Mitra, Rao, Van Loan, Wang, Galba, Deineka, Sergienko, Ben-Israel, Elden, Wei, Wei, Ward, and others are devoted to the study of the properties of weighted pseudoinverse matrices and weighted normal pseudosolutions, as well as to the construction of methods for solving these and related problems. Weighted pseudoinverse matrices and weighted normal pseudosolutions with degenerate weights were studied in [3–5]. The existence and uniqueness of weighted pseudoinverse matrices with indefinite and mixed weights, as well as some of their properties, were described in [6–8]. Applications of the weighted pseudoinverse matrix in statistics are presented, for example, in [9, 10]. Many results on weighted generalized pseudoinversion can be found in the monographs [11, 12]. Much less work is devoted to the study of weighted pseudoinversion under conditions of approximate initial data; these issues are discussed in [13–17]. The analysis of the properties of weighted pseudoinverses and weighted normal pseudosolutions, as well as the creation of solution methods for these and related problems, is described in [18–20].

When solving applied problems, their mathematical models will have, as a rule, approximate initial data as a result of measurements, observations, assumptions, hypotheses, etc. Later, during discretization ('arithmetization') of the mathematical model, these errors are transformed into errors in the matrix elements and the right-hand sides of the resolving systems of equations. The input data of systems of linear algebraic equations and WLS problems can be determined directly from physical observations, and therefore they can have errors inherent in all measurements. In this case, the original data we have is an approximation of some exact data. Finally, the initial data of mathematical models formulated in the form of linear algebra problems can be specified exactly in the form of numbers or mathematical formulas, but, given the finite length of a machine word, it is impossible to work with such an exact model on a computer. The machine model of such a problem will in general be approximate, either due to errors in converting numbers from the decimal system to binary or due to rounding errors in the implementation of calculations on a computer.

The task is to study the properties of the machine model and to form a model of the problem and an algorithm for obtaining an approximate solution in a machine environment that will approximate the solution of a mathematical problem. The key question of numerical simulation is the reliability of the obtained machine solutions.

The most complete systematic exposition of questions related to the approximate nature of the initial data in problems of linear algebra is given in the monographs [21–24]. Various approaches to the study and solution of ill-posed problems were considered, for example, in [25–28]. Problems of the reliability of a machine solution for problems with approximate initial data, i.e. estimates of the proximity of the machine solution to the mathematical solution, estimates of the hereditary error in the mathematical solution and refinement of the solution were considered in the publications [12, 26, 29–33]. Much less work has been devoted to the study of similar questions for the WLS problem. The sensitivity analysis of a weighted normal pseudosolution under perturbation of the matrix and the right-hand side is the subject of papers [16, 34–36].

The chapter is devoted to the solution of the listed topical problems, namely the development of the perturbation theory for the WLS problem with positive definite weights and the development of numerical methods for the study and solution of mathematical models with approximate initial data.

## **2. Weighted least squares problem**

#### **2.1 Preliminaries**

Let the set of all $m \times n$ real matrices be denoted by $R^{m \times n}$. Given a matrix $A \in R^{m \times n}$, let $A^T$ denote the transpose of $A$, $\mathrm{rank}(A)$ the rank of $A$, $\mathcal{R}(A)$ the range of $A$, and $\mathcal{N}(A)$ the null space of $A$. Additionally, let $\|\cdot\|$ denote the vector 2-norm and the consistent matrix 2-norm, and let $I$ be an identity matrix.

Given an arbitrary matrix $A \in R^{m \times n}$ and symmetric positive definite matrices $M$ and $N$ of orders $m$ and $n$, respectively, there exists a unique matrix $X \in R^{n \times m}$ satisfying the conditions:

$$AXA = A, \quad XAX = X, \quad (MAX)^T = MAX, \quad (NXA)^T = NXA, \tag{1}$$

is called the **weighted Moore–Penrose pseudoinverse** of $A$ and is denoted by $X = A_{MN}^{+}$. In particular, if $M = I \in R^{m \times m}$ and $N = I \in R^{n \times n}$, then the matrix $X$ satisfying conditions (1) is called the **Moore–Penrose pseudoinverse** and is designated as $X = A^{+}$.

Let $A^{\#}$ denote the weighted transpose of $A$, let $P$, $Q$, $\bar{P}$, $\bar{Q}$ be idempotent matrices, and let $\bar{A} = A + \Delta A$ be a perturbed matrix, i.e.,

$$A^{\#} = N^{-1} A^T M. \tag{2}$$

$$P = A_{MN}^{+}A, \quad Q = AA_{MN}^{+}, \quad \bar{P} = \bar{A}_{MN}^{+}\bar{A}, \quad \bar{Q} = \bar{A}\bar{A}_{MN}^{+}. \tag{3}$$

The weighted scalar products in $R^m$ and $R^n$ are defined as $(x, y)_M = y^T M x$, $x, y \in R^m$, and $(x, y)_N = y^T N x$, $x, y \in R^n$, respectively. The weighted vector norms are defined as:

$$\begin{aligned} \|x\|_M &= (x, x)_M^{\frac{1}{2}} = \left(x^T M x\right)^{\frac{1}{2}} = \left\|M^{\frac{1}{2}}x\right\|, \quad x \in R^m, \\ \|y\|_N &= (y, y)_N^{\frac{1}{2}} = \left(y^T N y\right)^{\frac{1}{2}} = \left\|N^{\frac{1}{2}}y\right\|, \quad y \in R^n. \end{aligned} \tag{4}$$

Let $x, y \in R^m$ and $(x, y)_M = 0$. Then the vectors $x$ and $y$ are called $M$-orthogonal, i.e., the vectors $M^{\frac{1}{2}}x$ and $M^{\frac{1}{2}}y$ are orthogonal. It is easy to show that

$$\|x + y\|_M^2 = \|x\|_M^2 + \|y\|_M^2, \quad x, y \in R^m, \ (x, y)_M = 0. \tag{5}$$
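These definitions are easy to exercise numerically. The following sketch (NumPy; the weight matrix `M`, the vectors, and the helper names `ip_M`, `norm_M` are illustrative choices, not from the chapter) constructs an $M$-orthogonal pair by $M$-projection and checks identity (5) together with $\|x\|_M = \|M^{1/2}x\|$ from (4).

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4
C = rng.standard_normal((m, m))
M = C @ C.T + m * np.eye(m)            # an illustrative symmetric positive definite weight

ip_M = lambda x, y: y @ M @ x          # (x, y)_M = y^T M x, cf. the definition above
norm_M = lambda x: np.sqrt(ip_M(x, x))

x = rng.standard_normal(m)
z = rng.standard_normal(m)
# Remove the M-projection of z onto x, leaving a vector M-orthogonal to x.
y = z - (ip_M(z, x) / ip_M(x, x)) * x
assert abs(ip_M(x, y)) < 1e-10

# The Pythagorean identity (5) for M-orthogonal vectors:
assert np.isclose(norm_M(x + y) ** 2, norm_M(x) ** 2 + norm_M(y) ** 2)

# ||x||_M = ||M^{1/2} x||, cf. (4); the square root is taken via eigendecomposition.
w, V = np.linalg.eigh(M)
M_half = V @ np.diag(np.sqrt(w)) @ V.T
assert np.isclose(norm_M(x), np.linalg.norm(M_half @ x))
```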

The weighted matrix norms are defined as:

$$\begin{aligned} \|A\|_{MN} &= \max_{\|x\|_N = 1} \|Ax\|_M = \left\|M^{\frac{1}{2}}AN^{-\frac{1}{2}}\right\|, \quad A \in R^{m \times n}, \\ \|B\|_{NM} &= \max_{\|y\|_M = 1} \|By\|_N = \left\|N^{\frac{1}{2}}BM^{-\frac{1}{2}}\right\|, \quad B \in R^{n \times m}. \end{aligned} \tag{6}$$

**Lemma 1** (see [37]). Let $A \in R^{m \times n}$, $\mathrm{rank}(A) = k$, and let $M$ and $N$ be positive definite matrices of orders $m$ and $n$, respectively. Then there exist matrices $U \in R^{m \times m}$ and $V \in R^{n \times n}$, satisfying $U^T M U = I$ and $V^T N^{-1} V = I$, such that

$$A = U\begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix}V^T, \qquad A_{MN}^{+} = N^{-1}V\begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix}U^T M, \tag{7}$$

where $D = \mathrm{diag}(\mu_1, \mu_2, \ldots, \mu_k)$, $\mu_1 \ge \mu_2 \ge \ldots \ge \mu_k > 0$, and $\mu_i^2$ are the nonzero eigenvalues of the matrix $A^{\#}A$. The nonnegative values $\mu_i$ are called the weighted singular values of $A$; moreover, $\|A\|_{MN} = \mu_1$ and $\left\|A_{MN}^{+}\right\|_{NM} = \frac{1}{\mu_k}$.

The weighted singular value decomposition of $A$ yields an $M$-orthonormal basis formed by the columns of $U$ and an $N^{-1}$-orthonormal basis formed by the columns of $V$.
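Lemma 1 also suggests a practical recipe: the weighted singular values and the weighted pseudoinverse of $A$ can be obtained from the ordinary SVD of $M^{1/2}AN^{-1/2}$. Below is a sketch of this reduction (NumPy; the random data and the helper `spd_sqrt` are illustrative), which also verifies the four conditions (1) and the norm identities of Lemma 1.

```python
import numpy as np

def spd_sqrt(W):
    """Symmetric square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(W)
    return V @ np.diag(np.sqrt(w)) @ V.T

rng = np.random.default_rng(2)
m, n = 5, 3
A = rng.standard_normal((m, n))
Cm = rng.standard_normal((m, m))
Cn = rng.standard_normal((n, n))
M = Cm @ Cm.T + m * np.eye(m)          # SPD weight of order m (illustrative)
N = Cn @ Cn.T + n * np.eye(n)          # SPD weight of order n (illustrative)

Mh, Nh = spd_sqrt(M), spd_sqrt(N)
B = Mh @ A @ np.linalg.inv(Nh)                  # equivalent unweighted matrix
mu = np.linalg.svd(B, compute_uv=False)         # weighted singular values of A
X = np.linalg.inv(Nh) @ np.linalg.pinv(B) @ Mh  # A^+_{MN}

# The four defining conditions (1):
assert np.allclose(A @ X @ A, A)
assert np.allclose(X @ A @ X, X)
assert np.allclose((M @ A @ X).T, M @ A @ X)
assert np.allclose((N @ X @ A).T, N @ X @ A)

# Norm identities of Lemma 1: ||A||_MN = mu_1, ||A^+_MN||_NM = 1/mu_k.
assert np.isclose(np.linalg.norm(B, 2), mu[0])
assert np.isclose(np.linalg.norm(Nh @ X @ np.linalg.inv(Mh), 2), 1 / mu[-1])
```

This reduction to an equivalent unweighted problem is the standard computational route when $M$ and $N$ are well conditioned.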

#### **2.2 Statement of the problem**

In the study of the reliability of the obtained machine results, three linear systems are considered. The first is a system of linear algebraic equations with exact input data:

$$A\mathfrak{x} = b.\tag{8}$$

We will consider the corresponding weighted least squares problem with positive definite weights *M* and *N*:

$$\min_{x \in C}\|x\|_N, \quad C = \left\{x : \|Ax - b\|_M = \min\right\}, \tag{9}$$

where $A \in R^{m \times n}$ is a rank-deficient matrix and $b \in R^m$.

Along with (9), we consider the mathematical model with approximately specified initial data:

$$\min_{\bar{x} \in \bar{C}}\|\bar{x}\|_N, \quad \bar{C} = \left\{\bar{x} : \|(A + \Delta A)\bar{x} - (b + \Delta b)\|_M = \min\right\}, \tag{10}$$

where

$$\bar{A} = A + \Delta A, \quad \bar{b} = b + \Delta b, \quad \bar{x} = x + \Delta x. \tag{11}$$

Assume that the errors in the matrix elements and the right-hand side satisfy the relations:

$$\|\Delta A\|_{MN} \le \varepsilon_A\|A\|_{MN}, \quad \|\Delta b\|_M \le \varepsilon_b\|b\|_M. \tag{12}$$

The third system relates the approximate solution $\bar{\bar{x}}$ of the system of linear algebraic equations with approximately given initial data,

$$\bar{A}\bar{\bar{x}} = \bar{b} + \bar{r}, \tag{13}$$

where $\bar{r} = \bar{A}\bar{\bar{x}} - \bar{b}$ is the residual vector.

The analysis of the reliability of the obtained solution includes an assessment of the hereditary error $\|x - \bar{x}\|_N$, the computational error $\|\bar{x} - \bar{\bar{x}}\|_N$, and the total error $\|x - \bar{\bar{x}}\|_N$, as well as the refinement of the obtained machine solution to a given accuracy.
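As an illustration of these quantities, the sketch below (NumPy; random illustrative data and the hypothetical helper `weighted_pinv`) forms the weighted normal pseudosolution $x = A_{MN}^{+}b$ of a full-rank problem, perturbs the data as in (11)–(12), and measures the hereditary error $\|x - \bar{x}\|_N$.

```python
import numpy as np

def spd_sqrt(W):
    w, V = np.linalg.eigh(W)
    return V @ np.diag(np.sqrt(w)) @ V.T

def weighted_pinv(A, M, N):
    """A^+_{MN} computed via the ordinary pseudoinverse of M^{1/2} A N^{-1/2}."""
    Mh, Nh = spd_sqrt(M), spd_sqrt(N)
    return np.linalg.inv(Nh) @ np.linalg.pinv(Mh @ A @ np.linalg.inv(Nh)) @ Mh

rng = np.random.default_rng(3)
m, n = 6, 4
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
M = np.diag(rng.uniform(0.5, 2.0, m))  # diagonal SPD weights, for simplicity
N = np.diag(rng.uniform(0.5, 2.0, n))

x = weighted_pinv(A, M, N) @ b         # exact weighted normal pseudosolution

# Perturbed data as in (11), with small relative perturbations as in (12).
dA = 1e-6 * rng.standard_normal((m, n))
db = 1e-6 * rng.standard_normal(m)
x_bar = weighted_pinv(A + dA, M, N) @ (b + db)

norm_N = lambda v: np.sqrt(v @ N @ v)
hereditary_error = norm_N(x - x_bar)   # ||x - x_bar||_N
assert hereditary_error < 1e-2         # small, on the order of the data perturbation
```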

#### **2.3 The existence and uniqueness of a weighted normal pseudosolution**

Let a linear manifold $L$ be a nonempty subset of the space $R$, closed with respect to the operations of addition and multiplication by a scalar (if $x$ and $y$ are elements of $L$, then for all $\alpha, \beta$ the vector $\alpha x + \beta y$ is an element of $L$). A vector $x$ is $N$-orthogonal to the linear manifold $L$ ($x \perp_N L$) if $x$ is $N$-orthogonal to each vector from $L$.

**Lemma 2** (see [38]). There exists a unique decomposition of vector $x$, namely $x = \hat{x} + \tilde{x}$, where $\hat{x} \in L$ and $\tilde{x} \perp_N L$.

Let $A$ be an arbitrary matrix. The kernel of the matrix $A$, denoted by $\mathcal{N}(A)$, is the set of vectors mapped into zero by $A$: $\mathcal{N}(A) = \{x : Ax = 0\}$.

The range $\mathcal{R}(A)$ of the matrix $A$ is the set of vectors that are images of vectors of the space $R$ from the definition domain of $A$, i.e., $\mathcal{R}(A) = \{b : b = Ax \text{ for some } x\}$.

Let $L$ be a linear manifold in the space $R$. The $N$-orthogonal ($M$-orthogonal) complement of $L$, denoted by $L^{\perp_N}$ ($L^{\perp_M}$), is defined as the set of vectors in $R$, each of which is $N$-orthogonal ($M$-orthogonal) to $L$.

**Remark 1**. If $x$ is a vector from $R$ and $x^T N y = 0$ for any $y$ from $R$, then $x = 0$.

**Theorem 1.** Let $A \in R^{m \times n}$; then $\mathcal{N}(A) = \mathcal{R}(A^{\#})^{\perp_N}$.

**Proof**. A vector $x \in \mathcal{N}(A)$ if and only if $Ax = 0$. Hence, by virtue of Remark 1, we get $x \in \mathcal{N}(A)$ if and only if $y^T MAx = 0$ for any $y$. Since $y^T MAx = \left(A^{\#}y\right)^T Nx$, we get $Ax = 0$ if and only if $x$ is $N$-orthogonal to all vectors of the form $A^{\#}y$. The vectors $A^{\#}y$ form $\mathcal{R}(A^{\#})$. The required statement follows from this and from the definition of $\mathcal{R}(A^{\#})^{\perp_N}$.

**Theorem 2** (see [38]). If $A$ is an $m \times n$ matrix and $b$ is an $m$-dimensional vector, then the unique decomposition $b = \hat{b} + \tilde{b}$ holds, where $\hat{b} \in \mathcal{R}(A)$ and $\tilde{b} \in \mathcal{N}(A^{\#})$.

The vector $\hat{b}$ is the projection of $b$ onto $\mathcal{R}(A)$, and $\tilde{b}$ is the projection of $b$ onto $\mathcal{N}(A^{\#})$. The vectors $\hat{b}$ and $\tilde{b}$ are $M$-orthogonal. Hence, $A^{\#}b = A^{\#}\hat{b}$.

By Theorem 1, the following relations hold for a symmetric matrix $A$: $\mathcal{N}(A) = \mathcal{R}(A)^{\perp}$, $\mathcal{R}(A) = \mathcal{N}(A)^{\perp}$.

**Theorem 3.** Let $A \in R^{m \times n}$; then $\mathcal{R}(A) = \mathcal{R}(AA^{\#})$, $\mathcal{R}(A^{\#}) = \mathcal{R}(A^{\#}A)$, $\mathcal{N}(A) = \mathcal{N}(A^{\#}A)$, and $\mathcal{N}(A^{\#}) = \mathcal{N}(AA^{\#})$.

**Proof**. It suffices to establish that $\mathcal{N}(A^{\#}) = \mathcal{N}(AA^{\#})$ and $\mathcal{N}(A) = \mathcal{N}(A^{\#}A)$; the statements about the ranges then follow from Theorem 1.

To prove the coincidence of $\mathcal{N}(A^{\#})$ and $\mathcal{N}(AA^{\#})$, note that $AA^{\#}x = 0$ if $A^{\#}x = 0$. On the other hand, if $AA^{\#}x = 0$, then $x^T M AA^{\#}x = 0$, i.e., $\left\|A^{\#}x\right\|_N = 0$, which entails the equality $A^{\#}x = 0$. So, $A^{\#}x = 0$ if and only if $AA^{\#}x = 0$. We can similarly establish that $\mathcal{N}(A) = \mathcal{N}(A^{\#}A)$.

Let us now prove the theorem on the existence and uniqueness of the solution vector that minimizes the residual norm $\|Ax - b\|_M$, using the technique proposed in [39] for the least squares problem.

**Theorem 4.** Let $A \in R^{m \times n}$, $b \in R^m$, $b \notin \mathcal{R}(A)$. Then there exists a vector $\hat{x}$ that minimizes the residual norm $\|Ax - b\|_M$, and $\hat{x}$ is the unique vector from $\mathcal{R}(A^{\#})$ satisfying the equation $Ax = \hat{b}$, where $\hat{b} = AA_{MN}^{+}b$ is the projection of $b$ onto $\mathcal{R}(A)$.

**Proof**. By virtue of Theorem 2, we get $b = \hat{b} + \tilde{b}$, where $\tilde{b} = \left(I - AA_{MN}^{+}\right)b$ is the projection of $b$ onto $\mathcal{N}(A^{\#})$. Since $Ax \in \mathcal{R}(A)$ for every $x$ and $\tilde{b} \in \mathcal{R}(A)^{\perp_M}$, we have $\hat{b} - Ax \in \mathcal{R}(A)$ and $\tilde{b} \perp_M \left(\hat{b} - Ax\right)$. Therefore

$$\|b - Ax\|_M^2 = \left\|\hat{b} - Ax + \tilde{b}\right\|_M^2 = \left\|\hat{b} - Ax\right\|_M^2 + \left\|\tilde{b}\right\|_M^2 \ge \left\|\tilde{b}\right\|_M^2. \tag{14}$$

This lower bound is attained since $\hat{b}$ belongs to the range of $A$, i.e., $\hat{b}$ is the image of some $x_0$: $\hat{b} = Ax_0$.

Thereby, for this $x_0$ the greatest lower bound is attained:

$$\|b - Ax_0\|_M^2 = \left\|b - \hat{b}\right\|_M^2 = \left\|\tilde{b}\right\|_M^2. \tag{15}$$

It was shown earlier that

$$\|b - Ax\|_M^2 = \left\|\hat{b} - Ax\right\|_M^2 + \left\|\tilde{b}\right\|_M^2, \tag{16}$$

and hence the lower bound can be attained only for $x^*$ such that $Ax^* = \hat{b}$. According to Theorem 2, each vector $x^*$ can be represented as a sum of two orthogonal vectors: $x^* = \hat{x}^* + \tilde{x}^*$, where $\hat{x}^* \in \mathcal{R}(A^{\#})$, $\tilde{x}^* \in \mathcal{N}(A)$.

Therefore $Ax^* = A\hat{x}^*$ and hence $\|b - Ax^*\|_M^2 = \|b - A\hat{x}^*\|_M^2$. Note that

$$\|x^*\|_N^2 = \|\hat{x}^*\|_N^2 + \|\tilde{x}^*\|_N^2 \ge \|\hat{x}^*\|_N^2, \tag{17}$$

where strict inequality is possible when $x^* \ne \hat{x}^*$ (i.e., if $x^*$ does not coincide with its projection onto $\mathcal{R}(A^{\#})$).

It was shown above that $x_0$ minimizes $\|Ax - b\|_M$ if and only if $Ax_0 = \hat{b}$, and among the vectors that minimize $\|Ax - b\|_M$, any vector with the minimum norm must belong to $\mathcal{R}(A^{\#})$. To establish the uniqueness of the minimum-norm vector, assume that $\hat{x}$ and $x^*$ belong to $\mathcal{R}(A^{\#})$ and that $A\hat{x} = Ax^* = \hat{b}$. Then $x^* - \hat{x} \in \mathcal{R}(A^{\#})$; however, $A(x^* - \hat{x}) = 0$, so $x^* - \hat{x} \in \mathcal{N}(A) = \mathcal{R}(A^{\#})^{\perp_N}$.

Since the vector $x^* - \hat{x}$ is $N$-orthogonal to itself, $\|x^* - \hat{x}\|_N = 0$, i.e., $x^* = \hat{x}$.

**Remark 2**. There is another assertion that is equivalent to Theorem 4. There exists an $m$-dimensional vector $y$ such that

$$\left\|b - AA^{\#}y\right\|_M = \inf_x \|b - Ax\|_M. \tag{18}$$

If

$$\|b - Ax_0\|_M = \inf_x \|b - Ax\|_M, \tag{19}$$

then $\|x_0\|_N \ge \left\|A^{\#}y\right\|_N$, with strict inequality for $x_0 \ne A^{\#}y$.

The vector $y$ satisfies the equation $AA^{\#}y = \hat{b}$, where $\hat{b}$ is the projection of $b$ onto $\mathcal{R}(A)$.

**Theorem 5**. Among all the vectors $x$ that minimize the residual $\|Ax - b\|_M$, the vector $\hat{x}$ of minimum norm, $\|\hat{x}\|_N = \min \|x\|_N$, is the unique vector of the form

$$\hat{x} = N^{-1}A^T M y = A^{\#}y, \tag{20}$$

satisfying the equation

$$A^{\#}Ax = A^{\#}b, \tag{21}$$

i.e., $\hat{x}$ can be obtained by means of any vector $y_0$ that satisfies the equation $A^{\#}AA^{\#}y = A^{\#}b$, by the formula $\hat{x} = A^{\#}y_0$.

**Proof**. By Theorem 3, $\mathcal{R}(A^{\#}) = \mathcal{R}(A^{\#}A)$. Since the vector $A^{\#}b$ belongs to the range of $A^{\#}$, it belongs to the range of $A^{\#}A$ and thus is the image of some vector $x$ under the transformation $A^{\#}A$. In other words, Eq. (21) (with respect to $x$) has at least one solution. If $x$ is a solution of Eq. (21), then $\hat{x}$ is the projection of $x$ onto $\mathcal{R}(A^{\#})$, since $Ax = A\hat{x}$ according to Theorem 2. Since $\hat{x} \in \mathcal{R}(A^{\#})$, the vector $\hat{x}$ is the image of some vector $y$ under the transformation $A^{\#}$: $\hat{x} = A^{\#}y$.

Thus, we have established that there exists at least one solution of Eq. (21) of the form (20). To establish the uniqueness of this solution, assume that $\hat{x}_1 = A^{\#}y_1$ and $\hat{x}_2 = A^{\#}y_2$ both satisfy Eq. (21). Then $A^{\#}A\left(A^{\#}y_1 - A^{\#}y_2\right) = 0$; therefore, $A^{\#}\left(y_1 - y_2\right) \in \mathcal{N}(A^{\#}A) = \mathcal{N}(A)$, from which the equality $AA^{\#}\left(y_1 - y_2\right) = 0$ follows.

Therefore $y_1 - y_2 \in \mathcal{N}(AA^{\#}) = \mathcal{N}(A^{\#})$; hence, $\hat{x}_1 = A^{\#}y_1 = A^{\#}y_2 = \hat{x}_2$.

Thus, there exists exactly one solution of Eq. (21) of the form (20). The proof of Theorem 5 will be complete if we show that, by virtue of Theorem 1, the solution found in the form (20) is also a solution of the equation $Ax = \hat{b}$, where $\hat{b}$ is the weighted projection of $b$ onto $\mathcal{R}(A)$, i.e., $A^{\#}b = A^{\#}\hat{b}$.

Theorem 4 establishes that there is a unique solution from $\mathcal{R}(A^{\#})$ of the equation

$$A\mathfrak{x} = \hat{b}.\tag{22}$$

Hence, this unique solution satisfies the equation $A^{\#}Ax = A^{\#}\hat{b}$.

By the equality $A^{\#}b = A^{\#}\hat{b}$, the unique solution of Eq. (22) belonging to $\mathcal{R}(A^{\#})$ must coincide with $\hat{x}$, which is the unique solution of Eq. (21) also belonging to $\mathcal{R}(A^{\#})$. Finally, the vector $\hat{x}$ mentioned in the proof of Theorem 5 coincides exactly with the vector $\hat{x}$ from Theorem 4. Using the representation of the weighted Moore–Penrose pseudoinverse from [38],

$$A_{MN}^{+} = A^{\#}\left(A^{\#}AA^{\#}\right)^{+}A^{\#}, \tag{23}$$

we can formulate the following theorem for problem (9) in a shorter form.

**Theorem 6.** Let $A \in R^{m \times n}$; then $x = A_{MN}^{+}b$ is the $M$-weighted least squares solution with the minimum $N$-norm of the system $Ax = b$.

Note that in [18] a slightly different mathematical apparatus was used to prove the existence and uniqueness of the *M*-weighted least squares solution with the minimum *N*-norm of the system *Ax* ¼ *b*.
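Theorem 6 can be checked numerically: perturbing $\hat{x} = A_{MN}^{+}b$ never decreases the $M$-residual, while moving along the null space of $A$ preserves the residual but increases the $N$-norm. A sketch (NumPy; the rank-deficient random data and the helper `spd_sqrt` are illustrative):

```python
import numpy as np

def spd_sqrt(W):
    w, V = np.linalg.eigh(W)
    return V @ np.diag(np.sqrt(w)) @ V.T

rng = np.random.default_rng(4)
m, n, k = 5, 4, 2
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))  # rank-deficient, rank k
b = rng.standard_normal(m)
M = np.diag(rng.uniform(0.5, 2.0, m))
N = np.diag(rng.uniform(0.5, 2.0, n))
Mh, Nh = spd_sqrt(M), spd_sqrt(N)

x_hat = np.linalg.inv(Nh) @ np.linalg.pinv(Mh @ A @ np.linalg.inv(Nh)) @ Mh @ b

norm_M = lambda v: np.sqrt(v @ M @ v)
norm_N = lambda v: np.sqrt(v @ N @ v)
r_hat = norm_M(A @ x_hat - b)

# No perturbation of x_hat improves the M-residual:
for _ in range(100):
    x_try = x_hat + rng.standard_normal(n)
    assert norm_M(A @ x_try - b) >= r_hat - 1e-10

# Null-space directions keep the residual but increase the N-norm:
Z = np.linalg.svd(A)[2][k:, :]        # rows spanning the null space of A
for z in Z:
    assert np.isclose(norm_M(A @ (x_hat + z) - b), r_hat)
    assert norm_N(x_hat + z) > norm_N(x_hat)
```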

## **3. Error estimates for the weighted minimum-norm least squares solution**

### **3.1 Estimates of the hereditary error of a weighted normal pseudosolution**

Consider some properties of the weighted Moore–Penrose pseudoinverse.

**Lemma 3** (see [16]). Let $A, \Delta A \in R^{m \times n}$, and let $\mu_i(A)$ and $\mu_i(\bar{A})$ denote the weighted singular values of $A$ and $\bar{A}$, respectively. Then

$$\mu_i(A) - \|\Delta A\|_{MN} \le \mu_i(\bar{A}) \le \mu_i(A) + \|\Delta A\|_{MN}. \tag{24}$$

**Lemma 4** (see [40]). Let $A, \Delta A \in R^{m \times n}$, $\mathrm{rank}(\bar{A}) = \mathrm{rank}(A)$ and $\|\Delta A\|_{MN}\left\|A_{MN}^{+}\right\|_{NM} < 1$. Then

$$\left\|\bar{A}_{MN}^{+}\right\|_{NM} \le \frac{\left\|A_{MN}^{+}\right\|_{NM}}{1 - \|\Delta A\|_{MN}\left\|A_{MN}^{+}\right\|_{NM}}. \tag{25}$$
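Bound (25) is straightforward to test on a small example. In the sketch below (NumPy; random illustrative data, with hypothetical helpers `wnorm_MN`, `wnorm_NM`, `wpinv` for the weighted norms (6) and the weighted pseudoinverse), the perturbation is kept small so that the rank assumption of Lemma 4 holds.

```python
import numpy as np

def spd_sqrt(W):
    w, V = np.linalg.eigh(W)
    return V @ np.diag(np.sqrt(w)) @ V.T

rng = np.random.default_rng(5)
m, n = 5, 3
A = rng.standard_normal((m, n))
dA = 1e-6 * rng.standard_normal((m, n))   # small perturbation: rank is preserved
M = np.diag(rng.uniform(0.5, 2.0, m))
N = np.diag(rng.uniform(0.5, 2.0, n))
Mh, Nh = spd_sqrt(M), spd_sqrt(N)
Mh_inv, Nh_inv = np.linalg.inv(Mh), np.linalg.inv(Nh)

wnorm_MN = lambda X: np.linalg.norm(Mh @ X @ Nh_inv, 2)  # ||.||_MN for m x n matrices
wnorm_NM = lambda X: np.linalg.norm(Nh @ X @ Mh_inv, 2)  # ||.||_NM for n x m matrices
wpinv = lambda X: Nh_inv @ np.linalg.pinv(Mh @ X @ Nh_inv) @ Mh

lhs = wnorm_NM(wpinv(A + dA))
denom = 1 - wnorm_MN(dA) * wnorm_NM(wpinv(A))
assert denom > 0                                    # hypothesis of Lemma 4
assert lhs <= wnorm_NM(wpinv(A)) / denom + 1e-12    # bound (25)
```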

**Lemma 5**. Let $G = \bar{A}_{MN}^{+} - A_{MN}^{+}$, $\bar{A} = A + \Delta A$, and $\mathrm{rank}(\bar{A}) = \mathrm{rank}(A)$. Then $G$ can be represented as the sum of three matrices, $G = G_1 + G_2 + G_3$, where

$$G_1 = -\bar{A}_{MN}^{+}\Delta A A_{MN}^{+}, \tag{26}$$

$$G_2 = (I - \bar{P})N^{-1}\Delta A^T A_{MN}^{+T} N A_{MN}^{+} = (I - \bar{P})\Delta A^{\#}\left(A_{MN}^{+}\right)^{\#}A_{MN}^{+}, \tag{27}$$

$$G_3 = \bar{A}_{MN}^{+}(I - Q). \tag{28}$$

**Proof**. Following [26], *G* can be represented as the sum of the following matrices.

$$\begin{split} G &= \left[\bar{P} + (I - \bar{P})\right]\left(\bar{A}_{MN}^{+} - A_{MN}^{+}\right)\left[Q + (I - Q)\right] \\ &= \bar{P}\bar{A}_{MN}^{+}Q + \bar{P}\bar{A}_{MN}^{+}(I - Q) - \bar{P}A_{MN}^{+}Q - \bar{P}A_{MN}^{+}(I - Q) + (I - \bar{P})\bar{A}_{MN}^{+}Q \\ &\quad + (I - \bar{P})\bar{A}_{MN}^{+}(I - Q) - (I - \bar{P})A_{MN}^{+}Q - (I - \bar{P})A_{MN}^{+}(I - Q). \end{split} \tag{29}$$

Since

$$\bar{P}\bar{A}_{MN}^{+} = \bar{A}_{MN}^{+}, \quad (I - \bar{P})\bar{A}_{MN}^{+} = 0, \quad A_{MN}^{+}Q = A_{MN}^{+}, \quad A_{MN}^{+}(I - Q) = 0, \tag{30}$$

we obtain

$$\begin{split} G &= \bar{A}_{MN}^{+}Q + \bar{A}_{MN}^{+}(I - Q) - \bar{P}A_{MN}^{+} - (I - \bar{P})A_{MN}^{+} \\ &= \left(\bar{A}_{MN}^{+}Q - \bar{P}A_{MN}^{+}\right) - (I - \bar{P})A_{MN}^{+} + \bar{A}_{MN}^{+}(I - Q). \end{split} \tag{31}$$

Consider each term in this equality separately

$$G_1 = \bar{A}_{MN}^{+}Q - \bar{P}A_{MN}^{+} = \bar{A}_{MN}^{+}AA_{MN}^{+} - \bar{A}_{MN}^{+}\bar{A}A_{MN}^{+} = \bar{A}_{MN}^{+}(A - \bar{A})A_{MN}^{+} = -\bar{A}_{MN}^{+}\Delta A A_{MN}^{+}. \tag{32}$$

To estimate the second term, we use properties (1)

$$\begin{split} A_{MN}^{+} &= \left(A_{MN}^{+}A\right)A_{MN}^{+} = N^{-1}\left(NA_{MN}^{+}A\right)^T A_{MN}^{+} \\ &= N^{-1}A^T A_{MN}^{+T} N A_{MN}^{+} = N^{-1}\bar{A}^T A_{MN}^{+T} N A_{MN}^{+} - N^{-1}\Delta A^T A_{MN}^{+T} N A_{MN}^{+}. \end{split} \tag{33}$$

Substituting (33) into the second term of (31) gives

$$G_2 = -(I - \bar{P})A_{MN}^{+} = -(I - \bar{P})\left(N^{-1}\bar{A}^T A_{MN}^{+T} N A_{MN}^{+} - N^{-1}\Delta A^T A_{MN}^{+T} N A_{MN}^{+}\right). \tag{34}$$

Since,

$$\begin{aligned} (I - \bar{P})N^{-1}\bar{A}^T A_{MN}^{+T} N A_{MN}^{+} &= N^{-1}\bar{A}^T A_{MN}^{+T} N A_{MN}^{+} - \bar{A}_{MN}^{+}\bar{A}N^{-1}\bar{A}^T A_{MN}^{+T} N A_{MN}^{+} \\ &= N^{-1}\bar{A}^T A_{MN}^{+T} N A_{MN}^{+} - N^{-1}\bar{A}^T A_{MN}^{+T} N A_{MN}^{+} = 0, \end{aligned} \tag{35}$$

we obtain

$$G_2 = -(I - \bar{P})A_{MN}^{+} = (I - \bar{P})N^{-1}\Delta A^T A_{MN}^{+T} N A_{MN}^{+} = (I - \bar{P})\Delta A^{\#}\left(A_{MN}^{+}\right)^{\#}A_{MN}^{+}. \tag{36}$$

Finally,

$$G = \bar{A}_{MN}^{+} - A_{MN}^{+} = -\bar{A}_{MN}^{+}\Delta A A_{MN}^{+} + (I - \bar{P})\Delta A^{\#}\left(A_{MN}^{+}\right)^{\#}A_{MN}^{+} + \bar{A}_{MN}^{+}(I - Q). \tag{37}$$
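The decomposition of Lemma 5 can be validated numerically. In the sketch below (NumPy; random illustrative data and the helper names are assumptions), $G_1$ and $G_3$ are taken from (26) and (28), while $G_2$ is taken in the intermediate form $-(I - \bar{P})A_{MN}^{+}$ that arises in the proof; the three terms reproduce $G = \bar{A}_{MN}^{+} - A_{MN}^{+}$ to machine precision.

```python
import numpy as np

def spd_sqrt(W):
    """Symmetric square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(W)
    return V @ np.diag(np.sqrt(w)) @ V.T

rng = np.random.default_rng(6)
m, n = 5, 3
A = rng.standard_normal((m, n))
dA = 1e-4 * rng.standard_normal((m, n))   # small, rank-preserving perturbation
A_bar = A + dA
M = np.diag(rng.uniform(0.5, 2.0, m))     # illustrative SPD weights
N = np.diag(rng.uniform(0.5, 2.0, n))
Mh, Nh = spd_sqrt(M), spd_sqrt(N)
wpinv = lambda X: np.linalg.inv(Nh) @ np.linalg.pinv(Mh @ X @ np.linalg.inv(Nh)) @ Mh

Ap, Abp = wpinv(A), wpinv(A_bar)
P_bar = Abp @ A_bar                       # projector P-bar from (3)
Q = A @ Ap                                # projector Q from (3)

G = Abp - Ap
G1 = -Abp @ dA @ Ap                       # (26)
G2 = -(np.eye(n) - P_bar) @ Ap            # second term of (31)
G3 = Abp @ (np.eye(m) - Q)                # (28)
assert np.allclose(G, G1 + G2 + G3)
```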

**Lemma 6** (see in [41]). If $\mathrm{rank}\left(\bar{A}\right) = \mathrm{rank}(A) = k$, then

$$\left\| \bar{Q}(I - Q) \right\|\_{\text{MM}} = \left\| Q(I - \bar{Q}) \right\|\_{\text{MM}},\tag{38}$$

*Weighted Least Squares Perturbation Theory DOI: http://dx.doi.org/10.5772/intechopen.102885*

where $Q$ and $\bar{Q}$ are defined in (3).

**Lemma 7**. Let $A, \Delta A \in R^{m\times n}$, $\mathrm{rank}\left(\bar{A}\right) = \mathrm{rank}(A)$ and $\|\Delta A\|\_{MN}\left\|A\_{MN}^{+}\right\|\_{NM} < 1$.

Then the relative estimate of the hereditary error of the weighted pseudoinverse matrix has the form

$$\frac{\left\|\bar{A}\_{\rm MN}^{+} - A\_{\rm MN}^{+}\right\|\_{\rm NM}}{\left\|A\_{\rm MN}^{+}\right\|\_{\rm NM}} \leq C \frac{\varepsilon\_{A}h}{1 - \varepsilon\_{A}h},\tag{39}$$

where $h = h(A) = \|A\|\_{MN}\left\|A\_{MN}^{+}\right\|\_{NM}$, and the estimate of the absolute error is

$$\left\| \bar{A}\_{MN}^{+} - A\_{MN}^{+} \right\|\_{NM} \leq C \left\| A\_{MN}^{+} \right\|\_{NM} \frac{\varepsilon\_{A} h}{1 - \varepsilon\_{A} h},\tag{40}$$

moreover, if $A$ is not of full rank, then $C = 3$; if $m > n = k$ or $n > m = k$, then $C = 2$; and if $m = n = k$, then $C = 1$.
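The bound (39) can be checked numerically. Below is a minimal NumPy sketch; the helper names `sqrtm_spd`, `wpinv`, `norm_MN`, and `norm_NM` are our own illustrative choices (not notation from the chapter), and the weighted pseudoinverse is formed through the standard reduction $A\_{MN}^{+} = N^{-1/2}\left(M^{1/2}AN^{-1/2}\right)^{+}M^{1/2}$.

```python
import numpy as np

rng = np.random.default_rng(1)

def sqrtm_spd(S):
    # symmetric square root of a symmetric positive definite matrix
    w, V = np.linalg.eigh(S)
    return (V * np.sqrt(w)) @ V.T

def wpinv(A, M, N):
    # weighted Moore-Penrose pseudoinverse: A+_MN = N^{-1/2} (M^{1/2} A N^{-1/2})^+ M^{1/2}
    Mh, Nhi = sqrtm_spd(M), np.linalg.inv(sqrtm_spd(N))
    return Nhi @ np.linalg.pinv(Mh @ A @ Nhi) @ Mh

def norm_MN(X, M, N):
    # ||X||_MN = ||M^{1/2} X N^{-1/2}||_2
    return np.linalg.norm(sqrtm_spd(M) @ X @ np.linalg.inv(sqrtm_spd(N)), 2)

def norm_NM(X, M, N):
    # ||X||_NM = ||N^{1/2} X M^{-1/2}||_2
    return np.linalg.norm(sqrtm_spd(N) @ X @ np.linalg.inv(sqrtm_spd(M)), 2)

m, n = 6, 4                                   # m > n = k, so C = 2
A  = rng.standard_normal((m, n))              # generic, hence full column rank
dA = 1e-6 * rng.standard_normal((m, n))       # small perturbation preserving the rank
B  = rng.standard_normal((m, m)); M = B @ B.T + m * np.eye(m)  # SPD weight M
B  = rng.standard_normal((n, n)); N = B @ B.T + n * np.eye(n)  # SPD weight N

Ap, Ap_bar = wpinv(A, M, N), wpinv(A + dA, M, N)
eps_A = norm_MN(dA, M, N) / norm_MN(A, M, N)  # relative data error
h     = norm_MN(A, M, N) * norm_NM(Ap, M, N)  # weighted condition number h(A)

rel_err = norm_NM(Ap_bar - Ap, M, N) / norm_NM(Ap, M, N)
bound   = 2 * eps_A * h / (1 - eps_A * h)     # right-hand side of (39) with C = 2
```

For a well-conditioned full-column-rank matrix and a tiny perturbation, `rel_err` stays well below `bound`.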

**Proof**. To obtain estimates, we use the results of Lemma 5:

$$
\bar{A}\_{\rm MN}^{+} - A\_{\rm MN}^{+} = -\bar{A}\_{\rm MN}^{+} \Delta A A\_{\rm MN}^{+} - (I - \bar{P}) \Delta A^{\ast} \left( A\_{\rm MN}^{+} \right)^{\ast} A\_{\rm MN}^{+} + \bar{A}\_{\rm MN}^{+} (I - Q). \tag{41}
$$

Passing to the weighted norms, we obtain

$$\left\| \bar{A}\_{MN}^{+} - A\_{MN}^{+} \right\|\_{NM} \leq \left\| \bar{A}\_{MN}^{+} \Delta A A\_{MN}^{+} \right\|\_{NM} + \left\| \Delta A^{\ast} \left( A\_{MN}^{+} \right)^{\ast} A\_{MN}^{+} \right\|\_{NM} + \left\| \bar{A}\_{MN}^{+} \bar{Q} (I - Q) \right\|\_{NM}.\tag{42}$$

Using the results of Lemma 6, we can estimate the last summand

$$\begin{split} \left\| \bar{A}\_{MN}^{+} \bar{Q}(I-Q) \right\|\_{NM} &= \left\| \bar{A}\_{MN}^{+} \bar{A} \bar{A}\_{MN}^{+}(I-Q) \right\|\_{NM} \leq \left\| \bar{A}\_{MN}^{+} \right\|\_{NM} \left\| \bar{Q}(I-Q) \right\|\_{MM} \\ &= \left\| \bar{A}\_{MN}^{+} \right\|\_{NM} \left\| Q\left(I-\bar{Q}\right) \right\|\_{MM}. \end{split} \tag{43}$$

According to (38) and (43), we can rewrite (42) in the form

$$\begin{split} \left\| \bar{A}\_{MN}^{+} - A\_{MN}^{+} \right\|\_{NM} &\leq \left\| \bar{A}\_{MN}^{+} \Delta A A\_{MN}^{+} \right\|\_{NM} + \left\| \Delta A^{\ast} \left( A\_{MN}^{+} \right)^{\ast} A\_{MN}^{+} \right\|\_{NM} + \left\| \bar{A}\_{MN}^{+} \right\|\_{NM} \left\| Q \left( I - \bar{Q} \right) \right\|\_{MM} \leq \\ &\leq \left\| \bar{A}\_{MN}^{+} \right\|\_{NM} \left\| \Delta A \right\|\_{MN} \left\| A\_{MN}^{+} \right\|\_{NM} + \left\| \Delta A \right\|\_{MN} \left\| A\_{MN}^{+} \right\|\_{NM}^{2} + \left\| \bar{A}\_{MN}^{+} \right\|\_{NM} \left\| \Delta A \right\|\_{MN} \left\| A\_{MN}^{+} \right\|\_{NM}. \end{split} \tag{44}$$

Using the results of Lemma 4, we obtain an estimate for the absolute error of the weighted pseudoinverse of the matrix $A$:

$$\begin{split} \left\| \bar{A}\_{MN}^{+} - A\_{MN}^{+} \right\|\_{NM} &\leq \frac{\left\| A\_{MN}^{+} \right\|\_{NM}}{1 - \left\| \Delta A \right\|\_{MN} \left\| A\_{MN}^{+} \right\|\_{NM}} \Big( \left\| \Delta A \right\|\_{MN} \left\| A\_{MN}^{+} \right\|\_{NM} + \left\| \Delta A \right\|\_{MN} \left\| A\_{MN}^{+} \right\|\_{NM} \\ &\quad + \left\| \Delta A \right\|\_{MN} \left\| A\_{MN}^{+} \right\|\_{NM} \Big) = \left\| A\_{MN}^{+} \right\|\_{NM} \frac{h\varepsilon\_{A} + h\varepsilon\_{A} + h\varepsilon\_{A}}{1 - h\varepsilon\_{A}} = C \left\| A\_{MN}^{+} \right\|\_{NM} \frac{h\varepsilon\_{A}}{1 - h\varepsilon\_{A}},\ C = 1, 2, 3. \end{split} \tag{45}$$

To estimate the relative error, we have

$$\frac{\left\|\bar{A}\_{MN}^{+} - A\_{MN}^{+}\right\|\_{NM}}{\left\|A\_{MN}^{+}\right\|\_{NM}} \le C \|\Delta A\|\_{MN} \left\|A\_{MN}^{+}\right\|\_{NM} \frac{1}{1 - \left\|\Delta A\right\|\_{MN} \left\|A\_{MN}^{+}\right\|\_{NM}} = C \frac{h\varepsilon\_{A}}{1 - h\varepsilon\_{A}},\quad C = 1, 2, 3. \tag{46}$$

Let us estimate the error of the weighted minimum-norm least squares solution. Let us introduce the following notation:

$$\alpha = \frac{\|\Delta b\|\_M}{\|A\|\_{MN}\|x\|\_N},\quad \beta = \frac{\|r\|\_M}{\|A\|\_{MN}\|x\|\_N},\quad \alpha\_l = \frac{\|\Delta b\|\_M}{\|A\|\_{MN}\|x\_l\|\_N},\quad \beta\_l = \frac{\|r\_l\|\_M}{\|A\|\_{MN}\|x\_l\|\_N},\tag{47}$$

where $r = b - Ax$ and $r\_l = b - A\_l x\_l$ are the corresponding residuals.

Consider the following three cases.

**Case 1**. *The rank of the original matrix $A$ remains the same under its perturbation, i.e.,* $\mathrm{rank}(A) = \mathrm{rank}\left(\bar{A}\right)$.

**Theorem 7**. Assume that $\|\Delta A\|\_{MN}\left\|A\_{MN}^{+}\right\|\_{NM} < 1$ and $\mathrm{rank}\left(\bar{A}\right) = \mathrm{rank}(A)$. Then

$$\frac{\|x - \bar{x}\|\_N}{\|x\|\_N} \le \frac{h}{1 - h\varepsilon\_A} \left(2\varepsilon\_A + \alpha + h\varepsilon\_A \beta\right),\tag{48}$$

where $h = h(A) = \|A\|\_{MN}\left\|A\_{MN}^{+}\right\|\_{NM}$ is the weighted condition number of $A$, the symbols $\|\cdot\|\_{MN}$ and $\|\cdot\|\_{NM}$ denote the weighted matrix norms defined by Eqs. (4)–(6), and $A\_{MN}^{+}$ is the weighted Moore–Penrose pseudoinverse.
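Theorem 7 can be illustrated numerically. The sketch below (NumPy; the helper names `sqrtm_spd`, `wpinv`, `norm_MN`, `norm_NM` are our own, and the weighted pseudoinverse is formed via $A\_{MN}^{+} = N^{-1/2}\left(M^{1/2}AN^{-1/2}\right)^{+}M^{1/2}$) compares the actual relative error of the weighted normal pseudosolution with the right-hand side of (48).

```python
import numpy as np

rng = np.random.default_rng(2)

def sqrtm_spd(S):
    # symmetric square root of an SPD matrix
    w, V = np.linalg.eigh(S)
    return (V * np.sqrt(w)) @ V.T

def wpinv(A, M, N):
    # weighted Moore-Penrose pseudoinverse
    Mh, Nhi = sqrtm_spd(M), np.linalg.inv(sqrtm_spd(N))
    return Nhi @ np.linalg.pinv(Mh @ A @ Nhi) @ Mh

def norm_MN(X, M, N):
    return np.linalg.norm(sqrtm_spd(M) @ X @ np.linalg.inv(sqrtm_spd(N)), 2)

def norm_NM(X, M, N):
    return np.linalg.norm(sqrtm_spd(N) @ X @ np.linalg.inv(sqrtm_spd(M)), 2)

m, n = 8, 5
A  = rng.standard_normal((m, n))
b  = rng.standard_normal(m)                       # generic b -> nonzero residual r
dA = 1e-7 * rng.standard_normal((m, n))           # rank-preserving perturbations
db = 1e-7 * rng.standard_normal(m)
B  = rng.standard_normal((m, m)); M = B @ B.T + m * np.eye(m)
B  = rng.standard_normal((n, n)); N = B @ B.T + n * np.eye(n)
Mh, Nh = sqrtm_spd(M), sqrtm_spd(N)

x    = wpinv(A, M, N) @ b                         # weighted normal pseudosolution
xbar = wpinv(A + dA, M, N) @ (b + db)             # solution of the perturbed problem
r    = b - A @ x                                  # residual

nA, nx = norm_MN(A, M, N), np.linalg.norm(Nh @ x)
eps_A  = norm_MN(dA, M, N) / nA
h      = nA * norm_NM(wpinv(A, M, N), M, N)       # weighted condition number
alpha  = np.linalg.norm(Mh @ db) / (nA * nx)      # the quantity alpha of (47)
beta   = np.linalg.norm(Mh @ r) / (nA * nx)       # the quantity beta of (47)

rel_err = np.linalg.norm(Nh @ (x - xbar)) / nx
bound   = h / (1 - h * eps_A) * (2 * eps_A + alpha + h * eps_A * beta)  # Eq. (48)
```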

**Proof**. The error estimate follows from the relation:

$$\boldsymbol{\mathfrak{x}} - \bar{\boldsymbol{\mathfrak{x}}} = \left( \boldsymbol{A}\_{\text{MN}}^{+} - \bar{\boldsymbol{A}}\_{\text{MN}}^{+} \right) \boldsymbol{b} + \bar{\boldsymbol{A}}\_{\text{MN}}^{+} \left( \boldsymbol{b} - \bar{\boldsymbol{b}} \right). \tag{49}$$

For the error of the matrix pseudoinverse, we use the representation

$$\bar{\mathbf{A}}\_{\text{MN}}^{+} - \mathbf{A}\_{\text{MN}}^{+} = -\bar{\mathbf{A}}\_{\text{MN}}^{+} \Delta \mathbf{A} \mathbf{A}\_{\text{MN}}^{+} - (I - \bar{P}) \mathbf{N}^{-1} \Delta \mathbf{A}^{T} \mathbf{A}\_{\text{MN}}^{+T} \mathbf{N} \mathbf{A}\_{\text{MN}}^{+} + \bar{\mathbf{A}}\_{\text{MN}}^{+} (I - \mathbf{Q}).\tag{50}$$

Then,

$$x - \bar{x} = \left[ \bar{A}\_{MN}^{+} \Delta A A\_{MN}^{+} + (I - \bar{P}) N^{-1} \Delta A^{T} A\_{MN}^{+T} N A\_{MN}^{+} - \bar{A}\_{MN}^{+} (I - Q) \right] b + \bar{A}\_{MN}^{+} \left(b - \bar{b}\right). \tag{51}$$

Thus,

$$x - \bar{x} = \bar{A}\_{MN}^{+} \Delta A x + (I - \bar{P}) N^{-1} \Delta A^{T} A\_{MN}^{+T} N x - \bar{A}\_{MN}^{+} (I - Q) b + \bar{A}\_{MN}^{+} \left(b - \bar{b}\right).\tag{52}$$

Passing to the weighted norms yields

$$\begin{split} \|x - \bar{x}\|\_{N} &= \left\| \bar{A}\_{MN}^{+} \Delta A x + (I - \bar{P}) N^{-1} \Delta A^{T} A\_{MN}^{+T} N x - \bar{A}\_{MN}^{+} (I - Q) b + \bar{A}\_{MN}^{+} \left(b - \bar{b}\right) \right\|\_{N} \le \\ &\le \left\| \bar{A}\_{MN}^{+} \Delta A x \right\|\_{N} + \left\| (I - \bar{P}) N^{-1} \Delta A^{T} A\_{MN}^{+T} N x \right\|\_{N} + \left\| \bar{A}\_{MN}^{+} (I - Q) b \right\|\_{N} + \left\| \bar{A}\_{MN}^{+} \left(b - \bar{b}\right) \right\|\_{N}. \end{split} \tag{53}$$

By taking into account the relations

$$(I - Q)b = (I - Q)r = r,\\ r = b - A\mathbf{x},\\ \mathbf{x} = A\_{MN}^{+}b \tag{54}$$

and applying Lemma 6, the weighted norm of each term in (53) can be estimated as follows:

a.
$$\begin{split} \left\| \bar{A}\_{MN}^{+} \Delta A x \right\|\_{N} &= \left\| N^{1/2} \bar{A}\_{MN}^{+} M^{-1/2}\, M^{1/2} \Delta A N^{-1/2}\, N^{1/2} x \right\| \le \left\| N^{1/2} \bar{A}\_{MN}^{+} M^{-1/2} \right\| \left\| M^{1/2} \Delta A N^{-1/2} \right\| \left\| N^{1/2} x \right\| \\ &= \left\| \bar{A}\_{MN}^{+} \right\|\_{NM} \|\Delta A\|\_{MN} \|x\|\_{N}. \end{split} \tag{55}$$

b.
$$\begin{split} \left\| (I - \bar{P}) N^{-1} \Delta A^{T} A\_{MN}^{+T} N x \right\|\_{N} &= \left\| N^{1/2} (I - \bar{P}) N^{-1/2}\, N^{-1/2} \Delta A^{T} M^{1/2}\, M^{-1/2} A\_{MN}^{+T} N^{1/2}\, N^{1/2} x \right\| \\ &\le \left\| N^{1/2} (I - \bar{P}) N^{-1/2} \right\| \left\| M^{1/2} \Delta A N^{-1/2} \right\| \left\| N^{1/2} A\_{MN}^{+} M^{-1/2} \right\| \left\| N^{1/2} x \right\| \\ &= \left\| I - \bar{P} \right\|\_{NN} \|\Delta A\|\_{MN} \left\| A\_{MN}^{+} \right\|\_{NM} \|x\|\_{N}. \end{split} \tag{56}$$

c. Using Lemma 6 and (54), we can write

$$\begin{aligned} \left\| \bar{A}\_{\text{MN}}^{+} \bar{\mathbf{Q}} (I - \mathbf{Q}) b \right\|\_{N} &= \left\| \bar{A}\_{\text{MN}}^{+} \bar{A} \bar{A}\_{\text{MN}}^{+} (I - \mathbf{Q}) r \right\|\_{N} \leq \left\| \bar{A}\_{\text{MN}}^{+} \right\|\_{\text{NM}} \left\| \bar{\mathbf{Q}} (I - \mathbf{Q}) \right\|\_{\text{MM}} \left\| r \right\|\_{M} \\ &= \left\| \bar{A}\_{\text{MN}}^{+} \right\|\_{\text{NM}} \left\| \mathbf{Q} \left( I - \bar{\mathbf{Q}} \right) \right\|\_{\text{MM}} \left\| r \right\|\_{M} \end{aligned} \tag{57}$$

where

$$\begin{split} \left\| Q\left(I - \bar{Q}\right) \right\|\_{MM} &= \left\| A A\_{MN}^{+} \left(I - \bar{Q}\right) \right\|\_{MM} = \left\| M^{1/2} A A\_{MN}^{+} \left(I - \bar{Q}\right) M^{-1/2} \right\| = \left\| M^{-1/2} \left(M A A\_{MN}^{+}\right)^{T} \left(I - \bar{Q}\right) M^{-1/2} \right\| \\ &= \left\| M^{-1/2} \left(A\_{MN}^{+}\right)^{T} \left(\bar{A}^{T} - \Delta A^{T}\right) M \left(I - \bar{Q}\right) M^{-1/2} \right\| = \left\| M^{-1/2} \left(A\_{MN}^{+}\right)^{T} \Delta A^{T} M \left(I - \bar{Q}\right) M^{-1/2} \right\| \\ &\le \left\| M^{-1/2} \left(A\_{MN}^{+}\right)^{T} \Delta A^{T} M^{1/2} \right\| \left\| M^{1/2} \left(I - \bar{Q}\right) M^{-1/2} \right\| \le \left\| M^{1/2} \Delta A A\_{MN}^{+} M^{-1/2} \right\| \\ &\le \left\| M^{1/2} \Delta A N^{-1/2} \right\| \left\| N^{1/2} A\_{MN}^{+} M^{-1/2} \right\| = \|\Delta A\|\_{MN} \left\| A\_{MN}^{+} \right\|\_{NM}, \end{split} \tag{58}$$

since $\bar{A}^{T} M \left(I - \bar{Q}\right) = 0$ and $\left\| M^{1/2} \left(I - \bar{Q}\right) M^{-1/2} \right\| \le 1$.

Substituting this result into (57) gives the inequality

$$\left\| \bar{A}\_{MN}^{+} \bar{Q} (I - Q) b \right\|\_{N} \leq \left\| \bar{A}\_{MN}^{+} \right\|\_{NM} \left\| \Delta A \right\|\_{MN} \left\| A\_{MN}^{+} \right\|\_{NM} \left\| r \right\|\_{M}. \tag{59}$$

d.
$$\begin{split} \left\| \bar{A}\_{MN}^{+} \left(b - \bar{b}\right) \right\|\_{N} &= \left\| N^{1/2} \bar{A}\_{MN}^{+} M^{-1/2} M^{1/2} \left(b - \bar{b}\right) \right\| \le \left\| N^{1/2} \bar{A}\_{MN}^{+} M^{-1/2} \right\| \left\| M^{1/2} \left(b - \bar{b}\right) \right\| \\ &= \left\| \bar{A}\_{MN}^{+} \right\|\_{NM} \left\| b - \bar{b} \right\|\_{M}. \end{split} \tag{60}$$

Taking into account that $\left\| I - \bar{P} \right\|\_{NN} \le 1$ and applying Lemma 4, we obtain the following weighted-norm estimate for the relative error:

$$\begin{split} \frac{\|x - \bar{x}\|\_N}{\|x\|\_N} &\le \left\| \bar{A}\_{MN}^{+} \right\|\_{NM} \|\Delta A\|\_{MN} + \|\Delta A\|\_{MN} \left\| A\_{MN}^{+} \right\|\_{NM} + \left\| \bar{A}\_{MN}^{+} \right\|\_{NM} \left\| A\_{MN}^{+} \right\|\_{NM} \|\Delta A\|\_{MN} \frac{\|r\|\_M}{\|x\|\_N} + \left\| \bar{A}\_{MN}^{+} \right\|\_{NM} \frac{\|\Delta b\|\_M}{\|x\|\_N} \\ &\le \frac{\left\| A\_{MN}^{+} \right\|\_{NM} \|A\|\_{MN}}{1 - \|\Delta A\|\_{MN} \left\| A\_{MN}^{+} \right\|\_{NM}} \left( \frac{2\|\Delta A\|\_{MN}}{\|A\|\_{MN}} + \frac{\|\Delta b\|\_M}{\|A\|\_{MN} \|x\|\_N} + \left\| A\_{MN}^{+} \right\|\_{NM} \|A\|\_{MN} \frac{\|\Delta A\|\_{MN}}{\|A\|\_{MN}} \frac{\|r\|\_M}{\|A\|\_{MN} \|x\|\_N} \right) \\ &\le \frac{h(A)}{1 - h(A)\varepsilon\_A} \left( 2\varepsilon\_A + \frac{\|\Delta b\|\_M}{\|A\|\_{MN} \|x\|\_N} + h(A)\varepsilon\_A \frac{\|r\|\_M}{\|x\|\_N \|A\|\_{MN}} \right), \end{split} \tag{61}$$

as required.

Specifically, if *<sup>M</sup>* <sup>¼</sup> *<sup>I</sup>* <sup>∈</sup> *Rm*�*<sup>m</sup>* and *<sup>N</sup>* <sup>¼</sup> *<sup>I</sup>* <sup>∈</sup>*R<sup>n</sup>*�*<sup>n</sup>* , then the estimates of the hereditary error of normal pseudosolutions of systems of linear algebraic equations follow from next theorem.

**Theorem 8** (see in [32]). Let $\|\Delta A\|\left\|A^{+}\right\| < 1$, $\mathrm{rank}\left(\bar{A}\right) = \mathrm{rank}(A) = k$. Then

$$\frac{||\mathbf{x} - \bar{\mathbf{x}}||}{||\mathbf{x}||} \le \frac{h}{\mathbf{1} - h\varepsilon\_{\mathbf{A}}} \left( 2\varepsilon\_{\mathbf{A}} + \varepsilon\_{b\_{\mathbf{k}}} + h\varepsilon\_{\mathbf{A}} \frac{||\mathbf{b} - b\_{\mathbf{k}}||}{||b\_{\mathbf{k}}||} \right),\tag{62}$$

where $b\_k$ is the projection of the right-hand side of problem (8) onto the principal left singular subspace of the matrix $A$ [42], i.e., $b\_k \in \mathrm{Im}\, A$, $h = h(A) = \|A\|\left\|A^{+}\right\|$ is the condition number of $A$, the symbol $\|\cdot\|$, unless otherwise stated, denotes the Euclidean vector norm and the corresponding spectral matrix norm, and $A^{+}$ is the Moore–Penrose pseudoinverse.

**Case 2**. *The rank of the perturbed matrix is larger than that of the original matrix $A$, i.e.,* $\mathrm{rank}\left(\bar{A}\right) > \mathrm{rank}(A) = k$.

Define the idempotent matrices:

$$P = A\_{MN}^{+}A, Q = A A\_{MN}^{+}, \quad \bar{P}\_k = \bar{A}\_{kMN}^{+} \bar{A}, \quad \bar{Q}\_k = \bar{A} \bar{A}\_{kMN}^{+}, \tag{63}$$

where *k* is the rank of *A*.

**Theorem 9**. Assume that $\|\Delta A\|\_{MN}\left\|A\_{MN}^{+}\right\|\_{NM} < \frac{1}{2}$ and $\mathrm{rank}\left(\bar{A}\right) > \mathrm{rank}(A) = k$. Then

$$\frac{\|x - \bar{x}\_k\|\_N}{\|x\|\_N} \leq \frac{h}{1 - 2h\varepsilon\_{A}} \left(2\varepsilon\_{A} + \alpha + h\varepsilon\_{A}\beta\right),\tag{64}$$

where $h(A) = \|A\|\_{MN}\left\|A\_{MN}^{+}\right\|\_{NM}$ is the weighted condition number of $A$, the symbols $\|\cdot\|\_{MN}$ and $\|\cdot\|\_{NM}$ denote the weighted matrix norms defined by Eqs. (4)–(6), and $A\_{MN}^{+}$ is the weighted Moore–Penrose pseudoinverse.

**Proof**. The desired estimate is derived using the method of [32], which is based on the singular value decomposition of matrices. Specifically, $\bar{A}$ is represented by its weighted singular value decomposition:

$$
\bar{A} = \bar{U}\bar{D}\bar{V}^T.\tag{65}
$$

Along with (65), we consider the decomposition

$$
\bar{A}\_k = \bar{U}\bar{D}\_k\bar{V}^T,\tag{66}
$$

where $\bar{D}\_k$ is a rectangular matrix whose first $k$ diagonal elements are nonzero and equal to the corresponding elements of $\bar{D}$, while all the other elements are zero.

The weighted minimum-norm least squares solution to problem (10) is approximated by the weighted minimum-norm least squares solution $\bar{x}\_k$ of the problem

$$\min\_{\bar{x}\in C}\|\bar{x}\|\_{N},\quad C = \left\{\bar{x} :\ \left\|\bar{A}\_{k}\bar{x} - \bar{b}\right\|\_{M} = \min\right\}.\tag{67}$$

The matrix $\bar{A}\_k$ is defined by (66) and has the same rank $k$ as the matrix of the unperturbed problem.
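The construction above can be sketched numerically: the weighted SVD of $\bar{A}$ is obtained from the ordinary SVD of $M^{1/2}\bar{A}N^{-1/2}$, zeroing the trailing weighted singular values gives the rank-$k$ matrix, and the weighted minimum-norm least squares solution of the truncated problem is available in closed form. This is a NumPy sketch with our own helper names, not code from the chapter.

```python
import numpy as np

rng = np.random.default_rng(3)

def sqrtm_spd(S):
    # symmetric square root of an SPD matrix
    w, V = np.linalg.eigh(S)
    return (V * np.sqrt(w)) @ V.T

def wpinv(A, M, N):
    # weighted Moore-Penrose pseudoinverse
    Mh, Nhi = sqrtm_spd(M), np.linalg.inv(sqrtm_spd(N))
    return Nhi @ np.linalg.pinv(Mh @ A @ Nhi) @ Mh

m, n, k = 6, 4, 3
Abar = rng.standard_normal((m, n))                 # perturbed matrix, generic rank 4
bbar = rng.standard_normal(m)
B = rng.standard_normal((m, m)); M = B @ B.T + m * np.eye(m)
B = rng.standard_normal((n, n)); N = B @ B.T + n * np.eye(n)

Mh, Nh = sqrtm_spd(M), sqrtm_spd(N)
Mhi, Nhi = np.linalg.inv(Mh), np.linalg.inv(Nh)

# weighted SVD: M^{1/2} Abar N^{-1/2} = U diag(s) V^T
U, s, Vt = np.linalg.svd(Mh @ Abar @ Nhi, full_matrices=False)

s_k = s.copy(); s_k[k:] = 0.0                      # keep the first k weighted singular values
Abar_k = Mhi @ (U * s_k) @ Vt @ Nh                 # rank-k truncation of Abar

# weighted min-norm LS solution of the truncated problem:
# x_k = N^{-1/2} V_k diag(1/s_1..1/s_k) U_k^T M^{1/2} bbar
x_k = Nhi @ Vt[:k].T @ ((U[:, :k].T @ (Mh @ bbar)) / s[:k])
```

The closed-form `x_k` agrees with applying the weighted pseudoinverse of the truncated matrix to the right-hand side.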

Thus, the error estimation of the least-squares solution for matrices with a modified rank is reduced to the case of equal ranks. This fact is used to estimate $\|x - \bar{x}\_k\|\_N / \|x\|\_N$. The error of the weighted pseudoinverse matrix then becomes:

$$\begin{split} G\_{k} &= \left[\bar{P}\_{k} + \left(I - \bar{P}\_{k}\right)\right] \left(\bar{A}\_{kMN}^{+} - A\_{MN}^{+}\right) \left[Q + (I - Q)\right] = \bar{P}\_{k}\bar{A}\_{kMN}^{+}Q + \bar{P}\_{k}\bar{A}\_{kMN}^{+}(I - Q) \\ &\quad - \bar{P}\_{k}A\_{MN}^{+}Q - \bar{P}\_{k}A\_{MN}^{+}(I - Q) + \left(I - \bar{P}\_{k}\right)\bar{A}\_{kMN}^{+}Q + \left(I - \bar{P}\_{k}\right)\bar{A}\_{kMN}^{+}(I - Q) \\ &\quad - \left(I - \bar{P}\_{k}\right)A\_{MN}^{+}Q - \left(I - \bar{P}\_{k}\right)A\_{MN}^{+}(I - Q) = \left(\bar{A}\_{kMN}^{+}Q - \bar{P}\_{k}A\_{MN}^{+}\right) - \left(I - \bar{P}\_{k}\right)A\_{MN}^{+} \\ &\quad + \bar{A}\_{kMN}^{+}(I - Q) = \bar{A}\_{kMN}^{+}A A\_{MN}^{+} - \bar{A}\_{kMN}^{+}\bar{A} A\_{MN}^{+} - \left(I - \bar{P}\_{k}\right)A\_{MN}^{+} + \bar{A}\_{kMN}^{+}(I - Q) \\ &= \bar{A}\_{kMN}^{+}\left(A - \bar{A}\right)A\_{MN}^{+} - \left(I - \bar{P}\_{k}\right)A\_{MN}^{+} + \bar{A}\_{kMN}^{+}(I - Q). \end{split} \tag{68}$$

Applying Lemma 5 yields

$$\mathbf{G}\_{k} = \bar{\mathbf{A}}\_{k\text{MN}}^{+} - \mathbf{A}\_{\text{MN}}^{+} = -\bar{\mathbf{A}}\_{k\text{MN}}^{+} \Delta \mathbf{A} \mathbf{A}\_{\text{MN}}^{+} - (\mathbf{I} - \bar{\mathbf{P}}\_{k}) \mathbf{N}^{-1} \Delta \mathbf{A}^{T} \mathbf{A}\_{\text{MN}}^{+T} \mathbf{N} \mathbf{A}\_{\text{MN}}^{+} + \bar{\mathbf{A}}\_{k\text{MN}}^{+} (\mathbf{I} - \mathbf{Q}\_{k}). \tag{69}$$

For the error of the WLS solution, we obtain

$$x - \bar{x}\_{k} = \bar{A}\_{kMN}^{+} \Delta A x + \left(I - \bar{P}\_{k}\right) N^{-1} \Delta A^{T} A\_{MN}^{+T} N x - \bar{A}\_{kMN}^{+} (I - Q\_{k}) b + \bar{A}\_{kMN}^{+} \left(b - \bar{b}\right). \tag{70}$$

The matrix $\bar{A}\_k$ corresponds to the perturbation $\Delta A\_k = \bar{A}\_k - A$, which satisfies

$$\begin{aligned} \|\Delta A\_k\|\_{MN} &= \left\|\bar{A}\_k - A\right\|\_{MN} = \left\|\bar{A}\_k - \bar{A} + \Delta A\right\|\_{MN} \le \left\|\bar{A}\_k - \bar{A}\right\|\_{MN} + \left\|\Delta A\right\|\_{MN} = \\ &= \left\|\bar{U} \begin{pmatrix} 0 & 0 \\ 0 & \bar{D}\_{k+1} \end{pmatrix} \bar{V}^T \right\|\_{MN} + \left\|\Delta A\right\|\_{MN} \le 2\|\Delta A\|\_{MN}. \end{aligned} \tag{72}$$

Specifically, for $M = I$ and $N = I$, this yields the estimate

$$\frac{\|x - \bar{x}\_{k}\|}{\|x\|} \leq \frac{h}{1 - 2h\varepsilon\_{A}} \left( 2\varepsilon\_{A} + \varepsilon\_{b\_{k}} + h\varepsilon\_{A} \frac{\|b - b\_{k}\|}{\|b\_{k}\|} \right). \tag{73}$$

**Case 3**. *The rank of the perturbed matrix is smaller than that of the original matrix $A$, i.e.,* $\mathrm{rank}(A) > \mathrm{rank}\left(\bar{A}\right) = l$.

Define the idempotent matrices:

$$P\_l = A\_{lMN}^{+}A,\quad Q\_l = A A\_{lMN}^{+},\quad \bar{P} = \bar{A}\_{MN}^{+}\bar{A},\quad \bar{Q} = \bar{A}\bar{A}\_{MN}^{+},\tag{74}$$

where $l$ is the rank of $\bar{A}$.

**Theorem 11**. Assume that $\mathrm{rank}(A) > \mathrm{rank}\left(\bar{A}\right) = l$ and $\|\Delta A\|\_{MN}/\mu\_l < \frac{1}{2}$. Then

$$\frac{\|x\_{l} - \bar{x}\|\_N}{\|x\_{l}\|\_N} \leq \frac{\mu\_{1}/\mu\_{l}}{1 - 2\|\Delta A\|\_{MN}/\mu\_{l}} \left(2\varepsilon\_A + \alpha\_l + \frac{\mu\_{1}}{\mu\_{l}}\varepsilon\_{A}\beta\_{l}\right),\tag{75}$$

where $\mu\_i$ are the weighted singular values of $A$.

**Proof**. Along with (9), we consider the problem

$$\min\_{x\in C}\|x\|\_{N},\quad C = \left\{x :\ \|A\_{l}x - b\|\_{M} = \min\right\} \tag{76}$$

with the matrix $A\_l = UD\_lV^T$ of rank $l$.

Similarly, writing (50) for problems (10) and (76), whose matrix ranks coincide, we obtain

$$G\_{l} = \bar{A}\_{MN}^{+} - A\_{lMN}^{+} = -\bar{A}\_{MN}^{+} \Delta A A\_{lMN}^{+} - (I - \bar{P}) N^{-1} \Delta A^{T} A\_{lMN}^{+T} N A\_{lMN}^{+} + \bar{A}\_{MN}^{+} (I - Q\_{l}), \tag{77}$$

$$x\_l - \bar{x} = \bar{A}\_{MN}^{+} \Delta A x\_l + (I - \bar{P}) N^{-1} \Delta A^{T} A\_{lMN}^{+T} N x\_l - \bar{A}\_{MN}^{+} (I - Q\_l) b + \bar{A}\_{MN}^{+} \left(b - \bar{b}\right).\tag{78}$$

Applying Lemma 4 and passing to the weighted norms yields the estimate

$$\begin{split} \frac{\|x\_l - \bar{x}\|\_N}{\|x\_l\|\_N} &\le \left\| \bar{A}\_{MN}^{+} \right\|\_{NM} \|\Delta A\|\_{MN} + \|\Delta A\|\_{MN} \left\| A\_{lMN}^{+} \right\|\_{NM} + \left\| \bar{A}\_{MN}^{+} \right\|\_{NM} \left\| A\_{lMN}^{+} \right\|\_{NM} \|\Delta A\|\_{MN} \frac{\|r\_l\|\_M}{\|x\_l\|\_N} + \left\| \bar{A}\_{MN}^{+} \right\|\_{NM} \frac{\|\Delta b\|\_M}{\|x\_l\|\_N} \\ &\le \frac{\left\| A\_{lMN}^{+} \right\|\_{NM} \|A\|\_{MN}}{1 - \|\Delta A\_l\|\_{MN} \left\| A\_{lMN}^{+} \right\|\_{NM}} \left( \frac{2\|\Delta A\|\_{MN}}{\|A\|\_{MN}} + \frac{\|\Delta b\|\_M}{\|A\|\_{MN} \|x\_l\|\_N} \right) \\ &\quad + \left\| A\_{lMN}^{+} \right\|\_{NM} \|A\|\_{MN} \frac{\|\Delta A\|\_{MN}}{\|A\|\_{MN}} \frac{\|r\_l\|\_M}{\|A\|\_{MN} \|x\_l\|\_N}, \end{split} \tag{79}$$

which implies (75). This completes the proof of Theorem 11.

For approximately given initial data, the rank of the original matrix should be specified as the numerical rank of the matrix (see in [28]).

Specifically, if *<sup>M</sup>* <sup>¼</sup> *<sup>I</sup>* <sup>∈</sup> *Rm*�*<sup>m</sup>* and *<sup>N</sup>* <sup>¼</sup> *<sup>I</sup>* <sup>∈</sup>*R<sup>n</sup>*�*<sup>n</sup>* , then the estimates of the hereditary error of normal pseudosolutions of systems of linear algebraic equations for case rankð Þ *<sup>A</sup>* <sup>&</sup>gt;rank *<sup>A</sup>* <sup>¼</sup> *<sup>l</sup>* follows from next theorem.

**Theorem 12** (see in [32]). Let $\mathrm{rank}(A) > \mathrm{rank}\left(\bar{A}\right) = l$ and $\|\Delta A\|/\mu\_l < \frac{1}{2}$. Then

$$\frac{||\mathbf{x}\_{l} - \overline{\mathbf{x}}||}{||\mathbf{x}\_{l}||} \le \frac{\mu\_{1}/\mu\_{l}}{1 - 2||\Delta A||/\mu\_{l}} \left(2\varepsilon\_{A} + \varepsilon\_{b\_{l}} + \varepsilon\_{\mathbf{A}} \frac{\mu\_{1}}{\mu\_{l}} \frac{||b - b\_{l}||}{||b\_{l}||}\right),\tag{80}$$

where $x\_l$ is the projection of the normal pseudosolution of problem (8) onto the right principal singular subspace of dimension $l$ of the matrix $A$, $b\_l$ is the projection of the right-hand side $b$ onto the principal left singular subspace of dimension $l$ of the matrix $A$, and $\mu\_i$ are the singular values of the matrix $A$.

## **3.2 Estimates of the hereditary error of a weighted normal pseudosolution for full rank matrices**

For matrices of full rank, it is essential that the rank does not change under perturbation of the elements, provided that the condition $\|\Delta A\|\_{MN}\left\|A\_{MN}^{+}\right\|\_{NM} < 1$ is met.

In addition, in what follows we will use the following property of full-rank matrices [28]:

$$A\_{\rm MN}^{+} = \left(A^{T} \mathbf{M} \mathbf{A}\right)^{-1} A^{T} \mathbf{M} \text{ for } m \ge n \text{ and } A\_{\rm MN}^{+} = N^{-1} A^{T} \left(A N^{-1} A^{T}\right)^{-1} \text{ for } n \ge m. \tag{81}$$
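Both formulas in (81) are easy to verify against the SVD-based weighted pseudoinverse. The following is a NumPy sketch; the helper names are our own.

```python
import numpy as np

rng = np.random.default_rng(4)

def sqrtm_spd(S):
    # symmetric square root of an SPD matrix
    w, V = np.linalg.eigh(S)
    return (V * np.sqrt(w)) @ V.T

def wpinv(A, M, N):
    # weighted Moore-Penrose pseudoinverse via the transformed ordinary pinv
    Mh, Nhi = sqrtm_spd(M), np.linalg.inv(sqrtm_spd(N))
    return Nhi @ np.linalg.pinv(Mh @ A @ Nhi) @ Mh

m, n = 6, 4
B = rng.standard_normal((m, m)); M = B @ B.T + m * np.eye(m)  # weight on the m-space
B = rng.standard_normal((n, n)); N = B @ B.T + n * np.eye(n)  # weight on the n-space

# m > n, full column rank: A+_MN = (A^T M A)^{-1} A^T M
A = rng.standard_normal((m, n))                  # generic => full column rank
tall = np.linalg.solve(A.T @ M @ A, A.T @ M)

# more columns than rows, full row rank: C+ = W2^{-1} C^T (C W2^{-1} C^T)^{-1},
# here C is n x m with row-space weight N and column-space weight M
C = rng.standard_normal((n, m))
Mi = np.linalg.inv(M)
wide = Mi @ C.T @ np.linalg.inv(C @ Mi @ C.T)
```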

If *m* ≥ *n*, then problem (9) is reduced to a problem of the form

$$\min\_{\mathbf{x}\in\mathbb{R}^n} \|A\mathbf{x} - b\|\_{\mathbf{M}}.\tag{82}$$

For such a problem, the following theorem is true.

**Theorem 13**. Let $\|\Delta A\|\_{MI}\left\|A\_{MN}^{+}\right\|\_{IM} < 1$, $m > n = k$. Then

$$\frac{\|x - \bar{x}\|}{\|x\|} \le \frac{h}{1 - h\varepsilon\_A} \left( 2\varepsilon\_A + \frac{\|\Delta b\|\_M}{\|A\|\_{MI}\|x\|} + h\varepsilon\_A \frac{\|r\|\_M}{\|x\| \|A\|\_{MI}} \right),\tag{83}$$

where *h* ¼ k k *A MI A*<sup>þ</sup> *MN IM*.

**Proof**. To prove Theorem 13, as before, we use relation (49). By (81), $\bar{P} = \bar{A}\_{MN}^{+}\bar{A} = I$, so that from (50) we have the equality

$$
\bar{A}\_{\rm MN}^{+} - A\_{\rm MN}^{+} = -\bar{A}\_{\rm MN}^{+} \Delta A A\_{\rm MN}^{+} + \bar{A}\_{\rm MN}^{+} (I - Q), \tag{84}
$$

using which we obtain (83).

If *n* ≥ *m*, then problem (9) is reduced to a problem of the form

$$\min\_{\mathbf{x}\in\mathcal{C}} \|\mathbf{x}\|\_{N}, \mathbf{C} = \{\mathbf{x} | A\mathbf{x} = b\} \tag{85}$$

and the following theorem holds for it.

**Theorem 14**. Let $\|\Delta A\|\_{IN}\left\|A\_{MN}^{+}\right\|\_{NI} < 1$, $n > m = k$. Then

$$\frac{\|x - \bar{x}\|\_N}{\|x\|\_N} \le \frac{h}{1 - h\varepsilon\_A} \left( 2\varepsilon\_A + \frac{\|\Delta b\|}{\|A\|\_{IN}\|x\|\_N} \right),\tag{86}$$

where *h* ¼ k k *A IN A*<sup>þ</sup> *MN NI*.

**Proof**. Since in this case $Q = AA\_{MN}^{+} = I$, the expression for $\bar{A}\_{MN}^{+} - A\_{MN}^{+}$ by (81) takes the form

$$
\bar{A}\_{\rm MN}^{+} - A\_{\rm MN}^{+} = -\bar{A}\_{\rm MN}^{+} \Delta A A\_{\rm MN}^{+} - (I - \bar{P}) \mathcal{N}^{-1} \Delta A^{T} A\_{\rm MN}^{+T} \mathcal{N} A\_{\rm MN}^{+}.\tag{87}
$$

Further calculations are similar to the previous ones. As a result, we come to estimate (86).

**Remark 3**. The relationship between the condition number $h(A)$ of the problem with exact initial data and the condition number $h\left(\bar{A}\right)$ of the matrix of the system with approximately given initial data is established by the estimates

$$\sigma\_k - \|\Delta A\|\_{MN} \le \bar{\sigma}\_k \le \sigma\_k + \|\Delta A\|\_{MN},\qquad \sigma\_1 - \|\Delta A\|\_{MN} \le \bar{\sigma}\_1 \le \sigma\_1 + \|\Delta A\|\_{MN},$$

$$\frac{\sigma\_1 - \|\Delta A\|\_{MN}}{\sigma\_k + \|\Delta A\|\_{MN}} \le \frac{\bar{\sigma}\_1}{\bar{\sigma}\_k} \le \frac{\sigma\_1 + \|\Delta A\|\_{MN}}{\sigma\_k - \|\Delta A\|\_{MN}},\qquad \frac{1 - \varepsilon\_A}{1 + \varepsilon\_A h} \le \frac{h\left(\bar{A}\right)}{h(A)} \le \frac{1 + \varepsilon\_A}{1 - \varepsilon\_A h},
\tag{88}$$

which are easy to obtain for the weighted matrix norm based on the perturbation theory for symmetric matrices.
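These bounds can be spot-checked numerically; here $\sigma\_i$ are the weighted singular values, i.e., the ordinary singular values of $M^{1/2}AN^{-1/2}$. A NumPy sketch (our own helper name `sqrtm_spd`):

```python
import numpy as np

rng = np.random.default_rng(5)

def sqrtm_spd(S):
    # symmetric square root of an SPD matrix
    w, V = np.linalg.eigh(S)
    return (V * np.sqrt(w)) @ V.T

m, n = 6, 4
A  = rng.standard_normal((m, n))
dA = 1e-4 * rng.standard_normal((m, n))
B  = rng.standard_normal((m, m)); M = B @ B.T + m * np.eye(m)
B  = rng.standard_normal((n, n)); N = B @ B.T + n * np.eye(n)
Mh, Nhi = sqrtm_spd(M), np.linalg.inv(sqrtm_spd(N))

sv    = np.linalg.svd(Mh @ A @ Nhi, compute_uv=False)          # weighted singular values of A
svbar = np.linalg.svd(Mh @ (A + dA) @ Nhi, compute_uv=False)   # ... of the perturbed matrix
d     = np.linalg.norm(Mh @ dA @ Nhi, 2)                       # ||dA||_MN

h, hbar = sv[0] / sv[-1], svbar[0] / svbar[-1]                 # weighted condition numbers
eps_A   = d / sv[0]
lo = (1 - eps_A) / (1 + eps_A * h)                             # bracket of (88)
hi = (1 + eps_A) / (1 - eps_A * h)
```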

**Lemma 7**. Let $A, \Delta A \in R^{m\times n}$, $\mathrm{rank}\left(\bar{A}\right) = \mathrm{rank}(A)$, and $\|\Delta A\|\_{MN}\left\|A\_{MN}^{+}\right\|\_{NM} < 1$. Then the relative error of the condition number of the matrix $A$ satisfies

$$\left|\frac{\bar{h} - h}{h}\right| \le \varepsilon\_A \frac{\mathbf{1} + h}{\mathbf{1} - \varepsilon\_A h} \tag{89}$$

where $h = h(A) = \|A\|\_{MN}\left\|A\_{MN}^{+}\right\|\_{NM}$ is the weighted condition number of the matrix $A$ and $\bar{h} = h\left(\bar{A}\right) = \left\|\bar{A}\right\|\_{MN}\left\|\bar{A}\_{MN}^{+}\right\|\_{NM}$ is the weighted condition number of the perturbed matrix $\bar{A} = A + \Delta A$.

The proof of Lemma 7 follows easily from inequality (25).

**Theorem 15**. Let $\|\Delta A\|\_{MN}\left\|\bar{A}\_{MN}^{+}\right\|\_{NM} < 1$, $\|\Delta A\|\_{MN} \le \varepsilon\_{\bar{A}}\left\|\bar{A}\right\|\_{MN}$, $\mathrm{rank}\left(\bar{A}\right) = \mathrm{rank}(A)$. Then

$$\frac{\|\bar{\boldsymbol{x}} - \boldsymbol{x}\|\_{N}}{\|\bar{\boldsymbol{x}}\|\_{N}} \leq \frac{h(\bar{\boldsymbol{A}})}{1 - h(\bar{\boldsymbol{A}})\varepsilon\_{\bar{\boldsymbol{A}}}} \left(2\varepsilon\_{\bar{\boldsymbol{A}}} + \frac{\|\Delta\boldsymbol{b}\|\_{\boldsymbol{M}}}{\|\bar{\boldsymbol{A}}\|\_{\boldsymbol{\mathcal{M}}\boldsymbol{N}}\|\bar{\boldsymbol{x}}\|\_{\boldsymbol{N}}} + h(\bar{\boldsymbol{A}})\varepsilon\_{\bar{\boldsymbol{A}}} \frac{\|\bar{\boldsymbol{r}}\|\_{\boldsymbol{M}}}{\|\bar{\boldsymbol{x}}\|\_{\boldsymbol{N}}\|\bar{\boldsymbol{A}}\|\_{\boldsymbol{\mathcal{M}}\boldsymbol{N}}}\right),\tag{90}$$

where $h\left(\bar{A}\right) = \left\|\bar{A}\right\|\_{MN}\left\|\bar{A}\_{MN}^{+}\right\|\_{NM}$ is the weighted condition number of the matrix $\bar{A}$, the symbols $\|\cdot\|\_{MN}$ and $\|\cdot\|\_{NM}$ denote the weighted matrix norms defined by Eqs. (4)–(6), and $\bar{A}\_{MN}^{+}$ is the weighted Moore–Penrose pseudoinverse.

Thus, hereditary-error estimates whose right-hand sides are determined by the approximate data alone can be obtained without inequalities (88). Estimates similar to (90) can be obtained for all the previously considered cases.

**Remark 4**. Under the conditions of Theorem 15, using the inequality

$$\frac{\|x - \bar{x}\|\_{N}}{\|x\|\_{N}} \leq \frac{\|x - \bar{x}\|\_{N}}{\|\bar{x}\|\_{N}} \left(1 + \frac{\|x - \bar{x}\|\_{N}}{\|x\|\_{N}}\right) \tag{91}$$

and inequality (90) we arrive at the estimate in the following theorem.

**Theorem 16.** Let $\|\Delta A\|\_{MN}\left\|\bar{A}\_{MN}^{+}\right\|\_{NM} < 1$, $\|\Delta A\|\_{MN} \le \varepsilon\_{\bar{A}}\left\|\bar{A}\right\|\_{MN}$, $\mathrm{rank}\left(\bar{A}\right) = \mathrm{rank}(A)$. Then

$$\frac{||\bar{\mathbf{x}} - \mathbf{x}||\_N}{||\mathbf{x}||\_N} \le \frac{\beta}{\mathbf{1} - \beta}, \quad \beta = \frac{h(\bar{A})}{\mathbf{1} - h(\bar{A})\varepsilon\_{\bar{A}}} \left( 2\varepsilon\_{\bar{A}} + \frac{||\Delta b||\_M}{||\bar{A}||\_{\text{MN}}||\bar{\mathbf{x}}||\_N} + h(\bar{A})\varepsilon\_{\bar{A}} \frac{||\bar{r}||\_M}{||\bar{x}||\_N ||\bar{A}||\_{\text{MN}}} \right). \tag{92}$$

Estimates similar to (92) can be obtained for all the previously considered cases.
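Estimates such as (90) and (92) can be checked numerically. For positive definite weights, the weighted pseudoinverse and the weighted norms reduce to ordinary ones through the substitution $B = M^{1/2}AN^{-1/2}$. The sketch below (Python/NumPy; the data and function names are our illustrative assumptions, not the authors' code) computes $A^{+}_{MN}$ this way and verifies that the resulting weighted least squares solution satisfies the $M$-weighted normal equations $A^{T}M(Ax - b) = 0$.

```python
import numpy as np

def sqrtm_spd(W):
    # symmetric square root of a symmetric positive definite matrix
    w, V = np.linalg.eigh(W)
    return (V * np.sqrt(w)) @ V.T

def weighted_pinv(A, M, N):
    # A+_{MN} = N^{-1/2} (M^{1/2} A N^{-1/2})^+ M^{1/2}  (positive definite M, N)
    Mh, Nh = sqrtm_spd(M), sqrtm_spd(N)
    Nh_inv = np.linalg.inv(Nh)
    return Nh_inv @ np.linalg.pinv(Mh @ A @ Nh_inv) @ Mh

# illustrative data: full-column-rank A with diagonal SPD weights
A = np.array([[1., 0.], [0., 1.], [1., 1.], [2., -1.]])
M = np.diag([1., 2., 3., 4.])   # weight on the residual norm
N = np.diag([2., 5.])           # weight on the solution norm
b = np.array([1., 2., 3., 4.])

x = weighted_pinv(A, M, N) @ b  # minimizer of ||Ax - b||_M
# for full column rank, x satisfies the M-weighted normal equations
print(np.allclose(A.T @ M @ (A @ x - b), 0.0))
```

The weighted condition number $h(A)$ appearing in (90) is then the ordinary spectral condition number of the transformed matrix $B = M^{1/2}AN^{-1/2}$.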

## **4. Research and solution of the WLS problem with approximate initial data**

### **4.1 Investigation of the properties of WLS problem with approximate initial data**

When studying the mathematical properties of the weighted least squares problem with approximate initial data in connection with its computer realization, by the approximate model in (10), (11) we will understand precisely the computer model of the problem. We will assume that the errors of the initial data Δ*A*, Δ*b* in this case also contain the errors that occur when the matrix coefficients are written to the computer memory or computed.

By a **matrix of full rank within the error of the initial data** we mean a matrix whose rank cannot change under a perturbation Δ*A* of its elements within that error.

By a **matrix of full rank within the machine precision** we mean a matrix whose rank cannot change under a change of its elements within the machine precision.

**Lemma 8**. If $\operatorname{rank}(A) = \min(m, n)$ and

$$\|\Delta A\|_{MN}\,\|A^{+}_{MN}\|_{NM} < 1,\tag{93}$$

then $\operatorname{rank}(\bar{A}) = \operatorname{rank}(A)$.

**Proof**. Let, for example, $\operatorname{rank}(A) = m$. Setting $\|\Delta A\|_{MN} = \varepsilon$, inequality (93) can be rewritten as $\varepsilon/\mu_m < 1$, which is equivalent to

$$
\mu_m - \varepsilon > 0. \tag{94}
$$

Let $\bar{\mu}_m$ be the $m$-th weighted singular value of the perturbed matrix $\bar{A} = A + \Delta A$. According to Lemma 3, we can write $\bar{\mu}_m \geq \mu_m - \varepsilon$. Then, taking (94) into account, we obtain $\bar{\mu}_m \geq \mu_m - \varepsilon > 0$.

Therefore $\operatorname{rank}(\bar{A}) \geq m$, whence we conclude that $\operatorname{rank}(\bar{A}) = m$, i.e. $\operatorname{rank}(\bar{A}) = \operatorname{rank}(A)$.

Taking into account the results of Lemma 8, the computer algorithm for studying rank completeness reduces to checking the two relations

$$\varepsilon_{\bar{A}}\, h(\bar{A}) < 1,\tag{95}$$

$$1.0 + \frac{1}{h(\bar{A})} \neq 1.0, \tag{96}$$

where $h(\bar{A}) = \|\bar{A}\|_{MN}\,\|\bar{A}^{+}_{MN}\|_{NM}$ is the weighted condition number of the matrix $\bar{A}$.

The fulfillment of the first condition (95) guarantees that the matrix has full rank within the accuracy of the initial data, while the second condition (96), checked in floating-point arithmetic, means that the matrix has full rank within the machine precision.

Under these conditions, the solution of the machine problem exists, is unique and stable. Such a machine problem should be considered correctly posed within the accuracy of the initial data.
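As a sketch (Python/NumPy, under the simplifying assumption of identity weights, so that ordinary singular values stand in for the weighted ones), the two checks (95) and (96) might look as follows; `eps_A` denotes the assumed relative error of the initial data.

```python
import numpy as np

def full_rank_checks(A, eps_A):
    # condition number h = mu_1 / mu_min from the singular values
    s = np.linalg.svd(A, compute_uv=False)
    h = s[0] / s[-1]
    within_data_accuracy = eps_A * h < 1.0           # condition (95)
    within_machine_precision = 1.0 + 1.0 / h != 1.0  # condition (96), in floating point
    return bool(within_data_accuracy), bool(within_machine_precision)

A_good = np.array([[1., 0.], [0., 1.], [1., 1.]])
A_bad = np.array([[1., 0.], [0., 1e-8]])  # condition number ~1e8

print(full_rank_checks(A_good, 1e-10))  # full rank within data accuracy and machine precision
print(full_rank_checks(A_bad, 1e-6))    # fails (95): rank not guaranteed within data accuracy
```

Note that (96) divides by $h(\bar{A})$ rather than testing $h(\bar{A})$ directly, which is exactly the overflow-avoiding design discussed below.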

Otherwise, the matrix of the perturbed system may not have full rank, and the machine model of problem (10), (11) should then be considered ill-posed. A key factor in studying the properties of a machine model is the criterion of the correctness of the problem. A useful fact here is that condition (96) involves the reciprocal of $h(\bar{A})$. As a result, no order overflow occurs for large condition numbers, and underflow of $1.0/h(\bar{A})$ for very large condition numbers is not fatal: the machine result is set to zero, which still allows the correct conclusion that the matrix of the machine problem loses rank.

In analyzing the properties of a machine model of problems with matrices of incomplete rank under conditions of approximate initial data, a fundamental role is played by the definition of the rank of a matrix.

The rank of the matrix under approximate initial data (the effective rank, or *δ*-rank) is defined as

$$\text{rank}(A, \delta) = \min\_{||A - B||\_{\text{MN}} \le \delta} \text{rank}(B). \tag{97}$$

This means that the *δ*-rank of the matrix is equal to the minimum rank among all matrices $B$ in the neighborhood $\|A - B\|_{MN} \leq \delta$.

It follows from [28] that if $r(\delta)$ is the *δ*-rank of the matrix, then

$$
\mu\_1 \ge \dots \ge \mu\_{r(\delta)} > \delta \ge \mu\_{r(\delta)+1} \ge \dots \ge \mu\_p, p = \min(m, n). \tag{98}
$$

The practical algorithm for finding the *δ*-rank can be stated as follows: take *r* equal to the largest value of *i* for which inequality (99) is fulfilled:

$$\frac{\delta}{\mu_i} < 1, \quad \mu_i \neq 0, \quad i = 1, 2, \ldots, p.\tag{99}$$

Using the effective rank of a matrix, one can always find the number of a stable projection that approximates the solution or the projection of the solution.

To analyze the rank of a matrix within the machine precision, the value *δ* can be tied to the machine precision, for example, by setting it equal to $\text{macheps}\,\|B\|$.
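A minimal sketch of the *δ*-rank computation (99) in Python/NumPy, again assuming identity weights so that ordinary singular values stand in for the weighted ones:

```python
import numpy as np

def delta_rank(B, delta):
    # effective rank per (98)-(99): the number of singular values mu_i > delta
    s = np.linalg.svd(B, compute_uv=False)
    return int(np.sum(s > delta))

B = np.diag([1.0, 1e-3, 1e-12])

# with a data-accuracy tolerance, the 1e-12 direction is treated as noise
print(delta_rank(B, 1e-6))
# with delta tied to machine precision (macheps * ||B||), all three values survive
print(delta_rank(B, np.finfo(float).eps * np.linalg.norm(B, 2)))
```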

## **4.2 Algorithm for finding a weighted normal pseudosolution of the weighted least squares problem with approximate initial data**

The algorithm is based on weighted singular value decomposition of matrices (Lemma 1).

Let $A \in \mathbb{R}^{m \times n}$ with $\operatorname{rank}(A) = k$, and let *M* and *N* be positive definite matrices of order *m* and *n*, respectively.

To solve the ill-posed problems in formulation (10), (11), the algorithm for obtaining an approximate normal pseudosolution of system (9), depending on the ratio of the ranks of the matrices $A$ and $\bar{A}$, reduces to the following three cases.

1. If the rank of the matrix has not changed, $\operatorname{rank}(\bar{A}) = \operatorname{rank}(A) = k$, an approximate weighted normal pseudosolution is constructed by the formula

$$
\bar{\mathfrak{x}} = \bar{A}\_{\text{MN}}^{+} \bar{b}, \tag{100}
$$

where $\bar{A}^{+}_{MN}$ is represented via the weighted singular value decomposition (7). In this case, the weighted normal pseudosolution of system (9) is approximated by the weighted normal pseudosolution of system (10) and, if $\|\Delta A\|_{MN}\,\|A^{+}_{MN}\|_{NM} < 1$, then the error of the solution is estimated by formula (48).

If the rank of the matrix is full and conditions (95), (96) are satisfied, the rank of the matrix does not change, and formulas (100) and (48) can be used to obtain the solution and estimate its error.

2. If the matrix rank has increased, $\operatorname{rank}(\bar{A}) > \operatorname{rank}(A) = k$, an approximate weighted normal pseudosolution is constructed by the formula

$$
\bar{\boldsymbol{x}}\_k = \bar{\boldsymbol{A}}\_{k\text{MN}}^{+} \bar{\boldsymbol{b}}.\tag{101}
$$

The weighted pseudoinverse matrix $\bar{A}^{+}_{k\,MN}$ is defined as follows:

$$\bar{A}\_{kMN}^{+} = N^{-1} \bar{V} \bar{D}\_{k}^{+} \bar{U}^{T} M,\tag{102}$$

where $\bar{D}_k$ is a rectangular matrix whose first *k* diagonal elements are nonzero and coincide with the corresponding elements of the matrix $\bar{D}$ from (7), all other elements being equal to zero.

In this case, the weighted normal pseudosolution of system (9) is approximated by the projection of the weighted normal pseudosolution of system (10) onto the right principal weighted singular subspace of dimension *k* of the matrix $\bar{A}$ and, if $\|\Delta A\|_{MN}\,\|A^{+}_{MN}\|_{NM} < \frac{1}{2}$, then the error of the solution is estimated by formula (64).

3. If the rank of the matrix has decreased, $\operatorname{rank}(A) > \operatorname{rank}(\bar{A}) = l$, an approximation to the projection of the weighted normal pseudosolution of problem (9) is constructed using formula (100). In this case, the projection of the weighted normal pseudosolution of system (9) onto the principal right weighted singular subspace of dimension *l* of the matrix $A$ is approximated by the weighted normal pseudosolution of system (10) and, if $\|\Delta A\|_{MN}/\mu_l < \frac{1}{2}$, the projection error is estimated by formula (75).

**Remark 5**. If the rank of the original matrix is unknown, then the *δ*-rank should be taken as the projection number in (101). In this case, it is guaranteed that a stable approximation is found either to a weighted normal pseudosolution or to a projection, respectively, with error estimates.

If the rank of the original matrix is known, then it is guaranteed to find an approximation to the weighted normal pseudosolution with appropriate estimates.

**Remark 6**. Because of the zero columns in the matrix $\bar{D}^{+}$, only the first *n* columns of the matrix $\bar{U}$ can actually contribute to the product (100). Moreover, if some of the weighted singular values are equal to zero, then fewer than *n* columns of $\bar{U}$ are needed. If $k_p$ is the number of nonzero weighted singular values, then $\bar{U}$ can be reduced to size $m \times k_p$, $\bar{D}^{+}$ to size $k_p \times k_p$, and $\bar{V}^{T}$ to size $k_p \times n$. Formally, such matrices $\bar{U}$ and $\bar{V}$ are not $M$-orthogonal and $N^{-1}$-orthogonal, respectively, since they are not square. However, their columns are weighted orthonormal systems of vectors.
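The three cases above share one computational core: a (possibly truncated) weighted SVD. The sketch below (Python/NumPy; the reduction $B = M^{1/2}AN^{-1/2}$ for positive definite weights and the function names are our illustrative assumptions) builds the truncated pseudosolution $\bar{x}_k = \bar{A}^{+}_{k\,MN}\bar{b}$ of (101)–(102) by keeping only the first $k$ weighted singular values.

```python
import numpy as np

def sqrtm_spd(W):
    # symmetric square root of a symmetric positive definite matrix
    w, V = np.linalg.eigh(W)
    return (V * np.sqrt(w)) @ V.T

def truncated_weighted_pseudosolution(A, b, M, N, k):
    # the weighted SVD of A corresponds to the ordinary SVD of B = M^{1/2} A N^{-1/2}
    Mh, Nh = sqrtm_spd(M), sqrtm_spd(N)
    Nh_inv = np.linalg.inv(Nh)
    U, s, Vt = np.linalg.svd(Mh @ A @ Nh_inv, full_matrices=False)
    # x_k = A_k^+ b: zero out all weighted singular values past the k-th, as in (102)
    y = (U[:, :k].T @ (Mh @ b)) / s[:k]
    return Nh_inv @ (Vt[:k].T @ y)

# nearly rank-one matrix: the second weighted singular value is tiny (~1e-7)
A = np.array([[1., 1.], [1., 1.0000001], [2., 2.]])
b = np.array([1., 1., 2.])
M, N = np.eye(3), np.eye(2)

x1 = truncated_weighted_pseudosolution(A, b, M, N, 1)  # stable projection, k = 1
print(np.linalg.norm(A @ x1 - b))  # small residual without dividing by the tiny mu_2
```

Choosing *k* as the *δ*-rank, per Remark 5, avoids dividing by weighted singular values that are indistinguishable from noise.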

## **5. Analysis of the reliability of computer solutions to the WLS problem with approximate initial data**

## **5.1 Estimates of the total error of a weighted normal pseudosolution for matrices of arbitrary rank**

Estimates of the total error take into account both the hereditary error due to the error in the initial data and the computational error due to an approximate method of determining the solution to the problem. In this case, the method of obtaining a solution is not taken into account. The computational error can be a consequence both of an approximate method of obtaining a solution and of inaccuracy in performing arithmetic operations on a computer. The residual vector $\bar{r} = \bar{A}\bar{\bar{x}} - \bar{b}$ takes into account the overall effect of these errors.

Let us obtain estimates for the total error of the weighted normal pseudosolution using the previously introduced notation (47). Let us consider three cases.

**Case 1**. *The rank of the original matrix A remains the same under its perturbation, i.e.,* $\operatorname{rank}(A) = \operatorname{rank}(\bar{A})$.

**Theorem 17**. Assume that $\|\Delta A\|_{MN}\,\|A^{+}_{MN}\|_{NM} < 1$, $\operatorname{rank}(\bar{A}) = \operatorname{rank}(A) = k$, and let $\bar{\bar{x}} \in \bar{A}^{\#}$. Then

$$\frac{||\bar{\boldsymbol{x}} - \bar{\bar{\boldsymbol{x}}}||\_{N}}{||\boldsymbol{x}||\_{N}} \leq \frac{h}{1 - h\varepsilon\_{\rm A}} (2\varepsilon\_{\rm A} + a + h\varepsilon\_{\rm A}\boldsymbol{\beta} + \boldsymbol{\gamma}).\tag{103}$$

**Proof**. For the hereditary error, estimate (48) holds in this case. To estimate the computational error $\bar{x} - \bar{\bar{x}}$, we use the relation

$$
\bar{A}\left(\bar{\mathbf{x}} - \bar{\bar{\mathbf{x}}}\right) = \bar{r} = \bar{b}\_k - \bar{A}\bar{\bar{\mathbf{x}}},\tag{104}
$$

where $\bar{b}_k$ is the projection of the vector $\bar{b}$ onto the principal left weighted singular subspace of the matrix $\bar{A}$, i.e. $\bar{b}_k$ lies in the range of $\bar{A}$.

Considering that $\bar{x} - \bar{\bar{x}} \in \bar{A}^{\#}$ and the fact that $\bar{A}^{+}_{MN}\bar{A}$ is a projector onto $\bar{A}^{\#}$, we have

$$
\bar{A}\_{\rm MN}^{+} \bar{A} \left( \bar{\mathbf{x}} - \bar{\bar{\mathbf{x}}} \right) = \bar{\mathbf{x}} - \bar{\bar{\mathbf{x}}} = \bar{A}\_{\rm MN}^{+} \bar{r}. \tag{105}
$$

From this, we obtain an estimate of the computational error

$$\frac{||\bar{\mathbf{x}} - \bar{\bar{\mathbf{x}}}||\_{N}}{||\bar{\mathbf{x}}||\_{N}} \le ||\bar{A}||\_{\text{MN}} \left|| \bar{A}^{+}\_{\text{MN}} \right||\_{\text{NM}} \frac{||\bar{r}||\_{M}}{||\bar{b}\_{k}||\_{M}} \,. \tag{106}$$

An estimate of the total error of the normal pseudosolution follows from the relations

$$\frac{\|x - \bar{\bar{x}}\|_{N}}{\|x\|_{N}} \leq \frac{\|x - \bar{x}\|_{N}}{\|x\|_{N}} + \frac{\|\bar{x} - \bar{\bar{x}}\|_{N}}{\|x\|_{N}},\tag{107}$$

$$\frac{\|\bar{x} - \bar{\bar{x}}\|_{N}}{\|x\|_{N}} \leq \|A\|_{MN}\left\|\bar{A}^{+}_{MN}\right\|_{NM}\frac{\|\bar{r}\|_{M}}{\|\bar{b}_{k}\|_{M}}\tag{108}$$

and estimates (48), (25). The theorem is proved.

**Case 2**. *The rank of the perturbed matrix is larger than that of the original matrix A, i.e.,* $\operatorname{rank}(\bar{A}) > \operatorname{rank}(A) = k$.

If $\|\Delta A\|_{MN}\,\|A^{+}_{MN}\|_{NM} < 1$, then from [26] it follows that the rank of the perturbed matrix cannot decrease.

**Theorem 18**. Assume that $\|\Delta A\|_{MN}\,\|A^{+}_{MN}\|_{NM} < \frac{1}{2}$, $\operatorname{rank}(\bar{A}) > \operatorname{rank}(A) = k$, and let $\bar{\bar{x}} \in \bar{A}_k^{\#}$. Then

$$\frac{\|\mathbf{x} - \bar{\bar{\mathbf{x}}}\|\_{N}}{\|\mathbf{x}\|\_{N}} \le \frac{h}{\mathbf{1} - h\varepsilon\_{A}} (2\varepsilon\_{A} + a + h\varepsilon\_{A}\boldsymbol{\beta} + \boldsymbol{\gamma}\_{k}).\tag{109}$$

**Proof**. To estimate the computational error $\|\bar{x}_k - \bar{\bar{x}}\|_{N}$, we use the fact that $\bar{A}_k\bar{x}_k = \bar{b}_k$. Then, for an arbitrary vector $\bar{\bar{x}} \in \bar{A}_k^{\#}$,

$$
\bar{\mathbf{A}}\_{k}(\bar{\mathbf{x}}\_{k} - \bar{\bar{\mathbf{x}}}) = \bar{r}\_{k} = \bar{b}\_{k} - \bar{\mathbf{A}}\_{k}\bar{\bar{\mathbf{x}}},\\\bar{\mathbf{A}}\_{k}^{+}\bar{\mathbf{A}}\_{k}(\bar{\mathbf{x}}\_{k} - \bar{\bar{\mathbf{x}}}) = \bar{\mathbf{A}}\_{k}^{+}\bar{r}\_{k}.\tag{110}
$$

Considering that $\bar{x}_k - \bar{\bar{x}} \in \bar{A}_k^{\#}$ and that the operator $\bar{A}^{+}_{k\,MN}\bar{A}_k$ is the projection operator onto $\bar{A}_k^{\#}$, we obtain

$$
\bar{A}\_{k\text{MN}}^{+} \bar{A}\_{k} (\bar{\mathbf{x}}\_{k} - \bar{\bar{\mathbf{x}}}) = \bar{\mathbf{x}}\_{k} - \bar{\bar{\mathbf{x}}} = \bar{A}\_{k\text{MN}}^{+} \bar{r}\_{k}, \\
\bar{\mathbf{x}}\_{k} - \bar{\bar{\mathbf{x}}} = \bar{A}\_{k\text{MN}}^{+} \bar{r}\_{k}. \tag{111}
$$

Hence follows an estimate of the computational error for the projection of the normal pseudosolution

$$\frac{\|\bar{x}_{k} - \bar{\bar{x}}\|_{N}}{\|\bar{x}_{k}\|_{N}} \leq \|\bar{A}_{k}\|_{MN}\left\|\bar{A}^{+}_{k\,MN}\right\|_{NM}\frac{\|\bar{r}_{k}\|_{M}}{\|\bar{b}_{k}\|_{M}}.\tag{112}$$

The estimate of the total error follows from the inequalities

$$\frac{\|\mathbf{x} - \bar{\bar{\mathbf{x}}}\|\_{N}}{\|\mathbf{x}\|\_{N}} \le \frac{\|\mathbf{x} - \bar{\mathbf{x}}\_{k}\|\_{N}}{\|\mathbf{x}\|\_{N}} + \frac{\|\bar{\mathbf{x}}\_{k} - \bar{\bar{\mathbf{x}}}\|\_{N}}{\|\mathbf{x}\|\_{N}},\\\frac{\|\bar{\mathbf{x}}\_{k} - \bar{\bar{\mathbf{x}}}\|\_{N}}{\|\mathbf{x}\|\_{N}} \le \frac{\|\mathbf{A}\|\_{\text{MN}}\left\|\bar{\mathbf{A}}\_{k\text{MN}}^{+}\right\|\_{\text{NM}}\|\bar{\mathbf{r}}\_{k}\|\_{\text{M}}}{\|\mathbf{b}\_{k}\|\_{\text{M}}}\tag{113}$$

and estimates (25), (64).

**Case 3**. *The rank of the original matrix is larger than that of the perturbed matrix, i.e.,* $\operatorname{rank}(A) > \operatorname{rank}(\bar{A}) = l$.

Consider the case when the condition $\|\Delta A\|_{MN}\,\|A^{+}_{MN}\|_{NM} < 1$ is not satisfied, so that the rank of the perturbed matrix can decrease.

**Theorem 19**. Assume that $\operatorname{rank}(A) > \operatorname{rank}(\bar{A}) = l$, $\|\Delta A\|_{MN}/\mu_l < \frac{1}{2}$, and let $\bar{\bar{x}} \in \operatorname{Im}\bar{A}^{\#}$. Then

$$\frac{\|x_{l} - \bar{\bar{x}}\|_{N}}{\|x_{l}\|_{N}} \leq \frac{\mu_{1}/\mu_{l}}{1 - 2\|\Delta A\|_{MN}/\mu_{l}}\left(2\varepsilon_{A} + a_{l} + \frac{\mu_{1}}{\mu_{l}}\varepsilon_{A}\beta_{l} + \gamma_{l}\right)\tag{114}$$

**Proof**. For the proof, along with problem (9), consider the problem

$$\min\_{\mathbf{x}\in\mathcal{C}} \|\mathbf{x}\|\_{N}, \mathbf{C} = \left\{\mathbf{x} \|\|A\mathbf{x} - b\|\|\_{M} = \min\right\} \tag{115}$$

with the matrix $A_l = U\Sigma_l V^{T}$ of rank *l*.

The estimate of the computational error in this case will be

$$\frac{||\bar{\mathbf{x}} - \bar{\bar{\mathbf{x}}}||\_{N}}{||\bar{\mathbf{x}}||\_{N}} \le ||\bar{\mathbf{A}}||\_{\text{MN}} \left|| \bar{\mathbf{A}}\_{\text{MN}}^{+} \right||\_{\text{NM}} \frac{||\bar{r}||\_{\mathcal{M}}}{||\bar{\mathbf{b}}\_{l}||\_{\mathcal{M}}} \,. \tag{116}$$

The estimate of the total error follows from the inequalities

$$\frac{\|x_{l} - \bar{\bar{x}}\|_{N}}{\|x_{l}\|_{N}} \leq \frac{\|x_{l} - \bar{x}\|_{N}}{\|x_{l}\|_{N}} + \frac{\|\bar{x} - \bar{\bar{x}}\|_{N}}{\|x_{l}\|_{N}}, \quad \frac{\|\bar{x} - \bar{\bar{x}}\|_{N}}{\|x_{l}\|_{N}} \leq \frac{\|A\|_{MN}\left\|\bar{A}^{+}_{l\,MN}\right\|_{NM}\|\bar{r}_{l}\|_{M}}{\|\bar{b}_{l}\|_{M}},\tag{117}$$

the obvious relationships $\|A\|_{MN} = \|A_l\|_{MN}$, $\|A^{+}_{l\,MN}\|_{NM} = 1/\mu_l$, the estimate of the hereditary error (75), and the inequality $\|\Delta A_l\|_{MN} \leq 2\|\Delta A\|_{MN}$.

## **5.2 Estimates of the total error of the weighted normal pseudosolution for matrices of full rank**

In the following Theorems 20 and 21, the weighted pseudoinverse *A*<sup>þ</sup> *MN* is represented in accordance with the properties of the full rank matrix (81).

**Theorem 20.** Let $\|\Delta A\|_{MI}\,\|A^{+}_{MN}\|_{IM} < 1$, $m > n = k$, and $\bar{\bar{x}} \in R(\bar{A}^{\#})$. Then

$$\frac{\|x - \bar{\bar{x}}\|}{\|x\|} \leq \frac{h}{1 - h\varepsilon_{A}}\left(2\varepsilon_{A} + \frac{\|\Delta b\|_{M}}{\|A\|_{M}\|x\|} + h\varepsilon_{A}\frac{\|r\|_{M}}{\|A\|_{M}\|x\|} + \frac{\|\bar{r}_{k}\|_{M}}{\|A\|_{M}\|x\|}\right).\tag{118}$$

**Proof**. The estimate of the computational error is determined by formula (106), namely

$$\frac{||\bar{\mathcal{X}} - \bar{\bar{\mathcal{X}}}||}{||\bar{\mathcal{X}}||} \le ||\bar{A}||\_{\text{MN}} \left|| \bar{A}^+\_{\text{MN}} \right||\_{\text{NM}} \frac{||\bar{r}||\_{\text{M}}}{||\bar{\mathcal{b}}\_k||\_{\text{M}}} \,. \tag{119}$$

The estimate for the total error (118) follows from the inequalities

$$\frac{\|x - \bar{\bar{x}}\|}{\|x\|} \leq \frac{\|x - \bar{x}\|}{\|x\|} + \frac{\|\bar{x} - \bar{\bar{x}}\|}{\|x\|}, \quad \frac{\|\bar{x} - \bar{\bar{x}}\|}{\|x\|} \leq \|A\|_{M}\left\|\bar{A}^{+}_{M}\right\|_{M}\frac{\|\bar{r}\|_{M}}{\|\bar{b}_{k}\|_{M}}\tag{120}$$

and the estimates for the pseudoinverse matrix (25) and the hereditary error (83).

**Theorem 21**. Let $\|\Delta A\|_{IN}\,\|A^{+}_{MN}\|_{NI} < 1$, $n > m = k$, and $\bar{\bar{x}} \in R(\bar{A}^{\#})$. Then

$$\frac{||\boldsymbol{x} - \bar{\boldsymbol{x}}||\_{N}}{||\boldsymbol{x}||\_{N}} \leq \frac{h}{1 - h\varepsilon\_{\rm A}} \left( 2\varepsilon\_{\rm A} + \frac{||\Delta b||}{||A||\_{IN}||\boldsymbol{x}||\_{N}} + \frac{||\bar{r}||}{||b\_{k}||} \right),\tag{121}$$

The proof of Theorem 21 is similar to the proof of the previous theorem, taking into account the estimate for the hereditary error (86).

**Remark 7**. Here, we did not indicate a method for obtaining an approximate weighted normal pseudosolution $\bar{\bar{x}}$ satisfying the conditions of the theorems. Algorithms for obtaining such approximations are considered, for example, in Section 4.2.

**Remark 8**. Along with estimates (103), (109), (114), (118), (121), error estimates can be obtained, the right-hand sides of which depend on the input data of systems of linear algebraic equations with approximately given initial data. For example, the following theorem holds.

**Theorem 22**. Let $\|\Delta A\|_{MN}\,\|\bar{A}^{+}_{MN}\|_{NM} < 1$ and $\bar{\bar{x}} \in R(\bar{A}^{\#})$. Then, for the total error of the normal pseudosolution, the following estimate holds:

$$\frac{\|\boldsymbol{x} - \bar{\boldsymbol{x}}\|\_{N}}{\|\bar{\boldsymbol{x}}\|\_{N}} \leq \frac{h\left(\bar{\boldsymbol{A}}\right)}{1 - h\left(\bar{\boldsymbol{A}}\right)\varepsilon\_{\bar{\boldsymbol{A}}}} \left(2\varepsilon\_{\bar{\boldsymbol{A}}} + \frac{\|\Delta\boldsymbol{b}\|\_{\boldsymbol{M}}}{\left\|\bar{\boldsymbol{A}}\right\|\_{\boldsymbol{M}\boldsymbol{N}}\left\|\bar{\boldsymbol{x}}\right\|\_{\boldsymbol{N}}} + \\ \quad h\left(\bar{\boldsymbol{A}}\right)\varepsilon\_{\bar{\boldsymbol{A}}} \frac{\left\|\bar{\boldsymbol{r}}\right\|\_{\boldsymbol{M}}}{\left\|\bar{\boldsymbol{A}}\right\|\_{\boldsymbol{M}\boldsymbol{N}}\left\|\bar{\boldsymbol{x}}\right\|\_{\boldsymbol{N}}} + \frac{\left\|\bar{\boldsymbol{r}}\right\|\_{\boldsymbol{M}}}{\left\|\bar{\boldsymbol{b}}\_{k}\right\|\_{\boldsymbol{M}}}\right). \tag{122}$$

Estimate (122) can be obtained from the inequality

$$\frac{\|x - \bar{\bar{x}}\|_{N}}{\|\bar{x}\|_{N}} \leq \frac{\|x - \bar{x}\|_{N}}{\|\bar{x}\|_{N}} + \frac{\|\bar{x} - \bar{\bar{x}}\|_{N}}{\|\bar{x}\|_{N}}\tag{123}$$

and estimates (90), (106).

If the weighted pseudoinverse matrix is known, or its weighted singular value decomposition is obtained in the process of solving the problem, then a practical estimate of the computational error can be obtained using (104). When calculating the residual $\bar{r} = \bar{b}_k - \bar{A}\bar{\bar{x}}$, the explicit form of the projection operator onto $R(\bar{A}^{\#})$ is used.
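As an illustration of this practical check (Python/NumPy, identity weights assumed, with a hypothetical computed solution `xc` perturbed by hand), a computational-error bound of the form (106) can be evaluated directly from the residual against the projected right-hand side:

```python
import numpy as np

A = np.array([[1., 0.], [0., 1.], [1., 1.]])
b = np.array([1., 2., 3.5])

x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)  # exact LS solution (stand-in for x-bar)
xc = x_ls + 1e-8                              # hypothetical computed solution
b_k = A @ x_ls                                # projection of b onto range(A)
r = b_k - A @ xc                              # residual against the projected RHS, as in (104)

s = np.linalg.svd(A, compute_uv=False)
bound = (s[0] / s[-1]) * np.linalg.norm(r) / np.linalg.norm(b_k)  # h * ||r|| / ||b_k||
actual = np.linalg.norm(x_ls - xc) / np.linalg.norm(x_ls)
print(actual <= bound)
```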

In conclusion, we note that the determining factor for obtaining estimates is the use of a weighted singular value decomposition [37] and the technique of reducing the problem of estimating the error of a pseudosolution to an estimate of the error [32] for problems with matrices of the same rank. Based on the results obtained, an algorithm for finding the effective rank of matrices can be developed, as well as an algorithm for calculating stable projections of a weighted normal pseudosolution.

## **5.3 Software-algorithmic methods for increasing the accuracy of computer solutions**

The numerical methods we have considered for solving systems of linear algebraic equations and WLS problems have one common property. Namely, the actually computed solution (pseudosolution) is, in accordance with backward error analysis [43], exact for some perturbed problem. These perturbations are very small and are often commensurate with the rounding errors of the input data. If the input data are given with errors (from measurements, computations, etc.), then they usually already contain errors significantly larger than rounding errors. In this case, any attempt to improve the machine solution (pseudosolution) without involving additional information about the exact problem or the errors of the input data will be untenable.

The situation changes significantly if a mathematical problem with accurate input data is considered. Now the criterion of bad or good conditioning of the computer model of the problem depends on the mathematical properties of the computer model and on the properties of the processor (the length of the machine word), and it becomes possible in principle to achieve any given accuracy of the computer solution. In this case, as follows from estimates (48), (64), (75), (83), (86), the computer solution can obviously be refined by solving the system with increased bit depth, in particular, using the GMP library [44] for implementing computations with arbitrary bit depth.

To predict the length of the mantissa (machine word) that provides a given accuracy of the solution (of consistent systems), one can use the following rule of thumb: the number of correct decimal significant digits in a computer solution is *μ* − *α*, where *μ* is the decimal order of the mantissa of a floating-point number (the order of the machine epsilon ε) and *α* is the decimal order of the condition number. Thus, knowing the conditioning of the system matrix and the accuracy of computations on a computer, one can determine the bit depth required to obtain a reliable solution.
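A back-of-the-envelope version of this rule (Python; the function name is ours) subtracts the decimal order of the condition number from the decimal length of the mantissa:

```python
import math

def predicted_correct_digits(mantissa_decimal_digits, cond):
    # rule of thumb: correct digits ~ mu - alpha, where alpha is the decimal order of cond
    alpha = int(round(math.log10(cond)))
    return max(mantissa_decimal_digits - alpha, 0)

print(predicted_correct_digits(16, 1e8))   # IEEE double (~16 digits), cond 1e8
print(predicted_correct_digits(34, 1e8))   # quad precision (~34 digits)
print(predicted_correct_digits(16, 1e20))  # double precision is hopeless here
```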

The GMP library supports work with integers, rational numbers, and floating-point numbers. Its main feature is that the bit depth (precision) of numbers is practically unlimited; its main fields of application are therefore computer algebra, cryptography, etc. The functions of the GMP library allow not only setting the bit depth at the beginning of the program and performing all calculations with it, but also changing the bit depth as needed during the computation, i.e. executing different fragments of the algorithm with different bit depths.
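GMP itself is a C library; as a self-contained stand-in, the sketch below uses Python's stdlib `Fraction` type to play the role of GMP rationals, solving an ill-conditioned Hilbert system exactly by Gaussian elimination. (GMP-backed types such as `mpq_t` would fill the same role with far better performance.)

```python
from fractions import Fraction

def solve_exact(A, b):
    # Gaussian elimination with partial pivoting in exact rational arithmetic
    n = len(A)
    A = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix (copies the input)
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))  # pivot row
        A[k], A[p] = A[p], A[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n + 1):
                A[i][j] -= f * A[k][j]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):  # back substitution
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

n = 8
H = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]  # Hilbert matrix
b = [sum(row) for row in H]   # exact right-hand side for x = (1, ..., 1)
x = solve_exact(H, b)
print(x == [Fraction(1)] * n)  # exact solution despite the severe ill-conditioning of H
```

In double precision, the same system would lose most of its significant digits; exact (or sufficiently extended) arithmetic removes the computational error entirely, leaving only the hereditary error of the data.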

The library's capabilities were tested in the study of solutions of degenerate and ill-conditioned systems in [45].

## **6. Conclusions**

In the framework of these studies, estimates of the hereditary error of the weighted normal pseudosolution are obtained for matrices of arbitrary form and rank, including the case when the rank of the perturbed matrix changes. Three cases are considered: the rank of the matrix does not change under data perturbation, the rank increases, and the rank decreases. In the first case, the weighted normal pseudosolution of the approximate problem is taken as an approximation to the weighted normal pseudosolution; in the other two, the problem is reduced to the case when the ranks of the matrices are the same. Estimates of the error of the weighted pseudoinverse matrix and of the weighted condition number of the matrix are also obtained, and the existence and uniqueness of the weighted normal pseudosolution are investigated and proved. Estimates of the total error of the solution of the weighted least squares problem with matrices of arbitrary form and rank are established.

The results obtained in the perturbation theory of weighted least squares problem can be a theoretical basis for further research into various aspects of the WLS problem and the development of methods for calculating weighted pseudoinverse matrices and weighted normal pseudosolutions with approximate initial data, in particular, in the design and optimization of building structures, in tomography, in the calibration of viscometers, in statistics. The results of the research can be used in the educational process when reading special courses on this section of the theory of matrices.

*Matrix Theory - Classics and Advances*

## **Author details**

Aleksandr N. Khimich, Elena A. Nikolaevskaya\* and Igor A. Baranov V.M. Glushkov Institute of Cybernetics of NAS of Ukraine, Kyiv, Ukraine

\*Address all correspondence to: elena\_nea@ukr.net

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

*Weighted Least Squares Perturbation Theory DOI: http://dx.doi.org/10.5772/intechopen.102885*

## **References**

[1] Chipman JS. On least squares with insufficient observations. Journal of the American Statistical Association. 1964; **59**(308):1078-1111

[2] Milne RD. An oblique matrix pseudoinverse. SIAM Journal on Applied Mathematics. 1968;**16**(5):931-944

[3] Ward JF, Boullion TL, Lewis TO. Weighted pseudoinverses with singular weights. SIAM Journal on Applied Mathematics. 1971;**21**(3):480-482

[4] Galba EF, Deineka VS, Sergienko IV. Weighted pseudoinverses and weighted normal pseudosolutions with singular weights. Computational Mathematics and Mathematical Physics. 2009;**49**(8): 1281-1297

[5] Sergienko IV, Galba EF, Deineka VS. Existence and uniqueness of weighted pseudoinverse matrices and weighted normal pseudosolutions with singular weights. Ukrainian Mathematical Journal. 2011;**63**(1):98-124

[6] Varenyuk NA, Galba EF, Sergienko IV, Khimich AN. Weighted pseudoinversion with indefinite weights. Ukrainian Mathematical Journal. 2018;**70**(6):866-889

[7] Galba EF, Varenyuk NA. Representing weighted pseudoinverse matrices with mixed weights in terms of other pseudoinverses. Cybernetics and System Analysis. 2018;**54**(2):185-192

[8] Galba EF, Varenyuk NA. Expansions of weighted pseudoinverses with mixed weights into matrix power series and power products. Cybernetics and System Analysis. 2019;**55**:760-771. DOI: 10.1007/ s10559-019-00186-9

[9] Goldman AJ, Zelen M. Weak generalized inverses and minimum variance linear unbiased estimation. Journal of Research of the National Bureau of Standards. 1964;**68B**(4):151-172

[10] Rao CR, Mitra SK. Generalized Inverse of Matrices and its Applications. New York: Wiley; 1971. p. 240

[11] Nashed MZ. Generalized Inverses and Applications. New York: Academic Press; 1976. p. 1068

[12] Ben-Israel A, Greville TNE. Generalized Inverses: Theory and Applications. New York: Springer-Verlag; 2003. p. 420. DOI: 10.1007/b97366

[13] Khimich AN. Perturbation bounds for the least squares problem. Cybernetics and System Analysis. 1996; **32**(3):434-436

[14] Khimich AN, Nikolaevskaya EA. Reliability analysis of computer solutions of systems of linear algebraic equations with approximate initial data. Cybernetics and System Analysis. 2008; **44**(6):863-874

[15] Nikolaevskaya EA, Khimich AN. Error estimation for a weighted minimum-norm least squares solution with positive definite weights. Computational Mathematics and Mathematical Physics. 2009;**49**(3): 409-417

[16] Wei Y, Wang D. Condition numbers and perturbation of the weighted Moore-Penrose inverse and weighted linear least squares problem. Applied Mathematics and Computation. 2003;**145**(1):45-58

[17] Wei Y. A note on the sensitivity of the solution of the weighted linear least squares problem. Applied Mathematics and Computation. 2003;**145**(2–3):481-485

[18] Molchanov IN, Galba EF. A weighted pseudoinverse for complex matrices.

Ukrainian Mathematical Journal. 1983; **35**(1):46-50

[19] Elden L. A weighted pseudoinverse, generalized singular values and constrained least squares problems. BIT. 1982;**22**:487-502

[20] Wei Y. The weighted Moore– Penrose inverse of modified matrices. Applied Mathematics and Computation. 2001;**122**(1):1-13. DOI: 10.1016/S0096- 3003(00)00007-2

[21] Voevodin VV. On the regularization method. Zhurnal Vychislitel'noy Matematiki i Matematicheskoy Fiziki. 1969;**9**:673-675

[22] Tikhonov AN. Regularization of ill-posed problems. Doklady Akademii Nauk SSSR. 1963;**153**:42-52

[23] Ivanov VK, Vasin VV, Tanana VP. Theory of Linear Ill-Posed Problems and Applications. Moscow: Nauka; 1978. p. 206

[24] Morozov VA. Regularization Methods for Unstable Problems. Moscow: Moscow State University; 1987 [in Russian]

[25] Albert AE. Regression and the Moore–Penrose Pseudoinverse. New York: Academic Press; 1972. p. 180

[26] Lawson CL, Hanson RJ. Solving Least Squares Problems. Moscow: Nauka; 1986. p. 232

[27] Kirichenko NF. Analytical representation of perturbations of pseudoinverses. Kibernetika i Sistemnyi Analiz. 1997;**2**:98-107

[28] Golub GH, Van Loan CF. Matrix Computations. Baltimore: Johns Hopkins University Press; 1996. p. 694

[29] Elden L. Perturbation theory for the least squares problem with linear equality constraints. SIAM Journal on Numerical Analysis. 1980;**17**:338-350

[30] Björck Å. Numerical Methods for Least Squares Problems. Philadelphia: SIAM; 1996. p. 407

[31] Voevodin VV. Computational Foundations of Linear Algebra. Moscow: Nauka; 1977

[32] Khimich AN. Estimates of perturbations for least squares solutions. Kibernetika i Sistemnyi Analiz. 1996;**3**: 142-145

[33] Khimich AN, Voitsekhovskii SA, Brusnikin VN. The reliability of solutions to linear mathematical models with approximately given initial data. Mathematical Machines and Systems. 2004;**3**:3-17

[34] Wei Y, Wu H. Expression for the perturbation of the weighted Moore–Penrose inverse. Computers & Mathematics with Applications. 2000;**39**:13-18

[35] Wei M. Supremum and Stability of Weighted Pseudoinverses and Weighted Least Squares Problems: Analysis and Computations. Huntington, NY: Nova Science Publishers; 2001. p. 180

[36] Wang D. Some topics on weighted Moore–Penrose inverse, weighted least squares, and weighted regularized Tikhonov problems. Applied Mathematics and Computation. 2004;**157**:243-267

[37] Van Loan CF. Generalizing the singular value decomposition. SIAM Journal on Numerical Analysis. 1976;**13**:76-83

[38] Wang G, Wei Y, Qiao S. Generalized Inverses: Theory and Computations. Beijing: Science Press; 2004. p. 390

*Weighted Least Squares Perturbation Theory DOI: http://dx.doi.org/10.5772/intechopen.102885*

[39] Albert A. Regression, Pseudoinversion, and Recurrent Estimation. Moscow: Nauka; 1977. p. 305

[40] Khimich AN, Nikolaevskaya EA. Error estimation for weighted least squares solutions. Computational Mathematics. 2006;**3**:36-45

[41] Khimich AN, Nikolaevskaya EA. Perturbation analysis of weighted least squares solutions. Theory of Optimal Solutions. 2007;**6**:12-18

[42] Voevodin VV, Kuznetsov YA. Matrices and Calculations. Moscow: Nauka; 1984. p. 318

[43] Wilkinson JH, Reinsch C. Handbook of Algorithms in ALGOL. Linear Algebra. Moscow: Mashinostroenie; 1976. p. 389

[44] GNU Multiple Precision Arithmetic Library. Available from: www.gmplib.org

[45] Nikolaevskaya EA, Khimich AN, Chistyakova TV. Programming with Multiple Precision. Berlin/Heidelberg, Germany: Springer; 2012
