1. Introduction

There are two main classes of computing: hard computing and soft computing. Scientists and engineers generally prefer the first class because, unlike in the second class, they can easily establish an explicit mathematical relationship between the model parameters and their data (observations). The relationships between the parameters and the data can be linear or nonlinear.

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

If the relations are nonlinear, they should be linearized via Taylor expansion [1–7]; the resulting linear models can then be solved by linear algebra [8–15].

A mathematical model between the data and the unknowns can be established by Taylor expansion for any model. However, if the $m$-piece function vector $f_m(\hat{y}, \hat{x}) = 0$ cannot be transformed into $\hat{y}_n - f_n(\hat{x}) = 0$ (for $m = n$), an errors-in-variables solution such as the total least squares (TLS) method may be preferred. Therefore, $f_m(\hat{y}, \hat{x}) = 0$ (for $m \neq n$) should be differentiated as follows:

$$B_{m,n} = \left.\frac{\partial f(\hat{y}, \hat{x})}{\partial \hat{y}}\right|_{\hat{y}, \hat{x} = y, x^0}.$$

Most science and engineering problems can be modeled as

$$\hat{y}_n - f_n(\hat{x}) = 0 \quad (m = n). \qquad (1)$$

Therefore, this functional model, named the indirect adjustment method in the adjustment literature of geomatics engineering [3–7], is preferred in this chapter. The weight matrix $P_{n,n}$ of the observations (stochastic variables) is taken here as the identity matrix, $P_{n,n} = I_{n,n}$, for simplicity.
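After Taylor linearization, the model $\hat{y} - f(\hat{x}) = 0$ with $P = I$ is solved by iterating ordinary least squares on the corrections $\delta$ (Gauss-Newton). The sketch below illustrates this; the exponential model, data values, and starting point are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

# Illustrative nonlinear observation equation: y_i = x0 * exp(x1 * t_i)
t = np.linspace(0.0, 1.0, 6)
x_true = np.array([2.0, -1.5])
y = x_true[0] * np.exp(x_true[1] * t)

def f(x):
    return x[0] * np.exp(x[1] * t)

def jacobian(x):
    # A = ∂f/∂x evaluated at the current approximation x^0
    return np.column_stack([np.exp(x[1] * t),
                            x[0] * t * np.exp(x[1] * t)])

x = np.array([1.0, 0.0])                          # initial approximation x^0
for _ in range(20):
    A = jacobian(x)                               # design matrix of the linearized model
    l = y - f(x)                                  # reduced observations (misclosure)
    delta = np.linalg.lstsq(A, l, rcond=None)[0]  # least squares with P = I
    x = x + delta                                 # update x^0 := x^0 + delta
    if np.linalg.norm(delta) < 1e-12:
        break
```

With exact (noise-free) data the iteration recovers the true parameters to machine precision after a few steps.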

$$B_{m,n}\,\varepsilon_n - A_{m,u}\,\delta_u + l_m = 0, \quad P_{n,n}. \qquad (2)$$

2.1. Objective functions

A generalization of the objective functions is the $L_p$-norm ($p = 1, 2, 3, 4, \dots, \infty$) [9, 10]. The first-degree objective function is $L_1$-norm estimation, which is accepted as a robust estimation method in just linear models [9–11]. The second-degree objective function is $L_2$-norm estimation, which is known as the least squares (LS) method and is widely used in hard and soft computations. The last-degree objective function is $L_\infty$-norm estimation, which is known as the minmax method. In fact, the soft computing techniques use this objective while they apply the trial-and-error method in their learning stages. Eq. (1) under the $L_1$-norm and the $L_\infty$-norm is also solved by means of linear programming methods; for this reason, these methods may give several solutions (as in the trial-and-error method) to any problem of interest [10, 11].

$$i^{T}\lvert\varepsilon\rvert \mapsto \min \quad (L_1\text{-norm estimation, least absolute residuals}), \qquad (3)$$

$$\varepsilon^{T}\varepsilon \mapsto \min \quad (L_2\text{-norm estimation, least squares}), \qquad (4)$$

$$\max\lvert\varepsilon\rvert \mapsto \min \quad (L_\infty\text{-norm estimation, minmax absolute residuals}), \qquad (5)$$

where $i = [\,1\ \ 1\ \ \dots\ \ 1\,]^{T}$.

2.2. Rank deficiencies in linear models

While the rank is the number of linearly independent columns of the coefficient matrix of the unknowns in a linear equation system, a rank deficiency is the number of linearly dependent columns of that matrix (when the rank is smaller than the row number). Inconsistency in the solution stage of a linear equation system results from such (rank) deficiencies.
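The rank and rank-deficiency concepts of Section 2.2 can be checked numerically; in the small sketch below, the matrix values are illustrative assumptions (the third column is the sum of the first two):

```python
import numpy as np

# 4 x 3 coefficient matrix with one linearly dependent column:
# column 3 = column 1 + column 2 (illustrative values)
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0],
              [2.0, 0.0, 2.0],
              [1.0, 1.0, 2.0]])

rank = np.linalg.matrix_rank(A)   # number of linearly independent columns
deficiency = A.shape[1] - rank    # column rank deficiency
print(rank, deficiency)           # prints: 2 1
```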

On Non-Linearity and Convergence in Non-Linear Least Squares

http://dx.doi.org/10.5772/intechopen.76313


To overcome complicated real-life problems whose mathematical models are not known, soft computing techniques have been developed in recent decades. Well-known techniques include artificial neural networks (ANN), artificial intelligence (AI), machine learning (ML), deep learning (DL), fuzzy logic (FL) and genetic algorithms (GA) [16–18]. These techniques, inspired by human intelligence and learning processes, can be very time-consuming depending on the given data, because their processing is based on the trial-and-error method. Roughly defined, the data (experimental outcomes and observations) are separated into two parts: learning (or training) data and test data. The mathematical (functional and/or stochastic) relations between the data and the model parameters are learned from the learning data, and the resulting model is then tested by means of the test data. If the trained and developed model meets expectations, it is used to estimate unobserved data for the scientific (or engineering) problem [16–18].
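The learn/test workflow described above can be sketched for a simple linear model; the data, noise level, and 75/25 split ratio below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 40)
y = 3.0 * t + 1.0 + 0.05 * rng.standard_normal(t.size)   # noisy observations

# separate the data into learning (training) and test parts
idx = rng.permutation(t.size)
train, test = idx[:30], idx[30:]

# learn the relation between data and model parameters (least squares)
A = np.column_stack([t[train], np.ones(train.size)])
coef = np.linalg.lstsq(A, y[train], rcond=None)[0]

# test the handled model on the unseen part
pred = coef[0] * t[test] + coef[1]
rmse = np.sqrt(np.mean((pred - y[test]) ** 2))
```

The test RMSE stays near the noise level when the learned model generalizes well.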

In the soft computing techniques, linear algebra is also a very effective tool for solving problems, as it is in the hard computing ones. For this reason, we should take a short overview of the linear algebra used in science and engineering [16–18].
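As an illustration of the objective functions in Section 2.1, the sketch below compares the $L_2$-norm (least squares) with $L_1$-norm estimation on data containing one gross outlier. The $L_1$ solution here is obtained with iteratively reweighted least squares, which is an assumption of this sketch (the chapter does not prescribe an algorithm), as are the data values:

```python
import numpy as np

t = np.arange(10, dtype=float)
y = 2.0 * t + 1.0
y[-1] += 50.0                                 # one gross outlier

A = np.column_stack([t, np.ones_like(t)])

# L2-norm: ordinary least squares, pulled toward the outlier
x_l2 = np.linalg.lstsq(A, y, rcond=None)[0]

# L1-norm: least absolute residuals via iteratively reweighted LS
x_l1 = x_l2.copy()
for _ in range(50):
    r = y - A @ x_l1
    w = 1.0 / np.maximum(np.abs(r), 1e-6)     # IRLS weights ~ 1/|residual|
    sw = np.sqrt(w)
    x_l1 = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]
```

The $L_1$ slope stays close to the true value 2, while the $L_2$ slope is strongly distorted by the single outlier, illustrating why $L_1$-norm estimation is regarded as robust.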
