Preface


Generalizing the concepts of scalar, vector, and matrix so that they are independent of any chosen coordinate system brought forth the concept of tensor. A tensor is a mathematical entity born of the idea that these concepts should remain invariant under any choice of coordinate system and under any coordinate transformation.

The concept has been of great importance in describing the invariance of the laws of physics with respect to any inertial reference frame in which physical phenomena are measured. Likewise, gravitation theory is an example of the importance of describing physical laws through their invariants. Theories such as general and special relativity brought forth the Einstein summation convention and other tools of Levi-Civita calculus, contributing special tensors inside the Riemannian structure that best describes the phenomena of the universe and their relations. Such is the case of Riemann tensors, pseudo-tensors, and the different types of curvature tensors arising from theories of torsion fields, Cartan-Einstein theories, or supersymmetries in quantum mechanics. A more mathematical focus on tensors considers multilinear forms and the tensor product of vector spaces, which have more relevance to tensor applications in Hilbert spaces, for example, in QED and quantum mechanics.

Applications in classical mechanics, electrodynamics, quantum mechanics, and communication theory are well developed through tensors. In quantum communication theory and in geometries parallel to Riemannian geometry, such as twistor geometry, spinors and twistors are considered new interpretations of tensors in fields and waves.

> **Dr. Francisco Bulnes**
> Professor and Director, IINAMEI
> Research Department in Mathematics and Engineering, TESCHA, Mexico

Section 1

Fundamentals


#### **Chapter 1**

## Bilinear Applications and Tensors

*Rodrigo Garcia Eustaquio*

#### **Abstract**

In this chapter, a theoretical approach to the vector space of tensors of order 3 and the vector space of bilinear applications is presented, in order to establish an isomorphism between these spaces along with several properties of tensors and bilinear applications. With this well-defined isomorphism, we show how to calculate the product between the tensor of second derivatives and a vector, a product used in several numerical methods, such as the Chebyshev-Halley class and others mentioned in the introduction. In addition, concepts of differentiability are presented, allowing the reader a better understanding of second-order derivatives seen as tensors.

**Keywords:** tensor, bilinear application, isomorphism, second derivative, inexact tensor-free Chebyshev-Halley class

#### **1. Introduction**

Frequently, discretization of mathematical models demands solving a system of equations, which is generally nonlinear. Such mathematical problems might be written as

$$\text{find } x^* \in \mathbb{R}^n \text{ such that } F(x^*) = 0, \tag{1}$$

where $F : \mathbb{R}^n \to \mathbb{R}^n$.

There exist iterative methods for solving (1) that have cubic convergence rate, for instance, the methods belonging to the following class of methods named Chebyshev-Halley class, which was introduced by Hernández and Gutiérrez in [1]:

$$x^{k+1} = x^k - \left[ I + \frac{1}{2}\,\mathcal{L}\!\left(x^k\right)\left(I - \alpha\,\mathcal{L}\!\left(x^k\right)\right)^{-1} \right] J_F\!\left(x^k\right)^{-1} F\!\left(x^k\right), \tag{2}$$

for all $k \in \mathbb{N}$, where

$$\mathcal{L}(x) = J_F(x)^{-1}\, \mathcal{T}_F(x)\left(J_F(x)^{-1} F(x)\right), \tag{3}$$

and $J_F(x)$ and $\mathcal{T}_F(x)$ denote the first and second derivatives of *F* evaluated at *x*, respectively. The parameter *α* is a real number and *I* is the identity matrix in $\mathbb{R}^{n\times n}$.

Discretized versions of the Chebyshev-Halley class have already been considered in [2], in such a way that the tensor of second derivatives of the function *F* was approximated by bilinear operators. A tensor is a multi-way array or multidimensional matrix. A generalization of the Chebyshev-Halley class (2) in which no second-order derivative information is required, but which also has cubic

convergence rate, named inexact tensor-free Chebyshev-Halley class, was introduced by Eustaquio, Ribeiro, and Dumett [3]. Other families of iterative methods with cubic convergence rate were extensively described in Traub's book [4].

Several alternatives exist for the product of the tensor of second derivatives of *F* with vectors [5–8], and this point needs to be elucidated.

The aim of this chapter is to present concepts of and relationships between tensors of order 3 and bilinear applications, in order to relate them to the second derivative of a twice-differentiable application. We will see later that, given vectors $u, v \in \mathbb{R}^n$, the *i*-th row of the matrix $\mathcal{T}_F(x)\,v$ is defined by $v^T \nabla^2 f_i(x)$, where $\nabla^2 f_i(x)$ is the Hessian of the *i*-th component of *F* evaluated at *x*, and the *i*-th component of the vector $\mathcal{T}_F(x)\,v\,u$ is defined by $v^T \nabla^2 f_i(x)\,u$.
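The row characterization of $\mathcal{T}_F(x)\,v$ turns iteration (2)–(3) into plain linear algebra. Below is a minimal sketch in Python with NumPy; the two-dimensional system `F`, its Jacobian, and the Hessians are toy choices made for this illustration, not taken from the chapter:

```python
import numpy as np

# Toy system F(x) = (x0^2 + x1 - 3, x0 + x1^2 - 5), with a root at (1, 2).
def F(x):
    return np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])

def JF(x):  # Jacobian of F
    return np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])

def TF_v(x, v):  # T_F(x) v: its i-th row is v^T (Hessian of f_i at x)
    H0 = np.array([[2.0, 0.0], [0.0, 0.0]])  # Hessian of f_0
    H1 = np.array([[0.0, 0.0], [0.0, 2.0]])  # Hessian of f_1
    return np.vstack([v @ H0, v @ H1])

def chebyshev_halley_step(x, alpha):
    J = JF(x)
    s = np.linalg.solve(J, F(x))           # J_F(x)^{-1} F(x)
    L = np.linalg.solve(J, TF_v(x, s))     # L(x) as in (3)
    I = np.eye(2)
    corr = I + 0.5 * L @ np.linalg.inv(I - alpha * L)
    return x - corr @ s                    # iteration (2)

x = np.array([0.5, 1.5])
for _ in range(6):
    x = chebyshev_halley_step(x, alpha=0.5)  # alpha = 1/2 gives Halley's method
print(np.round(x, 8))  # converges to the root (1, 2)
```

With $\alpha = 1/2$ the class reduces to Halley's method; thanks to the cubic convergence rate, a handful of iterations from a nearby starting point suffices.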


#### **2. Tensors**

Tensors naturally arise in some applications, such as chemometry [9], signal processing [10], and others. According to [8], for many applications involving high-order tensors, the known results of matrix algebra seemed to be insufficient in the twentieth century. There were some workshops and congresses on the study of tensors, such as:

• the Workshop on Tensor Decomposition at the American Institute of Mathematics, Palo Alto, California, 2004, organized by Golub, Kolda, Nagy, and Van Loan (details in [11]);

• the Workshop on Tensor Decompositions and Applications, 2005, organized by Comon and De Lathauwer (details in [12]); and

• the Minisymposium on Numerical Multilinear Algebra: A New Beginning, 2007, organized by Golub, Comon, De Lathauwer, and Lim, which took place in Zurich.
Readers interested in multilinear singular value decomposition, eigenvalues, and eigenvectors may consult references [5–8, 13, 14]. In this text, we will focus our attention on tensors of order 3.

Let $I_1$, $I_2$, and $I_3$ be three positive integers. A tensor $\mathcal{T}$ of order 3 is a three-way array whose elements $t^{i_3}_{i_1 i_2}$ are indexed by $i_1 = 1, \dots, I_1$, $i_2 = 1, \dots, I_2$, and $i_3 = 1, \dots, I_3$, and the *n*-th dimension of the tensor is denoted by $I_n$, for $n = 1, 2, 3$. For example, the first, second, and third dimensions of a tensor $\mathcal{T} \in \mathbb{R}^{2\times 4\times 3}$ are 2, 4, and 3, respectively.

Obviously, tensors are generalizations of matrices. A matrix can be viewed as a tensor of order 2, while a vector can be viewed as a tensor of order 1.

From an algebraic point of view, a tensor $\mathcal{T}$ of order 3 is an element of the vector space $\mathbb{R}^{I_1\times I_2\times I_3}$, whereas from the geometric point of view, a tensor $\mathcal{T}$ of order 3 can be seen as a parallelepiped [15], with $I_1$ rows, $I_2$ columns, and $I_3$ tubes. **Figure 1** illustrates a tensor $\mathcal{T} \in \mathbb{R}^{2\times 4\times 3}$.

In linear algebra, it is common to see a matrix through its columns. If $A \in \mathbb{R}^{m\times n}$, then *A* can be viewed as $A = [\,a_1 \;\cdots\; a_n\,]$, where $a_j \in \mathbb{R}^m$ denotes the *j*-th column of the matrix *A*. In the case of tensors of order 3, we can see them through fibers and slices. Hence follow the definitions.

**Definition 1.1.** A tensor fiber of a tensor of order 3 is a one-dimensional fragment obtained by fixing only two indices.

*Bilinear Applications and Tensors DOI: http://dx.doi.org/10.5772/intechopen.90904*

**Figure 1.** *A tensor* $\mathcal{T} \in \mathbb{R}^{2\times 4\times 3}$.


*Advances on Tensor Analysis and Their Applications*


**Definition 1.2.** A tensor slice of a tensor of order 3 is a two-dimensional section (fragment), obtained by fixing only one index.

Generally in tensors of order 3, a fiber is a vector and a slice is a matrix. We have three types of fibers:

• column fibers (or mode-1 fibers), where the indices $i_2$ and $i_3$ are fixed;

• row fibers (or mode-2 fibers), where the indices $i_1$ and $i_3$ are fixed; and

• tube fibers (or mode-3 fibers), where the indices $i_1$ and $i_2$ are fixed.

We also have three types of slices:

• horizontal slices, where the index $i_1$ is fixed;

• lateral slices, where the index $i_2$ is fixed; and

• frontal slices, where the index $i_3$ is fixed.
For example, consider a tensor $\mathcal{T} \in \mathbb{R}^{2\times 4\times 3}$ with $i = 1, 2$, $j = 1, 2, 3, 4$, and $k = 1, 2, 3$. The *i*-th horizontal slice, denoted by $\mathcal{T}_{i::}$, is the matrix

$$\mathcal{T}_{i::} = \begin{pmatrix} t_{i1}^1 & t_{i1}^2 & t_{i1}^3 \\ t_{i2}^1 & t_{i2}^2 & t_{i2}^3 \\ t_{i3}^1 & t_{i3}^2 & t_{i3}^3 \\ t_{i4}^1 & t_{i4}^2 & t_{i4}^3 \end{pmatrix},$$

the *j*-th lateral slice, denoted by $\mathcal{T}_{:j:}$, is the matrix

$$\mathcal{T}_{:j:} = \begin{pmatrix} t_{1j}^1 & t_{1j}^2 & t_{1j}^3 \\ t_{2j}^1 & t_{2j}^2 & t_{2j}^3 \end{pmatrix},$$

and the *k*-th frontal slice, denoted by $\mathcal{T}_{::k}$, is the matrix

$$\mathcal{T}_{::k} = \begin{pmatrix} t_{11}^k & t_{12}^k & t_{13}^k & t_{14}^k \\ t_{21}^k & t_{22}^k & t_{23}^k & t_{24}^k \end{pmatrix}. \tag{4}$$

**Figures 2** and **3** illustrate the three types of fibers and slices, respectively, of a tensor $\mathcal{T} \in \mathbb{R}^{2\times 4\times 3}$.
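Fibers and slices map directly onto array indexing. Here is a small sketch in Python with NumPy, under the assumption (made for this illustration) that the element $t^{i_3}_{i_1 i_2}$ is stored at position `[i1, i2, i3]` of the array:

```python
import numpy as np

# A tensor T in R^{2x4x3}, stored so that T[i1, i2, i3] = t^{i3}_{i1 i2}.
T = np.arange(24).reshape(2, 4, 3)

# Fibers: fix two indices (Definition 1.1).
col_fiber  = T[:, 1, 2]   # mode-1 (column) fiber: i2, i3 fixed -> length 2
row_fiber  = T[0, :, 2]   # mode-2 (row) fiber: i1, i3 fixed -> length 4
tube_fiber = T[0, 1, :]   # mode-3 (tube) fiber: i1, i2 fixed -> length 3

# Slices: fix one index (Definition 1.2).
horizontal = T[0, :, :]   # T_{1::}, a 4x3 matrix
lateral    = T[:, 0, :]   # T_{:1:}, a 2x3 matrix
frontal    = T[:, :, 0]   # T_{::1}, a 2x4 matrix

print(horizontal.shape, lateral.shape, frontal.shape)  # (4, 3) (2, 3) (2, 4)
```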

**Figure 2.** *Columns, rows, and tube fibers, respectively.*

**Figure 3.** *Horizontal, lateral, and frontal slices, respectively.*

#### **2.1 Tensor operations**

The first issue to consider in this subsection is how to calculate the product between tensors and matrices. It is well known from elementary algebra that given matrices *A* ∈ IR*<sup>m</sup>*�*<sup>n</sup>* and *B*∈IR*<sup>R</sup>*�*<sup>m</sup>*, it is possible to calculate the product *BA*, because the first dimension (number of rows) of matrix *A* agrees with the second dimension (number of columns) of matrix *B*, and each product element is the result of the inner product between rows of matrix *B* and columns of matrix *A*.

The product between tensors of order 3 and matrices or vectors is a bit more complicated. In order to obtain an element of the product between a tensor and a matrix, it is necessary to specify what dimension of the tensor will be chosen to agree with the number of columns of the matrix, and each resulting element will be a result of the inner product between the mode-*n* fibers (column, row, or tube) and the columns of the matrix. We will use the solution adopted by [8], which defines the product mode-*n* between tensors and matrices and the solution adopted by [5] that defines the contracted product mode-*n* between tensors and vectors.

The mode-*n* product is useful when one wants to compute a singular value decomposition of a high-order tensor while avoiding the concept of a generalized transpose. We refer to [5, 7, 8, 13] for details.

**Definition 1.3.** (mode-*n* tensor matrix product) The mode-1 product between a tensor $\mathcal{T} \in \mathbb{R}^{m\times n\times p}$ and a matrix $A \in \mathbb{R}^{R\times m}$ is a tensor

$$\mathcal{Y} = \mathcal{T} \times\_1 A \in \mathbb{R}^{R \times n \times p}$$

where its elements are defined by

$$y_{rj}^k = \sum_{i=1}^{m} t_{ij}^k\, a_{ri} \quad \text{where} \quad r = 1, \dots, R,\; j = 1, \dots, n,\; \text{and } k = 1, \dots, p.$$

The mode-2 product between a tensor $\mathcal{T} \in \mathbb{R}^{m\times n\times p}$ and a matrix $A \in \mathbb{R}^{R\times n}$ is a tensor

$$\mathcal{Y} = \mathcal{T} \times\_2 A \in \mathbb{R}^{m \times R \times p}$$

where its elements are defined by

$$y_{ir}^k = \sum_{j=1}^{n} t_{ij}^k\, a_{rj} \quad \text{where} \quad i = 1, \dots, m,\; r = 1, \dots, R,\; \text{and } k = 1, \dots, p.$$

The mode-3 product between a tensor $\mathcal{T} \in \mathbb{R}^{m\times n\times p}$ and a matrix $A \in \mathbb{R}^{R\times p}$ is a tensor

$$\mathcal{Y} = \mathcal{T} \times_3 A \in \mathbb{R}^{m \times n \times R}$$

where its elements are defined by

$$y\_{ij}^r = \sum\_{k=1}^p t\_{ij}^k a\_{rk} \quad \text{where} \quad i = 1, \dots, m, j = 1, \dots, n \text{ and } r = 1, \dots, R.$$
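Definition 1.3 can be sketched in a few lines of Python with NumPy, using `einsum` to contract the chosen tensor dimension against the columns of the matrix; the storage convention `T[i, j, k]` $= t^k_{ij}$ is an assumption of this sketch:

```python
import numpy as np

# Mode-n products (Definition 1.3) for T[i, j, k] = t^k_{ij}.
def mode1(T, A):  # T in R^{m x n x p}, A in R^{R x m} -> R^{R x n x p}
    return np.einsum('ijk,ri->rjk', T, A)

def mode2(T, A):  # A in R^{R x n} -> R^{m x R x p}
    return np.einsum('ijk,rj->irk', T, A)

def mode3(T, A):  # A in R^{R x p} -> R^{m x n x R}
    return np.einsum('ijk,rk->ijr', T, A)

T = np.random.rand(2, 4, 3)
print(mode1(T, np.ones((5, 2))).shape)  # (5, 4, 3)
print(mode2(T, np.ones((5, 4))).shape)  # (2, 5, 3)
print(mode3(T, np.ones((5, 3))).shape)  # (2, 4, 5)
```

Each resulting element is the inner product of a mode-*n* fiber with a row of *A*, exactly as in the three element-wise formulas above.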

To understand the mode-*n* product in terms of matrices, consider matrices $A \in \mathbb{R}^{m\times n}$, $B \in \mathbb{R}^{k\times m}$, and $C \in \mathbb{R}^{q\times n}$. By Definition 1.3, we have

$$A \times_1 B = BA \in \mathbb{R}^{k \times n} \quad \text{and} \quad A \times_2 C = AC^T \in \mathbb{R}^{m \times q}.$$

Thus, the singular value decomposition of matrix *A* can be written as

$$U\Sigma V^T = (\Sigma \times\_1 U) \times\_2 V = (\Sigma \times\_2 V) \times\_1 U.$$
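This identity is easy to check numerically. The sketch below assumes the same `einsum` encoding of Definition 1.3, specialized to order-2 tensors (matrices):

```python
import numpy as np

# For matrices, Definition 1.3 reduces to A x_1 B = BA and A x_2 C = A C^T,
# so U Sigma V^T = (Sigma x_1 U) x_2 V.
def mat_mode1(A, B):
    return np.einsum('ij,ri->rj', A, B)   # equals B @ A

def mat_mode2(A, C):
    return np.einsum('ij,rj->ir', A, C)   # equals A @ C.T

A = np.random.rand(4, 3)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Sigma = np.diag(s)
recon = mat_mode2(mat_mode1(Sigma, U), Vt.T)  # (Sigma x_1 U) x_2 V
assert np.allclose(recon, A)                  # recovers A
```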

The mode-*n* product satisfies the following property [8]:

**Property 1.** Let $\mathcal{T}$ be a tensor of order 3 and matrices *A* and *B* of convenient sizes. We have, for all $r, s = 1, 2, 3$,

$$\left(\mathcal{T}\times_r A\right)\times_s B = \left(\mathcal{T}\times_s B\right)\times_r A = \mathcal{T}\times_r A \times_s B \quad \text{for } r \neq s, \text{ and} \tag{5}$$

$$\left(\mathcal{T}\times_r A\right)\times_r B = \mathcal{T}\times_r (BA). \tag{6}$$

The idea of Bader and Kolda [5] for calculating the product between a tensor and a vector is to calculate the inner product of each mode-*n* fiber (column, row, or tube) with the vector. It is not advantageous to treat an *m*-dimensional vector as an $m \times 1$ matrix. For example, if we consider a tensor $\mathcal{T} \in \mathbb{R}^{m\times n\times p}$ and a vector $v \in \mathbb{R}^{m\times 1}$, with $m, n, p \neq 1$, then by Definition 1.3 the product between $\mathcal{T}$ and *v* is not well defined, but it is possible to calculate $\mathcal{T} \times_1 v^T$.

**Definition 1.4.** (contracted product mode-*n* between tensors and vectors) The contracted product mode-1 between a tensor $\mathcal{T} \in \mathbb{R}^{m\times n\times p}$ and a vector $v \in \mathbb{R}^m$ is the matrix

$$A = \mathcal{T} \overline{\times}\_1 v \in \mathbb{R}^{n \times p}$$

where its elements are defined by

$$a\_{jk} = \sum\_{i=1}^{m} t\_{ij}^k v\_i \quad \text{where} \quad j = 1, \dots, n \text{ and } k = 1, \dots, p$$

where $v_i$ is the *i*-th component of the vector *v*.
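A sketch of the contracted mode-1 product in Python with NumPy (same assumed storage convention as before). Note how the contracted dimension is dropped, whereas the mode-1 product of Definition 1.3 applied to $v^T$ keeps it as a singleton dimension:

```python
import numpy as np

# Contracted mode-1 product (Definition 1.4): inner product of each
# mode-1 (column) fiber with the vector v.
def cmode1(T, v):  # T in R^{m x n x p}, v in R^m -> R^{n x p}
    return np.einsum('ijk,i->jk', T, v)

T = np.random.rand(2, 4, 3)
v = np.random.rand(2)
A = cmode1(T, v)
print(A.shape)  # (4, 3)

# Same numbers as the mode-1 product with v treated as a 1 x m matrix,
# but that result keeps a singleton dimension: shape (1, 4, 3).
B = np.einsum('ijk,ri->rjk', T, v.reshape(1, 2))
assert np.allclose(A, B[0])
```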


The contracted product mode-2 between a tensor $\mathcal{T} \in \mathbb{R}^{m\times n\times p}$ and a vector $v \in \mathbb{R}^n$ is the matrix

$$A = \mathcal{T} \overline{\times}\_2 v \in \mathbb{R}^{m \times p}$$

col*k*ð Þ¼ T �1*x*

*DOI: http://dx.doi.org/10.5772/intechopen.90904*

*Bilinear Applications and Tensors*

*a*1*<sup>k</sup> a*2*<sup>k</sup> a*3*<sup>k</sup> a*4*<sup>k</sup>* 1

*t k* <sup>11</sup> *t k* 21 1

CCCCCA

*x*1 *x*2

> *t* 1 <sup>1</sup>*<sup>j</sup> t* 2 <sup>1</sup>*<sup>j</sup> t* 3 11

0 @

*t* 1 <sup>2</sup>*<sup>j</sup> t* 2 <sup>2</sup>*<sup>j</sup> t* 3 21

> *x*1 *x*2 *x*3 *x*4

1

CCCCCA

*t* 1 *<sup>i</sup>*<sup>1</sup> *t* 2 *<sup>i</sup>*<sup>1</sup> *t* 3 *i*1

0

BBBBB@

1 A

0

BB@

*t* 1 *<sup>i</sup>*<sup>1</sup> *t* 1 *<sup>i</sup>*<sup>2</sup> *t* 1 *<sup>i</sup>*<sup>3</sup> *t* 1 *i*4

0

BB@

This example can be easily generalized to arbitrary dimensions. In particular, for

**Lemma 1.5.** Let <sup>T</sup> <sup>∈</sup>IR*<sup>n</sup>*�*n*�*<sup>n</sup>* be a tensor. If <sup>T</sup> *<sup>i</sup>*:: is a symmetric matrix for all

ð Þ T �2*u v* ¼ Tð Þ �2*v u*

*t* 2 *<sup>i</sup>*<sup>1</sup> *t* 2 *<sup>i</sup>*<sup>2</sup> *t* 2 *<sup>i</sup>*<sup>3</sup> *t* 2 *i*4

*t* 3 *<sup>i</sup>*<sup>1</sup> *t* 3 *<sup>i</sup>*<sup>2</sup> *t* 3 *<sup>i</sup>*<sup>3</sup> *t* 3 *i*4

*x*1 *x*2 *x*3 1

CCA

row*i*ð Þ¼ <sup>T</sup> �2*<sup>x</sup> xT*<sup>T</sup> *<sup>i</sup>*:: (9) row*i*ð Þ¼ <sup>T</sup> �3*<sup>x</sup> <sup>x</sup><sup>T</sup>* <sup>T</sup> *<sup>i</sup>*:: � �*<sup>T</sup>* (10)

¼ T :*j*: � �*<sup>x</sup>* and

1

CCA <sup>¼</sup> *xT* <sup>T</sup> *<sup>i</sup>*:: � �*<sup>T</sup>*

*t* 1 *<sup>i</sup>*<sup>2</sup> *t* 2 *<sup>i</sup>*<sup>2</sup> *t* 3 *i*2

*t* 1 *<sup>i</sup>*<sup>3</sup> *t* 2 *<sup>i</sup>*<sup>3</sup> *t* 3 *i*3

*t* 1 *<sup>i</sup>*<sup>4</sup> *t* 2 *<sup>i</sup>*<sup>4</sup> *t* 3 *i*4

0

BBBBB@

¼ T ::*<sup>k</sup>* � �*<sup>T</sup>*

1

*x* and

<sup>A</sup> <sup>¼</sup> *<sup>x</sup>T*<sup>T</sup> :*j*:

¼ T ::*<sup>k</sup>* � �*<sup>x</sup>* and

1

CCCCCA

<sup>¼</sup> *xT*<sup>T</sup> *<sup>i</sup>*::

!

0

BBBBB@

� � <sup>¼</sup> ð Þ *<sup>x</sup>*<sup>1</sup> *<sup>x</sup>*<sup>2</sup>

*t k* <sup>12</sup> *t k* 22

*t k* <sup>13</sup> *t k* 23

*t k* <sup>14</sup> *t k* 24

CCCCCA ¼

0

BBBBB@

row*j*ð Þ¼ T �1*x aj*<sup>1</sup> *aj*<sup>2</sup> *aj*<sup>3</sup>

*a*1*<sup>k</sup> a*2*<sup>k</sup>*

, then <sup>T</sup> �3*x*∈IR<sup>2</sup>�<sup>4</sup> and

col*j*ð Þ¼ T �3*x*

<sup>¼</sup> *<sup>t</sup> k* <sup>11</sup> *t k* <sup>12</sup> *t k* <sup>13</sup> *t k* 14

row*i*ð Þ¼ T �2*x* ð *ai*<sup>1</sup> *ai*<sup>2</sup> *ai*<sup>3</sup> Þ ¼ ð Þ *x*<sup>1</sup> *x*<sup>2</sup> *x*<sup>3</sup> *x*<sup>4</sup>

*a*1*j a*2*j*

row*i*ð Þ¼ T �3*x* ð *ai*<sup>1</sup> *ai*<sup>2</sup> *ai*<sup>3</sup> Þ ¼ ð Þ *x*<sup>1</sup> *x*<sup>2</sup> *x*<sup>3</sup>

a tensor <sup>T</sup> <sup>∈</sup>IR*<sup>m</sup>*�*n*�*<sup>n</sup>* and a vector *<sup>x</sup>*∈IR*<sup>n</sup>*, we have

¼

*t* 1 <sup>1</sup>*<sup>j</sup> t* 2 <sup>1</sup>*<sup>j</sup> t* 3 1*j*

0 @

*t* 1 <sup>2</sup>*<sup>j</sup> t* 2 <sup>2</sup>*<sup>j</sup> t* 3 2*j*

!

*t k* <sup>21</sup> *t k* <sup>22</sup> *t k* <sup>23</sup> *t k*

24 !

!

2.*x*<sup>∈</sup> IR4, then <sup>T</sup> �2*x*<sup>∈</sup> IR2�<sup>3</sup> and

col*k*ð Þ¼ T �2*x*

3.*x*∈ IR3

*i* ¼ 1, … , *n*, then

**9**

for all *u*, *v*∈ IR*<sup>n</sup>*.

where its elements are defined by

$$a\_{ik} = \sum\_{j=1}^{n} t\_{ij}^k v\_j \quad \text{where} \quad i = 1, \dots, m \text{ and } k = 1, \dots, p$$

where *vj* is the *j*-th component of the vector *v*.

The contracted product mode-3 between a tensor $\mathcal{T} \in \mathbb{R}^{m \times n \times p}$ and a vector $v \in \mathbb{R}^{p}$ is the matrix

$$A = \mathcal{T} \overline{\times}_3 v \in \mathbb{R}^{m \times n},$$

where its elements are defined by

$$a_{ij} = \sum_{k=1}^{p} t_{ij}^{k} v_k \quad \text{where} \quad i = 1, \dots, m \text{ and } j = 1, \dots, n,$$

where *vk* is the *k*-th component of the vector *v*.
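The mode-$n$ contracted products above are easy to exercise numerically. The following is a minimal pure-Python sketch; the nested-list layout `t[i][j][k]` for $t_{ij}^{k}$ and the function names are our own conventions, not from the chapter.

```python
# Sketch of the mode-2 and mode-3 contracted products for a tensor stored as
# nested lists with t[i][j][k] = t_ij^k, shape (m, n, p). Layout and names are
# our own convention, not from the chapter.

def mode2(t, v):
    """(T xbar_2 v)_{ik} = sum_j t_ij^k v_j, an m x p matrix."""
    m, n, p = len(t), len(t[0]), len(t[0][0])
    assert len(v) == n
    return [[sum(t[i][j][k] * v[j] for j in range(n)) for k in range(p)]
            for i in range(m)]

def mode3(t, w):
    """(T xbar_3 w)_{ij} = sum_k t_ij^k w_k, an m x n matrix."""
    m, n, p = len(t), len(t[0]), len(t[0][0])
    assert len(w) == p
    return [[sum(t[i][j][k] * w[k] for k in range(p)) for j in range(n)]
            for i in range(m)]

# Toy tensor T in IR^{2x3x2} with entries t[i][j][k] = i + 2j + 3k (0-based).
T = [[[i + 2*j + 3*k for k in range(2)] for j in range(3)] for i in range(2)]
A2 = mode2(T, [1.0, 0.0, 2.0])   # in IR^{2x2}
A3 = mode3(T, [1.0, -1.0])       # in IR^{2x3}
print(A2)  # [[8.0, 17.0], [11.0, 20.0]]
print(A3)  # [[-3.0, -3.0, -3.0], [-3.0, -3.0, -3.0]]
```

Note how each result drops exactly the contracted mode: mode-2 leaves an $m \times p$ matrix, mode-3 an $m \times n$ one.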

A caution must be added when calculating products between matrices and vectors under Definitions 1.3 and 1.4. For example, note that if $A \in \mathbb{R}^{m \times n}$, $u \in \mathbb{R}^{n}$, and $v \in \mathbb{R}^{m}$, then $A \overline{\times}_2 u$ and $A \times_2 u^{T}$ have the same elements, but

$$A \overline{\times}_2 u \neq A \times_2 u^{T},$$

because $A \overline{\times}_2 u \in \mathbb{R}^{m}$ (a column vector) and $A \times_2 u^{T} \in \mathbb{R}^{1 \times m}$ (a row vector). Note that, in relation to the matrix product of elementary algebra, we have

$$Au = A \overline{\times}_2 u \tag{7}$$

$$v^{T} A = A \times_1 v^{T} \neq A \overline{\times}_1 v. \tag{8}$$

In particular, given a tensor $\mathcal{T} \in \mathbb{R}^{n \times m \times m}$ and a vector $v \in \mathbb{R}^{m}$, by Definition 1.4 together with (8), it follows that $\mathcal{T} \overline{\times}_2 v \in \mathbb{R}^{n \times m}$ and

$$(\mathcal{T} \overline{\times}_2 v) \overline{\times}_2 v = (\mathcal{T} \overline{\times}_2 v)v \in \mathbb{R}^{n}.$$

The contracted product mode-*n* satisfies the following property [5]:

**Property 2.** Given a tensor $\mathcal{T}$ of order 3 and vectors $u$ and $v$ of convenient sizes, we have for all $r = 1, 2, 3$ and $s = 2, 3$ that

$$(\mathcal{T} \overline{\times}_r u) \overline{\times}_{s-1} v = (\mathcal{T} \overline{\times}_s v) \overline{\times}_r u \quad \text{for } r < s.$$
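Property 2 can be spot-checked numerically. Below is a small pure-Python sketch of the case $r = 2$, $s = 3$, where both sides reduce to matrix-vector products by (7); the tensor layout `t[i][j][k]` for $t_{ij}^{k}$ and all names are our own assumptions.

```python
# Spot-check of Property 2 for r = 2, s = 3:
#   (T xbar_2 u) xbar_2 v = (T xbar_3 v) xbar_2 u,
# with T stored as nested lists t[i][j][k] = t_ij^k (our own layout).

def mode2(t, v):
    # (T xbar_2 v)_{ik} = sum_j t_ij^k v_j
    return [[sum(ti[j][k] * v[j] for j in range(len(ti)))
             for k in range(len(ti[0]))] for ti in t]

def mode3(t, w):
    # (T xbar_3 w)_{ij} = sum_k t_ij^k w_k
    return [[sum(tij[k] * w[k] for k in range(len(tij))) for tij in ti]
            for ti in t]

def matvec(a, x):
    return [sum(r * s for r, s in zip(row, x)) for row in a]

# T in IR^{2x3x4}, u in IR^3, v in IR^4
T = [[[(i + 1) * (j + 2) + k for k in range(4)] for j in range(3)]
     for i in range(2)]
u = [1.0, 2.0, -1.0]
v = [0.5, 0.0, 1.0, 2.0]

lhs = matvec(mode2(T, u), v)   # (T xbar_2 u) xbar_2 v
rhs = matvec(mode3(T, v), u)   # (T xbar_3 v) xbar_2 u
print(lhs, rhs)  # both [30.0, 44.0]
```

The two contraction orders agree because both compute $\sum_j \sum_k t_{ij}^{k} u_j v_k$.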

For example, consider a tensor $\mathcal{T} \in \mathbb{R}^{2 \times 4 \times 3}$, and denote the $k$-th column and the $q$-th row of a matrix $A$ by $\operatorname{col}_k(A)$ and $\operatorname{row}_q(A)$, respectively. Note that if:

*Bilinear Applications and Tensors DOI: http://dx.doi.org/10.5772/intechopen.90904*

1. $x \in \mathbb{R}^{2}$, then $\mathcal{T} \overline{\times}_1 x \in \mathbb{R}^{4 \times 3}$ and


$$\operatorname{col}_{k}(\mathcal{T} \overline{\times}_1 x) = \begin{pmatrix} a_{1k} \\ a_{2k} \\ a_{3k} \\ a_{4k} \end{pmatrix} = \begin{pmatrix} t_{11}^{k} & t_{21}^{k} \\ t_{12}^{k} & t_{22}^{k} \\ t_{13}^{k} & t_{23}^{k} \\ t_{14}^{k} & t_{24}^{k} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \left(\mathcal{T}^{::k}\right)^{T} x \quad \text{and}$$

$$\operatorname{row}_{j}(\mathcal{T} \overline{\times}_1 x) = \begin{pmatrix} a_{j1} & a_{j2} & a_{j3} \end{pmatrix} = \begin{pmatrix} x_1 & x_2 \end{pmatrix} \begin{pmatrix} t_{1j}^{1} & t_{1j}^{2} & t_{1j}^{3} \\ t_{2j}^{1} & t_{2j}^{2} & t_{2j}^{3} \end{pmatrix} = x^{T}\, \mathcal{T}^{:j:}$$

2. $x \in \mathbb{R}^{4}$, then $\mathcal{T} \overline{\times}_2 x \in \mathbb{R}^{2 \times 3}$ and

$$\operatorname{col}_{k}(\mathcal{T} \overline{\times}_2 x) = \begin{pmatrix} a_{1k} \\ a_{2k} \end{pmatrix} = \begin{pmatrix} t_{11}^{k} & t_{12}^{k} & t_{13}^{k} & t_{14}^{k} \\ t_{21}^{k} & t_{22}^{k} & t_{23}^{k} & t_{24}^{k} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \mathcal{T}^{::k}\, x \quad \text{and}$$

$$\operatorname{row}_{i}(\mathcal{T} \overline{\times}_2 x) = \begin{pmatrix} a_{i1} & a_{i2} & a_{i3} \end{pmatrix} = \begin{pmatrix} x_1 & x_2 & x_3 & x_4 \end{pmatrix} \begin{pmatrix} t_{i1}^{1} & t_{i1}^{2} & t_{i1}^{3} \\ t_{i2}^{1} & t_{i2}^{2} & t_{i2}^{3} \\ t_{i3}^{1} & t_{i3}^{2} & t_{i3}^{3} \\ t_{i4}^{1} & t_{i4}^{2} & t_{i4}^{3} \end{pmatrix} = x^{T}\, \mathcal{T}^{i::}$$

3. $x \in \mathbb{R}^{3}$, then $\mathcal{T} \overline{\times}_3 x \in \mathbb{R}^{2 \times 4}$ and

$$\operatorname{col}_{j}(\mathcal{T} \overline{\times}_3 x) = \begin{pmatrix} a_{1j} \\ a_{2j} \end{pmatrix} = \begin{pmatrix} t_{1j}^{1} & t_{1j}^{2} & t_{1j}^{3} \\ t_{2j}^{1} & t_{2j}^{2} & t_{2j}^{3} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \mathcal{T}^{:j:}\, x \quad \text{and}$$

$$\operatorname{row}_{i}(\mathcal{T} \overline{\times}_3 x) = \begin{pmatrix} a_{i1} & a_{i2} & a_{i3} & a_{i4} \end{pmatrix} = \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix} \begin{pmatrix} t_{i1}^{1} & t_{i2}^{1} & t_{i3}^{1} & t_{i4}^{1} \\ t_{i1}^{2} & t_{i2}^{2} & t_{i3}^{2} & t_{i4}^{2} \\ t_{i1}^{3} & t_{i2}^{3} & t_{i3}^{3} & t_{i4}^{3} \end{pmatrix} = x^{T} \left(\mathcal{T}^{i::}\right)^{T}$$

This example can be easily generalized to arbitrary dimensions. In particular, for a tensor $\mathcal{T} \in \mathbb{R}^{m \times n \times n}$ and a vector $x \in \mathbb{R}^{n}$, we have

$$\operatorname{row}_{i}(\mathcal{T} \overline{\times}_2 x) = x^{T}\, \mathcal{T}^{i::} \tag{9}$$

$$\operatorname{row}_{i}(\mathcal{T} \overline{\times}_3 x) = x^{T} \left(\mathcal{T}^{i::}\right)^{T} \tag{10}$$

**Lemma 1.5.** Let $\mathcal{T} \in \mathbb{R}^{n \times n \times n}$ be a tensor. If $\mathcal{T}^{i::}$ is a symmetric matrix for all $i = 1, \dots, n$, then

$$(\mathcal{T} \overline{\times}_2 u)v = (\mathcal{T} \overline{\times}_2 v)u$$

for all *u*, *v*∈ IR*<sup>n</sup>*.

*Proof.* By Property 2, it follows that $(\mathcal{T} \overline{\times}_2 u)v = (\mathcal{T} \overline{\times}_3 v)u$. By (9), (10), and the symmetry of $\mathcal{T}^{i::}$, we have $\mathcal{T} \overline{\times}_3 v = \mathcal{T} \overline{\times}_2 v$. □
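Lemma 1.5 is also easy to verify on a concrete tensor. In the sketch below (our own layout: `t[i][j][k]` stores $t_{ij}^{k}$, so the slice $\mathcal{T}^{i::}$ is `t[i]` and its symmetry means `t[i][j][k] == t[i][k][j]`), the slices are symmetric by construction and the two products agree.

```python
# Check of Lemma 1.5 on a toy tensor in IR^{3x3x3} whose horizontal slices
# T^{i::} are symmetric: t[i][j][k] = (i+1)*(j+k) + j*k = t[i][k][j].

def mode2(t, v):
    # (T xbar_2 v)_{ik} = sum_j t_ij^k v_j
    return [[sum(ti[j][k] * v[j] for j in range(len(ti)))
             for k in range(len(ti[0]))] for ti in t]

def matvec(a, x):
    return [sum(r * s for r, s in zip(row, x)) for row in a]

n = 3
T = [[[(i + 1) * (j + k) + j * k for k in range(n)] for j in range(n)]
     for i in range(n)]
u = [1.0, -2.0, 3.0]
v = [0.0, 1.0, 2.0]

lhs = matvec(mode2(T, u), v)   # (T xbar_2 u)v
rhs = matvec(mode2(T, v), u)   # (T xbar_2 v)u
print(lhs)  # [42.0, 64.0, 86.0]
assert lhs == rhs              # u and v can be exchanged, as the lemma states
```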

#### **3. Space of bilinear applications**

In this section, we define bilinear applications on finite dimensional vector spaces, in order to relate them to the second derivative of a twice-differentiable application, as well as to a tensor of order 3.

**Definition 1.6.** Let $U$, $V$, $W$ be vector spaces. An application $f : U \times V \to W$ is a bilinear application if:

i. $f(\lambda u_1 + u_2, v) = \lambda f(u_1, v) + f(u_2, v)$ for all $\lambda \in \mathbb{R}$, $u_1, u_2 \in U$, and $v \in V$.

ii. $f(u, \lambda v_1 + v_2) = \lambda f(u, v_1) + f(u, v_2)$ for all $\lambda \in \mathbb{R}$, $u \in U$, and $v_1, v_2 \in V$.

In other words, an application $f : U \times V \to W$ is bilinear if it is linear in each variable when the other is fixed. We denote by $\mathcal{B}(U \times V, W)$ the set of all bilinear applications of $U \times V$ in $W$. In particular, if $U = V$ and $W = \mathbb{R}$ in Definition 1.6, then $f : U \times U \to \mathbb{R}$ is a bilinear form, familiar, for example, from the study of quadratic forms.

A simple example of a bilinear application is the function $f : U \times V \to \mathbb{R}$ defined by

$$f(u,v) = h(u)\mathbf{g}(v),\tag{11}$$

with $h \in U^{*}$ and $g \in V^{*}$, where $U^{*}$ denotes the dual space of $U$. In fact, for all $\lambda \in \mathbb{R}$, $u_1, u_2 \in U$, and $v \in V$, we have

$$f(\lambda u_1 + u_2, v) = h(\lambda u_1 + u_2)\, g(v) = (\lambda h(u_1) + h(u_2))\, g(v) = \lambda f(u_1, v) + f(u_2, v).$$

Similarly, it is easy to see that $f(u, \lambda v_1 + v_2) = \lambda f(u, v_1) + f(u, v_2)$ for all $\lambda \in \mathbb{R}$, $u \in U$, and $v_1, v_2 \in V$.

The next theorem ensures that a bilinear application $f : U \times V \to W$ is well defined when the image of $f$ on the basis elements of $U$ and $V$ is known.

**Theorem 1.7.** Let $U$, $V$, and $W$ be vector spaces; $\{u_1, \dots, u_m\}$ and $\{v_1, \dots, v_n\}$ bases of $U$ and $V$, respectively; and $\{w_{ij} \mid i = 1, \dots, m \text{ and } j = 1, \dots, n\}$ a subset of $W$. Then, there exists a unique bilinear application $f : U \times V \to W$ such that $f(u_i, v_j) = w_{ij}$.

*Proof*. Let $u = \sum_{i=1}^{m} \alpha_i u_i$ and $v = \sum_{j=1}^{n} \beta_j v_j$ be arbitrary elements of $U$ and $V$, respectively. We define an application $f : U \times V \to W$ by

$$f(u,v) = \sum_{i=1}^{m} \sum_{j=1}^{n} \alpha_i \beta_j w_{ij}.$$

It is easy to see that $f$ is a bilinear application and $f(u_i, v_j) = w_{ij}$. Such an application is unique because if $g$ is another bilinear application satisfying $g(u_i, v_j) = w_{ij}$, then

$$g(u,v) = g\left(\sum_{i=1}^{m} \alpha_i u_i, \sum_{j=1}^{n} \beta_j v_j\right) = \sum_{i=1}^{m} \sum_{j=1}^{n} \alpha_i \beta_j\, g\left(u_i, v_j\right) = \tag{12}$$

$$= \sum_{i=1}^{m} \sum_{j=1}^{n} \alpha_i \beta_j w_{ij} = f(u,v). \tag{13}$$

Therefore $g = f$. □
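The construction in the proof can be read as a recipe: with canonical bases, the coordinates $\alpha_i$, $\beta_j$ are just the entries of $u$ and $v$, and $f$ is fully determined by the table $w_{ij}$. A small sketch with $U = \mathbb{R}^2$, $V = \mathbb{R}^3$, $W = \mathbb{R}$ (the example values are ours):

```python
# f(u, v) = sum_i sum_j alpha_i beta_j w_ij, as in the proof of Theorem 1.7,
# with canonical bases so alpha = u and beta = v. The table w_ij is arbitrary.
W_TABLE = [[1.0, 2.0, 0.0],
           [0.0, -1.0, 3.0]]   # w_ij = f(u_i, v_j), i = 1..2, j = 1..3

def f(u, v):
    return sum(u[i] * v[j] * W_TABLE[i][j]
               for i in range(2) for j in range(3))

def e(k, dim):
    """k-th canonical basis vector of IR^dim (0-based)."""
    return [1.0 if t == k else 0.0 for t in range(dim)]

# f reproduces the table on basis vectors ...
assert f(e(0, 2), e(1, 3)) == W_TABLE[0][1]

# ... and is linear in the first slot (the second slot is analogous):
u1, u2, v, lam = [1.0, 2.0], [0.5, -1.0], [1.0, 0.0, 2.0], 3.0
lhs = f([lam * a + b for a, b in zip(u1, u2)], v)
rhs = lam * f(u1, v) + f(u2, v)
assert abs(lhs - rhs) < 1e-12
```

Uniqueness in the theorem corresponds to the fact that `f` above is the only bilinear map consistent with `W_TABLE`.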

*Advances on Tensor Analysis and Their Applications*

The following theorem guarantees an isomorphism between the space of bilinear applications and the space of tensors of order 3.

**Theorem 1.8.** Let $U$, $V$, and $W$ be vector spaces with dimensions $n$, $p$, and $m$, respectively. Then, the space $\mathcal{B}(U \times V, W)$ has dimension $mnp$.

*Proof*. The idea of the proof is to exhibit a basis for the space $\mathcal{B}(U \times V, W)$. For this, let $\{w_1, \dots, w_m\}$, $\{u_1, \dots, u_n\}$, and $\{v_1, \dots, v_p\}$ be bases of $W$, $U$, and $V$, respectively. For each triple $(i, j, k)$, with $i = 1, \dots, m$, $j = 1, \dots, n$, and $k = 1, \dots, p$, we define a bilinear application $f_{ij}^{k} : U \times V \to W$ such that

$$f_{ij}^{k}(u_r, v_s) = \begin{cases} w_i & \text{if } r = j \text{ and } s = k \\ 0 & \text{if } r \neq j \text{ or } s \neq k. \end{cases} \tag{14}$$

Theorem 1.7 ensures the existence of the $f_{ij}^{k}$. We will then show that the set

$$A = \left\{ f_{ij}^{k} \mid i = 1, \dots, m,\; j = 1, \dots, n,\; \text{and } k = 1, \dots, p \right\}$$

is a basis of the space $\mathcal{B}(U \times V, W)$. Let $f \in \mathcal{B}(U \times V, W)$. We note in passing that

$$f(u_r, v_s) = \sum_{i=1}^{m} a_{ir}^{s} w_i \tag{15}$$

for all $r = 1, \dots, n$ and $s = 1, \dots, p$. Consider the bilinear application

$$\mathbf{g} = \sum\_{i=1}^{m} \sum\_{j=1}^{n} \sum\_{k=1}^{p} a\_{ij}^{k} f\_{ij}^{k}.$$

Our goal is to show that $g = f$. In particular, we have

$$g(u_r, v_s) = \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{p} a_{ij}^{k} f_{ij}^{k}(u_r, v_s) = \sum_{i=1}^{m} a_{ir}^{s} w_i = f(u_r, v_s)$$

for all $r = 1, \dots, n$ and $s = 1, \dots, p$. Therefore $g = f$. The set $A$ is linearly independent, because if

$$\sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{p} a_{ij}^{k} f_{ij}^{k} = 0,$$

then

$$0 = \sum_{k=1}^{p} \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij}^{k} f_{ij}^{k}(u_r, v_s) = \sum_{i=1}^{m} a_{ir}^{s} w_i.$$

Since $\{w_1, \dots, w_m\}$ is a basis of $W$, it follows that $a_{ir}^{s} = 0$ for all $i = 1, \dots, m$, $r = 1, \dots, n$, and $s = 1, \dots, p$. □

In particular, if the dimensions of the vector spaces $U$ and $V$ are $m$ and $n$, respectively, then the vector space $\mathcal{B}(U \times V, \mathbb{R})$ has dimension $mn$. Now, since two vector spaces of the same finite dimension are isomorphic [16], there exists an $m \times n$ matrix associated with each $f \in \mathcal{B}(U \times V, \mathbb{R})$. Considering bases $B = \{u_1, \dots, u_m\}$ and $C = \{v_1, \dots, v_n\}$ of $U$ and $V$, respectively, and writing $u = \sum_{i=1}^{m} \alpha_i u_i$ and $v = \sum_{j=1}^{n} \beta_j v_j$, then by setting $f(u_i, v_j) = a_{ij}$ for all $i = 1, \dots, m$ and $j = 1, \dots, n$, we have


$$f(u,v) = \sum_{i=1}^{m} \sum_{j=1}^{n} \alpha_i a_{ij} \beta_j,$$

which in matrix form is $f(u,v) = [u]_B^{T} A\, [v]_C$, where $A = (a_{ij})$ and $[v]_C$ denotes the components of the vector $v$ in the basis $C$. Hence follows the next definition:

**Definition 1.9.** Let $U$ and $V$ be vector spaces of finite dimension with ordered bases $B = \{u_1, \dots, u_m\} \subset U$ and $C = \{v_1, \dots, v_n\} \subset V$. We define, for each $f \in \mathcal{B}(U \times V, \mathbb{R})$, the matrix $A = (a_{ij}) \in \mathbb{R}^{m \times n}$ of $f$ relative to the ordered bases $B$ and $C$, whose elements are given by $a_{ij} = f(u_i, v_j)$ with $i = 1, \dots, m$ and $j = 1, \dots, n$.

Consider now the space $\mathcal{B}(\mathbb{R}^m \times \mathbb{R}^n, \mathbb{R}^p)$ and the canonical bases $\{e_1, \dots, e_m\}$, $\{\overline{e}_1, \dots, \overline{e}_n\}$, $\{\hat{e}_1, \dots, \hat{e}_p\}$ of $\mathbb{R}^m$, $\mathbb{R}^n$, and $\mathbb{R}^p$, respectively. Consider $f \in \mathcal{B}(\mathbb{R}^m \times \mathbb{R}^n, \mathbb{R}^p)$. For all $u \in \mathbb{R}^m$ and $v \in \mathbb{R}^n$, we have

$$f(u,v) = \sum\_{j=1}^{m} \sum\_{k=1}^{n} u\_j v\_k f\left(e\_j, \overline{e}\_k\right).$$

where $u_j$ and $v_k$ are the components of $u$ and $v$ in the canonical bases of $\mathbb{R}^m$ and $\mathbb{R}^n$, respectively. Denote the $i$-th component of $f$ by $f_i$. Note that $f_i \in \mathcal{B}(\mathbb{R}^m \times \mathbb{R}^n, \mathbb{R})$. So for each $i = 1, \dots, p$, we have

$$f\_i(u,v) = \sum\_{j=1}^m \sum\_{k=1}^n u\_j v\_k f\_i\left(e\_j, \overline{e}\_k\right).$$

By Definition 1.9, the matrix of the *fi* in relation to the canonical bases is the matrix

$$A_i = \left(t_{ij}^{k}\right) \in \mathbb{R}^{m \times n},$$

where $t_{ij}^{k} = f_i(e_j, \overline{e}_k)$. So, we can write

$$f\_i(u, v) = u^T A\_i v.$$

In general, we can collect the $p$ matrices $m \times n$ into a tensor $\mathcal{T} \in \mathbb{R}^{p \times m \times n}$; this means that the $p$ matrices can be seen as the horizontal slices of the tensor $\mathcal{T}$. We note in passing that we can write $f(u,v)$ as a product between the tensor $\mathcal{T}$ and the vectors $u$ and $v$, that is,

$$f(u,v) = \begin{pmatrix} u^{T} A_1 v \\ u^{T} A_2 v \\ \vdots \\ u^{T} A_p v \end{pmatrix} = (\mathcal{T} \overline{\times}_2 u)v. \tag{16}$$

Thus, we can generalize Definition 1.9 as follows:

**Definition 1.10.** Let $U$ and $V$ be finite dimensional vector spaces. For fixed bases $B = \{u_1, \dots, u_m\}$ and $C = \{v_1, \dots, v_n\}$ of $U$ and $V$, respectively, we define, for each $f \in \mathcal{B}(U \times V, \mathbb{R}^p)$, the tensor $\mathcal{T} = (t_{ij}^{k}) \in \mathbb{R}^{p \times m \times n}$ of $f$ relative to the ordered bases $B$ and $C$, whose elements are given by $t_{ij}^{k} = f_i(u_j, v_k)$, where $f_i$ is the $i$-th component of $f$, that is, $f_i \in \mathcal{B}(U \times V, \mathbb{R})$, with $i = 1, \dots, p$, $j = 1, \dots, m$, and $k = 1, \dots, n$.
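Definition 1.10 and Eq. (16) can be checked directly: store the $p$ slices $A_i$, compute $f_i(u,v) = u^{T} A_i v$, and compare with the contracted product. A pure-Python sketch, where the shapes and the example values are our own:

```python
# f : IR^3 x IR^2 -> IR^2 given by its horizontal slices A_1, A_2
# (m = 3, n = 2, p = 2): f_i(u, v) = u^T A_i v. With T[i][j][k] = (A_i)_{jk},
# Eq. (16) says f(u, v) = (T xbar_2 u) v.

def quad(A, u, v):
    """u^T A v for a matrix A given as a list of rows."""
    return sum(u[j] * A[j][k] * v[k]
               for j in range(len(u)) for k in range(len(v)))

A1 = [[1.0, 0.0], [2.0, -1.0], [0.0, 3.0]]
A2 = [[0.0, 1.0], [1.0, 1.0], [-2.0, 0.0]]
T = [A1, A2]                       # the slices stacked are the tensor
u, v = [1.0, 0.0, 2.0], [3.0, -1.0]

f_uv = [quad(A, u, v) for A in T]  # componentwise, u^T A_i v

# the same value via (T xbar_2 u)v:
Tu = [[sum(T[i][j][k] * u[j] for j in range(3)) for k in range(2)]
      for i in range(2)]
f_uv2 = [sum(Tu[i][k] * v[k] for k in range(2)) for i in range(2)]
print(f_uv)   # [-3.0, -13.0]
assert f_uv == f_uv2
```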

#### **4. Differentiability**


Let $U$ be an open subset of $\mathbb{R}^m$, let $F : U \subset \mathbb{R}^m \to \mathbb{R}^n$ be a differentiable application throughout $U$, and let $a \in U$. Denote by $\mathcal{L}(\mathbb{R}^m, \mathbb{R}^n)$ the set of all linear applications of $\mathbb{R}^m$ in $\mathbb{R}^n$. When $F' : U \subset \mathbb{R}^m \to \mathcal{L}(\mathbb{R}^m, \mathbb{R}^n)$ is differentiable in $a \in U$, we say that the application $F$ is twice differentiable in $a \in U$, and then the linear transformation $F''(a) \in \mathcal{L}(\mathbb{R}^m, \mathcal{L}(\mathbb{R}^m, \mathbb{R}^n))$ is the second derivative of $F$ in $a \in U$.

The norm of $F''(a)$ is naturally defined. For any $h \in \mathbb{R}^m$, it follows that

$$\left\|F''(a)h\right\| = \sup_{\|k\|=1}\left\{\left\|F''(a)hk\right\| : k \in \mathbb{R}^m\right\},$$

and then

$$\left\|F''(a)\right\| = \sup_{\|h\|=1}\left\|F''(a)h\right\| = \sup_{\|h\|=1}\sup_{\|k\|=1}\left\|F''(a)hk\right\|.$$

An important observation with respect to Theorem 1.8 is that the spaces $\mathcal{L}(\mathbb{R}^m, \mathcal{L}(\mathbb{R}^m, \mathbb{R}^n))$ and $\mathcal{B}(\mathbb{R}^m \times \mathbb{R}^m, \mathbb{R}^n)$ are isomorphic. This means that $F''(a)$ is a bilinear application belonging to the space $\mathcal{B}(\mathbb{R}^m \times \mathbb{R}^m, \mathbb{R}^n)$. Such isomorphism can be found in classical analysis books [17, 18]. On the other hand, by the same theorem, the space of bilinear applications $\mathcal{B}(\mathbb{R}^m \times \mathbb{R}^m, \mathbb{R}^n)$ and the space of tensors $\mathbb{R}^{n \times m \times m}$ are also isomorphic.
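To make the last isomorphism concrete, the following numpy sketch (the array values and the helper name `bilinear` are our own, purely for illustration) evaluates a third-order array $T \in \mathbb{R}^{n \times m \times m}$ as a bilinear application $(h, k) \mapsto T\,h\,k$ by contracting its last two modes:

```python
import numpy as np

# A tensor T in IR^(n x m x m) viewed as a bilinear application in
# B(IR^m x IR^m, IR^n): (h, k) -> T . h . k.
n, m = 3, 2
T = np.arange(n * m * m, dtype=float).reshape(n, m, m)

def bilinear(T, h, k):
    # i-th component: sum over j, l of T[i, j, l] * h[j] * k[l]
    return np.einsum('ijl,j,l->i', T, h, k)

h = np.array([1.0, 2.0])
k = np.array([-1.0, 3.0])

# bilinearity: linear in each argument separately
assert np.allclose(bilinear(T, 2.0 * h, k), 2.0 * bilinear(T, h, k))
assert np.allclose(bilinear(T, h, 3.0 * k), 3.0 * bilinear(T, h, k))
print(bilinear(T, h, k))  # a vector in IR^n
```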

In many practical applications, such as algorithm implementations, the second derivative $F''(a)$ may be implemented as a tensor belonging to the space $\mathbb{R}^{n \times m \times m}$. The question now is how the tensor elements are formed. For this, consider an application $A : \mathbb{R} \to \mathbb{R}^{n \times m}$ and $\alpha \in \mathbb{R}$. Then $A(\alpha)$ is a matrix with $n$ rows and $m$ columns, whose elements are denoted by $a_{ij}(\alpha)$, where the $a_{ij}$ are the component functions of $A$ with $i = 1, \ldots, n$ and $j = 1, \ldots, m$. If $a_{ij} : \mathbb{R} \to \mathbb{R}$ is differentiable in $\alpha$ for all $i = 1, \ldots, n$ and $j = 1, \ldots, m$, the derivative of $A$ in $\alpha$ is the matrix

$$A'(\alpha) = \left(a'_{ij}(\alpha)\right) \in \mathbb{R}^{n \times m}.\tag{17}$$

The derivative of $A(\alpha)$ in (17) is a classical definition; we refer to [19] for details.

To generalize (17), consider now $A : U \subset \mathbb{R}^p \to \mathbb{R}^{n \times m}$, a differentiable application in $u \in U$ with component functions $a_{ij} : \mathbb{R}^p \to \mathbb{R}$, $i = 1, \ldots, n$ and $j = 1, \ldots, m$. When $a_{ij}$ is differentiable in $u$ for all $i = 1, \ldots, n$ and $j = 1, \ldots, m$, we define the derivative of $A$ in $u$ as the tensor

$$A'(u) = \left(\nabla a_{ij}(u)\right) \in \mathbb{R}^{n \times m \times p}.\tag{18}$$

Note that (18) is in fact a generalization of (17). With $i$ and $j$ fixed, $\nabla a_{ij}(u)$ is a tube fiber of the tensor $A'(u)$, whose elements are

$$A'(u)^k_{ij} = \frac{\partial a_{ij}}{\partial x_k}(u) \tag{19}$$


for all $k = 1, \ldots, p$.
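A small numerical sketch of (18) and (19), with a matrix-valued map $A$ of our own choosing (the chapter does not prescribe one); each tube fiber $\nabla a_{ij}(u)$ of the tensor $A'(u)$ is checked against central finite differences:

```python
import numpy as np

# A : IR^2 -> IR^(2x2), so A'(u) is a tensor in IR^(2 x 2 x 2) whose
# tube fibers are the gradients of the component functions a_ij.
def A(u):
    u1, u2 = u
    return np.array([[u1 * u2, u1],
                     [u2 ** 2, u1 + u2]])

def A_prime(u):
    u1, u2 = u
    # A'(u)[i, j, k] = d a_ij / d x_k (u), cf. (19)
    T = np.empty((2, 2, 2))
    T[0, 0] = [u2, u1]       # gradient of u1*u2
    T[0, 1] = [1.0, 0.0]     # gradient of u1
    T[1, 0] = [0.0, 2 * u2]  # gradient of u2^2
    T[1, 1] = [1.0, 1.0]     # gradient of u1+u2
    return T

u = np.array([1.5, -2.0])
eps = 1e-6
for k in range(2):
    e = np.zeros(2); e[k] = eps
    fd = (A(u + e) - A(u - e)) / (2 * eps)   # frontal slice k, numerically
    assert np.allclose(A_prime(u)[:, :, k], fd, atol=1e-6)
```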

For example, consider an application $F : U \subset \mathbb{R}^2 \to \mathbb{R}^3$ twice differentiable in $a \in U$, where $U$ is an open set. The Jacobian matrix of $F$ in $a$ is given by

$$J_F(a) = \begin{pmatrix} \nabla f_1(a)^T \\ \nabla f_2(a)^T \\ \nabla f_3(a)^T \end{pmatrix} = \begin{pmatrix} \frac{\partial f_1}{\partial x_1}(a) & \frac{\partial f_1}{\partial x_2}(a) \\ \frac{\partial f_2}{\partial x_1}(a) & \frac{\partial f_2}{\partial x_2}(a) \\ \frac{\partial f_3}{\partial x_1}(a) & \frac{\partial f_3}{\partial x_2}(a) \end{pmatrix}$$

and its derivative is, by (18), the tensor

$$J_F'(a) = \mathcal{T}_F(a) = \left(\nabla \frac{\partial f_i}{\partial x_j}(a)\right) \in \mathbb{R}^{3 \times 2 \times 2} \tag{20}$$

where, by (19), its elements are described as

$$t^k_{ij} = \frac{\partial^2 f_i}{\partial x_k \partial x_j}(a).$$

With $i$ fixed, it is easy to see that the $i$-th horizontal slice of $\mathcal{T}_F(a)$ is the Hessian matrix $\nabla^2 f_i(a)$ defined by

$$\nabla^2 f_i(a) = \mathcal{T}_F(a)^{i::} = \begin{pmatrix} \frac{\partial^2 f_i}{\partial x_1 \partial x_1}(a) & \frac{\partial^2 f_i}{\partial x_1 \partial x_2}(a) \\ \frac{\partial^2 f_i}{\partial x_2 \partial x_1}(a) & \frac{\partial^2 f_i}{\partial x_2 \partial x_2}(a) \end{pmatrix}. \tag{21}$$

We note in passing that any column of the matrix $\nabla^2 f_i(x)$ is a row fiber of the $i$-th horizontal slice.
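As an illustration of (20) and (21), take the hypothetical example $F(x) = (x_1^2 x_2,\; x_1 x_2^2,\; x_1^2 + x_2^2)$ (our own choice, not from the text); stacking its three Hessians gives the tensor $\mathcal{T}_F(a) \in \mathbb{R}^{3 \times 2 \times 2}$, whose horizontal slices are symmetric matrices:

```python
import numpy as np

# T_F(a) for the toy map F(x) = (x1^2 x2, x1 x2^2, x1^2 + x2^2):
# the i-th horizontal slice is the Hessian of f_i, cf. (20)-(21).
def hessians(a):
    x1, x2 = a
    H1 = np.array([[2 * x2, 2 * x1], [2 * x1, 0.0]])  # Hessian of x1^2 x2
    H2 = np.array([[0.0, 2 * x2], [2 * x2, 2 * x1]])  # Hessian of x1 x2^2
    H3 = np.array([[2.0, 0.0], [0.0, 2.0]])           # Hessian of x1^2 + x2^2
    return np.stack([H1, H2, H3])   # T_F(a) in IR^(3 x 2 x 2)

a = np.array([1.0, 2.0])
T = hessians(a)
assert T.shape == (3, 2, 2)
# each horizontal slice T[i] is symmetric, as a Hessian must be
for i in range(3):
    assert np.allclose(T[i], T[i].T)
```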

As mentioned in the introduction, some numerical methods need to calculate the product between the tensor $\mathcal{T}_F(a)$ and vectors in $\mathbb{R}^2$.

From Definition 1.4, it is possible to calculate the contracted products mode-2 and mode-3. As Hessian matrices are symmetric, given $v \in \mathbb{R}^2$, by Lemma 1.5 together with (10) and (11), we have

$$\mathcal{T}_F(a)\overline{\times}_3 v = \mathcal{T}_F(a)\overline{\times}_2 v = \begin{pmatrix} \operatorname{row}_1(\mathcal{T}_F(a)\overline{\times}_2 v) \\ \operatorname{row}_2(\mathcal{T}_F(a)\overline{\times}_2 v) \\ \operatorname{row}_3(\mathcal{T}_F(a)\overline{\times}_2 v) \end{pmatrix} = \begin{pmatrix} v^T \nabla^2 f_1(a) \\ v^T \nabla^2 f_2(a) \\ v^T \nabla^2 f_3(a) \end{pmatrix} \in \mathbb{R}^{3 \times 2}$$

and consequently it follows that

*Bilinear Applications and Tensors DOI: http://dx.doi.org/10.5772/intechopen.90904*

$$(\mathcal{T}\_F(a)\overline{\times}\_2 v)u = \begin{pmatrix} v^T \nabla^2 f\_1(a) u \\ v^T \nabla^2 f\_2(a) u \\ v^T \nabla^2 f\_3(a) u \end{pmatrix} \in \mathbb{R}^3 \tag{22}$$

for all $u, v \in \mathbb{R}^2$.
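With the same toy example $F(x) = (x_1^2 x_2,\; x_1 x_2^2,\; x_1^2 + x_2^2)$ (our assumption, not data from the chapter), the mode-2 and mode-3 contracted products can be computed with `np.einsum` and compared with the stacked rows $v^T \nabla^2 f_i(a)$, as in the display above and in (22):

```python
import numpy as np

# T_F(a) at a = (1, 2) for the toy F; slices are the three Hessians.
a = np.array([1.0, 2.0])
x1, x2 = a
T = np.stack([np.array([[2 * x2, 2 * x1], [2 * x1, 0.0]]),
              np.array([[0.0, 2 * x2], [2 * x2, 2 * x1]]),
              np.array([[2.0, 0.0], [0.0, 2.0]])])   # IR^(3 x 2 x 2)

v = np.array([1.0, -1.0])
mode2 = np.einsum('ijk,j->ik', T, v)   # T_F(a) bar-x_2 v
mode3 = np.einsum('ijk,k->ij', T, v)   # T_F(a) bar-x_3 v
# equal because the horizontal slices are symmetric
assert np.allclose(mode2, mode3)
# rows are v^T Hess f_i(a)
assert np.allclose(mode2, np.stack([v @ T[i] for i in range(3)]))

u = np.array([0.5, 2.0])
print(mode2 @ u)   # (T_F(a) bar-x_2 v) u in IR^3, cf. (22)
```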


This means that the tensor $\mathcal{T}_F(a)$ defined by (20) is the tensor associated with the bilinear application $F''(a)$, relative to the canonical basis of $\mathbb{R}^2$, by means of Definition 1.10. Without loss of generality, we have

$$\mathcal{T}_F(a)\overline{\times}_3 v = \mathcal{T}_F(a)\overline{\times}_2 v = \mathcal{T}_F(a)v$$

and by means of Lemma 1.5, it follows that

$$(\mathcal{T}_F(a)u)v = (\mathcal{T}_F(a)v)u = \mathcal{T}_F(a)vu.$$
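The identity above can be checked numerically; the tensor below is the $\mathcal{T}_F(a)$ of our earlier toy example (an assumption for illustration, not data from the text):

```python
import numpy as np

# Check of the identity (T_F(a) u) v = (T_F(a) v) u for a tensor with
# symmetric horizontal slices (the Hessians of the toy example at a = (1, 2)).
T = np.stack([np.array([[4.0, 2.0], [2.0, 0.0]]),
              np.array([[0.0, 4.0], [4.0, 2.0]]),
              np.array([[2.0, 0.0], [0.0, 2.0]])])

def times2(T, v):
    # contracted product T bar-x_2 v
    return np.einsum('ijk,j->ik', T, v)

u = np.array([0.5, 2.0])
v = np.array([1.0, -1.0])
assert np.allclose(times2(T, u) @ v, times2(T, v) @ u)
```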

To finish, we consider the following particular case. We know that the $k$-th column of the Jacobian $J_F(x)$ is equal to the product $J_F(x)e_k$, where $e_k$ is the $k$-th canonical vector of $\mathbb{R}^n$. It is worth asking which slice of $\mathcal{T}_F(x)$ the matrix $\mathcal{T}_F(x)e_k$ is. By definition, we have

$$\mathcal{T}_F(x)e_k = \begin{pmatrix} e_k^T \nabla^2 f_1(x) \\ e_k^T \nabla^2 f_2(x) \\ \vdots \\ e_k^T \nabla^2 f_m(x) \end{pmatrix} = \begin{pmatrix} \operatorname{row}_k \nabla^2 f_1(x) \\ \operatorname{row}_k \nabla^2 f_2(x) \\ \vdots \\ \operatorname{row}_k \nabla^2 f_m(x) \end{pmatrix}$$

Given that $\operatorname{row}_k \nabla^2 f_i(x)$ is the $k$-th tube fiber of the $i$-th horizontal slice, we have $\mathcal{T}_F(x)e_k$ as the $k$-th lateral slice or, by the symmetry of the Hessians, the transpose of the $k$-th frontal slice. In short, for a twice differentiable application $F : U \subset \mathbb{R}^n \to \mathbb{R}^m$, we have $\mathcal{T}_F(x) \in \mathbb{R}^{m \times n \times n}$, where the $m$ horizontal slices are the Hessians $\nabla^2 f_i(x)$, $i = 1, \ldots, m$, and the $n$ lateral and frontal slices are obtained by the product $\mathcal{T}_F(x)e_k$, $k = 1, \ldots, n$.
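A quick check of this slice interpretation, again with our toy tensor (slice-orientation conventions vary between texts; with numpy's indexing, and since each Hessian is symmetric, the lateral slice `T[:, k, :]` and the frontal slice `T[:, :, k]` agree entrywise here):

```python
import numpy as np

# T_F(x) e_k stacks the k-th rows of the Hessians, i.e. it selects the
# k-th lateral slice of T_F(x). Toy tensor from the earlier example.
T = np.stack([np.array([[4.0, 2.0], [2.0, 0.0]]),
              np.array([[0.0, 4.0], [4.0, 2.0]]),
              np.array([[2.0, 0.0], [0.0, 2.0]])])

for k in range(2):
    e_k = np.eye(2)[k]
    slice_k = np.einsum('ijk,j->ik', T, e_k)   # T_F(x) e_k
    assert np.allclose(slice_k, T[:, k, :])    # k-th lateral slice
    assert np.allclose(slice_k, T[:, :, k])    # equals frontal slice: symmetry
```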

#### **5. Conclusions**

In this text, we have shown some properties of tensors, in particular those of order 3. In addition, we have approached bilinear applications and shown the isomorphism between the space of bilinear applications and that of tensors of order 3. As mentioned in the introduction, some numerical methods for solving nonlinear systems use tensors, either in the iterative scheme or in the proofs of theorems. For this reason, we have written a section on the differentiability of applications, showing how to calculate the product between the tensor of second derivatives and vectors.



#### **Author details**

Rodrigo Garcia Eustaquio Department of Mathematics, Federal Technological University of Paraná, Curitiba, PR, Brazil

\*Address all correspondence to: eustaquio@utfpr.edu.br; rodrigogeustaquio@gmail.com

© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **References**

[1] Hernández MA, Gutiérrez JM. A family of Chebyshev-Halley type methods in Banach spaces. Bulletin of the Australian Mathematical Society. 1997;**55**:113-130

[2] Ehle GP, Schwetlick H. Discretized Euler-Chebyshev multistep methods. SIAM Journal on Numerical Analysis. 1976;**13**(3):432-447

[3] Eustaquio RG, Ribeiro AA, Dumett MA. A new class of root-finding methods in IR*<sup>n</sup>*: The inexact tensor-free Chebyshev-Halley class. Computational and Applied Mathematics. 2018;**37**: 6654-6675

[4] Traub JF. Iterative Methods for the Solution of Equations. Prentice-Hall Series in Automatic Computation. Englewood Cliffs, NJ: Prentice-Hall; 1964

[5] Bader BW, Kolda TG. Algorithm 862: MATLAB tensor classes for fast algorithm prototyping. ACM Transactions on Mathematical Software. 2006;**32**(4):635-653. DOI: 10.1145/1186785.1186794

[6] Cichocki A, Zdunek R, Phan AH, Amari S. Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multiway Data Analysis and Blind Source Separation. John Wiley & Sons, Ltd; 2009

[7] Kolda TG, Bader BW. Tensor decompositions and applications. SIAM Review. 2009;**51**(3):455-500

[8] De Lathauwer L, De Moor B, VandeWalle J. A multilinear singular value decomposition. SIAM Journal on Matrix Analysis and Applications. 2000; **21**:1253-1278

[9] Smilde A, Bro R, Geladi P. Multi-Way Analysis: Applications in the Chemical Sciences. Wiley; 2004

[10] Chen B, Petropulu A, De Lathauwer L. Blind identification of convolutive MIMO systems with 3 sources and 2 sensors. Applied Signal Processing. 2002;**5**:487-496. Special Issue: Space-time Coding and Its Applications—Part II. Available from: http://publi-etis.ensea.fr/2002/CPD02a

[11] Golub GH, Kolda TG, Nagy JG, Van Loan CF. Workshop on Tensor Decompositions. Palo Alto, California: American Institute of Mathematics; 2004. Available from: http://www.aimath.org/WWN/tensordecomp/

[12] De Lathauwer L, Comon P. Workshop on Tensor Decompositions and Applications. Marseille, France; 2005. Available from: http://www.etis.ensea.fr/wtda/

[13] Bader BW, Kolda TG. Efficient MATLAB Computations with Sparse and Factored Tensors. Albuquerque, NM/Livermore, CA: Sandia National Laboratories; 2006. SAND2006-7592. Available from: http://www.prod.sandia.gov/cgi-bin/techlib/access-control.pl/2006/067592.pdf

[14] Ishteva M. Numerical methods for the best low multilinear rank approximation of higher-order tensors [PhD thesis]. Belgium: Faculty of Engineering, Katholieke Universiteit Leuven; 2009

[15] Kiers HAL. Towards a standardized notation and terminology in multiway analysis. Journal of Chemometrics. 2000;**14**:105-122

[16] Coelho FU, Lourenço ML. Um Curso de Álgebra Linear. São Paulo, Brasil: Editora da Universidade de São Paulo; 2007

[17] Lima EL. Análise no Espaço IR*<sup>n</sup>*. São Paulo: Editora Universidade de Brasília; 1970


[18] Lima EL. Curso de Análise, Volume 2. Rio de Janeiro, Brasil: IMPA; 1981

[19] Golub GH, Van Loan CF. Matrix Computations. The Johns Hopkins University Press; 1996

Section 2

Tensors in the Exploring of the Space-Time
