Joint EigenValue Decomposition for Quantum Information Theory and Processing

*Gilles Burel, Hugo Pillin, Paul Baird, El-Houssaïn Baghious and Roland Gautier*

## **Abstract**

The interest in quantum information processing has given rise to the development of programming languages and tools that facilitate the design and simulation of quantum circuits. However, since the quantum theory is fundamentally based on linear algebra, these high-level languages partially hide the underlying structure of quantum systems. We show that in certain cases of practical interest, keeping a handle on the matrix representation of the quantum systems is a fruitful approach because it allows the use of powerful tools of linear algebra to better understand their behavior and to better implement simulation programs. We especially focus on the Joint EigenValue Decomposition (JEVD). After giving a theoretical description of this method, which aims at finding a common basis of eigenvectors of a set of matrices, we show how it can easily be implemented on a Matrix-oriented programming language, such as Matlab (or, equivalently, Octave). Then, through two examples taken from the quantum information domain (quantum search based on a quantum walk and quantum coding), we show that JEVD is a powerful tool both for elaborating new theoretical developments and for simulation.

**Keywords:** quantum information, quantum coding, quantum walk, quantum search, joint eigenspaces, joint eigenvalues, joint eigenvectors

## **1. Introduction**

The field of quantum information is experiencing a resurgence of interest due to the recent implementation of secure transmission systems [1] based on the teleportation of quantum states in metropolitan networks and in the context of satellite transmissions, further underscored by the development of quantum computers. A new path for intercontinental quantum communication opened up in 2017 when a source onboard a Chinese satellite made it possible to distribute entangled photons between two ground stations, separated by more than 1000 km [2, 3]. Experiments using optical fibers [4] and terrestrial free-space channels [5] have also proved that the use of quantum entanglement can be achieved over large distances.

Quantum programming languages, such as Q# [6] have been developed to facilitate the design and simulation of quantum circuits. The underlying quantum theory is quite complex and often counter-intuitive due to the fact that it relies on linear algebra and tensor products—for instance, the state of a set of three independent qubits (quantum bits) is not described by a 3-dimensional vector, as would be the case for classical bits, but by a 2<sup>3</sup> -dimensional vector which lives in a Hilbert space constructed by tensor products of lower-dimensional spaces. Therefore, these programming languages are helpful for people who do not need to bother with the underlying theory.

However, since the quantum theory is fundamentally based on linear algebra, there are cases of practical interest for the researcher in which keeping a handle on the matrix representation of the quantum systems is a fruitful approach because it allows the use of powerful tools of linear algebra to better understand their behavior and to better implement simulation programs.

In this chapter, our objective is to illustrate how the concept of Joint EigenValue Decomposition (JEVD) can provide interesting results in the domain of quantum information. The chapter is organized as follows. In Section 2, we give some mathematical background and in Section 3, we provide basic elements to understand quantum information. Then, in Section 4, we show an example of the application of JEVD to quantum coding, more precisely we propose an algorithm, based on JEVD, to identify a quantum encoder matrix from a collection of given Pauli errors. Finally, in Section 5, we show that JEVD is a powerful tool for the analysis of a quantum walk search. More precisely, we prove that, while the quantum walk operates in a huge state space, there exists a small subspace that captures all the essential elements of the quantum walk, and this subspace can be determined thanks to JEVD.

## **2. Mathematical background**

#### **2.1 Matrices and notations**

We note $U^T$ the transpose of a matrix $U$ and $U^*$ the transpose conjugate of $U$.

$H$ is the normalized Hadamard $2 \times 2$ matrix and $H_N$ the $N \times N$ Hadamard matrix obtained by the Kronecker product (defined in the next subsection):

$$H = \frac{1}{\sqrt{2}} \begin{pmatrix} \mathbf{1} & \mathbf{1} \\ \mathbf{1} & -\mathbf{1} \end{pmatrix} \qquad \text{and} \qquad H\_N = H^{\otimes n} \qquad (N = \mathbf{2}^n) \tag{1}$$

$I_N$ is the $N \times N$ identity matrix (which will sometimes be noted $I$ when its dimension is implicit).

In the domain of quantum information processing, we mainly deal with unitary matrices. A square matrix $U$ is unitary [7] if $U^*U = UU^* = I$. The columns of a unitary matrix are orthonormal and its eigenvalues are of norm 1. If the unitary matrix is real, its complex eigenvalues come in conjugate pairs.

We call "shuffle matrix" the permutation matrix $P_{a,b}$ which represents the permutation obtained when one writes elements row by row in an $a \times b$ matrix and reads them column by column. For instance, set $a = 2$ and $b = 3$. If one writes the elements 1, 2, 3, 4, 5, 6 row by row in a $2 \times 3$ matrix and reads them column by column, the order becomes 1, 4, 2, 5, 3, 6. The shuffle matrix is then the permutation matrix such that $(1\;4\;2\;5\;3\;6) = (1\;2\;3\;4\;5\;6)\,P_{2,3}$. The inverse of $P_{a,b}$ is $P_{b,a} = (P_{a,b})^T$.
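The construction can be written in a few lines. The sketch below (in NumPy for concreteness; the Matlab/Octave version is analogous, and the helper name is ours) builds $P_{a,b}$ from the row-write/column-read definition and reproduces the example above:

```python
import numpy as np

def shuffle_matrix(a, b):
    """Shuffle matrix P_{a,b}: write a*b elements row by row into an a-by-b
    matrix, read them back column by column (row-vector convention:
    reordered = original @ P)."""
    order = np.arange(a * b).reshape(a, b).flatten(order="F")
    P = np.zeros((a * b, a * b), dtype=int)
    P[order, np.arange(a * b)] = 1
    return P

P23 = shuffle_matrix(2, 3)
print(np.arange(1, 7) @ P23)                        # -> [1 4 2 5 3 6]
print(np.array_equal(shuffle_matrix(3, 2), P23.T))  # -> True (inverse = transpose)
```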

*Joint EigenValue Decomposition for Quantum Information Theory and Processing DOI: http://dx.doi.org/10.5772/intechopen.102899*

$G_n$ is the $n \times n$ Grover diffusion matrix defined by [8]:

$$G\_n = -I\_n + 2\theta\_n \theta\_n^T \tag{2}$$

where $\theta_n$ is the $n \times 1$ vector defined by $\theta_n = [1\;1\;\cdots\;1]^T/\sqrt{n}$. It is easy to see that $G_n\theta_n = \theta_n$. Therefore, $\theta_n$ is an eigenvector of $G_n$ with eigenvalue $+1$. We can also see that for any vector $v$ orthogonal to $\theta_n$ we have $G_n v = -v$. It follows that $G_n$ has two eigenvalues, $-1$ and $+1$, and the dimensions of the associated eigenspaces are $n-1$ and $1$, respectively.
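This eigenstructure is easy to check numerically; a minimal NumPy sketch (helper name ours), here for $n = 4$:

```python
import numpy as np

def grover_diffusion(n):
    """G_n = -I_n + 2 theta_n theta_n^T, Eq. (2)."""
    theta = np.ones((n, 1)) / np.sqrt(n)
    return -np.eye(n) + 2 * theta @ theta.T

G = grover_diffusion(4)
eigvals = np.sort(np.linalg.eigvalsh(G))
print(eigvals)                           # -> [-1. -1. -1.  1.]
theta = np.ones(4) / 2.0                 # theta_4 = [1 1 1 1]^T / 2
print(np.allclose(G @ theta, theta))     # -> True: eigenvector for +1
```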

### **2.2 Kronecker product**

The Kronecker product, denoted by $\otimes$, is a bilinear operation on two matrices. If $A$ is a $k \times l$ matrix and $B$ is an $m \times n$ matrix, then the Kronecker product is the $km \times ln$ block matrix $C$ below:

$$C = A \otimes B = \begin{pmatrix} a\_{11}B & \cdots & a\_{1l}B \\ \vdots & \ddots & \vdots \\ a\_{k1}B & \cdots & a\_{kl}B \end{pmatrix} \tag{3}$$

Assuming the sizes are such that one can form the matrix products *AC* and *BD*, an interesting property, known as the mixed-product property, is:

$$(A \otimes B)(C \otimes D) = (AC) \otimes (BD) \tag{4}$$

The Kronecker product is associative, but not commutative. However, there exist permutation matrices (the shuffle matrices defined in the previous subsection) such that, if $A$ is an $a \times a$ square matrix and $B$ a $b \times b$ square matrix, then [9]:

$$(A \otimes B)P\_{a,b} = P\_{a,b}(B \otimes A) \tag{5}$$
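Both properties can be verified numerically on random matrices. In the check below (NumPy; the `shuffle_matrix` helper is ours and implements the row-write/column-read definition of subsection 2.1), the commutation relation holds in the form $(A \otimes B)P_{a,b} = P_{a,b}(B \otimes A)$:

```python
import numpy as np

def shuffle_matrix(a, b):
    # row-write / column-read definition of subsection 2.1
    order = np.arange(a * b).reshape(a, b).flatten(order="F")
    P = np.zeros((a * b, a * b), dtype=int)
    P[order, np.arange(a * b)] = 1
    return P

rng = np.random.default_rng(0)
a, b = 2, 3
A, C = rng.normal(size=(a, a)), rng.normal(size=(a, a))
B, D = rng.normal(size=(b, b)), rng.normal(size=(b, b))

# mixed-product property, Eq. (4)
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))

# shuffle-matrix relation between A (x) B and B (x) A
P = shuffle_matrix(a, b)
assert np.allclose(np.kron(A, B) @ P, P @ np.kron(B, A))
print("both identities hold")
```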

### **2.3 Singular value decomposition, image, and kernel**

The Singular Value Decomposition (SVD) of an *m* � *n* matrix *A* is [7]:

$$A = USV^\* \tag{6}$$

where $U$ and $V$ are unitary matrices, and $S$ is diagonal. The diagonal of $S$ contains the singular values, which are real nonnegative numbers, ranked in decreasing order. The sizes of the matrices are $U$ ($m \times m$), $S$ ($m \times n$) and $V$ ($n \times n$). The SVD is a very useful linear algebra tool because it reveals a great deal about the structure of a matrix.

The image and the kernel of *A* are defined by:

$$\operatorname{Im}(A) = \{ \mathbf{y} \in \mathbb{C}^m : \mathbf{y} = A\mathbf{x} \text{ for some } \mathbf{x} \in \mathbb{C}^n \}\tag{7}$$

$$\ker(A) = \{ \mathbf{x} \in \mathbb{C}^n : A\mathbf{x} = \mathbf{0} \}\tag{8}$$

When used in an algorithm, the notation $\operatorname{null}(A)$ will denote a procedure that computes a matrix whose columns are an orthonormal basis of the kernel of $A$.

The complement of a subspace A within a vector space H is defined by:

$$\mathcal{A}^c = \{ y \in \mathcal{H} : x^\*y = 0 \text{ for all } x \in \mathcal{A} \}\tag{9}$$

In an algorithm, if the columns of $A$ are an orthonormal basis of $\mathcal{A}$, then the columns of $B = \operatorname{null}(A^*)$ provide an orthonormal basis of $\mathcal{A}^c$.

The rank of $A$ is its number of nonzero singular values. When programmed on a computer, the determination of the rank must take into account finite-precision arithmetic, which means that "zero" is replaced by "extremely small" (less than a given tolerance value). Let us note $r = \operatorname{rank}(A)$. We have

$$\dim\left(Im(A)\right) = r \tag{10}$$

$$\dim\left(\ker(A)\right) = n - r \tag{11}$$

An orthonormal basis of $\ker(A)$ is obtained by taking the last $n - r$ columns of the matrix $V$.
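These relations translate directly into code. A small NumPy sketch (helper name ours) of rank determination with an explicit tolerance and kernel extraction from the last columns of $V$:

```python
import numpy as np

def rank_and_kernel(A, tol=1e-10):
    """Rank and orthonormal kernel basis of A read off from the SVD A = USV*:
    r = number of singular values above tol; kernel = last n - r columns of V."""
    U, s, Vh = np.linalg.svd(A)
    r = int(np.sum(s > tol))
    kernel = Vh.conj().T[:, r:]          # columns of V spanning ker(A)
    return r, kernel

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])          # rank 1, so dim ker = 3 - 1 = 2
r, K = rank_and_kernel(A)
print(r)                                 # -> 1
print(np.allclose(A @ K, 0))             # -> True
```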

#### **2.4 Joint eigenspaces and joint eigenvalue decomposition (JEVD)**

The eigenvalue decomposition of a unitary matrix *A* is:

$$A = VDV^\* \tag{12}$$

where *D* is a diagonal matrix, the diagonal of which contains the eigenvalues, and *V* is a unitary matrix whose columns are the eigenvectors.

Let us note $E_\lambda^A$ the eigenspace of an operator $A$ associated with an eigenvalue $\lambda$. The joint eigenspace $E_{\lambda,\mu}^{A,B}$ is:

$$E^{A,B}\_{\lambda,\mu} = E^A\_\lambda \cap E^B\_\mu \tag{13}$$

A property of great interest in quantum information processing is that within $E_{\lambda,\mu}^{A,B}$ (and even within any union of joint eigenspaces) the operators $A$ and $B$ commute.

On a computer, the joint eigenspace may be determined through the complement, because:

$$E^A\_{\lambda} \cap E^B\_{\mu} = \left( \left( E^A\_{\lambda} \right)^c \cup \left( E^B\_{\mu} \right)^c \right)^c \tag{14}$$

Using matrix-oriented programming languages, such as Matlab or Octave, this requires only a few lines. Let us note $A_\lambda$ and $B_\mu$ matrices whose columns are orthonormal bases of $E_\lambda^A$ and $E_\mu^B$, and $[\,\cdot\;\cdot\,]$ the horizontal concatenation of matrices. The following computation procedure provides a matrix $C$ whose columns are an orthonormal basis of $E_{\lambda,\mu}^{A,B}$:

$$\mathbf{C} = \operatorname{null}\left(\left[\,\operatorname{null}(\mathbf{A}\_{\lambda}^\*) \quad \operatorname{null}\left(\mathbf{B}\_{\mu}^\*\right)\,\right]^\*\right) \tag{15}$$

However, this procedure is not efficient in terms of complexity, and in the next sections we will propose faster computational procedures, adapted to each context.
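The procedure of Eq. (15) can be sketched in a few lines of NumPy (function names are ours; `null_basis` plays the role of the $\operatorname{null}$ procedure of subsection 2.3, and the complements are taken via the conjugate transpose as in that subsection). As a toy check, for $A = Z \otimes I \otimes I$ and $B = I \otimes Z \otimes I$, each $+1$ eigenspace has dimension 4 and their intersection has dimension 2:

```python
import numpy as np

def null_basis(M, tol=1e-10):
    # the "null" procedure: orthonormal basis of ker(M) from the SVD
    _, s, Vh = np.linalg.svd(M)
    return Vh.conj().T[:, int(np.sum(s > tol)):]

def eigenspace(A, lam):
    return null_basis(A - lam * np.eye(A.shape[0]))

def joint_eigenspace(Al, Bm):
    # complement of the union of the complements, Eqs. (14)-(15):
    # null([null(Al*) null(Bm*)]*)
    N = np.hstack([null_basis(Al.conj().T), null_basis(Bm.conj().T)])
    return null_basis(N.conj().T)

Z = np.diag([1.0, -1.0]); I = np.eye(2)
A = np.kron(np.kron(Z, I), I)            # 8x8, eigenvalues +/-1
B = np.kron(np.kron(I, Z), I)
C = joint_eigenspace(eigenspace(A, 1.0), eigenspace(B, 1.0))
print(C.shape[1])                        # joint eigenspace dimension -> 2
```

Note that the lower bound of Eq. (18) gives $4 + 4 - 8 = 0$ here, so the actual dimension 2 is strictly larger than the bound.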

A lower bound on the dimension of a joint eigenspace can be obtained as follows. Let us note *n* the dimension of the full space. We have, obviously:


$$\dim\left(E^A\_\lambda + E^B\_\mu\right) \le n \tag{16}$$

and we know that:

$$\dim\left(E^A\_\lambda + E^B\_\mu\right) = \dim E^A\_\lambda + \dim E^B\_\mu - \dim\left(E^A\_\lambda \cap E^B\_\mu\right) \tag{17}$$

Combining both equations, we obtain:

$$\dim E^{A,B}\_{\lambda,\mu} \ge \dim E^A\_\lambda + \dim E^B\_\mu - n \tag{18}$$

## **3. Quantum information principles**

A quantum system is described by a state vector $|\psi\rangle \in \mathbb{C}^N$, where $N$ is the dimension of the system. Since in the quantum formalism the states $|\psi\rangle$ and $\gamma|\psi\rangle$ are equivalent for any nonzero complex number $\gamma$, the state is usually represented by a normed vector and the global phase is considered irrelevant.

As long as it remains isolated, the evolution of a quantum system is driven by the Schrödinger equation. The latter is a first-order differential equation operating on the quantum state. Its integration shows that the quantum states at times $t_1$ and $t_2$ are linked by a unitary matrix $U$ such that $|\psi_2\rangle = U|\psi_1\rangle$. The norm is preserved because $U$ is unitary.

The second kind of evolution, called "measurement," may occur if the system interacts with its environment. A measurement consists of the projection of the state onto a subspace of $\mathbb{C}^N$. When the measurement is controlled, it consists in defining *a priori* a decomposition of the state space into a direct sum of orthogonal subspaces $\oplus_i \mathcal{H}_i$. The measurement randomly selects one subspace. The result of the measurement is an identifier of the selected subspace (for instance, its index $i$). After measurement, the state is projected onto $\mathcal{H}_i$. If $P_i$ is the projection matrix onto $\mathcal{H}_i$, then the state becomes $P_i|\psi\rangle$ (which is then renormalized because the projection does not preserve the norm). The probability of $\mathcal{H}_i$ being selected is the square norm of $P_i|\psi\rangle$.
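A minimal NumPy sketch of such a controlled measurement (the decomposition below, measuring the second qubit of a random 2-qubit state, is one illustrative choice; variable names and the seed are ours):

```python
import numpy as np

rng = np.random.default_rng(1)           # fixed seed for reproducibility

# a random normalized 2-qubit state in the basis |00>, |01>, |10>, |11>
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# measure the second qubit: H_0 = span{|00>,|10>}, H_1 = span{|01>,|11>}
P0 = np.diag([1.0, 0.0, 1.0, 0.0])
P1 = np.diag([0.0, 1.0, 0.0, 1.0])

# outcome probabilities are the squared norms of the projected states
p = np.array([np.linalg.norm(P0 @ psi) ** 2, np.linalg.norm(P1 @ psi) ** 2])
print(np.isclose(p.sum(), 1.0))          # -> True

outcome = rng.choice(2, p=p / p.sum())   # random selection of a subspace
post = (P0 if outcome == 0 else P1) @ psi
post /= np.linalg.norm(post)             # renormalize: projection shrinks the norm
```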

It is worth noting that a measurement may destroy a part of quantum information (because usually, a projection is an irreversible process), while the unitary evolution is reversible, and as such, preserves quantum information. Consequently, measurements must be used with extreme caution—how to design the system and the measurement device to measure only what is strictly required and not more is one of the difficult problems encountered in quantum information processing.

Quantum systems of special interest for quantum information processing are qubits (quantum bits) and qubit registers. A qubit is a 2D quantum system whose state is a normed vector of $\mathbb{C}^2$. To highlight links with classical digital computation, it is convenient to note $|0\rangle$ and $|1\rangle$ the vectors of an orthonormal basis of $\mathbb{C}^2$. Physically, any 2D quantum system can carry a quantum bit. For instance, the spin of an electron is a 2D quantum system, and the spins up and down can be associated with the basis states $|0\rangle$ and $|1\rangle$. A general qubit has the expression:

$$|\psi\rangle = \alpha\_0|0\rangle + \alpha\_1|1\rangle\tag{19}$$

where $\alpha_0$ and $\alpha_1$ are complex numbers subject to $|\alpha_0|^2 + |\alpha_1|^2 = 1$.

A qubit register is a $2^n$-D quantum system which, for convenience, is usually described in a standard orthonormal basis noted $\{|0{\ldots}00\rangle, |0{\ldots}01\rangle, |0{\ldots}10\rangle, \ldots, |1{\ldots}11\rangle\}$; by analogy with classical digital processing, $n$ is the number of qubits. For instance, for $n = 2$ the basis is $\{|00\rangle, |01\rangle, |10\rangle, |11\rangle\}$, where $|ab\rangle = |a\rangle \otimes |b\rangle$, and the quantum state of the register is:

$$|\psi\rangle = \sum\_{(a,b)\in\{0,1\}^2} \gamma\_{ab} |ab\rangle\tag{20}$$

Note that, contrary to classical digital registers, the qubits are usually not separable, hence the register must be considered as a whole; we say that the qubits are entangled. However, in the special case where the coefficients $\gamma_{ab}$ can be decomposed in the form $\gamma_{ab} = \alpha_a \beta_b$, the state can be written as a tensor product of the states of two qubits, which can be considered separately. Then, we have:

$$|\psi\rangle = (\alpha\_0|0\rangle + \alpha\_1|1\rangle) \otimes (\beta\_0|0\rangle + \beta\_1|1\rangle) \tag{21}$$
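The separability condition $\gamma_{ab} = \alpha_a\beta_b$ says exactly that the $2 \times 2$ coefficient matrix $[\gamma_{ab}]$ has rank 1, which can be tested numerically via its second singular value (a sketch; the helper name and test states are ours):

```python
import numpy as np

def is_separable(gamma, tol=1e-10):
    """A 2-qubit state sum_ab gamma_ab |ab> is a tensor product iff the 2x2
    coefficient matrix [gamma_ab] has rank 1 (i.e. gamma_ab = alpha_a beta_b)."""
    s = np.linalg.svd(np.asarray(gamma).reshape(2, 2), compute_uv=False)
    return bool(s[1] < tol)              # second singular value ~ 0 <=> rank 1

alpha = np.array([0.6, 0.8])
beta = np.array([1.0, 1.0]) / np.sqrt(2)
product_state = np.kron(alpha, beta)                       # separable by construction
bell_state = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # entangled

print(is_separable(product_state))       # -> True
print(is_separable(bell_state))          # -> False
```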

## **4. Application of JEVD to quantum coding**

#### **4.1 Principle of quantum coding**

The objective of quantum coding is to protect quantum information [10]. In the classical domain, the information can be protected using redundancy—for instance, if we want to transmit bit 0 on a noisy communication channel, we can instead transmit 000 (and, similarly, transmit 111 instead of 1). On the receiver side, if one error has occurred on the channel, for instance, if the second bit is false, we receive 010 instead of 000, from which we can still guess that the most probable hypothesis is that the transmitted word was 000. Of course, if there were two errors the transmitted word could have been 111, but it is assumed that the probability of error is low, hence two errors are less likely than one error. More elaborate channel codes have been proposed, but fundamentally they are all based on the idea of adding redundancy and assuming that the probability of channel error is low.

In the quantum domain, it is impossible to use redundancy because it is impossible to copy a quantum state (this is due to the "no-cloning theorem" [11]). However, we can use entanglement to produce the quantum equivalent of classical redundancy. The principle of quantum coding is shown in **Figure 1**. Assume we want to protect the quantum state $|\psi\rangle$ of a $k$-qubit register. We add $r$ ancillary qubits initialized to $|0\rangle$ to form an $n$-qubit register ($n = k + r$). The encoder is represented by a unitary $2^n \times 2^n$

**Figure 1.** *Principle of quantum coding.*


**Figure 2.** *CNOT quantum gate.*

matrix $U$. Then, errors may occur on the encoded state: they are represented by a unitary matrix $E$. The decoder is represented by another unitary matrix $U^*$, which is the transpose conjugate of the encoding matrix. Finally, we measure the last $r$ qubits of the decoded state, and, depending on the result of the measurement, we apply the appropriate restoration matrix $U_c$ (which is a unitary matrix of size $2^k \times 2^k$) to the $k$-qubit register composed of the first $k$ qubits of the decoded state.

As an illustration, let us consider $n = 2$, $k = 1$ and the very simple quantum encoder shown in **Figure 2**. It is a basic quantum circuit known as the CNOT quantum gate, and it is represented by the unitary matrix below:

$$U = \begin{pmatrix} \mathbf{1} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{1} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{1} \\ \mathbf{0} & \mathbf{0} & \mathbf{1} & \mathbf{0} \end{pmatrix} \tag{22}$$

A quantum error on a qubit is described by a $2 \times 2$ unitary matrix. It is convenient to decompose the error as a linear sum of the identity and the Pauli matrices below [12]:

$$Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \quad X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \quad Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \tag{23}$$

Let us consider that an error may appear on the first encoded qubit and that this error, if present, is represented by the unitary Pauli matrix *X*. Then, the error matrix which acts on the encoded state is:

$$E = X \otimes I = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix} \tag{24}$$

It is easy to check that:

$$F = U^\* E U = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix} = X \otimes X \tag{25}$$

The state at the input of the encoder is $[\alpha_0 \; \alpha_1]^T \otimes [1 \; 0]^T = [\alpha_0 \; 0 \; \alpha_1 \; 0]^T$. The state at the output of the decoder is, therefore, $[0 \; \alpha_1 \; 0 \; \alpha_0]^T$.

**Figure 3.** *Steane encoder.*

Measuring the second qubit at the output of the decoder consists in decomposing the state space into a direct sum $\mathcal{H}_0 \oplus \mathcal{H}_1$ of two subspaces spanned by $\{|00\rangle, |10\rangle\}$ and $\{|01\rangle, |11\rangle\}$. The result of the measurement will be either 0 or 1 (the index of the selected subspace); by analogy with classical decoding, this result will be called the "syndrome." The projections on these subspaces are $[0 \; 0]^T$ and $[\alpha_1 \; \alpha_0]^T$. The probability of obtaining syndrome 1 is therefore 1.

The measurement then projects the state onto $\mathcal{H}_1$. Note that in this particular case, the information is preserved by the projection. Then, applying the operator $U_c = X$ to the projected state restores the initial state.

Similarly, if there is no error, we can see that $F$ is the identity matrix, and the projections on the subspaces are $[\alpha_0 \; \alpha_1]^T$ and $[0 \; 0]^T$. In that case, the syndrome is 0 and the state is projected onto $\mathcal{H}_0$. Correction is done by applying the operator $I$ to the projected state, which is equivalent to doing nothing.
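The whole worked example can be replayed numerically. The NumPy sketch below (with $\alpha_0 = 0.6$, $\alpha_1 = 0.8$ as arbitrary test amplitudes) verifies Eq. (25), the certainty of syndrome 1, and the restoration by $U_c = X$:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

U = np.array([[1, 0, 0, 0],              # CNOT encoder, Eq. (22)
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

E = np.kron(X, I)                        # bit flip on the first encoded qubit
F = U.conj().T @ E @ U
print(np.allclose(F, np.kron(X, X)))     # -> True   (Eq. (25))

a0, a1 = 0.6, 0.8                        # test amplitudes, |a0|^2 + |a1|^2 = 1
encoder_input = np.kron([a0, a1], [1.0, 0.0])      # [a0, 0, a1, 0]
decoded = F @ encoder_input                        # [0, a1, 0, a0]

p_syndrome1 = decoded[1] ** 2 + decoded[3] ** 2    # weight in span{|01>,|11>}
print(np.isclose(p_syndrome1, 1.0))      # -> True

restored = X @ np.array([decoded[1], decoded[3]])  # apply Uc = X
print(np.allclose(restored, [a0, a1]))   # -> True
```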

The very simple code used above, as an illustration, cannot correct more complex errors (for instance, an error *Z* on the first qubit). However, there exist efficient quantum codes, such as the Steane code [13], and the Shor code [14]. A remarkable result of quantum coding theory is that a linear combination of correctable errors is correctable [15].

**Figure 3** shows the Steane encoder, which is an ($n = 7$, $k = 1$, $t = 1$) quantum encoder. This means that it encodes $k = 1$ qubit on $n = 7$ qubits and is able to correct any error occurring on $t = 1$ encoded qubit. It is built with Hadamard (Eq. (1)) and CNOT (Eq. (22)) quantum gates. From this circuit description, it is possible to obtain the coding matrix $U$.

#### **4.2 Determination of encoder matrix using JEVD**

The problem we address can be stated as follows (see **Figure 1** for the notations): given a list of $n$ independent Pauli errors $E_i$ with corresponding diagonal outer errors $F_i$, determine the unitary operator $U$ (the quantum encoder) such that:

$$U^\* E\_i U = F\_i \,\,\forall i \in \{1, \ldots, n\} \tag{26}$$

This equation shows that the columns of $U$ are eigenvectors of each $E_i$. Specification of the code by a small set of Pauli errors is very convenient, and the interest of



#### **Table 1.**

*Collection of Pauli errors.*

automatic determination of matrix *U* is to allow further simulations of the behavior of the quantum code in various configurations.

To illustrate and validate the approach developed below, let us consider the collection of $n = 7$ Pauli errors shown in **Table 1**. Here, to be able to check the results, this collection has been chosen to correspond to the Steane encoder (**Figure 3**), while in a standard application of the method it would be given *a priori*. The interest is that here we can compute the encoder matrix from the circuit, which will allow us to check that our method produces the correct encoder matrix.

We use $n$ independent equations in which each $F_i$ is a tensor product of $I$ and $Z$ only (including only one $Z$). Therefore, the matrices $F_i$ are diagonal, and their diagonal elements are $+1$ and $-1$ in equal numbers.

**Figure 4** shows the diagonals of the matrices $F_i$ (each row corresponding to one diagonal). Values $-1$ and $+1$ are represented, respectively, by black and white dots.

Since the matrix $U$ does not depend on $i$ in Eq. (26), its columns are joint eigenvectors of the $E_i$. For instance, in the example above, the 20th column of $U$ is a joint eigenvector of $E_1, E_2, \ldots, E_7$ associated to the eigenvalues $+1, +1, -1, +1, +1, -1, -1$ (see **Figure 4**). In the general case, the set of $n$ eigenvalues corresponding to column $c$ of $U$ is easily obtained by taking the binary representation of $c - 1$ with the mapping $0 \to +1$ and $1 \to -1$.

Now, let us consider the determination of column $c$ of $U$. We know that it is a vector spanning a joint eigenspace of the $E_i$ corresponding to a given set of eigenvalues $\{\lambda_i\}$, $i = 1, \ldots, n$. For each $E_i$, let $A_i$ denote the $2^n \times 2^{n-1}$ matrix whose orthonormal columns span the eigenspace associated to $\lambda_i$, and $B_i$ the $2^n \times 2^{n-1}$ matrix whose orthonormal columns span the kernel of $A_i^*$ (which corresponds to the eigenspace associated to $-\lambda_i$).

Let $\mathcal{Y}_k$ denote the joint eigenspace corresponding to the eigenvalues $\lambda_j$, $j = 1, \ldots, k$, with $k \in \{1, \ldots, n\}$. We propose Algorithm 1 to efficiently compute the column of $U$. It computes a series of matrices $Y_k$ whose columns are an orthonormal basis of $\mathcal{Y}_k$. Obviously, the searched column of $U$ is $Y_n$. For the moment, let us consider that $K(c) = 1$ (the optimal value will be discussed later).

**Figure 4.** *Diagonals of matrices Fi.*

```
if K(c) = 1 then
    Y_1 = A_1
end
for k = K(c) + 1 to n do
    C_k = B_k^* Y_{k-1}
    Z_k = null(C_k)
    Y_k = Y_{k-1} Z_k
end
```

**Algorithm 1:** Algorithm for determination of a joint eigenspace.

The sizes of the matrices decrease with $k$: $C_k$ is $2^{n-1} \times 2^{n-k+1}$, $Z_k$ is $2^{n-k+1} \times 2^{n-k}$, and $Y_k$ is $2^n \times 2^{n-k}$. The intuitive idea behind the algorithm is that each step intersects the current joint eigenspace with the eigenspace of the next operator, as formalized by the two properties proved below.
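Algorithm 1 can be transcribed compactly. The NumPy sketch below (function names ours) implements the default case $K(c) = 1$ and checks it on a two-qubit toy example with the commuting errors $E_1 = Z \otimes I$ and $E_2 = I \otimes Z$:

```python
import numpy as np

def null_basis(M, tol=1e-10):
    # orthonormal basis of ker(M): columns of V beyond rank(M)
    _, s, Vh = np.linalg.svd(M)
    return Vh.conj().T[:, int(np.sum(s > tol)):]

def joint_eigenvector(E_list, signs):
    """Algorithm 1 with the default K(c) = 1: Y_1 = A_1, then for each further
    operator compute C_k = B_k^* Y_{k-1}, Z_k = null(C_k), Y_k = Y_{k-1} Z_k."""
    N = E_list[0].shape[0]
    Y = null_basis(E_list[0] - signs[0] * np.eye(N))   # Y_1 = A_1
    for E, lam in zip(E_list[1:], signs[1:]):
        B = null_basis(E + lam * np.eye(N))            # eigenspace of -lam
        Z = null_basis(B.conj().T @ Y)                 # Z_k = null(C_k)
        Y = Y @ Z
    return Y

Z2 = np.diag([1.0, -1.0]); I2 = np.eye(2)
E1, E2 = np.kron(Z2, I2), np.kron(I2, Z2)
y = joint_eigenvector([E1, E2], [+1, -1])
print(y.shape)                                         # -> (4, 1)
print(np.allclose(E1 @ y, y), np.allclose(E2 @ y, -y)) # -> True True
```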


Let us prove that the matrices *Yk* have orthonormal columns. This is obviously the case for *k* ¼ 1. Then, by recursion, we have:

$$Y\_k^\* \, Y\_k = Z\_k^\* \, Y\_{k-1}^\* \, Y\_{k-1} Z\_k = I \tag{27}$$

Now, let us prove, by induction, that $\operatorname{Im}(Y_k) = \mathcal{Y}_k$. Obviously, this is the case for $k = 1$. Assume this is the case for $k - 1$. We have:

$$\operatorname{Im}(Y\_k) \subset \operatorname{Im}(Y\_{k-1}) = \mathcal{Y}\_{k-1} \tag{28}$$

We have also:

$$B\_k^\* \, Y\_k = B\_k^\* \, (Y\_{k-1} Z\_k) = \left( B\_k^\* \, Y\_{k-1} \right) Z\_k = \mathbf{C}\_k Z\_k = \mathbf{0} \tag{29}$$

Then

$$\operatorname{Im}(Y\_k) \subset \ker(B\_k^\*) = \operatorname{Im}(A\_k) \tag{30}$$

From $\operatorname{Im}(Y_k) \subset \mathcal{Y}_{k-1}$ and $\operatorname{Im}(Y_k) \subset \operatorname{Im}(A_k)$ we deduce $\operatorname{Im}(Y_k) \subset \mathcal{Y}_k$.

Conversely, assume that a vector $x$ belongs to $\mathcal{Y}_k$. Because $\mathcal{Y}_k \subset \mathcal{Y}_{k-1}$, there exists a vector $b$ such that $x = Y_{k-1}b$, and because $x \in \operatorname{Im}(A_k)$ we also have $B_k^* x = 0$.



#### **Table 2.**

*Additional Collection of Pauli errors.*

Then

$$B\_k^\* \, Y\_{k-1} b = 0 \Rightarrow C\_k b = 0 \Rightarrow \exists a : b = Z\_k a$$

Therefore $x = Y_{k-1}b = Y_{k-1}Z_k a = Y_k a \Rightarrow x \in \operatorname{Im}(Y_k) \Rightarrow \mathcal{Y}_k \subset \operatorname{Im}(Y_k)$.

After execution of the algorithm to determine each column of $U$, an indetermination remains because the joint eigenvectors (i.e., the columns of $U$) are determined up to a phase factor. This has no consequence on the performance of the quantum code. However, if we want to fix this residual indetermination, we proposed a fast and simple procedure in ref. [16]. The procedure requires an additional set of $n$ Pauli errors in which each additional $F_i$ is a tensor product of $I$ and $X$ only. As an example, for the Steane code, we use **Table 2**.

After these remaining phase differences have been removed, we obtain an estimated matrix $U$ that is equal to the true matrix up to a global phase (**Figure 5**). This remaining indetermination does not matter because, as said before, the global phase has no significance in quantum physics. Here we have chosen the global phase so that the encoder matrix is real.

**Figure 5.** *Estimated Matrix U for the Steane encoder.*

**Figure 5** shows the matrix computed by our method. We have checked that it is equal to the matrix directly computed from the circuit description.

The programmer may speed up the computation by taking into account the fact that, when computing column $c$ of $U$, some matrices $Y_k$ have already been computed for other columns and can be reused. For instance, in **Figure 4**, we see that the joint eigenvalues corresponding to columns 19 and 20 are the same, except the last one. Then, when computing column 20, we can set $K(20) = n$ in Algorithm 1 instead of the default value $K(20) = 1$, because the $Y_{n-1}$ for column 20 is the same as for column 19. More generally, Algorithm 2, written in pseudo-Octave code, computes the optimal values of the $K(c)$.

```
K = [1 1]
for k = 2 to n do
    K = reshape([K; k*ones(1, 2^(k-1))], 1, 2^k)
end
```

**Algorithm 2:** Algorithm for computation of the optimal values *K c*ð Þ.

For instance, for $n = 3$ the algorithm produces $K = [1\;3\;2\;3\;1\;3\;2\;3]$.
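Algorithm 2 relies on Octave's column-major `reshape`; an equivalent NumPy transcription (helper name ours, using `order="F"` to reproduce the column-major interleaving) gives the same result:

```python
import numpy as np

def optimal_K(n):
    """Pseudo-Octave Algorithm 2 in NumPy. Octave's reshape is column-major,
    hence order="F": each pass interleaves the previous K with the value k."""
    K = np.array([[1, 1]])
    for k in range(2, n + 1):
        stacked = np.vstack([K, k * np.ones((1, K.shape[1]), dtype=int)])
        K = stacked.reshape(1, -1, order="F")
    return K.ravel()

print(optimal_K(3).tolist())             # -> [1, 3, 2, 3, 1, 3, 2, 3]
```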

## **5. Application of JEVD to quantum walk search**

#### **5.1 Principle of quantum walk search**

Let us consider a particle that can move on a graph. In the classical world, at time $t$ this particle is localized at a node of the graph. It can then randomly choose one of the edges linked to this node to reach one of the adjacent nodes at time $t + 1$. The repeated iteration of this process is the concept of the classical random walk.

A quantum walk [17] also relies on a graph but, contrary to the classical walk, here the particle may be located at many nodes at the same time and can choose many edges simultaneously. At time $t$, the state of the particle is described by a state vector $|\psi_t\rangle$, and the evolution between times $t$ and $t+1$ is given by a unitary matrix $U = SC$ such that $|\psi_{t+1}\rangle = U|\psi_t\rangle$. The unitary matrices $C$ and $S$ represent, respectively, the choice of the edges and the movement to the adjacent nodes.

In the following, we will consider graphs associated with hypercubes [18]. We will note $n$ the dimension of the hypercube and $N = 2^n$ the number of nodes. **Figure 6** shows the graph corresponding to a hypercube of dimension $n = 3$. It is convenient to label the nodes by binary words. In quantum language, these binary words $\kappa$ are also used to label the basis vectors of the so-called position space $\mathcal{H}_S$.

The quantum state lives in a Hilbert space built as the tensor product of the position space $\mathcal{H}_S$ (corresponding to the nodes) and the coin space $\mathcal{H}_C$ (corresponding to the possible movements along the edges): $\mathcal{H} = \mathcal{H}_S \otimes \mathcal{H}_C$. The dimensions of these three state spaces are, respectively, $N_e = nN$, $N$ and $n$.

It is usual to define *C* as [19]:

$$\mathbf{C} = I\_N \otimes \mathbf{G}\_n \tag{31}$$


**Figure 6.** *Hypercube for n = 3.*

**Figure 7.** *Matrices C (left) and O (right) for n = 3.*

where $G_n$ is the $n \times n$ Grover diffusion matrix defined in Section 2. The matrix $C$ obtained for $n = 3$ is shown in **Figure 7**.

The structure of $S$ is more complex. It is convenient to first define it in $\mathcal{H}_C \otimes \mathcal{H}_S$ and then to transport it to $\mathcal{H}$ using the shuffle matrix $P = P_{n,N}$ (defined in subsection 2.1). Then:

$$S = P\hat{S}P^T \tag{32}$$

where

$$\hat{S} = \mathrm{diag}\left(\hat{S}_1, \dots, \hat{S}_n\right) \qquad \text{and} \qquad \hat{S}_d = I^{\otimes (n-d)} \otimes X \otimes I^{\otimes (d-1)} \tag{33}$$

The last equation just means that, because a movement along direction *d* corresponds to an inversion of the *d*-th bit of *κ*, the shift operator permutes the values associated with nodes that are adjacent along that dimension.
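Equivalently, *S* can be built directly from this bit-flip description, without forming the shuffle matrices explicitly. A NumPy sketch (the basis convention — state index $\kappa n + d$ for node *κ* and direction *d*, and which bit direction *d* flips — is ours):

```python
import numpy as np

def shift_operator(n):
    # Flip-flop shift: S maps |kappa> (x) |d>  to  |kappa XOR 2**d> (x) |d>,
    # i.e. a movement along direction d inverts the d-th bit of kappa.
    # Basis convention (ours): state index = kappa * n + d.
    N = 2 ** n
    Ne = n * N
    S = np.zeros((Ne, Ne))
    for kappa in range(N):
        for d in range(n):
            S[(kappa ^ (1 << d)) * n + d, kappa * n + d] = 1.0
    return S

S = shift_operator(3)   # a 24 x 24 symmetric permutation matrix
```

Because flipping the same bit twice is the identity, *S* is its own inverse, consistent with its role as a unitary movement operator.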

A quantum walk search can be described by repeated application of a unitary evolution operator *Q*, which can be written:

$$Q = UO \tag{34}$$

Here *O* is the oracle, whose purpose is to mark the solutions. An example of oracle structure is shown in **Figure 7**. It is a block-diagonal matrix, whose blocks are $-G_n$ when they correspond to a solution and $I_n$ otherwise. Denote by *M* the number of solutions and assume that $M \ll N$ (otherwise the quantum walk search would serve no purpose, because the probability of rapidly finding a solution with a classical search would already be high). In the example shown in the figure, there are $M = 2$ solutions (located at positions 1 and 4 in **Figure 6**).

Let *t* denote the number of iterations until a measurement is performed. Starting from an initial state $|\psi_0\rangle$, repeated iterations lead to the state $|\psi_t\rangle = Q^t|\psi_0\rangle$, which is then measured. The theory of quantum walk search [19] shows that the probability of success (that is, the probability of obtaining a solution by measurement) oscillates as a function of *t*. Theoretical tools that help to understand and simulate quantum walk search therefore lead to methods for determining the optimal time of measurement.
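The oscillation is easy to observe by direct simulation. Below is a NumPy sketch (the chapter works in Matlab/Octave) for $n = 4$ with a single, arbitrarily chosen solution node; the success probability starts at $M/N$ and rises well above it before oscillating:

```python
import numpy as np

n, N = 4, 16
Ne = n * N
solutions = [3]                                   # M = 1 marked node (arbitrary)

Gn = 2.0 / n * np.ones((n, n)) - np.eye(n)        # Grover diffusion (Section 2)
C = np.kron(np.eye(N), Gn)                        # coin, Eq. (31)
S = np.zeros((Ne, Ne))                            # flip-flop shift
for kappa in range(N):
    for d in range(n):
        S[(kappa ^ (1 << d)) * n + d, kappa * n + d] = 1.0
U = S @ C                                         # uniform walk

O = np.eye(Ne)                                    # oracle: block -G_n at solutions
for s in solutions:
    O[s * n:(s + 1) * n, s * n:(s + 1) * n] = -Gn
Q = U @ O                                         # Eq. (34)

psi = np.ones(Ne) / np.sqrt(Ne)                   # uniform initial state

def p_success(v):
    # total probability of measuring a solution node
    return float(sum(np.sum(v[s * n:(s + 1) * n] ** 2) for s in solutions))

probs = [p_success(psi)]
for t in range(40):
    psi = Q @ psi
    probs.append(p_success(psi))
# probs[0] = M/N; the success probability then oscillates with t
```

Plotting `probs` against *t* reproduces the oscillation discussed above; the first maximum gives the optimal measurement time for this instance.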

In the sequel, we will show that JEVD is a fruitful tool in this context. Indeed, let *E* be the union of the joint eigenspaces of *U* and *O*, and let $E_c$ be its complement. Inside *E*, the operators commute. So, denoting by a subscript *E* the restrictions of the operators to *E*, we have:

$$Q_E^2 = (U_E O_E)(U_E O_E) = U_E O_E^2 U_E = U_E^2 \tag{35}$$

Then, inside *E*, there is no significant difference between the effective quantum walk *Q* and the uniform quantum walk *U*: after each pair of successive iterations, the evolution is identical. Since the uniform quantum walk has no reason to converge to a solution, we deduce that the interesting part of the process lives in the complement of *E*, that is, in $E_c$.

After establishing results about the dimensions of the eigenspaces of *U* and *O*, we will show that the concept of joint eigenspaces allows us to establish an upper bound on the dimension of the complement, with the remarkable result that this dimension grows only linearly with *n*. Then, we propose an algorithm for efficient computation of the joint eigenspaces and, finally, use it to check our theoretical upper bound.

#### **5.2 Eigenspaces of** *U* **and** *O*

Set

$$F = H_N \otimes I_n \tag{36}$$

Then matrix *F* diagonalizes *S*:


$$FSF = (H_N \otimes I_n)\, P\, \hat{S}\, P^T\, (H_N \otimes I_n) \tag{37}$$

$$= P\, (I_n \otimes H_N)\, \hat{S}\, (I_n \otimes H_N)\, P^T \tag{38}$$

$$= P\, \mathrm{diag}\left(H_N \hat{S}_1 H_N, \dots, H_N \hat{S}_n H_N\right) P^T \tag{39}$$

The latter expression is diagonal because the mixed product property, together with $H^2 = I$ and $HXH = Z$, shows that:

$$H_N \hat{S}_d H_N = I^{\otimes (n-d)} \otimes Z \otimes I^{\otimes (d-1)} \tag{40}$$

Once more, using the mixed product property, we can also prove that *F* keeps *C* unchanged, that is:

$$FCF = C \tag{41}$$

The diagonal of *FSF* is obtained by concatenating the binary representations of the numbers 0 to $N-1$, with the mapping $0 \to +1$ and $1 \to -1$. That is:

$$FSF = \mathrm{diag}(S_0, \dots, S_\kappa, \dots, S_{N-1}) \tag{42}$$

Note that the diagonal of $S_\kappa$ contains *k* times $-1$ and $n-k$ times $+1$ (where *k* is the Hamming weight of *κ*).

Then, because $F^2 = I$, *FUF* is a block-diagonal matrix:

$$FUF = (FSF)(FCF) \tag{43}$$

$$= \mathrm{diag}(\dots, S_\kappa, \dots)\, C \tag{44}$$

Block *κ* is then

$$U_\kappa = S_\kappa G_n \tag{45}$$

We have:

$$\dim E_-^{U_\kappa} \geq \dim E_{+,-}^{S_\kappa, G_n} \tag{46}$$

$$\geq \dim E_+^{S_\kappa} + \dim E_-^{G_n} - n \tag{47}$$

$$\geq (n-k) + (n-1) - n \tag{48}$$

$$= n - k - 1 \tag{49}$$

and

$$\dim E_+^{U_\kappa} \geq \dim E_{-,-}^{S_\kappa, G_n} \tag{50}$$

$$\geq \dim E_-^{S_\kappa} + \dim E_-^{G_n} - n \tag{51}$$

$$\geq k + (n-1) - n \tag{52}$$

$$= k - 1 \tag{53}$$

Then, there is only room left for at most two additional eigenvalues; specifically, at most a pair of conjugate ones.

Assume that this pair of eigenvalues exists. Since the diagonal entries of $G_n$ are all equal to $-1 + \frac{2}{n}$, the trace of $U_\kappa$ is:

$$\mathrm{trace}(U_\kappa) = \big((n-k)(+1) + k(-1)\big)\left(-1 + \frac{2}{n}\right) \tag{54}$$

$$= -n + 2k + 2\left(1 - 2\frac{k}{n}\right) \tag{55}$$

The sum of the eigenvalues is equal to the trace, and we already have the eigenvalue $-1$ with multiplicity $n-k-1$ and the eigenvalue $+1$ with multiplicity $k-1$. The sum of these $n-2$ eigenvalues is $-n+2k$. Then the sum of the two missing eigenvalues must be $2\left(1 - 2\frac{k}{n}\right)$. Let us denote them by $\lambda_k$ and $\lambda_k^*$. We must have $\mathrm{Re}(\lambda_k) = 1 - 2\frac{k}{n}$. Then, since $|\lambda_k| = 1$, we have

$$\lambda_k = \left(1 - 2\frac{k}{n}\right) + i\,\frac{2}{n}\sqrt{k(n-k)} \tag{56}$$
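These multiplicities and the value of $\lambda_k$ can be confirmed numerically. A NumPy sketch (*n* and *k* chosen arbitrarily; by permutation symmetry, only the Hamming weight *k* of *κ* matters, so any diagonal $\pm 1$ matrix with *k* entries equal to $-1$ can stand for $S_\kappa$):

```python
import numpy as np

n, k = 5, 2                                       # any 1 <= k <= n-1
Gn = 2.0 / n * np.ones((n, n)) - np.eye(n)        # Grover diffusion
Sk = np.diag([-1.0] * k + [1.0] * (n - k))        # stand-in for S_kappa
Uk = Sk @ Gn                                      # block U_kappa, Eq. (45)

evals = np.linalg.eigvals(Uk)
lam = (1 - 2 * k / n) + 1j * (2 / n) * np.sqrt(k * (n - k))   # Eq. (56)

mult_minus = int(np.sum(np.abs(evals + 1) < 1e-9))    # expected n - k - 1
mult_plus = int(np.sum(np.abs(evals - 1) < 1e-9))     # expected k - 1
```

The spectrum found numerically is $-1$ with multiplicity $n-k-1$, $+1$ with multiplicity $k-1$, and the conjugate pair $\lambda_k$, $\lambda_k^*$.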

Considering the eigenvalues of $-G_n$ and $I_n$, it is trivial to show that the dimensions of the eigenspaces of the oracle are:

$$\dim E_-^O = M \qquad \text{and} \qquad \dim E_+^O = N_e - M \tag{57}$$

#### **5.3 Upper bound on the dimension of the complement**

The eigenvalues of *U* belong to $\{-1, +1, \lambda_k, \lambda_k^*\}$ where $k \in [1, n-1]$. Then, there are $2 + 2(n-1) = 2n$ eigenspaces of *U*.

For $j \in [1, 2n]$, let $\alpha_j$ be the dimensions of these eigenspaces and $\beta_j$ the dimensions of their intersections with $E_+^O$. An eigenvector of *U* is in an intersection if and only if it is orthogonal to $E_-^O$. Then, because the dimension of $E_-^O$ is *M*, we have $\beta_j \geq \alpha_j - M$. Consequently:

$$\sum_{j=1}^{2n} \beta_j \geq \sum_{j=1}^{2n} \alpha_j - 2nM \tag{58}$$

Obviously, we have $\sum_{j=1}^{2n} \alpha_j = N_e$, so that

$$\sum_{j=1}^{2n} \beta_j \geq N_e - 2nM \tag{59}$$

It follows that the dimension of the complement has an upper bound:

$$\dim E_c \leq 2nM \tag{60}$$

This is a remarkable result: despite the fact that the dimension of the Hilbert space grows exponentially ($N_e = n2^n$), the dimension of the complement grows only linearly with *n*.

#### **5.4 Fast computation of the joint eigenspaces**

##### *5.4.1 Introduction*

To check our theoretical upper bound, we propose an efficient algorithm for fast computation of the joint eigenspaces.

We have to compute orthonormal bases of the joint eigenspaces of *U* and *O*. The dimension of $E_-^O$ is small, hence it makes sense to define it by an orthonormal basis generating the eigenspace. However, the dimension of $E_+^O$ is large (greater than $N_e/2$), so it is computationally more efficient to define it by an orthonormal basis of its complement (which is $E_-^O$). Indeed, $\dim E_-^O \ll \dim E_+^O$. We then have to design an algorithm adapted to each case.

##### *5.4.2 Intersection of two eigenspaces defined by orthonormal bases*

Let us consider a matrix *A* whose columns are an orthonormal basis of an eigenspace of *U*, and a matrix *B* whose columns are an orthonormal basis of $E_-^O$. Let *p* and *q* be the numbers of columns of these matrices (their number of rows being $N_e$). We want to compute an $N_e \times r$ matrix *J* whose columns are an orthonormal basis of the joint eigenspace (whose dimension we denote by *r*). We propose the algorithm below, which is a straightforward adaptation of Theorem 1 in ref. [20].

First, we compute the $p \times q$ matrix *C* below:

$$C = A^* B \tag{61}$$

Then, we compute the SVD of *C*:

$$C = U_c S_c V_c^* \tag{62}$$

Denote by $s_k$ the singular values (the diagonal elements of $S_c$) and determine *r* such that $s_k \geq 1 - \varepsilon$ for $k = 1, \dots, r$ and $s_k < 1 - \varepsilon$ for $k > r$. Here $\varepsilon \ll 1$ is a very small positive value introduced to take into account the small errors due to finite-precision computer arithmetic. Finally:

$$J = A\, U_c(:, 1{:}r) \tag{63}$$

Or, equivalently, $J = B\, V_c(:, 1{:}r)$.
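This procedure translates almost line by line into code. A NumPy sketch (the chapter works in Matlab/Octave; the function name and the toy example are ours):

```python
import numpy as np

def joint_basis(A, B, eps=1e-10):
    # Orthonormal basis of Im(A) ∩ Im(B); A and B must have orthonormal columns.
    C = A.conj().T @ B                    # Eq. (61)
    Uc, s, Vch = np.linalg.svd(C)         # Eq. (62): C = Uc @ diag(s) @ Vch
    r = int(np.sum(s >= 1 - eps))         # singular values equal to 1
                                          # (cosines of zero principal angles)
    return A @ Uc[:, :r]                  # Eq. (63)

# Toy check: two planes in R^3 sharing the first coordinate axis
A = np.eye(3)[:, [0, 1]]                  # span{e1, e2}
B = np.eye(3)[:, [0, 2]]                  # span{e1, e3}
J = joint_basis(A, B)                     # one column, proportional to e1
```

The singular values of $A^*B$ are the cosines of the principal angles between the two subspaces, which is why the intersection corresponds to singular values equal to 1.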

##### *5.4.3 Intersection of two eigenspaces, one of them being defined by an orthonormal basis of its complement*

Let us consider a matrix *A* whose columns are an orthonormal basis of an eigenspace of *U*, and a matrix *B* whose columns are an orthonormal basis of the complement of $E_+^O$ (that is, $E_-^O$). First, we compute the $p \times q$ matrix *C* (Eq. (61)). Then, we compute the $p \times r$ matrix ($r \leq p$) *Z* below:

$$Z = \mathrm{null}(C^*) \tag{64}$$


**Table 3.**

*Joint eigenspaces of O and U for n = 7 and M = 3 solutions located at nodes 2, 8, 9.*

and we obtain an $N_e \times r$ matrix *J* whose columns are an orthonormal basis of $E_{\lambda,+}^{U,O}$ from:

$$J = AZ \tag{65}$$

The justification of the algorithm is as follows. The *q* columns of *C* are a basis of the projection of $\mathrm{Im}(B)$ onto $\mathrm{Im}(A)$, with components expressed in the basis of $\mathrm{Im}(A)$. The orthogonal complement of $\mathrm{Im}(C)$ in $\mathrm{Im}(A)$ is the desired intersection (expressed in $\mathrm{Im}(A)$). The columns of *Z* are an orthonormal basis of this intersection. Finally, Eq. (65) restores the components in the original space.
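A NumPy sketch of this second variant (NumPy has no `null` function, so the kernel of $C^*$ is taken from the full SVD of *C*; the function name and toy example are ours):

```python
import numpy as np

def joint_basis_complement(A, B, eps=1e-10):
    # Orthonormal basis of the intersection of Im(A) with the orthogonal
    # complement of Im(B); A and B must have orthonormal columns.
    C = A.conj().T @ B                            # Eq. (61), p x q
    Uc, s, Vch = np.linalg.svd(C, full_matrices=True)
    rank = int(np.sum(s > eps))
    Z = Uc[:, rank:]                              # Z = null(C*), Eq. (64)
    return A @ Z                                  # Eq. (65)

# Toy check: intersect span{e1, e2} with the complement of span{e1}
A = np.eye(3)[:, [0, 1]]
B = np.eye(3)[:, [0]]
J = joint_basis_complement(A, B)                  # one column, proportional to e2
```

In Matlab/Octave, the middle three lines collapse to the single call `Z = null(C')`, matching Eq. (64) directly.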

#### **5.5 Simulation results**

Consider a hypercube of dimension $n = 7$ with $M = 3$ solutions located at nodes 2, 8, 9. The dimension of the state space is then $N_e = n2^n = 896$. From the discussion above, we know that the dimension of the complement is upper bounded by $2nM = 42$.

The algorithm gives us the dimensions of the joint eigenspaces of *U* and *O* (**Table 3**). The sum of the dimensions of the joint eigenspaces is $\sum_{j=1}^{2n} \beta_j = 858$, from which we obtain the dimension of the complement:

$$\dim E_c = N_e - \sum_{j=1}^{2n} \beta_j = 38 \tag{66}$$

We can see that, as expected, this dimension ($\dim E_c = 38$) is much smaller than the dimension of the original state space ($N_e = 896$). We can also check that it is below the theoretical upper bound ($2nM = 42$), as expected.
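The same experiment can be reproduced at a smaller scale with a few lines of NumPy (the chapter's case $n = 7$ works identically but is slower; here $n = 4$, with solution nodes chosen arbitrarily). The script builds *U* and *O*, groups the eigenvectors of *U* by eigenvalue, intersects each eigenspace with $E_+^O$ using the null-space method of subsection 5.4.3, and checks the bound of Eq. (60):

```python
import numpy as np

n, N = 4, 16
Ne = n * N
solutions = [2, 8, 9]                             # M = 3 (arbitrary nodes)
M = len(solutions)

Gn = 2.0 / n * np.ones((n, n)) - np.eye(n)        # Grover diffusion
C = np.kron(np.eye(N), Gn)                        # coin, Eq. (31)
S = np.zeros((Ne, Ne))                            # flip-flop shift
for kappa in range(N):
    for d in range(n):
        S[(kappa ^ (1 << d)) * n + d, kappa * n + d] = 1.0
U = S @ C

# Orthonormal basis B of E_-^O: at each solution node the oracle block is
# -G_n, whose eigenvalue -1 corresponds to the uniform coin state.
B = np.zeros((Ne, M))
for j, node in enumerate(solutions):
    B[node * n:(node + 1) * n, j] = 1.0 / np.sqrt(n)

# Eigenspaces of U: cluster numerically equal eigenvalues, orthonormalize (QR)
w, V = np.linalg.eig(U)
clusters = {}
for i, lam in enumerate(w):
    clusters.setdefault(complex(np.round(lam, 8)), []).append(i)

beta_sum = 0
for idx in clusters.values():
    A, _ = np.linalg.qr(V[:, idx])                # orthonormal eigenspace basis
    s = np.linalg.svd(A.conj().T @ B, compute_uv=False)
    rank = int(np.sum(s > 1e-8))
    beta_sum += len(idx) - rank                   # beta_j = alpha_j - rank

dim_Ec = Ne - beta_sum
print("dim E_c =", dim_Ec, "; upper bound 2nM =", 2 * n * M)
```

As in the chapter's experiment, the computed dimension of the complement stays below the bound $2nM$, which is much smaller than $N_e = 64$ here.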

## **6. Conclusions**

The recent growth of research on quantum communications and quantum information processing opens new challenges. In this chapter, we have shown that matrix theory concepts, such as JEVD, are powerful tools for deriving new theoretical results as well as efficient simulation algorithms.

In the domain of quantum coding, we have shown how to determine the encoding matrix of a quantum code from a collection of Pauli errors. On a more speculative note, as part of future work on the interception of quantum channels, it might also be useful to identify the quantum coder used by a noncooperative transmitter.

In the domain of quantum walk search, thanks to JEVD we have proved that there exists a small subspace of the whole Hilbert space which captures the essence of the search process, and we have given an algorithm that allows us to check this result by simulation.

## **Acknowledgements**

The authors thank the IBNM (Institut Brestois du Numérique et des Mathématiques), CyberIoT Chair of Excellence, for its support.

## **Abbreviations**


## **Author details**

Gilles Burel<sup>1</sup>\*, Hugo Pillin<sup>1,2</sup>, Paul Baird<sup>2</sup>, El-Houssaïn Baghious<sup>1</sup> and Roland Gautier<sup>1</sup>

1 Lab-STICC, University of Brest, Brest, France

2 LMBA, University of Brest, Brest, France

\*Address all correspondence to: gilles.burel@univ-brest.fr

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **References**

[1] Humble TS. Quantum security for the physical layer. IEEE Communications Magazine. 2013;**51**(8):56-62

[2] Liao SK et al. Satellite-to-ground quantum key distribution. Nature. 2017; **549**:43-47

[3] Ren JG et al. Ground-to-satellite quantum teleportation. Nature. 2017; **549**:70-73

[4] Valivarthi R et al. Quantum teleportation across a metropolitan fibre network. Nature Photonics. 2016;**10**: 676-680

[5] Yin J et al. Quantum teleportation and entanglement distribution over 100 kilometre freespace channels. Nature. 2012;**488**:185-188

[6] Microsoft. The Q# programming language user guide [Internet]. 2022. Available from: https://docs.microsoft.com/en-us/azure/quantum/user-guide/?view=qsharp-preview [Accessed: January 11, 2022]

[7] Golub GH, Van Loan CF. Matrix Computations. 3rd ed. Baltimore and London: The Johns Hopkins University Press; 1996

[8] Grover LK. Quantum mechanics helps in searching for a needle in a haystack. Physical Review Letters. 1997; **79**(2):325

[9] D'Angeli D, Donno A. Shuffling Matrices, Kronecker Product and Discrete Fourier Transform. Discrete Applied Mathematics. 2017;**233**:1-18

[10] Raussendorf R. Key ideas in quantum error correction. Philosophical Transactions of the Royal Society A. 2012;**370**:4541-4565

[11] Wootters W, Zurek W. A single quantum cannot be cloned. Nature. 1982; **299**(5886):802-803. DOI: 10.1038/299 802a0

[12] Nielsen MA, Chuang IL. Quantum Computation and Quantum Information. Cambridge, UK: Cambridge University Press; 2010

[13] Steane A. Multiple-particle interference and quantum error correction. Proceeding of the Royal Society of London. 1996;**452**(1954): 2551-2577. DOI: 10.1098/rspa.1996.0136

[14] Shor PW. Scheme for reducing decoherence in quantum computer memory. Physical Review A. 1995;**52**(4): R2493-R2496. DOI: 10.1103/PhysRevA. 52.R2493

[15] Calderbank AR, Shor PW. Good quantum error-correcting codes exist. Physical Review A. 1996;**54**(2): 1098-1105. DOI: 10.1103/ physreva.54.1098

[16] Burel G, Pillin H, Baghious EH, Baird P, Gautier R. Identification Of Quantum Encoder Matrix From A Collection Of Pauli Errors. Ho Chi Minh city, Vietnam: Asia-Pacific Conference on Communications; 2019

[17] Kempe J. Quantum random walks – An introductory overview. Contemporary Physics. 2003;**44**(4):307-327. DOI: 10.1080/00107151031000110776

[18] Moore C, Russell A. Quantum Walks on the Hypercube. Lecture Notes in Computer Science. Vol. 2483. New York: Springer; 2002. DOI: 10.1007/3-540-45726-7_14

[19] Shenvi N, Kempe J, Whaley KB. Quantum random-walk search algorithm. Physical Review A. 2003;**67**:052307

[20] Björck Å, Golub G. Numerical methods for computing angles between linear subspaces. Mathematics of Computation. 1973;**27**:123. DOI: 10.2307/2005662

