**Optimal Allocation of Reliability in Series Parallel Production System**

Zeblah Abdelkader 1, Rami Abdelkader 1, Rahli Mustapha 2 and Massim Yamani 3

1 University of Sidi Bel Abbes, Engineering Faculty, Algeria
2 University of Sidi Bel Abbes, Engineering Faculty, Algeria
3 University of Oran, Engineering Faculty, Algeria

rahlim@yahoo.fr

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/55725

## **1. Introduction**


One of the most important problems in many industrial applications is the redundancy optimization problem. The latter is a well-known combinatorial optimization problem in which the design goal is achieved by discrete choices made from elements available on the market. The natural objective is to find the minimal-cost configuration of a series-parallel system under availability constraints. The system is considered to have a range of performance levels, from perfect working to total failure; in this case the system is called a *multi-state* system (MSS). Consider a multi-state system containing *n* components *Ci* (*i* = *1*, *2*, …, *n*) in a series arrangement. For each component *Ci* there are various versions, which are proposed by the suppliers on the market. Elements are characterized by their cost, performance and availability according to their version. For example, these elements can represent machines in a manufacturing system that accomplish a task on a product; in our case they represent an entire electrical power system (generating units, transformers and electric carrying lines). Each component *Ci* contains a number of elements connected in parallel. Different versions of elements may be chosen for any given system component, and each component can contain elements of *different* versions, as sketched in figure 1.

A limitation to identical parallel elements (i.e. a homogeneous system) can be undesirable or even unacceptable, for two reasons. First, by allowing different versions of the devices to be allocated in the same system, one can obtain a solution that provides the desired availability or reliability level at a lower cost than a solution with identical parallel devices. Second, in practice the designer often has to include additional devices in an existing system. It may be necessary, for example, to modernize a production line according to new demand levels from customers or to new reliability requirements.

© 2013 Abdelkader et al.; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


**Figure 1.** Series Parallel Production System

#### **1.1. Literature review**

The vast majority of classical reliability or availability analyses and optimizations assume that components and systems are in one of *two* states (i.e., complete working state or total failure state). However, in many real-life situations we are actually able to distinguish among various levels of performance for both system and components. For such situations, the existing dichotomous model is a gross oversimplification, and models assuming multi-state (degradable) systems and components are preferable since they are closer to reality. Recently, many works have treated more sophisticated and more realistic models in which systems and components may assume many states, ranging from perfect functioning to complete failure. In this case, it is important to develop MSS reliability theory. In this paper, an MSS reliability theory will be used in which binary state system theory is extended to the multi-state case, as addressed in recent reviews of the literature, for example in (Ushakov, Levitin and Lisnianski, 2002) or (Levitin and Lisnianski, 2001). Generally, the methods of MSS reliability assessment are based on four different approaches:

**i.** The structure function approach.

**ii.** The stochastic process (mainly Markov) approach.

**iii.** The Monte-Carlo simulation technique.

**iv.** The universal moment generating function (UMGF) approach.

In (Ushakov, Levitin and Lisnianski, 2002), a comparison between these four approaches highlights that the UGF approach is fast enough to be used in optimization problems where the search space is sizeable.

The problem of total investment-cost minimization, subject to reliability or availability constraints, is well known as the redundancy optimization problem (ROP). The ROP has been studied in many different forms, as summarized in (Tillman, Hwang and Kuo, 1977) and, more recently, in (Kuo and Prasad, 2000). The ROP for multi-state reliability was introduced in (Ushakov, 1987). In (Lisnianski, Levitin, Ben-Haim and Elmakis, 1996) and (Levitin, Lisnianski, Ben-Haim and Elmakis, 1997), genetic algorithms were used to find the optimal or nearly optimal power system structure.

This work uses an *ant colony* optimization approach to solve the ROP for multi-state systems. The idea of employing a colony of cooperating agents to solve combinatorial optimization problems was proposed in (Dorigo, Maniezzo and Colorni, 1996). The ant colony approach has been successfully applied to the classical traveling salesman problem (Dorigo and Gambardella, 1997) and to the quadratic assignment problem (Maniezzo and Colorni, 1999), showing very good results in each area. It has been adapted for the reliability design of binary state systems (Liang and Smith, 2001), and with success to other combinatorial optimization problems such as the vehicle routing problem (Bullnheimer, Hartl and Strauss, 1997). The ant colony method has also been used to solve the redundancy allocation problem (Nahas N., Nourelfath M., Aït-Kadi Daoud, 2006).

In this paper, we extend the work of other researchers by proposing an ant colony system algorithm to solve the ROP, characterized here as the problem of optimizing the structure of a power system in which redundant elements are included in order to provide a desired level of reliability through optimal allocation of elements with different parameters (an optimal structure with series-parallel elements) in a continuous production system.

The use of this algorithm falls within a general framework for the comparative and structural study of metaheuristics. As a first step, the ant colony algorithm is applied in its basic form; in future work the study will be completed.

#### **1.2. Approach and outlines**


The problem formulated in this chapter leads to a complicated combinatorial optimization problem. The total number of different solutions to be examined is very large, even for rather small problems, and an exhaustive examination of all possible solutions is not feasible within reasonable time limits. Because of this, the ant colony optimization (ACO) approach is adapted so that optimal or nearly optimal solutions can be obtained in a short time. This recently developed meta-heuristic has the advantage of solving the ROP for MSS *without* any limitation on the diversity of versions of elements in parallel. Ant colony optimization is inspired by the behavior of real ant colonies, which exhibit highly structured behavior. Ants lay down some quantity of an aromatic substance, known as *pheromone*, on their way to food. An ant chooses a specific path in correlation with the intensity of the pheromone. The pheromone trail evaporates over time if no more pheromone is laid down by other ants, so the best path accumulates more pheromone and has a higher probability of being chosen.
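The trail-laying behaviour just described can be sketched in a few lines. The sketch below is illustrative only: the two-path setup and the parameters `alpha`, `beta` and `rho` are assumptions in the spirit of (Dorigo, Maniezzo and Colorni, 1996), not the specific algorithm developed in this chapter.

```python
import random

def choose(pheromone, heuristic, alpha=1.0, beta=1.0):
    """Pick an option with probability proportional to
    pheromone**alpha * heuristic**beta (the classic ACO selection rule)."""
    weights = [(tau ** alpha) * (eta ** beta)
               for tau, eta in zip(pheromone, heuristic)]
    r = random.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

def evaporate(pheromone, rho=0.1):
    """Pheromone decays by a factor (1 - rho) each iteration unless reinforced."""
    return [(1.0 - rho) * tau for tau in pheromone]

pheromone = [1.0, 1.0]   # two candidate paths, equal trails at the start
heuristic = [1.0, 2.0]   # path 2 is more attractive (e.g. shorter)
for _ in range(100):
    i = choose(pheromone, heuristic)
    pheromone = evaporate(pheromone)  # all trails evaporate...
    pheromone[i] += 1.0               # ...the chosen one is reinforced
# with high probability the better path ends up with the stronger trail
```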

During the optimization process, artificial ants will have to evaluate the availability of a given selected structure of the series-parallel system (electrical network). To do this, a fast procedure of availability estimation is developed. This procedure is based on a modern mathematical technique, the *z*-transform or UMGF, which was introduced in (Ushakov, 1986). It has proven to be very effective for high-dimension combinatorial problems: see e.g. (Ushakov, 2002), (Levitin, 2001). The universal moment generating function is an extension of the ordinary moment generating function (Ross, 1993). The method developed in this chapter allows the availability function of a reparable series-parallel MSS to be obtained using a straightforward numerical procedure.

## **2. Formulation of redundancy optimization problem**

#### **2.1. Series-parallel system with different redundant elements**

Consider a series-parallel system containing *n* subcomponents *Ci* (*i* = *1*, *2*, …, *n*) in series, as represented in figure 1. Every component *Ci* contains a number of different elements connected in parallel. For each component *i*, a number of element versions are available on the market. For any given system component, different versions and numbers of elements may be chosen. For each subcomponent *i*, elements are characterized according to their version *v* by their cost (*Civ*), availability (*Aiv*) and performance (*∑iv*). The structure of system component *i* can be defined by the numbers of parallel elements (of each version) *kiv* for 1 ≤ *v* ≤ *Vi*, where *Vi* is the number of versions available for elements of type *i*. Figure 2 illustrates these notations for a given component *i*. The entire system structure is defined by the vectors *ki* = {*kiv*} (1 ≤ *i* ≤ *n*, 1 ≤ *v* ≤ *Vi*). For a given set of vectors *k1*, *k2*, …, *kn*, the total cost of the system can be calculated as:

$$\mathbf{C} = \sum\_{i=1}^{n} \sum\_{v=1}^{V\_i} k\_{iv} \mathbf{C}\_{iv} \tag{1}$$
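Equation (1) is a straightforward double sum over components and versions. A minimal sketch follows; the element costs and the configuration are invented for the example, not taken from the chapter.

```python
# Total system cost per Eq. (1): C = sum_i sum_v k_iv * C_iv.

# cost[i][v] = unit cost C_iv of version v for component i (illustrative)
cost = [
    [4.0, 6.5],        # component 1: two versions on the market
    [3.2, 5.0, 7.1],   # component 2: three versions
]
# k[i][v] = number of parallel elements of version v in component i
k = [
    [2, 0],            # component 1 uses two elements of version 1
    [1, 1, 0],         # component 2 mixes versions 1 and 2
]

def total_cost(k, cost):
    return sum(k_iv * c_iv
               for k_i, c_i in zip(k, cost)
               for k_iv, c_iv in zip(k_i, c_i))

print(total_cost(k, cost))  # 2*4.0 + 1*3.2 + 1*5.0 = 16.2
```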


#### **2.2. Availability of reparable multi-state systems**

The series-parallel system is composed of a number of failure prone elements, such that the failure of some elements leads only to a degradation of the system performance. This system is considered to have a range of performance levels from perfect working to complete failure. In fact, the system failure can lead to decreased capability to accomplish a given task, but not to complete failure. An important MSS measure is related to the ability of the system to satisfy a given demand.

In electric power systems, reliability is considered as a measure of the ability of the system to meet the load demand (*D*), i.e., to provide an adequate supply of electrical energy (*∑*). This definition of the reliability index is widely used in power systems: see e.g., (Ross, 1993), (Murchland, 1975), (Levitin, Lisnianski, Ben-Haim and Elmakis, 1998), (Lisnianski, Levitin, Ben-Haim and Elmakis, 1996), (Levitin, Lisnianski, and Elmakis, 1997). The Loss of Load Probability index (LOLP) is usually used to estimate the reliability index (Billinton and Allan, 1990). This index is the overall probability that the load demand will not be met. Thus, we can write *R* = *Probab*(*∑* ≥ *D*), or *R* = 1 − LOLP with LOLP = *Probab*(*∑* < *D*). This reliability index depends on the consumer demand *D*.

For reparable MSS, a multi-state steady-state availability *E* is used as *Probab*(*∑≥ D)* after enough time has passed for this probability to become constant (Levitin, Lisnianski, Ben-Haim and Elmakis, 1998). In the steady-state the distribution of states probabilities is given by equation (2), while the multi-state stationary availability is formulated by equation (3):


$$P\_j = \lim\_{t \to \infty} \text{Probab}(\Sigma(t) = \Sigma\_j) \tag{2}$$

$$E = \sum\_{\Sigma\_j \ge D} P\_j \tag{3}$$

If the operation period *T* is divided into *M* intervals (with durations *T1*, *T2*, …, *TM*) and each interval has a required demand level (*D1*, *D2*, …, *DM*, respectively), then the generalized MSS availability index *A* is:

$$A = \frac{1}{\sum\_{j=1}^{M} T\_j} \sum\_{j=1}^{M} \text{Probab}(\Sigma \ge D\_j) \, T\_j \tag{4}$$

We denote by *D* and *T* the vectors {*Dj*} and {*Tj*} (1 ≤ *j* ≤ *M*), respectively. As the availability *A* is a function of *k1*, *k2*, …, *kn*, *D* and *T*, it will be written *A*(*k1*, *k2*, …, *kn*, *D*, *T*). In the case of a power system, the vectors *D* and *T* define the cumulative load curve (consumer demand). In reality the load curve varies randomly; an approximation from the random curve to a discrete curve is used, see (Wood and Ringlee, 1970). In general, this curve is known for every power system.
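Equation (4) weights the probability of meeting each demand level by the duration of that level. The sketch below illustrates this with an invented three-state performance distribution and an invented discrete load curve; none of the numbers come from the chapter.

```python
# Generalized MSS availability per Eq. (4):
#   A = (1 / sum_j T_j) * sum_j Probab(Sigma >= D_j) * T_j
# state_probs maps each steady-state performance level Sigma_j to P_j.

state_probs = {0.0: 0.05, 60.0: 0.15, 100.0: 0.80}  # probabilities sum to 1

def prob_meets(demand, state_probs):
    """Probab(Sigma >= D): total probability of states covering the demand."""
    return sum(p for perf, p in state_probs.items() if perf >= demand)

def availability(demands, durations, state_probs):
    total_time = sum(durations)
    return sum(prob_meets(d, state_probs) * t
               for d, t in zip(demands, durations)) / total_time

# discrete cumulative load curve: demand 100 for 40 h, 60 for 50 h, 0 for 10 h
D = [100.0, 60.0, 0.0]
T = [40.0, 50.0, 10.0]
print(availability(D, T, state_probs))  # (0.8*40 + 0.95*50 + 1.0*10)/100 = 0.895
```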

#### **2.3. Optimal design problem formulation**


The multi-state system redundancy optimization problem for an electrical power system can be formulated as follows: find the minimal-cost system configuration *k1*, *k2*, …, *kn* such that the corresponding availability equals or exceeds the specified availability *A0*. That is,

$$\text{Minimize } \mathsf{C} = \sum\_{i=1}^{n} \sum\_{v=1}^{V\_i} k\_{iv} \mathsf{C}\_{iv} \tag{5}$$

$$\text{subject to } A(k_1, k_2, \dots, k_n, D, T) \ge A_0 \tag{6}$$

The input of this problem is the specified availability, and the outputs are the minimal investment cost and the corresponding configuration. To solve this combinatorial optimization problem, it is important to have an effective and fast procedure to evaluate the availability index of a series-parallel system of elements. Thus, a method is developed in the next section to estimate the value of *A*(*k1*, *k2*, …, *kn*, *D*, *T*).
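As a concrete illustration, the objective (5) and constraint (6) can be sketched as follows. The data layout and the names `total_cost` and `is_feasible` are our own illustrative choices, not from the chapter, and the availability evaluation is left abstract here since it is developed in the next section.

```python
# Illustrative sketch of the redundancy optimization problem (5)-(6).
# k[i][v]: number of parallel elements of version v used in subsystem i.
# c[i][v]: unit cost C_iv of version v in subsystem i.

def total_cost(k, c):
    """Equation (5): C = sum_i sum_v k_iv * C_iv."""
    return sum(k[i][v] * c[i][v]
               for i in range(len(k))
               for v in range(len(k[i])))

def is_feasible(availability, a0):
    """Equation (6): a configuration is acceptable if A >= A0."""
    return availability >= a0
```

For example, a two-subsystem configuration `k = [[1, 2], [3]]` with unit costs `c = [[10.0, 5.0], [2.0]]` has total cost 1·10 + 2·5 + 3·2 = 26.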

## **3. Multi-state system availability estimation**

The procedure used in this chapter is based on the universal *z*-transform, a modern mathematical technique introduced in (Ushakov, 1986). This method, convenient for numerical implementation, has proved to be very effective for high-dimension combinatorial problems. In the literature, the universal *z*-transform is also called the universal moment generating function (UMGF) or simply the *u*-function or *u*-transform. In this chapter, we mainly use the acronym UMGF. The UMGF extends the widely known ordinary moment generating function (Ross, 1993).

#### **3.1. Definition and properties**

The UMGF of a discrete random variable *∑* is defined as a polynomial:

$$\mu(z) = \sum\_{j=1}^{J} P\_j z^{\Sigma\_j} \tag{7}$$


where the variable *∑* has *J* possible values and *Pj* is the probability that *∑* is equal to *∑<sup>j</sup>* .

The probabilistic characteristics of the random variable *∑* can be found using the function *u(z)*. In particular, if the discrete random variable *∑* is the MSS stationary output performance, the availability *E* is given by the probability *Probab*(*∑ ≥ D)* which can be defined as follows:

$$\text{Probab}(\Sigma \ge D) = \Psi\left(\mu(z)z^{-D}\right) \tag{8}$$

where *Ψ* is a distributive operator defined by expressions (9) and (10):

$$\Psi\left(P_j z^{\Sigma_j - D}\right) = \begin{cases} P_j, & \text{if } \Sigma_j \ge D \\ 0, & \text{if } \Sigma_j < D \end{cases} \tag{9}$$

$$\Psi\left(\sum_{j=1}^{J} P_j z^{\Sigma_j - D}\right) = \sum_{j=1}^{J} \Psi\left(P_j z^{\Sigma_j - D}\right) \tag{10}$$

It can be easily shown that equations (7)–(10) satisfy the condition *Probab*(*∑* ≥ *D*) = ∑*Σj* ≥ *D* *Pj*. By using the operator *Ψ*, the coefficients of the polynomial *u(z)* are summed for every term with *∑j* ≥ *D*, and the probability that *∑* is not less than some arbitrary value *D* is systematically obtained.
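As a minimal sketch (our own representation, not the chapter's notation), a *u*-function can be stored as a mapping from performance values *∑j* to probabilities *Pj*; the operator *Ψ* of equations (8)–(10) then reduces to summing the coefficients of the terms that meet the demand:

```python
# A u-function u(z) = sum_j P_j z^{Sigma_j}, stored as {performance: probability}.

def psi(u, demand):
    """Psi(u(z) z^{-D}): keep the coefficient of every term with Sigma_j >= D,
    i.e. return Probab(Sigma >= D)."""
    return sum(p for perf, p in u.items() if perf >= demand)
```

For instance, with `u = {0: 0.1, 50: 0.3, 100: 0.6}` and demand `D = 50`, `psi` returns 0.3 + 0.6 = 0.9.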

Consider single elements with total failures and each element *i* has nominal performance *∑<sup>i</sup>* and availability *Ai* . Then, *Probab(∑* = *∑<sup>i</sup> )* = *Ai* and *Probab(∑* = *0)* = 1− *Ai* . The UMGF of such an element has only two terms and can be defined as:

$$\mu\_{i} = (1 - A\_{i})z^{0} + A\_{i}z^{\Sigma\_{i}} = (1 - A\_{i}) + A\_{i}z^{\Sigma\_{i}} \tag{11}$$

To evaluate the MSS availability of a series-parallel system, two basic composition operators are introduced. These operators determine the polynomial *u(z)* for a group of elements.

#### **3.2. Composition operators**


#### *3.2.1. Properties of the operators*

The essential property of the UMGF is that it allows the total UMGF for a system of elements connected in parallel or in series to be obtained using simple algebraic operations on the individual UMGF of elements. These operations may be defined according to the physical nature of the elements and their interactions. The only limitation on such an arbitrary operation is that its operator *ϕ* should satisfy the following Ushakov's conditions (Ushakov, 1986):

$$\begin{aligned}
&\varphi(p_1 z^{g_1},\; p_2 z^{g_2}) = p_1 p_2 z^{\varphi(g_1, g_2)},\\
&\varphi(g) = g,\\
&\varphi(g_1, \dots, g_n) = \varphi\bigl(\varphi(g_1, \dots, g_k),\; \varphi(g_{k+1}, \dots, g_n)\bigr),\\
&\varphi(g_1, \dots, g_k, g_{k+1}, \dots, g_n) = \varphi(g_1, \dots, g_{k+1}, g_k, \dots, g_n) \quad \text{for any } k.
\end{aligned}$$

#### *3.2.2. Parallel elements*

Let us consider a system component *m* containing *Jm* elements connected in parallel. As the performance measure is related to the system productivity, the total performance of the parallel system is the *sum* of the performances of all its elements. In power systems engineering, the term capacity is usually used to indicate the quantitative performance measure of an element (Lisnianski, Levitin, Ben-Haim and Elmakis, 1996). It may have a different physical nature in each application. Examples of element capacities are: generating capacity for a generator, pipe capacity for a water circulator, carrying capacity for an electric transmission line, etc. The capacity of an element can be measured as a percentage of the nominal total system capacity. In a manufacturing system, the elements are machines, and the total performance of the parallel machines is the sum of their performances (Dallery and Gershwin, 1992).

The *u*-function of MSS component *m* containing *Jm* parallel elements can be calculated by using the *Γ* operator:

$$u_p(z) = \Gamma(u_1(z),\, u_2(z),\, \dots,\, u_{J_m}(z)), \quad \text{where } \Gamma(g_1, g_2, \dots, g_{J_m}) = \sum_{i=1}^{J_m} g_i.$$

Therefore for a pair of elements connected in parallel:

$$\Gamma(\mu\_1(z), \ \mu\_2(z)) = \Gamma(\sum\_{i=1}^n P\_i z^{a\_i}, \sum\_{j=1}^m Q\_j z^{b\_j}) = \sum\_{i=1}^n \sum\_{j=1}^m P\_i Q\_j z^{a\_i + b\_j}.$$

Parameters *ai* and *bj* are physically interpreted as the respective performances of the two elements. *n* and *m* are numbers of possible performance levels for these elements. *Pi* and *Qj* are steady-state probabilities of possible performance levels for elements.


One can see that the *Γ* operator is simply a product of the individual *u*-functions. Thus, the component UMGF is:

$$u_p(z) = \prod_{j=1}^{J_m} u_j(z).$$

Given the individual UMGF of elements defined in equation (11), we have:

$$u_p(z) = \prod_{j=1}^{J_m} \left(1 - A_j + A_j z^{\Sigma_j}\right).$$

#### *3.2.3. Series elements*

When the elements are connected in series, the element with the least performance becomes the bottleneck of the system. This element therefore defines the total system productivity. To calculate the *u*-function for a system containing *n* elements connected in series, the operator *η* should be used: *us*(*z*) = *η*(*u*1(*z*), *u*2(*z*), ..., *un*(*z*)), where *η*(*g*1, *g*2, ..., *gn*) = min{*g*1, *g*2, ..., *gn*},

so that

$$\eta(\mu\_1(z), \ \mu\_2(z)) = \eta\left(\sum\_{i=1}^n P\_i z^{a\_i}, \ \sum\_{j=1}^m Q\_j z^{b\_j}\right) = \sum\_{i=1}^n \sum\_{j=1}^m P\_i Q\_j z^{\min\{a\_i, b\_j\}}$$

Applying composition operators *Γ* and *η* consecutively, one can obtain the UMGF of the entire series-parallel system.
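The whole evaluation procedure of this section can be sketched in a few lines. The dictionary representation and the function names below are our own illustrative choices: `gamma` implements the parallel operator *Γ* (capacities add), `eta` the series operator *η* (the minimum performance is the bottleneck), and a final *Ψ*-style summation yields *Probab*(*∑* ≥ *D*).

```python
from functools import reduce

def element(a, perf):
    """Two-state element UMGF, equation (11): (1 - A) z^0 + A z^Sigma."""
    return {0: 1.0 - a, perf: a}

def compose(u1, u2, op):
    """Combine two u-functions; op merges a pair of performance values."""
    out = {}
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            g = op(g1, g2)
            out[g] = out.get(g, 0.0) + p1 * p2
    return out

def gamma(*us):
    """Parallel composition (Gamma operator): capacities are summed."""
    return reduce(lambda a, b: compose(a, b, lambda x, y: x + y), us)

def eta(*us):
    """Series composition (eta operator): the least performance is kept."""
    return reduce(lambda a, b: compose(a, b, min), us)

def availability(u, demand):
    """Probab(Sigma >= D), as in equations (8)-(10)."""
    return sum(p for g, p in u.items() if g >= demand)
```

For two identical parallel elements with *A* = 0.9 and capacity 50, in series with one element with *A* = 0.8 and capacity 100, the stationary availability at demand *D* = 50 comes out as 0.792.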

#### **4. The ant colony optimization approach**

The problem formulated in this chapter is a complicated combinatorial optimization problem. The total number of different solutions to be examined is very large, even for rather small problems. An exhaustive examination of the enormous number of possible solutions is not feasible given reasonable time limitations. Thus, because of the size of the search space of the redundancy optimization problem (ROP) for MSS, a new meta-heuristic is developed in this section. This meta-heuristic consists of an adaptation of the ant colony optimization method.

#### **4.1. The ACO principle**

Recently, (Dorigo, Maniezzo and Colorni, 1996) introduced a new approach to optimization problems derived from the study of ant colonies, called the "Ant System". Their system is inspired by the behavior of real ant colonies, which exhibit highly structured behavior. Ants lay down some quantity of an aromatic substance, known as pheromone, on their way to food. An ant chooses a specific path in correlation with the intensity of the pheromone. The pheromone trail evaporates over time if no more pheromone is laid down by other ants; therefore, the best paths have more intensive pheromone and a higher probability of being chosen. This simple behavior explains why ants are able to adjust to changes in the environment, such as new obstacles interrupting the currently shortest path.

Artificial ants used in the ant system are agents with very simple basic capabilities that mimic the behavior of real ants to some extent. This approach provides algorithms called ant algorithms. The Ant System approach associates pheromone trails with features of the solutions of a combinatorial problem, which can be seen as a kind of adaptive memory of the previous solutions. Solutions are iteratively constructed in a randomized heuristic fashion biased by the pheromone trails left by the previous ants. The pheromone trails, *τij*, are updated after the construction of a solution, enforcing that the best features will have more intensive pheromone. An ant algorithm presents the following characteristics. It is a natural algorithm, since it is based on the behavior of ants in establishing paths from their colony to feeding sources and back. It is parallel and distributed, since it concerns a population of agents moving simultaneously, independently and without a supervisor. It is cooperative, since each agent chooses a path on the basis of the information (pheromone trails) laid down by the other agents which have previously selected the same path. It is versatile, in that it can be applied to similar versions of the same problem. It is robust, in that it can be applied with minimal changes to other combinatorial optimization problems. The solution of the travelling salesman problem (TSP) was one of the first applications of ACO.

Various extensions to the basic TSP algorithm were proposed, notably by (Dorigo and Gambardella, 1997a). The improvements include three main aspects: the state transition rule provides a direct way to balance exploration of new edges against exploitation of a priori and accumulated knowledge about the problem; the global updating rule is applied only to edges which belong to the best ant tour; and while ants construct a solution, a local pheromone updating rule is applied. These extensions have been included in the algorithm proposed in this chapter.

#### **4.2. ACO-based solution approach**


In our reliability optimization problem, we have to select the best combination of parts to minimize the total cost under a reliability constraint. The parts can be chosen in any combination from the available components. Components are characterized by their reliability, capacity and cost. This problem can be represented by a graph (figure 2) in which the set of nodes comprises the set of subsystems and the set of available components (i.e., max(*Mj*), *j* = 1..*n*), and a set of connections partially connects the graph (i.e., each subsystem is connected only to its available components). An additional node (blank node) is connected to each subsystem.

In figure 2, a series-parallel system is illustrated where the first and the second subsystem are connected respectively to their 3 and 2 available components. The nodes *cpi3* and *cpi4* represent the blank components of the two subsystems.


**Figure 2.** Definition of a series-parallel system with three subsystems as a graph
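The construction graph of figure 2 can be encoded directly. The component costs below are made-up values for illustration, and the blank node is treated as one more choice with its own tuning cost, following the heuristic *ηij* = 1/(1 + *cij*) described below:

```python
# Hypothetical instance of the construction graph: for each subsystem i,
# ac[i] maps the candidate components to an illustrative cost c_ij.
# The last entry of each subsystem is its blank component with a tuning cost.

ac = {
    1: {"cp11": 4.0, "cp12": 7.0, "cp13": 2.5, "blank1": 1.0},
    2: {"cp21": 5.0, "cp22": 9.0, "blank2": 1.0},
}

def heuristic(i, j):
    """eta_ij = 1 / (1 + c_ij): cheaper components look more attractive."""
    return 1.0 / (1.0 + ac[i][j])
```

With these made-up costs, `heuristic(1, "cp11")` is 1/(1 + 4) = 0.2, and the cheapest real component of subsystem 1 (`cp13`) receives the largest heuristic value.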


At each step of the construction process, an ant uses problem-specific heuristic information, denoted by *ηij*, to choose the optimal number of components in each subsystem. Imaginary heuristic information is associated with each blank node. These new factors allow us to limit the search space (i.e., tuning factors). An ant positioned on subsystem *i* chooses a component *j* by applying the rule given by:


$$j = \begin{cases} \arg\max_{m \in AC_i} \left\{ [\tau_{im}]^{\alpha} [\eta_{im}]^{\beta} \right\} & \text{if } q \le q_o \\ J & \text{if } q > q_o \end{cases} \tag{12}$$



and *J* is chosen according to the probability:

$$p_{ij} = \begin{cases} \dfrac{[\tau_{ij}]^{\alpha} [\eta_{ij}]^{\beta}}{\sum_{m \in AC_i} [\tau_{im}]^{\alpha} [\eta_{im}]^{\beta}} & \text{if } j \in AC_i \\ 0 & \text{otherwise} \end{cases} \tag{13}$$

*ACi*: the set of available component choices for subsystem *i*.

*q*: a random number uniformly generated between 0 and 1.

The heuristic information used is *ηij* = 1/(1+*cij*), where *cij* represents the associated cost of component *j* for subsystem *i*. A "tuning" factor *ti* = 1/(1+*ci*(*Mi*+1)) is associated with the blank component (*Mi*+1) of subsystem *i*. The parameter *qo* determines the relative importance of exploitation versus exploration: every time an ant in subsystem *i* has to choose a component *j*, it samples a random number 0 ≤ *q* ≤ 1. If *q* ≤ *qo*, the best edge is chosen (exploitation); otherwise, an edge is chosen according to (13) (biased exploration).

The pheromone update consists of two phases: local and global updating. While building a solution of the problem, ants choose components and change the pheromone level on subsystem-component edges. This local trail update is introduced to avoid premature convergence; it effects a temporary reduction in the quantity of pheromone for a given subsystem-component edge, so as to discourage the next ant from choosing the same component during the same cycle. The local updating is given by:

$$\tau_{ij}^{new} = (1 - \rho)\tau_{ij}^{old} + \rho\tau_o \tag{14}$$

where *ρ* is a coefficient such that (1−*ρ*) represents the evaporation of the trail and *τo* is an initial value of the trail intensity. It is initialized to the value (*n.TCnn*)⁻¹, where *n* is the size of the problem (i.e., the number of subsystems plus the total number of available components) and *TCnn* is the result of a solution obtained through some simple heuristic.

After all ants have constructed a complete system, the pheromone trail is then updated at the end of a cycle (i.e., global updating), but only for the best solution found. This choice, together with the use of the pseudo-random-proportional rule, is intended to make the search more directed: ants search in a neighbourhood of the best solution found up to the current iteration of the algorithm. The pheromone level is updated by applying the following global updating rule:

$$\tau_{ij}^{new} = (1 - \rho)\tau_{ij}^{old} + \rho\Delta\tau_{ij}, \quad \Delta\tau_{ij} = \begin{cases} \dfrac{1}{TC_{best}} & \text{if } (i, j) \in \text{best tour} \\ 0 & \text{otherwise} \end{cases} \tag{15}$$

#### **4.3. The algorithm**

An ant-cycle algorithm is stated as follows. At time zero, an initialization phase takes place during which *NbAnt* ants select components in each subsystem according to the pseudo-random-proportional transition rule. When an ant selects a component, a local update is made to the trail for that subsystem-component edge according to equation (14). In this equation, *ρ* is a parameter that determines the rate of reduction of the pheromone level. The pheromone reduction is small but sufficient to lower the attractiveness of the preceding subsystem-component edge. At the end of a cycle, for each ant *k*, the value of the system's reliability *Rk* and the total cost *TCk* are computed. The best feasible solution found by the ants (i.e., total cost and assignments) is saved. The pheromone trail is then updated for the best solution obtained according to (15). This process is iterated until the tour counter reaches the maximum number of cycles *NCmax* or all ants make the same tour (stagnation behavior).

The following is a formal description of the algorithm.

1. Set NC := 0 (NC: cycle counter). For every edge (*i*, *j*), set an initial value *τij*(0) = *τo*.
2. For k = 1 to NbAnt do

the blank components of the two subsystems. At each step of the construction process, an ant uses problem-specific heuristic information, denoted by *ηij* to choose the optimal number of components in each subsystem. An imaginary heuristic information is associated to each blank node. These new factors allow us to limit the search surfaces (i.e. tuning factors). An ant

Ch <sup>21</sup> Ch22

Sub2

Figure 2. Definition of a series-parallel system with tree subsystems into a graph

Ch1k Chn1 Ch n2

Ch2k

been included in the algorithm proposed in this paper.

**4.2. ACO-based solution approach** 

*im im <sup>o</sup> m AC*

 b *o*

*o*

construction process, an ant uses problem-specific heuristic information, denoted by

(12)

Subn

Chnk

*if j AC*

*i*

å (13)

*= ηij =* 1/(1*+ci*(*Mi+*1)) is associated to blank

The problem formulated in this chapter is a complicated combinatorial optimization problem. The total number of different solutions to be examined is very large, even for rather small problems. An exhaustive examination of the enormous number of possible solutions is not feasible given reasonable time limitations. Thus, because of the search space size of the ROP for MSS, a new meta-heuristic is developed in this section. This meta-heuristic consists in an adaptation of the ant colony optimization

Recently, (Dorigo, Maniezzo and Colorni, 1996) introduced a new approach to optimization problems derived from the study of any colonies, called "Ant System". Their system inspired by the work of real ant colonies that exhibit the highly structured behavior. Ants lay down in some quantity an aromatic substance, known as pheromone, in their way to food. An ant chooses a specific path in correlation with the intensity of the pheromone. The pheromone trail evaporates over time if no more pheromone in laid down by others ants, therefore the best paths have more intensive pheromone and higher probability to be chosen. This simple behavior explains why ants are able to adjust to changes in the environment, such as new obstacles interrupting the

Artificial ants used in ant system are agents with very simple basic capabilities mimic the behavior of real ants to some extent. This approach provides algorithms called ant algorithms. The Ant System approach associates pheromone trails to features of the solutions of a combinatorial problem, which can be seen as a kind of adaptive memory of the previous solutions. Solutions are iteratively constructed in a randomized heuristic fashion biased by the pheromone trails, left by the previous ants. The pheromone

Various extensions to the basic TSP algorithm were proposed, notably by (Dorigo and Gambardella, 1997a). The improvements include three main aspects: the state transition rule provides a direct way to balance between exploration of new edges and exploitation of a priori and accumulated knowledge about the problem, the global updating rule is applied only to edges which belong to the best ant tour and while ants construct solution, a local pheromone updating rule is applied. These extensions have

In our reliability optimization problem, we have to select the best combination of parts to minimize the total cost given a reliability constraint. The parts can be chosen in any combination from the available components. Components are characterized by their reliability, capacity and cost. This problem can be represented by a graph (figure 2) in which the set of nodes comprises the set of subsystems and the set of available components (i.e. max (*Mj*), *j* = 1..*n*) with a set of connections partially connect the graph (i.e. each subsystem is connected only to its available components). An additional node (blank node) is connected to each subsystem.

optimization problems. The solution of the travelling salesman problem (TSP) was one of the first applications of ACO.

*ij* , are updated after the construction of a solution, enforcing that the best features will have a more intensive pheromone. An Ant algorithm presents the following characteristics. It is a natural algorithm since it is based on the behavior of ants in establishing paths from their colony to feeding sources and back. It is parallel and distributed since it concerns a population of agents moving simultaneously, independently and without supervisor. It is cooperative since each agent chooses a path on the basis of the information, pheromone trails, laid by the other agents with have previously selected the same path. It is versatile that can be applied to similar versions the same problem. It is robust that it can be applied with minimal changes to other combinatorial

(12)

positioned on subsystem *i* chooses a component *j* by applying the rule given by:

*if q q <sup>j</sup> J if q q*

<sup>ì</sup> £ <sup>ï</sup> <sup>=</sup> <sup>í</sup> ï > î

*if q q <sup>j</sup> J if q q*

 

a

 h

 

*im im <sup>o</sup> m AC*

arg max ([ ] [ ] ) *i*

arg max ([ ] [ ] ) *i*

Î

method.

trails, 

**4.1. The ACO principle** 

currently shortest path.

0

ï î

*β* : The relative importance of the heuristic information *ηij*.

*q*: Random number uniformly generated between 0 and 1.

component *j* for subsystem *i*. A "tuning" factor *ti*

: The set of available components choices for subsystem *i*.

otherwise an edge is chosen according to (12) (biased exploration).

Î

*i*

*ij im im m AC*

t

= í é ùé ù ë ûë û <sup>ï</sup>

t

and *J* is chosen according to the probability:

by:

Ch <sup>11</sup> Ch12

**Figure 2.** Definition of a series-parallel system with tree subsystems into a graph

Sub1

250 Search Algorithms for Engineering Optimization

*p*

*α* : The relative importance of the trail.

*ACi*

component (*Mi*

t

*ij ij*

 h

a

*otherwise*

 h

a

 b

<sup>ì</sup> é ùé ù <sup>ï</sup> ë ûë û <sup>ï</sup> <sup>Î</sup>

 b

The heuristic information used is : *ηij =* 1*/*(1*+cij*) where *cij* represents the associated cost of

exploitation versus exploration: every time an ant in subsystem *i* have to choose a component *j*, it samples a random number 0≤*q*≤1. If *q*≤*qo* then the best edge, is chosen (exploitation),

The pheromone update consists of two phases: local and global updating. While building a solution of the problem, ants choose components and change the pheromone level on subsys‐ tem-component edges. This local trail update is introduced to avoid premature convergence

*+*1) of subsystem *i*. The parameter *qo* determines the relative importance of

An ant-cycle algorithm is stated as follows. At time zero, an initialization phase takes place during which *NbAnt* ants select components in each subsystem according to the pseudo-random-proportional transition rule. When an ant selects a component, a local update is made to the trail of that subsystem-component edge according to equation (14). In this equation, *ρ* is a parameter that determines the rate of reduction of the pheromone level. The pheromone reduction is small but sufficient to lower the attractiveness of the preceding subsystem-component edge. At the end of a cycle, the value of the system reliability *Rk* and the total cost *TCk* are computed for each ant *k*. The best feasible solution found by the ants (i.e. total cost and assignments) is saved. The pheromone trail is then updated for the best solution obtained according to (15). This process is iterated until the cycle counter reaches the maximum number of cycles *NCmax* or all ants make the same tour (stagnation behavior).
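As a concrete illustration of the selection step inside this cycle, the pseudo-random-proportional rule and the local pheromone update can be sketched as follows. This is a minimal sketch: the parameter values, the toy cost list and the function names are assumptions for illustration, not the chapter's implementation.

```python
import random

# Sketch of the pseudo-random-proportional transition rule and the local
# pheromone update. Parameter values and the toy cost data are assumptions.
ALPHA, BETA = 1.0, 2.0   # relative importance of trail (alpha) and heuristic (beta)
Q0 = 0.9                 # exploitation threshold q_o
RHO = 0.1                # evaporation coefficient rho
TAU0 = 0.05              # initial trail intensity tau_o

def choose_component(tau_i, eta_i):
    """Pick a component index j for subsystem i, given the pheromone levels
    tau_i and heuristic values eta_i of its candidate components."""
    scores = [(t ** ALPHA) * (e ** BETA) for t, e in zip(tau_i, eta_i)]
    if random.random() <= Q0:
        # exploitation: take the best edge
        return max(range(len(scores)), key=scores.__getitem__)
    # biased exploration: sample J with probability proportional to the scores
    r, acc = random.random() * sum(scores), 0.0
    for j, s in enumerate(scores):
        acc += s
        if acc >= r:
            return j
    return len(scores) - 1

def local_update(tau_i, j):
    """Local pheromone update: pull the chosen edge's trail toward tau_o."""
    tau_i[j] = (1 - RHO) * tau_i[j] + RHO * TAU0

# toy subsystem with three candidate components of costs 2, 3 and 5
costs = [2.0, 3.0, 5.0]
eta = [1.0 / (1.0 + c) for c in costs]   # heuristic eta_ij = 1/(1 + c_ij)
tau = [0.1, 0.1, 0.1]                    # current pheromone levels
j = choose_component(tau, eta)
local_update(tau, j)
```

Note that cheaper components receive a larger heuristic value, so under exploitation the ant is drawn to the least expensive feasible choice unless the pheromone trail says otherwise.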

The following is a formal description of the algorithm.

1. Set NC:=0 (NC: cycle counter).

   For every edge (i,j) set an initial value τij(0) = τo.

2. For k=1 to NbAnt do

   For i=1 to NbSubSystem do

   For j=1 to MaxComponents do

   Choose a component, including blanks, according to (12) and (13).

   Local update of the pheromone trail for the chosen subsystem-component edge (i,j): τij^new = (1 − ρ)·τij^old + ρ·τo

   End For

   End For

   End For

3. Calculate Rk (the system reliability) for each ant.

   Calculate the total cost TCk for each ant.

   Update the best found feasible solution.

4. Global update of the pheromone trail:

   For each edge (i,j) ∈ best feasible solution, update the pheromone trail according to: τij^new = (1 − ρ)·τij^old + ρ·Δτij, with Δτij = 1/TCbest if (i,j) ∈ best tour and 0 otherwise.

5. NC := NC + 1.

6. If (NC < NCmax) and (not stagnation behavior)

   Then Goto step 2

   Else Print the best feasible solution and components selection. Stop.
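The steps above can be exercised end-to-end on a small invented instance. The sketch below is a minimal, self-contained Python rendering of the ant-cycle algorithm; the component data, parameter values, blank-node cost and the target reliability R0 are assumptions for illustration, not the power-station data of the next section.

```python
import random

random.seed(1)
ALPHA, BETA, Q0, RHO, TAU0 = 1.0, 2.0, 0.9, 0.1, 0.05
NB_ANT, NC_MAX, MAX_COMP, R0 = 8, 40, 4, 0.95

# Two subsystems in series; each candidate component is (reliability, cost).
COMP = [[(0.70, 2.0), (0.80, 3.0), (0.90, 5.0)],
        [(0.75, 2.5), (0.85, 4.0)]]

def heuristic(i, j):
    if j == len(COMP[i]):                 # blank node: assumed tuning cost of 4.0
        return 1.0 / (1.0 + 4.0)
    return 1.0 / (1.0 + COMP[i][j][1])    # eta_ij = 1/(1 + c_ij)

tau = [[TAU0] * (len(c) + 1) for c in COMP]   # last entry per row is the blank

def pick(i):
    """Pseudo-random-proportional choice of a component (or blank) for subsystem i."""
    scores = [(tau[i][j] ** ALPHA) * (heuristic(i, j) ** BETA)
              for j in range(len(tau[i]))]
    if random.random() <= Q0:
        return max(range(len(scores)), key=scores.__getitem__)
    r, acc = random.random() * sum(scores), 0.0
    for j, s in enumerate(scores):
        acc += s
        if acc >= r:
            return j
    return len(scores) - 1

def build_solution():
    sol = []
    for i in range(len(COMP)):
        picks = []
        for _ in range(MAX_COMP):
            j = pick(i)
            tau[i][j] = (1 - RHO) * tau[i][j] + RHO * TAU0   # local update
            if j == len(COMP[i]):         # blank chosen: stop filling subsystem i
                break
            picks.append(j)
        sol.append(picks or [0])          # simplification: never leave a subsystem empty
    return sol

def evaluate(sol):
    """Series-parallel reliability and total cost of a candidate structure."""
    rel, cost = 1.0, 0.0
    for i, picks in enumerate(sol):
        fail = 1.0
        for j in picks:
            r, c = COMP[i][j]
            fail *= (1.0 - r)
            cost += c
        rel *= (1.0 - fail)
    return rel, cost

best_sol, best_cost = None, float("inf")
for nc in range(NC_MAX):
    for _ in range(NB_ANT):
        sol = build_solution()
        rel, cost = evaluate(sol)
        if rel >= R0 and cost < best_cost:    # keep the cheapest feasible structure
            best_sol, best_cost = sol, cost
    if best_sol is not None:                  # global update on the best solution only
        for i, picks in enumerate(best_sol):
            for j in set(picks):
                tau[i][j] = (1 - RHO) * tau[i][j] + RHO / best_cost

print("best structure:", best_sol, "cost:", best_cost)
```

On this toy instance the algorithm settles on a feasible redundancy pattern within a few cycles; the blank node is what lets each ant decide how many parallel components a subsystem receives.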

## **5. Illustrative example**

#### **Description of the system to be optimized**

The power station coal transportation system which supplies the boilers is designed with five basic components, as depicted in figure 3.

The process of coal transportation is as follows: the coal is loaded from the bin onto the primary conveyor (Conveyor 1) by the primary feeder (Feeder 1). The coal is then transported along Conveyor 1 to the stacker-reclaimer, where it is lifted up to the burner level. The secondary feeder (Feeder 2) loads the secondary conveyor (Conveyor 2), which supplies the burner feeding system of the boiler. Each element of the system is considered as a unit with total failures.

**Figure 3.** Synoptic of the detailed power station coal transportation

Acronyms:

**Comp #**: System component number.

**Vers #**: System version number.

**Table 1.** Characteristics of available system components on the market
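Since the evaluation of such multi-state structures relies on the universal moment generating function (UMGF), the following sketch shows how u-functions can be composed for a small series-parallel flow system. The availabilities and capacities used here are hypothetical, chosen only to illustrate the composition operators; they are not the values of Table 1.

```python
from functools import reduce

# A u-function is represented as a dict {capacity: probability}. A binary
# component with availability A and nominal capacity g contributes
# A*z^g + (1-A)*z^0.
def u_component(avail, cap):
    return {cap: avail, 0.0: 1.0 - avail}

def compose(u1, u2, op):
    """Combine two u-functions, merging capacities with the operator op."""
    out = {}
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            g = op(g1, g2)
            out[g] = out.get(g, 0.0) + p1 * p2
    return out

def parallel(units):   # parallel components: capacities add
    return reduce(lambda a, b: compose(a, b, lambda x, y: x + y), units)

def series(units):     # series subsystems: flow limited by the weakest one
    return reduce(lambda a, b: compose(a, b, min), units)

def availability(u, demand):
    """Probability that the delivered capacity meets the demand."""
    return sum(p for g, p in u.items() if g >= demand)

# hypothetical example: two parallel 60%-capacity units feeding one 120% unit
sub1 = parallel([u_component(0.98, 60.0), u_component(0.97, 60.0)])
sub2 = parallel([u_component(0.99, 120.0)])
system = series([sub1, sub2])
A = availability(system, 100.0)   # availability w.r.t. a 100% demand level
```

In the chapter's setting, an ant's candidate structure would be evaluated this way against each demand level of the cumulative demand curve, and the weighted result compared to the constraint A ≥ A0.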


The system has to satisfy a variable demand; the parameters of the cumulative demand curve are given in table 2.

| **Demand level (%)** | 100 | 80 | 50 | 20 |
|---|---|---|---|---|
| **Duration (h)** | 4203 | 788 | 1228 | 2536 |
| **Probability** | 0.479 | 0.089 | 0.140 | 0.289 |

**Table 2.** Parameters of the cumulative demand curve

For this type of problem, we define the minimal cost system configuration which provides the desired reliability level A ≥ A0, where A0, given in (Levitin et al., 1997), is taken as the reference.

Optimal availabilities obtained by the ant algorithm were compared to the availabilities given by the genetic algorithm (denoted by A0 in table 3) in the reference (Levitin et al., 1997), and to those obtained by harmony search (denoted by A01 in table 3) given in (Rami et al., 2009).

| **A0** | **A01** | **Optimal Structure** | **Availability A** | **Computed Cost C** |
|---|---|---|---|---|
| 0.975 | 0.9760 | **1:** Components 3-6-5-7; **2:** Components 2-3-4-4; **3:** Components 1-4; **4:** Components 2-5-7-8; **5:** Components 3-3-4 | 0.9773 | 13.4440 |
| 0.980 | 0.9826 | **1:** Components 2-2; **2:** Components 3-3-5; **3:** Components 2-3-3; **4:** Components 5-6-7; **5:** Components 3-3-4 | 0.9812 | 14.9180 |
| 0.990 | 0.9931 | **1:** Components 2-1; **2:** Components 3-3; **3:** Components 2-2-3; **4:** Components 5-5-6; **5:** Components 2-2 | 0.9936 | **16.2870** |

**Table 3.** Optimal Solution Obtained by the Ant Algorithm

We clearly remark the improvement of the reliability of the system at equal price compared to the two other methods. We gave more importance to the reliability of the system than to its cost, which justifies the increase in the cost compared to the reference. The compromise between cost and reliability was treated successfully in this work.

ACO achieved better quality results in terms of structure cost and reliability at the different reliability levels (figure 4). We remark that, in all cases, GA performed better by achieving a less expensive configuration; however, the ACO algorithm achieved a near-optimal configuration with a slightly higher reliability level (table 4).

**Figure 4.** Cost-availability rate of GA and ACO algorithm versus availability

| **A0** | **% of AGA** | **% of AACO** | **% of C/A** |
|---|---|---|---|
| 0.975 | 0.1 | 0.23 | 58.5 |
| 0.980 | 0.0 | 0.12 | 13.3 |
| 0.990 | 0.2 | 0.36 | 39.4 |

**Table 4.** Comparison of Optimal Solutions Obtained by ACO and Genetic Algorithms for Different Availability Requirements

From figure 4 and the table, one can observe the following. Taking, for example, the reference reliability level (A0 = 0.975, table 4), GA shows an increase of 0.1 percent compared to the 0.23 percent given by ACO, for a difference in the cost-reliability rate of 58.5%. It is noticed, according to figure 4, that ACO tends, at equal price, to increase the reliability of the system.

## **6. Conclusion**

The objective is to select the optimal combination of elements used in the series-parallel structure of a power system. This has to correspond to the minimal total cost with regard to the selected level of system availability. The ACO allows each subsystem to contain elements with different technologies. The ACO algorithm proved very efficient in solving the ROP, and better quality results in terms of structure costs and reliability levels have been achieved compared to GA (Levitin et al., 1997).

A new algorithm for choosing an optimal series-parallel power structure configuration is proposed which minimizes total investment cost subject to availability constraints. This algorithm seeks and selects devices among a list of available products according to their availability, nominal capacity (performance) and cost. It also defines the number and the kind of parallel machines in each sub-system. The proposed method provides a practical way to solve wide instances of the structure optimization problem of multi-state power systems without limitation on the diversity of the versions of machines put in parallel. The combination used in this algorithm is based on the universal moment generating function and an ant colony optimization algorithm.

[7] Dallery and Gershwin(1992). Manufacturing Flow Line Systems: A Review of Models and Analytical Results. *Queueing Systems theory and Applications*, Special Issue on

Optimal Allocation of Reliability in Series Parallel Production System

http://dx.doi.org/10.5772/55725

257

[8] Den BestoStützle and Dorigo, ((2000). Ant Colony Optimization fort he Total Weight‐ ed tardiness Problem. Proceeding of the 6th International Conference on parallel

[9] Di Caro and Dorigo(1998). Mobile Agents for Adaptive Routing. *Proceedings for the 31st Hawaii International Conference On System Sciences*, Big Island of Hawaii, , 74-83.

[10] DorigoManiezzo and Colorni, ((1996). The Ant System: Optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man and Cybernetics- Part B, ,

[11] Dorigo and Gambardella(1997a). Ant Colony System: A Cooperative Learning Ap‐ proach to the Traveling Salesman Problem", IEEE Transactions on Evolutionary com‐

[12] Dorigo and Gambardella(1997b). Dorigo, M. and L. M. Gambardella. Ant Colonies

[13] Kuo and Prasad(2000). An Annotated Overview of System-reliability Optimization.

[14] Levitin and Lisnianski(2001). A new approach to solving problems of multi-state sys‐ tem reliability optimization. *Quality and Reliability Engineering International*, , 47(2),

[15] LevitinLisnianski, Ben-Haim and Elmakis, ((1997). Structure optimization of power system with different redundant elements. *Electric Power Systems Research*, , 43(1),

[16] (LevitinLisnianski, Ben-Haim and Elmakis, ((1998). Redundancy optimization for series-parallel multi-state systems. *IEEE Transactions on Reliability* , 47(2), 165-172. [17] Liang and Smith(2001). An Ant Colony Approach to Redundancy Allocation. Sub‐

[18] LisnianskiLevitin, Ben-Haim and Elmakis, ((1996). Power system structure optimiza‐ tion subject to reliability constraints. *Electric Power Systems Research*, , 39(2), 145-152.

[19] Maniezzo and Colorni(1999). The Ant System Applied to the Quadratic Assignment

[20] Murchland(1975). Fundamental concepts and relations for reliability analysis of mul‐ ti-state systems. *Reliability and Fault Tree Analysis*, ed. R. Barlow, J. Fussell, N. Sing‐

[21] Nahas, N, & Nourelfath, M. Aït-Kadi Daoud, Efficiently solving the redundancy allo‐ cation problem by using ant colony optimization and the extended great deluge algo‐

Problem. IEEE Transactions on Knowledge and data Engineering, , 11(5)

Queueing Models of Manufacturing Systems, , 12(1-2), 3-94.

for the Travelling Salesman Problem. Bio Systems, , 43

IEEE Transactions on Reliability, , 49(2)

mitted to *IEEE Transactions on Reliability*.

purwalla. SIAM, Philadelphia.

26(1), 1-13.

93-104.

19-27.

putation, , 1(1), 53-66.

problem Solving from nature (PPSNVI), LNCS 1917, Berlin, , 611-620.

## **Author details**

Rami Abdelkader1 , Zeblah Abdelkader1\*, Rahli Mustapha2 and Massim Yamani1

\*Address all correspondence to: azeblah@yahoo.fr

1 University Of Sidi Bel Abbes, Engineering Faculty, Algeria

2 University Of Oran, Engineering Faculty, Algeria

#### **References**


[7] Dallery and Gershwin(1992). Manufacturing Flow Line Systems: A Review of Models and Analytical Results. *Queueing Systems theory and Applications*, Special Issue on Queueing Models of Manufacturing Systems, , 12(1-2), 3-94.

**6. Conclusion**

256 Search Algorithms for Engineering Optimization

optimization algorithm.

**Author details**

Rami Abdelkader1

**References**

A new algorithm for choosing an optimal series-parallel power structure configuration is proposed which minimizes total investment cost subject to availability constraints. This algorithm seeks and selects devices among a list of available products according to their availability, nominal capacity (performance) and cost. Also defines the number and the kind of parallel machines in each sub-system. The proposed method allows a practical way to solve wide instances of structure optimization problem of multi-state power systems without limitation on the diversity of versions of machines put in parallel. A combination is used in this algorithm is based on the universal moment generating function and an ant colony


**Section 4**

**Grover-Type Quantum Search**




**Chapter 11**

## **Geometry and Dynamics of a Quantum Search Algorithm for an Ordered Tuple of Multi-Qubits**


© 2013 Uwano; licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Yoshio Uwano

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/53187

## **1. Introduction**

## **1.1. Geometry and dynamics viewpoints to quantum search algorithms**

Quantum computation has been one of the hottest interdisciplinary research areas for some decades, standing at the crossing of informatics, physics and mathematics (see [1], which includes an excellent historical overview, and [2–4] as later publications for general references). In the mid-1990s, two great discoveries were made, by Shor [5] in 1994 and by Grover [6] in 1996, that roused enthusiasm for quantum computation. As one of those, Grover found in 1996 a quantum search algorithm for the linear search through unsorted lists [6, 7], whose efficiency exceeds the theoretical bound of the linear search in classical computing: for an unsorted list of $N$ data, the Grover search algorithm needs only $O(\sqrt{N})$ trials to find the target with high probability, while the linear search in classical computing needs $O(N)$ trials. Throughout this chapter, the term *classical computing* means the computation theory based on conventional binary-code operations. The adjective *classical* here is used as an antonym of *quantum*, as in quantum mechanics vs classical mechanics.

Though the classical linear search is not of high complexity, the speedup by Grover's algorithm is exciting due to its wide applicability to other search-based problems: the G-BBHT algorithm, the quantum counting problem, the minimum-value search, the collision problem and the SAT problem, for example [8, 9]. A number of variations and extensions of the Grover algorithm have been made (see [10–13], for example): when the author searched for academic articles with the keywords 'Grover', 'quantum' and 'search' on Google Scholar (accessed 5 September 2012), more than five hundred hits were returned. Many of those can be traced from the preprint archive [14].

Among the many studies concerning Grover's quantum search algorithm, a pioneering geometric study of the algorithm was made by Miyake and Wadati in 2001 [15]: the sequence of quantum states generated by the Grover algorithm in $2^n$ data is shown to lie on a geodesic in the $(2^{n+1}-1)$-dimensional sphere. Further, the reduced search sequence is carried to the complex projective space $\mathbf{C}P^{2^n-1}$ through a geometric reduction and is also shown to lie on a geodesic in $\mathbf{C}P^{2^n-1}$. Roughly speaking, the reduction in [15] is made through the elimination of phase factors from quantum states, so that $\mathbf{C}P^{2^n-1}$ is thought of as the space of rays. Note that the geodesics above are associated with the standard metric on the $(2^{n+1}-1)$-dimensional sphere and with the Fubini-Study metric on $\mathbf{C}P^{2^n-1}$, respectively. The Fubini-Study metric on $\mathbf{C}P^{2^n-1}$ is utilized also in [15] to measure the minimum distance from each state involved in the search sequence to the submanifold consisting of non-entangled states, which characterizes the entanglement of the states along the search.

As expected benefits of geometric and dynamical views on quantum algorithmic studies like [15], the following would be worth listing:

**1** By revealing the underlying geometry of quantum algorithms (not necessarily universal), numerous results in geometry are expected to be applicable to making advances in quantum computation and information.

**2** By looking upon the iterations made in algorithms as (discrete) time evolutions of states, numerous results in dynamical systems are expected to be applicable to making advances in quantum computation and information.

**3** In view of the close connection between geometry and dynamical systems, geometric and dynamical-systems studies on quantum algorithms may provide interesting examples of dynamical systems.

It would be worth noting here that there exists another approach to quantum searches using adiabatic evolution [16–18]. That approach, however, is outside the scope of this chapter, since the search dealt with here is organized on the so-called amplitude-amplification technique [8], which differs from the adiabatic evolution.

## **1.2. Quantum search for an ordered tuple of multi-qubits – a brief history –**

Motivated by the work [15], the author studied in [19] a Grover-type search algorithm for an ordered tuple of multi-qubits together with a geometric reduction other than the one made in [15]. While the search algorithm is organized as a natural extension of Grover's original algorithm, the reduction of the search space made in [19] provides a nontrivial result: on denoting the degree of multi-qubits by $n$ and the number of multi-qubits enclosed in each ordered tuple by $\ell$, the space of $2^n \times \ell$ complex matrices with unit norm, denoted by $M_1(2^n, \ell)$, is taken as the extended space of ordered tuples of multi-qubits (ESOT), which includes the collection of all the ordered tuples, denoted by $M_1^{OT}(2^n, \ell)$. The reduction is applied to the regular part, $\dot{M}_1(2^n, \ell)$, of the ESOT, $M_1(2^n, \ell)$, to give rise to the space, denoted by $\dot{P}_\ell$, of regular density matrices of degree $\ell$, which plays a key role in quantum information theory. Roughly speaking, the reduction applied in [19] is made by the elimination of 'complex rotations' leaving the relative configuration of the multi-qubit states placed in each ordered tuple, so that the reduction is understood to be a very natural geometric projection of $\dot{M}_1(2^n, \ell)$ to the space $\dot{P}_\ell$.
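Although [19] should be consulted for the precise construction, a reduction of this kind can be sketched concretely under the assumption that an ordered tuple is stored as a $2^n \times \ell$ complex matrix $X$ of unit Frobenius norm and projected to the $\ell \times \ell$ product $X^\dagger X$: the product is insensitive to 'complex rotations' $X \mapsto UX$ with $U$ unitary, and is a density matrix of degree $\ell$. The sketch below checks these properties numerically; the map $X \mapsto X^\dagger X$ is our illustrative assumption, not necessarily the exact projection used in [19].

```python
# Hedged sketch: projecting a unit-norm 2**n x l complex matrix X to the
# l x l matrix X^dagger X, which is invariant under X -> U X (U unitary)
# and is a trace-one, positive semi-definite matrix, i.e. a density matrix.
import numpy as np

rng = np.random.default_rng(0)
n, l = 3, 2                                  # 3-qubit states, tuples of length 2
X = rng.normal(size=(2**n, l)) + 1j * rng.normal(size=(2**n, l))
X /= np.linalg.norm(X)                       # unit Frobenius norm

rho = X.conj().T @ X                         # candidate density matrix of degree l

# A random unitary acting on the multi-qubit factor leaves rho unchanged.
U, _ = np.linalg.qr(rng.normal(size=(2**n, 2**n))
                    + 1j * rng.normal(size=(2**n, 2**n)))
rho_rotated = (U @ X).conj().T @ (U @ X)

print(np.isclose(np.trace(rho).real, 1.0))          # trace one
print(np.min(np.linalg.eigvalsh(rho)) >= -1e-12)    # positive semi-definite
print(np.allclose(rho, rho_rotated))                # invariant under X -> U X
```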

A significant result arising from the reduction is that the Riemannian metric on $\dot{P}_\ell$ is shown to be derived 'consistently' from the standard metric on $\dot{M}_1(2^n, \ell)$, and coincides with the SLD-Fisher metric on $\dot{P}_\ell$ up to a constant multiple [19]. Namely, as a Riemannian manifold, $M_1(2^n, \ell)$ is reduced to the space of regular density matrices of degree $\ell$ endowed with the SLD-Fisher metric, so that $\dot{P}_\ell$ is referred to as the quantum information space (QIS). Put another way, the reduction made in [19] reveals a direct nontrivial connection between the ESOT and the QIS: the former is a stage of quantum computation and the latter a stage of quantum information theory.

Due to the account given below, however, geometric studies were not made in [19] either on the search sequence in the ESOT generated by the Grover-type search algorithm or on the reduced search sequence in the QIS. Instead of geometric studies on the search sequences, what is discussed in [19] is the gradient dynamical system associated with the negative von Neumann entropy as the potential, inspired by a series of works by Nakamura [20–22] on the complete integrability of algorithms arising in applied mathematics. The result on the gradient system in [19] drew the author's interest to publishing [23, 24] on gradient systems on the QIS realizing the Karmarkar flow for linear programming and a Hebb-type learning equation for multivariate analysis, while geometric studies on the search sequences were left undone.

## **1.3. Chapter purpose, summary and organization**


The purpose of this chapter is therefore to study the Grover-type search sequence for an ordered tuple of multi-qubits from geometric and dynamical viewpoints, which has been left open since [19]. In particular, the reduced search sequence in the QIS is intensively studied from the viewpoint of quantum information geometry.

As an extension of [15] on the original search sequence, the Grover-type search sequence in the ESOT, $M_1(2^n, \ell)$, is shown to be on a geodesic. As a nontrivial result on the reduced search sequence in the QIS, the sequence is characterized in terms of an important geometric object in quantum information geometry:

**Main Theorem** *Through the reduction of the regular part,* $\dot{M}_1(2^n, \ell)$*, of the extended space of ordered tuples of multi-qubits (ESOT) to the quantum information space (QIS),* $\dot{P}_\ell$*, the reduced search sequence is on a geodesic in the QIS with respect to the m-parallel transport.*

Note that the *m*-parallel transport is the abbreviation of the *mixture* parallel transport [25–27], which is characteristic of the QIS.

To those who are not familiar with differential geometry, an important remark should be made on the term *geodesic* before the outline of the chapter organization. One might hear that the geodesic between a pair of points is the shortest path connecting those points. This is true if geodesics are discussed on a Riemannian manifold endowed with the Levi-Civita (or Riemannian) parallel transport; as a reference accessible to potential readers, the book [28] is worth citing. Geodesics in the ESOT, $M_1(2^n, \ell)$, discussed in this chapter are of this kind. In general, however, geodesics are characterized *not* by the shortest-path property *but* as autoparallel curves, which are the shortest paths only in the case of the Levi-Civita parallel transport. What is needed to define geodesics is a parallel transport, while a Riemannian metric is not always necessary. The *m*-parallel transport of the QIS is the very example of a parallel transport whose geodesics do not have the shortest-path property. Another crucial parallel transport in the QIS, the *exponential* parallel transport (the *e*-parallel transport), is well known [25–27]; its geodesics do not have the shortest-path property either, though it is not dealt with in this chapter.
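The *m*-geodesic can be made concrete in coordinates: treating density matrices as points of an affine space, the mixture parallel transport identifies tangent spaces by ordinary translation, so the *m*-geodesic between $\rho_0$ and $\rho_1$ is (under this standard description from information geometry, stated here as an assumption rather than the chapter's own formulas) the straight segment $(1-t)\rho_0 + t\rho_1$. The sketch below checks numerically that every point of the segment remains a valid density matrix.

```python
# Hedged sketch: the m-geodesic between two density matrices is the
# ordinary straight segment (1 - t) rho0 + t rho1, and every point of
# the segment remains trace-one and positive semi-definite.
import numpy as np

def random_density_matrix(dim, seed):
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = A @ A.conj().T                 # positive semi-definite by construction
    return rho / np.trace(rho).real      # normalize to trace one

rho0 = random_density_matrix(4, seed=1)
rho1 = random_density_matrix(4, seed=2)

for t in np.linspace(0.0, 1.0, 11):
    rho_t = (1 - t) * rho0 + t * rho1    # point on the m-geodesic
    assert np.isclose(np.trace(rho_t).real, 1.0)
    assert np.min(np.linalg.eigvalsh(rho_t)) >= -1e-12
print("every point on the segment is a density matrix")
```

Note that this straightness is with respect to the *m*-parallel transport only; measured by the SLD-Fisher metric, the segment is generally not the shortest path, illustrating the remark above.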

The organization of this chapter is outlined in what follows. Section 2 is for the quantum search for an ordered tuple of multi-qubits. The section starts with a brief review of the classical linear search in unsorted lists. The second subsection is for preliminaries to the quantum search: mathematics for multi-qubits and ordered tuples of them is introduced. In the third subsection, the Grover-type quantum search algorithm is organized for an ordered tuple of multi-qubits along the idea of Grover [6]. Dynamical behavior of the search sequence thus obtained is studied in the fourth subsection from the geometric viewpoint: the search sequence in the ESOT, $M_1(2^n, \ell)$, is shown to be on a geodesic in the ESOT. Section 3 is devoted to a study of the reduced search sequence in the QIS from geometric and dynamical points of view. The first subsection is a brief introduction to the QIS. The geometric reduction of the ESOT to the QIS is made in the second subsection: to be precise, our interest is focused on the reduction of the regular part, $\dot{M}_1(2^n, \ell)$, of the ESOT to simplify our geometric analysis. The third subsection starts with the standard parallel transport in Euclidean space as a very familiar and intuitive example of a parallel transport. After the Euclidean case, the *m*-parallel transport in the QIS is introduced, and it is shown that the reduced search sequence in the QIS is on a geodesic in the QIS with respect to the *m*-parallel transport. Section 4 is for concluding remarks, in which the significance of the main theorem (Theorem 3.3) and some questions for future studies are included. Mathematical details of Sec. 3 are consigned to Appendices following Sec. 4. Many symbols are introduced for the geometric setting-up and analysis; they are listed in Appendix 1.

## **2. Quantum search for an ordered tuple of multi-qubits**

#### **2.1. Classical search: Review**

The classical linear search in unsorted lists is outlined very briefly in what follows.

Let $N$ be the number of unsorted data in a list, so that the data are labeled as $d_1, d_2, \cdots, d_N$; $N$ is assumed to be large enough. We start with a very figurative description of the search by taking the counter-consultation of a thick telephone book as an example, namely, the identification of the subscriber of a given telephone number. In the telephone book, the subscripts, $j = 1, 2, \cdots, N$, of the data, $\{d_j\}_{j=1,2,\cdots,N}$, correspond to the names of subscribers sorted alphabetically, and each $d_j$ shows the telephone number of the $j$-th subscriber. Among the data, there is assumed to be one telephone number, say $d_M$, whose subscriber we wish to know. The $d_M$ is referred to as the target or the marked datum. A very naive way of finding $d_M$ is to check the telephone numbers from $d_1$ in ascending order, asking whether or not each is the same as the target datum, until we find $d_M$. The label $M$ then identifies the subscriber whom we wish to find. On average, this way requires $N/2$ trials of checking to find $d_M$.

The linear search is described in a smarter form than the above in terms of the oracle function. In the same setting as above, the oracle function, denoted by $f$, is defined to be the function from $\{1, 2, \cdots, N\}$ to $\{0, 1\}$ subject to

$$f(j) = \begin{cases} 1 & (j = M) \\ 0 & (j \neq M) \end{cases} \tag{1}$$

for $j = 1, 2, \cdots, N$. Namely, $f(j) = 1$ means a 'hit' while $f(j) = 0$ a 'miss'. In theory, the evaluation of $f(j)$ is assumed to be done instantaneously, so that the evaluations do not affect the complexity of the problem. The search is therefore made by evaluating $f(j)$ from $j = 1$ in ascending order until we have $f(j) = 1$. The expected number of evaluations is $N/2$, linear in $N$, so that we say the classical search needs $O(N)$ evaluations. It is well known that the estimate $O(N)$ is the theoretical lowest bound of the classical linear search.

#### **2.2. Quantum search: Preliminaries**

#### *2.2.1. Single-qubit*

As is well known, information necessary for classical computing is encoded into sequences of '0' and '1'. The minimum unit carrying '0' or '1' is said to be a *bit*. A quantum analogue of a *bit* is called a *qubit*, which takes the form of a 2-dimensional vector with complex-valued components. In particular, the basis vectors, $(1,0)^T$ and $(0,1)^T$, are taken to play the role of the symbols '0' and '1' of classical computing, so that they are referred to as the *computational basis vectors*. We note here that the superscript $T$ indicates the transpose operation on vectors and matrices henceforth. A significant difference between *qubit* and *bit* is that superposition of the computational basis vectors is allowed in a qubit while it is not so for '0' and '1' in a bit. Namely, the superposition $\alpha(1,0)^T + \beta(0,1)^T$ ($\alpha, \beta \in \mathbf{C}$) is allowed in a qubit, so that we refer to the space of 2-dimensional complex column vectors, denoted by $\mathbf{C}^2$, as the single-qubit space. The $\mathbf{C}^2$ is endowed with the natural Hermitian inner product, say $\varphi^\dagger \psi$ for $\varphi, \psi \in \mathbf{C}^2$, where the superscript $\dagger$ indicates the Hermitian conjugate operation on vectors and matrices.

#### *2.2.2. Multi-qubits*

In order to express classical $n$-bit information in quantum computing, it is clearly necessary to prepare $2^n$ computational basis vectors, which span the $2^n$-dimensional complex vector space $\mathbf{C}^{2^n}$: for any integer $x$ subject to $1 \le x \le 2^n$, let us denote by $e(x)$ the canonical basis vector in $\mathbf{C}^{2^n}$, whose $x$-th component equals 1 ($x = 1, 2, \cdots, 2^n$) while the others are naught. Then every $e(x)$ corresponds to the binary sequence $x_1 x_2 \cdots x_n$ ($x_j = 0, 1$, $j = 1, 2, \cdots, n$) with $x - 1 = \sum_{j=1}^{n} x_j 2^{n-j}$, so that the basis $\{e(x)\}_{x=1,2,\cdots,2^n}$ turns out to be the computational basis. To be precise mathematically, $\mathbf{C}^{2^n}$ should be understood as the $n$-fold tensor product,

$$\mathbf{C}^{2^n} \cong (\mathbf{C}^2)^{\otimes n} = \mathbf{C}^2 \otimes \cdots \otimes \mathbf{C}^2, \tag{2}$$

of the single-qubit spaces ($\mathbf{C}^2$s). The $\mathbf{C}^{2^n}$ ($\cong (\mathbf{C}^2)^{\otimes n}$) is called the $n$-qubit space (more generally, the multi-qubit space), which is usually thought of as a Hilbert space for a combined quantum system consisting of $n$ single-qubit systems. In the $n$-qubit space, any vector with unit length is called a *state vector*;

$$\varphi = \sum_{x=1}^{2^n} \alpha_x e(x) \quad (\alpha_x \in \mathbf{C},\ x = 1, 2, \cdots, 2^n) \quad \text{with} \quad \sum_{x=1}^{2^n} |\alpha_x|^2 = 1. \tag{3}$$

It is worth noting here that, in the context of quantum computing or of quantum information, the $n$-qubit space, $(\mathbf{C}^2)^{\otimes n}$, is often assumed to be a $2^n$-dimensional subspace of a complex Hilbert space (usually of infinite dimension) where a quantum dynamical system is described.

<sup>2</sup> trials of checking to find *dM*.

2 ,

<sup>0</sup> (*<sup>j</sup>* �<sup>=</sup> *<sup>M</sup>*) (1)

**2.1. Classical search: Review**

{1, 2, ··· , *N*} to {0, 1} subject to

As is well known, information necessary for classical computing is encoded into sequences of '0' and '1'. The minimum unit carrying '0' or '1' is called a *bit*. A quantum analogue of a *bit* is called a *qubit*, which takes the form of a 2-dimensional vector with complex-valued components. In particular, the basis vectors, $(1, 0)^T$ and $(0, 1)^T$, are taken to play the roles of the symbols '0' and '1' of classical computing, so that they are referred to as the *computational basis vectors*. We note here that the superscript $T$ indicates the transpose operation on vectors and matrices henceforth. A significant difference between *qubit* and *bit* is that superposition of the computational basis vectors is allowed in a qubit, while no such superposition of '0' and '1' exists for a bit. Namely, a superposition, $\alpha (1, 0)^T + \beta (0, 1)^T$ ($\alpha, \beta \in \mathbf{C}$), is allowed in a qubit, so that we refer to the space of 2-dimensional complex column vectors, denoted by $\mathbf{C}^2$, as the single-qubit space. The $\mathbf{C}^2$ is endowed with the natural Hermitian inner product, say $\phi^\dagger \psi$ for $\phi, \psi \in \mathbf{C}^2$, where the superscript $\dagger$ indicates the Hermitian conjugate operation on vectors and matrices.
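For readers who prefer code, the single-qubit conventions above can be sketched numerically (a minimal Python/NumPy sketch; the variable names are ours, not the chapter's):

```python
import numpy as np

# Computational basis vectors (1, 0)^T and (0, 1)^T playing the roles of '0' and '1'.
e0 = np.array([1, 0], dtype=complex)
e1 = np.array([0, 1], dtype=complex)

# A superposition alpha*(1,0)^T + beta*(0,1)^T is a state vector iff
# |alpha|^2 + |beta|^2 = 1.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
phi = alpha * e0 + beta * e1

# The natural Hermitian inner product phi^dagger psi on C^2;
# np.vdot conjugates its first argument.
assert np.isclose(np.vdot(phi, phi).real, 1.0)  # unit length
assert np.isclose(np.vdot(e0, e1), 0.0)         # orthonormal basis
```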

## *2.2.2. Multi-qubits*

In order to express classical *n*-bit information in quantum computing, it is clearly necessary to prepare $2^n$ computational basis vectors, which span the $2^n$-dimensional complex vector space $\mathbf{C}^{2^n}$: For any integer *x* subject to $1 \le x \le 2^n$, let us denote by $e(x)$ the canonical basis vector in $\mathbf{C}^{2^n}$ whose *x*-th component equals 1 ($x = 1, 2, \cdots, 2^n$) while the others vanish. Then every $e(x)$ corresponds to the binary sequence $x\_1 x\_2 \cdots x\_n$ ($x\_j = 0, 1$, $j = 1, 2, \cdots, n$) with $x - 1 = \sum\_{j=1}^{n} x\_j 2^{n-j}$, so that the basis $\{e(x)\}\_{x=1,2,\cdots,2^n}$ turns out to be the computational basis. To be precise mathematically, $\mathbf{C}^{2^n}$ should be understood as the *n*-fold tensor product,

$$\mathbf{C}^{2^{n}} \cong (\mathbf{C}^{2})^{\otimes n} = \overbrace{\mathbf{C}^{2} \otimes \cdots \otimes \mathbf{C}^{2}}^{n} \tag{2}$$

of the single-qubit spaces ($\mathbf{C}^2$s). The $\mathbf{C}^{2^n}$ ($\cong (\mathbf{C}^2)^{\otimes n}$) is called the *n*-qubit space (more generally, the multi-qubit space), which is usually thought of as the Hilbert space of a combined quantum system consisting of *n* single-qubit systems. In the *n*-qubit space, any vector of unit length is called a *state vector*;

$$\phi = \sum\_{x=1}^{2^n} a\_x\, e(x) \quad (a\_x \in \mathbf{C},\ x = 1, 2, \dots, 2^n) \quad \text{with} \quad \sum\_{x=1}^{2^n} |a\_x|^2 = 1. \tag{3}$$

It is worth noting here that, in a context of quantum computing or of quantum information, the *n*-qubit space, (**C**2)⊗*n*, is often assumed to be a 2*n*-dimensional subspace of a complex Hilbert space (usually of infinite dimension) where a quantum dynamical system is described.
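The index-to-binary correspondence $x - 1 = \sum\_{j=1}^{n} x\_j 2^{n-j}$ and the tensor-product structure (2) can be checked numerically; the following sketch (our notation, NumPy assumed) verifies that each $e(x)$ factors into a Kronecker product of single-qubit basis vectors:

```python
import numpy as np

n = 3  # small degree so the 2^n-dimensional vectors stay manageable

def e(x, dim):
    """Canonical basis vector of C^dim whose x-th component (1-indexed) is 1."""
    v = np.zeros(dim, dtype=complex)
    v[x - 1] = 1
    return v

def bits(x, n):
    """Binary sequence x1 x2 ... xn with x - 1 = sum_j xj * 2^(n-j)."""
    return [(x - 1) >> (n - j) & 1 for j in range(1, n + 1)]

# e(x) in C^(2^n) equals the Kronecker product of the single-qubit basis
# vectors e(x_j + 1) in C^2 -- the isomorphism (2).
for x in range(1, 2 ** n + 1):
    factors = [e(xj + 1, 2) for xj in bits(x, n)]
    tensor = factors[0]
    for f in factors[1:]:
        tensor = np.kron(tensor, f)
    assert np.array_equal(tensor, e(x, 2 ** n))
```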

#### *2.2.3. Ordered tuples of multi-qubits*

We move on to introduce ordered tuples of multi-qubits: The degree of multi-qubits is set to be *n* and the number of multi-qubit data enclosed in any tuple to be ℓ henceforth. Let $\mathrm{M}(2^n, \ell)$ be the set of $2^n \times \ell$ complex matrices, which is made into a complex Hilbert space of dimension $2^n \ell$ endowed with the Hermitian inner product

$$
\langle \Phi, \Phi' \rangle = \frac{1}{\ell} \text{trace } \Phi^\dagger \Phi' \quad (\Phi, \Phi' \in \mathcal{M}(2^n, \ell)).\tag{4}
$$
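A small sketch of the inner product (4) (our code, NumPy assumed); it checks that the $1/\ell$ normalization makes $\langle \Phi, \Phi' \rangle$ the average of the columnwise $\mathbf{C}^{2^n}$ inner products:

```python
import numpy as np

def inner(P, Q):
    """<Phi, Phi'> = (1/ell) trace Phi^dagger Phi', Eq. (4)."""
    return np.trace(P.conj().T @ Q) / P.shape[1]

rng = np.random.default_rng(0)
Phi = rng.normal(size=(8, 2)) + 1j * rng.normal(size=(8, 2))  # a point of M(2^3, 2)

# The normalization 1/ell makes <Phi, Phi> the average of the
# columnwise squared norms.
avg = np.mean([np.vdot(Phi[:, j], Phi[:, j]) for j in range(Phi.shape[1])])
assert np.isclose(inner(Phi, Phi), avg)
```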


The $\mathrm{M}(2^n, \ell)$ with $\langle \cdot, \cdot \rangle$ is the Hilbert space for our quantum search. The subset,

$$\mathcal{M}\_1(2^n, \ell) = \{ \Phi \in \mathcal{M}(2^n, \ell) \mid \langle \Phi, \Phi \rangle = 1 \},\tag{5}$$

of $\mathrm{M}(2^n, \ell)$ is what we are going to deal with henceforth. An ordered tuple of multi-qubits is a matrix in $\mathrm{M}\_1(2^n, \ell)$ of the form

$$\Phi = (\phi\_1, \phi\_2, \cdots, \phi\_\ell) \quad \text{with} \quad \phi\_j^\dagger \phi\_j = 1 \quad (\phi\_j \in \mathbf{C}^{2^n},\ j = 1, 2, \cdots, \ell). \tag{6}$$

Namely, every column vector of an ordered tuple of multi-qubits stands for an *n*-qubit state vector. Then the subset of $\mathrm{M}\_1(2^n, \ell)$ defined by

$$\mathrm{M}\_1^{OT}(2^n, \ell) = \{ \Phi = (\phi\_1, \phi\_2, \dots, \phi\_\ell) \in \mathrm{M}\_1(2^n, \ell) \mid \phi\_j^\dagger \phi\_j = 1\ (j = 1, 2, \dots, \ell) \} \tag{7}$$

is the space of ordered tuples of multi-qubits, so that we refer to $\mathrm{M}\_1(2^n, \ell)$, which includes $\mathrm{M}\_1^{OT}(2^n, \ell)$, as the *extended* space of ordered tuples of multi-qubits (ESOT).

On closing this subsubsection, a remark on a vector-space structure of the ESOT, M1(2*n*, ℓ), is made: As a vector space, the ESOT allows the following isomorphisms,

$$\mathrm{M}\_1(2^n, \ell) \cong \mathbf{C}^{2^n} \otimes \mathbf{C}^\ell \cong \overbrace{\mathbf{C}^2 \otimes \cdots \otimes \mathbf{C}^2}^n \otimes\, \mathbf{C}^\ell, \tag{8}$$

which is usually looked upon as a Hilbert space of the combined system consisting of *n* two-level particle systems (single-qubit systems) and an ℓ-level particle system. The structure (8) will be a clue to thinking about a physical realization of the present algorithm.

#### **2.3. Quantum search for an ordered tuple**

We are now in a position to present a Grover-type algorithm for an ordered tuple of multi-qubits. Our recipe traces, in principle, Grover's original scenario for the single-target state search [6]. We start with the initial state denoted by *A* and the target state *W*, which are defined to be


$$A = \frac{1}{\sqrt{2^n}} \begin{pmatrix} 1 & \cdots & 1 \\ \vdots & \cdots & \vdots \\ 1 & \cdots & 1 \end{pmatrix} \quad \text{and} \quad W = (e(\sigma\_1), e(\sigma\_2), \cdots, e(\sigma\_\ell)),\tag{9}$$

where $\sigma$ is an injection of $\{1, 2, \cdots, \ell\}$ into $\{1, 2, \cdots, 2^n\}$. On recalling that the state vector $e(x)$ corresponds to the binary sequence $x\_1 x\_2 \cdots x\_n$, the target *W* corresponds to the ordered tuple of the binary sequences, $\sigma\_{j,1}\sigma\_{j,2} \cdots \sigma\_{j,n}$, associated with $e(\sigma\_j)$ ($j = 1, 2, \cdots, \ell$). Note that $\sigma$ need not be injective in general but, for simplicity in the succeeding section, it is required to be an injection. Throughout this chapter, we further assume that *n* is sufficiently larger than ℓ, so that, in *W*, the number ℓ of binary sequences is much smaller than the length *n* of each binary sequence.
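For small *n* and ℓ, the states (9) are easy to write down explicitly. In the sketch below (NumPy assumed), the injection $\sigma = (3, 6)$ is a made-up example, not one from the chapter:

```python
import numpy as np

n, ell = 3, 2
N = 2 ** n

def inner(P, Q):                                   # Eq. (4)
    return np.trace(P.conj().T @ Q) / ell

# Initial state A of Eq. (9): every entry equals 1/sqrt(2^n).
A = np.full((N, ell), 1 / np.sqrt(N), dtype=complex)

# Target W of Eq. (9): column j is the canonical basis vector e(sigma_j).
# sigma = (3, 6) is a made-up injection of {1, 2} into {1, ..., 8}.
sigma = (3, 6)
W = np.zeros((N, ell), dtype=complex)
for j, s in enumerate(sigma):
    W[s - 1, j] = 1

assert np.isclose(inner(A, A), 1.0)                # A lies in M_1(2^n, ell)
assert np.isclose(inner(W, W), 1.0)                # and so does W
assert np.isclose(inner(A, W), np.sqrt(1 / N))     # <A, W> = 1/sqrt(2^n)
```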


As in much of the literature on quantum computation, we apply the description without the *oracle qubit* below. The treatment and role of the oracle qubit can be seen, for example, in [1].

The quantum search proceeds by iteratively applying the unitary transformation

$$I\_G = (-I\_A) \circ I\_W \tag{10}$$

of M(2*n*, ℓ) looked upon as a Hilbert space, where *IA* and *IW* are the unitary transformations defined to be

$$I\_A: \Phi \in \mathrm{M}(2^n, \ell) \mapsto \Phi - 2 \langle A, \Phi \rangle A \in \mathrm{M}(2^n, \ell), \tag{11}$$

$$I\_W: \Phi \in \mathrm{M}(2^n, \ell) \mapsto \Phi - 2 \langle W, \Phi \rangle W \in \mathrm{M}(2^n, \ell). \tag{12}$$

A very crucial remark is that, on implementation, $I\_W$ will of course *not* be realized with the target *W* itself (see [1] for example).

To express the action of $I\_G$ on the initial state *A*, it is convenient to introduce the $2^n \times \ell$ matrix,

$$R = \sqrt{\frac{2^n}{2^n - 1}} A - \sqrt{\frac{1}{2^n - 1}} W \in \mathbf{M}\_1(2^n, \ell). \tag{13}$$

The pair {*W*, *R*} forms an orthonormal basis of the subspace, denoted by span{*W*, *R*}, of M(2*n*, ℓ) consisting of all the superpositions of the initial state *A* and the target *W*. The action of the operator *IG* leaves the subspace, span{*W*, *R*}, invariant; *IG*(span {*W*, *R*}) = span{*W*, *R*}. The action of *IG* can be therefore restricted on span{*W*, *R*} to be

$$I\_G: (W, R) \mapsto (W, R) \begin{pmatrix} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end{pmatrix}, \quad \text{i.e.} \quad I\_G(W) = \cos\theta\, W - \sin\theta\, R, \quad I\_G(R) = \sin\theta\, W + \cos\theta\, R, \tag{14}$$

where *θ* is defined by

$$
\sin\frac{\theta}{2} = \sqrt{\frac{1}{2^n}}, \quad \cos\frac{\theta}{2} = \sqrt{\frac{2^n - 1}{2^n}}, \quad 0 < \theta < \pi. \tag{15}
$$
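The reflections (11)-(12), their composition (10), and the rotation behavior (14)-(15) can be verified numerically. The sketch below (NumPy assumed; the injection $\sigma = (3, 6)$ is a made-up example) checks that one Grover step rotates the $(W, R)$-plane by $\theta$:

```python
import numpy as np

n, ell = 3, 2
N = 2 ** n

def inner(P, Q):                                    # Eq. (4)
    return np.trace(P.conj().T @ Q) / ell

A = np.full((N, ell), 1 / np.sqrt(N), dtype=complex)
W = np.zeros((N, ell), dtype=complex)
for j, s in enumerate((3, 6)):                      # made-up injection sigma
    W[s - 1, j] = 1

def I_A(Phi):                                       # Eq. (11)
    return Phi - 2 * inner(A, Phi) * A

def I_W(Phi):                                       # Eq. (12)
    return Phi - 2 * inner(W, Phi) * W

def I_G(Phi):                                       # Eq. (10)
    return -I_A(I_W(Phi))

R = np.sqrt(N / (N - 1)) * A - np.sqrt(1 / (N - 1)) * W  # Eq. (13)
theta = 2 * np.arcsin(np.sqrt(1 / N))                    # Eq. (15)

# {W, R} is orthonormal, and one Grover step rotates span{W, R} by theta.
assert np.isclose(inner(W, R), 0.0)
assert np.allclose(I_G(W), np.cos(theta) * W - np.sin(theta) * R)
assert np.allclose(I_G(R), np.sin(theta) * W + np.cos(theta) * R)
```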


On putting (10), (13) and (14) together, the *k*-times iteration $I\_G^k$ of $I\_G$ applied to *A* results in

$$I\_G^k(A) = \left(\sin(k + \frac{1}{2})\theta\right)W + \left(\cos(k + \frac{1}{2})\theta\right)R \quad (k = 1, 2, 3, \cdots). \tag{16}$$

Hence $I\_G^k(A)$ gets close to the target *W* as $(k + \frac{1}{2})\theta$ approaches $\frac{\pi}{2}$. Indeed, under the assumption $n \gg 1$, Eq. (15) yields $\theta \simeq 2\sqrt{1/2^n}$, so that the probability of observing the state *W* from the state $I\_G^k(A)$ becomes highest (close to one) at the iteration number nearest to $\frac{\pi}{4}\sqrt{2^n} - \frac{1}{2}$. Namely, like Grover's original search algorithm, the complexity of the quantum search presented above for an ordered tuple of multi-qubits is of the order of the square root of $2^n$, the number of binary sequences expressible in *n* qubits. In the case of ℓ = 1, our search of course becomes Grover's original one, so that it can be thought of as a natural generalization of Grover's search [6] based on the amplitude amplification technique (see [8], for example).
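The optimal iteration count can be seen in a few lines (a sketch, NumPy assumed; $n = 4$ is an arbitrary small choice):

```python
import numpy as np

n = 4                       # arbitrary small degree; N = 2^n unsorted items
N = 2 ** n
theta = 2 * np.arcsin(np.sqrt(1 / N))               # Eq. (15)

# By Eq. (16), the amplitude of W after k steps is sin((k + 1/2) theta),
# so the success probability sin^2((k + 1/2) theta) peaks at the integer
# nearest to (pi/4) sqrt(2^n) - 1/2.
ks = range(int(np.pi / theta))                      # first half-period of the rotation
probs = [np.sin((k + 0.5) * theta) ** 2 for k in ks]
k_star = round(np.pi / 4 * np.sqrt(N) - 0.5)

assert int(np.argmax(probs)) == k_star              # k_star = 3 for n = 4
assert probs[k_star] > 0.9                          # near-certain observation of W
```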

On closing this subsection, a remark should be made which would be of importance in thinking of a physical implementation in the future: We have organized the Grover-type search algorithm $I\_G$ as a unitary transformation of the ESOT, $\mathrm{M}\_1(2^n, \ell)$. Since physically acceptable tuples, however, lie in the subset $\mathrm{M}\_1^{OT}(2^n, \ell)$ of the ESOT, it is worth checking whether or not $I\_G$ leaves $\mathrm{M}\_1^{OT}(2^n, \ell)$ invariant. A straightforward calculation with (9), (13) and (14) shows that $I\_G$ indeed leaves $\mathrm{M}\_1^{OT}(2^n, \ell)$ invariant. Though this fact is very basic and simple, it supports, to an extent, the physical feasibility of the present algorithm.
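The invariance just mentioned can be observed numerically along the search orbit: iterating $I\_G$ from *A* keeps every column of exact unit length (a sketch, NumPy assumed; the injection $\sigma = (3, 6)$ is a made-up example):

```python
import numpy as np

n, ell = 3, 2
N = 2 ** n

def inner(P, Q):                                    # Eq. (4)
    return np.trace(P.conj().T @ Q) / ell

A = np.full((N, ell), 1 / np.sqrt(N), dtype=complex)
W = np.zeros((N, ell), dtype=complex)
for j, s in enumerate((3, 6)):                      # made-up injection sigma
    W[s - 1, j] = 1

def I_G(Phi):                                       # Eqs. (10)-(12)
    Phi = Phi - 2 * inner(W, Phi) * W               # I_W
    return -(Phi - 2 * inner(A, Phi) * A)           # -I_A

# Along the search orbit the iterates stay inside M_1^OT(2^n, ell):
# every column keeps unit length.
Phi = A.copy()
for _ in range(5):
    Phi = I_G(Phi)
    assert np.allclose(np.linalg.norm(Phi, axis=0), 1.0)
```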

#### **2.4. Geodesic property of the search sequence**

We show that the search sequence $\{I\_G^k(A)\}$ generated by (16) is on the geodesic running from the initial state *A* to the target state *W*, as in Miyake and Wadati [15] for Grover's original search.

#### *2.4.1. Geometric setting-up*

As briefly mentioned in Sec. 1, the term 'geodesic' covers a wider class of curves in differential geometry than in the usual sense. In the usual sense, especially among non-geometers, one may have heard a phrase like 'the shortest path between a pair of points is a geodesic'. In contrast with such phrases, geodesics are defined to be autoparallel curves in differential geometry. Put another way, we have to fix a parallel transport in order to discuss geodesics in the geometric framework. There is a variety of parallel transports, among which the Levi-Civita (or Riemannian) parallel transport provides the shortest-path property. Note here that the Levi-Civita parallel transport is defined as the parallel transport that preserves the Riemannian metric endowed on the space. The geodesics mentioned in this subsection can therefore be understood as the familiar *shortest paths*.

Our discussion is made on the ESOT, $\mathrm{M}\_1(2^n, \ell)$ defined by (5), which is endowed with the standard Riemannian metric in the following way. For those who are not familiar with geometry, it is recommended to think of the 2-dimensional unit sphere, $S^2$, in place of $\mathrm{M}\_1(2^n, \ell)$, since $\mathrm{M}\_1(2^n, \ell)$ is a $(2^{n+1}\ell - 1)$-dimensional analogue of $S^2$. A Riemannian metric of $\mathrm{M}\_1(2^n, \ell)$ plays the role of an inner product on every tangent space,

$$T\_{\Phi} \mathrm{M}\_1(2^n, \ell) = \{ X \in \mathrm{M}(2^n, \ell) \mid \Re(\mathrm{trace}\, \Phi^{\dagger} X) = 0 \} \quad (\Phi \in \mathrm{M}\_1(2^n, \ell)), \tag{17}$$

of M1(2*n*, ℓ) at <sup>Φ</sup> as follows, where ℜ indicates the operation of taking the real part of complex numbers: On recalling the intuitive case of *S*2, the tangent space at a point *p* ∈ *S*<sup>2</sup> ⊂ **R**<sup>3</sup> is thought of as the collection of all the vectors normal to the radial vector *p*, which can be understood as all the velocity vectors from the dynamical viewpoint. The Riemannian metric, denoted by ((·, ·))*ESOT*, of M1(2*n*, ℓ) is defined to give the inner product

$$((X, X'))^{ESOT}\_{\Phi} = \frac{1}{\ell} \Re(\mathrm{trace}\, X^{\dagger} X') \quad (X, X' \in T\_{\Phi}\mathrm{M}\_1(2^n, \ell),\ \Phi \in \mathrm{M}\_1(2^n, \ell)) \tag{18}$$

in each tangent space $T\_{\Phi}\mathrm{M}\_1(2^n, \ell)$.

#### *2.4.2. Geodesics*


We are to give an explicit form of geodesics in a very intuitive manner as follows. Let us recall the 2-dimensional case, in which a geodesic with the initial position $p \in S^2 \subset \mathbf{R}^3$ is well known to be realized as a great circle passing through *p*. Given the initial velocity, say $v \in \mathbf{R}^3$, always normal to *p*, the geodesic is uniquely determined as the intersection of $S^2$ and the plane spanned by the vectors *p* and *v*. The same story is valid for geodesics in $\mathrm{M}\_1(2^n, \ell)$, so that we get the explicit form,

$$\Phi(s) = \left(\cos\sqrt{\ell}\,s\right) \Phi\_0 + \left(\sin\sqrt{\ell}\,s\right) X\_0 \quad (s \in \mathbf{R}), \tag{19}$$

of the geodesic with the initial position $\Phi\_0 \in \mathrm{M}\_1(2^n, \ell)$ and the initial vector $X\_0 \in T\_{\Phi\_0}\mathrm{M}\_1(2^n, \ell)$ of unit length tangent to the geodesic. In (19), *s* is taken to be the length parameter measured from the initial point $\Phi\_0$. To be precise from the differential-geometric viewpoint, the geodesics given by (19) are said to be associated with the Levi-Civita (or Riemannian) parallel transport in $\mathrm{M}\_1(2^n, \ell)$.

We are to determine the geodesic on which the search sequence $\{I\_G^k(A)\}$ is placed. From (13), (16) and (19), we can construct the geodesic from the great circle passing through both *W* and *R*, so that we obtain

$$\begin{split} \Psi(s) &= \left(\cos\sqrt{\ell}\,s\right)\left(\sqrt{\frac{1}{2^n}}\,W + \sqrt{\frac{2^n - 1}{2^n}}\,R\right) + \left(\sin\sqrt{\ell}\,s\right)\left(\sqrt{\frac{2^n - 1}{2^n}}\,W - \sqrt{\frac{1}{2^n}}\,R\right) \\ &= \left(\sin(\sqrt{\ell}\,s + \frac{\theta}{2})\right) W + \left(\cos(\sqrt{\ell}\,s + \frac{\theta}{2})\right) R \quad (s \in \mathbf{R}) \end{split} \tag{20}$$

as the desired geodesic, where *s* is the length parameter and $\theta$ is defined by (15). Setting the parameter sequence $\{s\_k\}\_{k=0,1,2,\cdots}$ to be $s\_k = \frac{k}{\sqrt{\ell}}\theta$, Eq. (20) with $s = s\_k$ indeed provides the search sequence $\{I\_G^k(A)\}\_{k=0,1,2,\cdots}$; $\Psi(s\_k) = I\_G^k(A)$ (see (16)). To summarize, we have the following.
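The identity $\Psi(s\_k) = I\_G^k(A)$ can be confirmed numerically (a sketch, NumPy assumed; the injection $\sigma = (3, 6)$ is a made-up example):

```python
import numpy as np

n, ell = 3, 2
N = 2 ** n

def inner(P, Q):                                    # Eq. (4)
    return np.trace(P.conj().T @ Q) / ell

A = np.full((N, ell), 1 / np.sqrt(N), dtype=complex)
W = np.zeros((N, ell), dtype=complex)
for j, s in enumerate((3, 6)):                      # made-up injection sigma
    W[s - 1, j] = 1

R = np.sqrt(N / (N - 1)) * A - np.sqrt(1 / (N - 1)) * W  # Eq. (13)
theta = 2 * np.arcsin(np.sqrt(1 / N))                    # Eq. (15)

def Psi(s):
    """The geodesic (20) through the search sequence."""
    a = np.sqrt(ell) * s + theta / 2
    return np.sin(a) * W + np.cos(a) * R

def I_G(Phi):                                       # Eqs. (10)-(12)
    Phi = Phi - 2 * inner(W, Phi) * W
    return -(Phi - 2 * inner(A, Phi) * A)

# Sampling the geodesic at s_k = k*theta/sqrt(ell) reproduces I_G^k(A).
Phi = A.copy()
for k in range(4):
    assert np.allclose(Phi, Psi(k * theta / np.sqrt(ell)))
    Phi = I_G(Phi)
```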


**Theorem 2.1.** *The Grover-type search sequence* $\{I_G^k(A)\}$ *given by (16) for an ordered tuple of multi-qubits is on the geodesic curve* $\Psi(s)$ *given by (20) in the ESOT,* $\mathrm{M}_1(2^n,\ell)$*.*

As a closing remark to this section, it should be pointed out that in the case $\ell = 1$, Theorem 2.1 reproduces the result of Miyake and Wadati [15] on Grover's original search sequence on $S^{2^{n+1}-1}$.

## **3. Geometry and dynamics of the projected search sequence in the QIS**

In this section, the reduced search sequence in the QIS is shown to be on a geodesic with respect to the *m*-parallel transport, one of the two significant parallel transports of the QIS. The reduced search sequence is derived from the Grover-type sequence $\{I_G^k(A)\}$ through the reduction of the regular part of the ESOT to the QIS. The reduction method applied here is entirely different from that of Miyake and Wadati [15].

## **3.1. The QIS**

This subsection is devoted to a brief introduction of the quantum information space (QIS), the space of regular density matrices endowed with the quantum SLD-Fisher metric (see also [19] for another brief introduction and [25, 26] for a detailed one).

Let us consider the space of ℓ × ℓ density matrices

$$\mathcal{P}\_{\ell} = \{ \rho \in \mathcal{M}(\ell, \ell) \, | \, \rho^{\dagger} = \rho, \,\text{trace}\,\rho = 1, \,\rho \text{ : positive semidefinite} \},\tag{21}$$

and its regular part

$$\dot{\mathbf{P}}\_{\ell} = \{ \rho \in \mathbf{M}(\ell, \ell) \, | \, \rho^{\dagger} = \rho, \,\text{trace}\,\rho = 1, \rho : \text{positive definite} \}, \tag{22}$$

where $\mathrm{M}(\ell,\ell)$ denotes the set of $\ell \times \ell$ complex matrices. The tangent space of $\dot{\mathrm{P}}_\ell$ at $\rho$ can be described by

$$T\_{\rho}\dot{\mathbb{P}}\_{\ell} = \left\{ \boldsymbol{\Xi} \in \mathbf{M}(\ell, \ell) \,|\, \boldsymbol{\Xi}^{\dagger} = \boldsymbol{\Xi}, \,\text{trace}\,\boldsymbol{\Xi} = 0 \right\}. \tag{23}$$

In this chapter, the regular part $\dot{\mathrm{P}}_\ell$ of $\mathrm{P}_\ell$ plays the central role, although $\mathrm{P}_\ell$ itself is usually taken as the quantum information space. A reason for working with $\dot{\mathrm{P}}_\ell$ is that it frees us from dealing with the boundary of $\mathrm{P}_\ell$, which would require extra effort, especially in differential calculus.

To any tangent vector $\Xi \in T_\rho\dot{\mathrm{P}}_\ell$, the symmetric logarithmic derivative (SLD) is defined to provide the Hermitian matrix $L_\rho(\Xi) \in \mathrm{M}(\ell,\ell)$ subject to

$$\frac{1}{2}\left\{\rho\mathcal{L}\_{\rho}(\Xi) + \mathcal{L}\_{\rho}(\Xi)\,\rho\right\} = \Xi \qquad \left(\Xi \in T\_{\rho}\dot{\mathcal{P}}\_{\ell}\right).\tag{24}$$
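The defining relation (24) is a Lyapunov-type linear equation for $L_\rho(\Xi)$; numerically it can be solved by passing to the eigenbasis of $\rho$. The following NumPy sketch (ours, not from the chapter; the helper names are illustrative) builds a random regular density matrix and a tangent vector, then checks that the resulting SLD satisfies (24).

```python
import numpy as np

rng = np.random.default_rng(0)
ell = 4  # matrix size, an illustrative choice

# A random regular density matrix: rho = AA†/trace(AA†) is positive definite a.s.
A = rng.normal(size=(ell, ell)) + 1j * rng.normal(size=(ell, ell))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# A random tangent vector: Hermitian and traceless, cf. (23)
B = rng.normal(size=(ell, ell)) + 1j * rng.normal(size=(ell, ell))
Xi = (B + B.conj().T) / 2
Xi -= (np.trace(Xi) / ell) * np.eye(ell)

def sld(rho, Xi):
    """Solve (1/2)(rho L + L rho) = Xi in the eigenbasis of rho."""
    theta, h = np.linalg.eigh(rho)              # rho = h diag(theta) h†
    chi = h.conj().T @ Xi @ h
    L_eig = 2 * chi / (theta[:, None] + theta[None, :])
    return h @ L_eig @ h.conj().T

L = sld(rho, Xi)
residual = np.linalg.norm((rho @ L + L @ rho) / 2 - Xi)  # should be ~0
```

The division by $\theta_j + \theta_k$ is well defined precisely because $\rho$ is regular, which is one practical payoff of working on $\dot{\mathrm{P}}_\ell$ rather than $\mathrm{P}_\ell$.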

The quantum SLD-Fisher metric, denoted by $((\cdot,\cdot))^{QF}$, is then defined to be

$$((\Xi, \Xi'))_{\rho}^{QF} = \frac{1}{2}\,\mathrm{trace}\left[\rho\left(L_{\rho}(\Xi)L_{\rho}(\Xi') + L_{\rho}(\Xi')L_{\rho}(\Xi)\right)\right] \qquad (\Xi, \Xi' \in T_{\rho}\dot{\mathbb{P}}_{\ell}) \tag{25}$$

(see [25, 26]), which plays a central role in quantum information theory.

A more explicit expression of $((\cdot,\cdot))^{QF}$ is given in what follows. Let $\rho \in \dot{\mathrm{P}}_\ell$ be expressed as

$$\begin{aligned} \rho &= h \Theta h^{\dagger}, \quad h \in \mathrm{U}(\ell) \\ \Theta &= \mathrm{diag}(\theta_1, \dots, \theta_\ell) \quad \text{with} \quad \mathrm{trace}\,\Theta = 1, \quad \theta_k > 0 \ (k = 1, 2, \dots, \ell), \end{aligned} \tag{26}$$

where $\mathrm{U}(\ell)$ denotes the group of $\ell \times \ell$ unitary matrices,

$$\mathbf{U}(\ell) = \{ h \in \mathbf{M}(\ell, \ell) \,|\, h^\dagger h = I_\ell \}, \tag{27}$$

and $I_\ell$ the identity matrix of degree $\ell$. On expressing $\Xi \in T_\rho\dot{\mathrm{P}}_\ell$ as

$$
\Xi = h \chi h^\dagger \tag{28}
$$

with $h \in \mathrm{U}(\ell)$ in (26), the SLD $L_\rho(\Xi)$ of $\Xi \in T_\rho\dot{\mathrm{P}}_\ell$ takes the explicit expression [19]

$$(h^\dagger L_\rho(\Xi) h)_{jk} = \frac{2}{\theta_j + \theta_k}\, \chi_{jk} \quad (j, k = 1, 2, \cdots, \ell). \tag{29}$$

Putting (26)-(29) into (25), we have


$$((\boldsymbol{\Xi}, \boldsymbol{\Xi}'))^{QF}\_{\rho} = 2 \sum\_{j,k=1}^{\ell} \frac{\overline{\boldsymbol{\chi}}\_{jk} \boldsymbol{\chi}'\_{jk}}{\theta\_j + \theta\_k} \tag{30}$$

[19], where $\Xi' \in T_\rho\dot{\mathrm{P}}_\ell$ is expressed as

$$
\Xi' = h \chi' h^\dagger. \tag{31}
$$

The space of $\ell \times \ell$ regular density matrices, $\dot{\mathrm{P}}_\ell$, endowed with the quantum SLD-Fisher metric $((\cdot,\cdot))^{QF}$ defined above is what we refer to as the quantum information space (QIS) in the present chapter; it will henceforth also be denoted by the pair $(\dot{\mathrm{P}}_\ell, ((\cdot,\cdot))^{QF})$.
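The explicit expression (30) can be checked numerically against the defining trace formula (25). The NumPy sketch below (ours, with illustrative names) does so for a random regular $\rho$ and two random tangent vectors.

```python
import numpy as np

rng = np.random.default_rng(1)
ell = 3  # illustrative size

A = rng.normal(size=(ell, ell)) + 1j * rng.normal(size=(ell, ell))
rho = A @ A.conj().T
rho /= np.trace(rho).real

def tangent(ell):
    # Hermitian, traceless: an element of the tangent space (23)
    B = rng.normal(size=(ell, ell)) + 1j * rng.normal(size=(ell, ell))
    Xi = (B + B.conj().T) / 2
    return Xi - (np.trace(Xi) / ell) * np.eye(ell)

Xi, Xip = tangent(ell), tangent(ell)

theta, h = np.linalg.eigh(rho)   # the diagonalization (26)
chi = h.conj().T @ Xi @ h        # chi of (28)
chip = h.conj().T @ Xip @ h      # chi' of (31)
weights = theta[:, None] + theta[None, :]

def sld(c):
    # the explicit SLD (29), rotated back out of the eigenbasis
    return h @ (2 * c / weights) @ h.conj().T

L, Lp = sld(chi), sld(chip)
g_def = 0.5 * np.trace(rho @ (L @ Lp + Lp @ L)).real      # definition (25)
g_exp = (2 * np.sum(np.conj(chi) * chip / weights)).real  # expression (30)
```

The two numbers `g_def` and `g_exp` agree to machine precision, which is the content of the passage from (25) to (30).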

## **3.2. Geometric reduction of the regular part of the ESOT to the QIS**

We move on to show how the regular part of the ESOT, denoted by $\dot{\mathrm{M}}_1(2^n,\ell)$, is reduced to the QIS in a geometric way, where $\dot{\mathrm{M}}_1(2^n,\ell)$ is defined to be

$$\dot{\mathbf{M}}\_1(2^n, \ell) = \{ \Phi \in \mathbf{M}\_1(2^n, \ell) \mid \text{rank}\,\Phi = \ell \}. \tag{32}$$


$$\alpha_g : \Phi \in \mathbf{M}_1(2^n, \ell) \mapsto g\Phi \in \mathbf{M}_1(2^n, \ell) \quad (g \in \mathbf{U}(2^n)), \tag{33}$$

where $\mathrm{U}(2^n)$ stands for the group of $2^n \times 2^n$ unitary matrices,

$$\mathbf{U}(2^{n}) = \{ \mathbf{g} \in \mathbf{M}(2^{n}, 2^{n}) \, | \, \mathbf{g}^{\dagger} \mathbf{g} = I\_{2^{n}} \},\tag{34}$$

with $\mathrm{M}(2^n,2^n)$ denoting the set of $2^n \times 2^n$ complex matrices and $I_{2^n}$ the identity matrix of degree $2^n$. The $\mathrm{U}(2^n)$ action (33) is well-defined also on $\dot{\mathrm{M}}_1(2^n,\ell)$ since it leaves $\dot{\mathrm{M}}_1(2^n,\ell)$ invariant; $\alpha_g(\dot{\mathrm{M}}_1(2^n,\ell)) = \dot{\mathrm{M}}_1(2^n,\ell)$.
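That $\alpha_g$ leaves $\dot{\mathrm{M}}_1(2^n,\ell)$ invariant comes down to the fact that multiplication by a unitary cannot change $\mathrm{rank}\,\Phi$. A tiny NumPy check of this fact (ours, not from the chapter):

```python
import numpy as np

rng = np.random.default_rng(2)
n, ell = 3, 2
N = 2 ** n

# A random 2^n-by-ell complex matrix has full rank ell almost surely
Phi = rng.normal(size=(N, ell)) + 1j * rng.normal(size=(N, ell))

# A random unitary g in U(2^n), obtained from a QR decomposition
g, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))

rank_before = np.linalg.matrix_rank(Phi)
rank_after = np.linalg.matrix_rank(g @ Phi)   # alpha_g preserves the rank
```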

The $\mathrm{U}(2^n)$ action given above provides us with the equivalence relation $\sim$ both on $\mathrm{M}_1(2^n,\ell)$ and on $\dot{\mathrm{M}}_1(2^n,\ell)$;

$$
\Phi \sim \Phi' \quad \text{if and only if} \quad \exists\, g \in \mathrm{U}(2^n) \quad \text{s.t.}\ \alpha_g \Phi = \Phi'
$$

$$
(\Phi, \Phi' \in \text{M}, \text{M} = \text{M}\_1(2^n, \ell), \dot{\text{M}}\_1(2^n, \ell)). \tag{35}
$$

The subset of M defined by

$$[\Phi] = \left\{\Phi' \in \mathbf{M} \,|\, \Phi \sim \Phi'\right\} \quad (\mathbf{M} = \mathbf{M}_1(2^n,\ell),\ \dot{\mathbf{M}}_1(2^n,\ell)) \tag{36}$$

is called the equivalence class whose representative is $\Phi \in \mathrm{M}$ $(\mathrm{M} = \mathrm{M}_1(2^n,\ell), \dot{\mathrm{M}}_1(2^n,\ell))$. Note that $[\Phi] = [\Phi']$ holds true if and only if $\Phi \sim \Phi'$. The collection of the equivalence classes is called the quotient space, denoted by $\mathrm{M}/\sim$, of $\mathrm{M}$ by $\sim$ $(\mathrm{M} = \mathrm{M}_1(2^n,\ell), \dot{\mathrm{M}}_1(2^n,\ell))$.

To describe a geometric structure of the quotient spaces, $\mathrm{M}/\sim$ $(\mathrm{M} = \mathrm{M}_1(2^n,\ell), \dot{\mathrm{M}}_1(2^n,\ell))$, let us introduce the group of $(2^n-\ell) \times (2^n-\ell)$ unitary matrices,

$$\mathbf{U}(2^{n}-\ell) = \{ \kappa \in \mathbf{M}(2^{n}-\ell, 2^{n}-\ell) \,|\, \kappa^{\dagger}\kappa = I_{2^{n}-\ell} \}, \tag{37}$$

with $\mathrm{M}(2^n-\ell, 2^n-\ell)$ denoting the set of $(2^n-\ell) \times (2^n-\ell)$ complex matrices and $I_{2^n-\ell}$ the identity matrix of degree $2^n-\ell$. We have the following lemma [19].

**Lemma 3.1.** *The quotient space* $\mathrm{M}_1(2^n,\ell)/\sim$ *is realized as* $\mathrm{P}_\ell$ *defined by (21), where the projection of* $\mathrm{M}_1(2^n,\ell)$ *to* $\mathrm{P}_\ell$ *is given by*


$$\pi^{(n,l)}: \Phi \in \mathbf{M}_1(2^n, \ell) \mapsto \frac{1}{\ell}\, \Phi^\dagger \Phi \in \mathbf{P}_\ell. \tag{38}$$

*Similarly, the quotient space* $\dot{\mathrm{M}}_1(2^n,\ell)/\sim$ *is realized as* $\dot{\mathrm{P}}_\ell$ *defined by (22). The projection is given by* $\pi^{(n,l)}$ *restricted to* $\dot{\mathrm{M}}_1(2^n,\ell)$*. The* $\dot{\mathrm{M}}_1(2^n,\ell)$ *admits the fibered manifold structure with the fiber* $\mathrm{U}(2^n)/\mathrm{U}(2^n-\ell)$*. Namely, the inverse image* $(\pi^{(n,l)})^{-1}(\rho) = \{\Phi \in \dot{\mathrm{M}}_1(2^n,\ell) \,|\, \pi^{(n,l)}(\Phi) = \rho\}$ *of any* $\rho \in \dot{\mathrm{P}}_\ell$ *is diffeomorphic to* $\mathrm{U}(2^n)/\mathrm{U}(2^n-\ell)$*.*

Note that the fibered manifold structure of $\dot{\mathrm{M}}_1(2^n,\ell)$ allows us to carry out differential calculus on $\dot{\mathrm{P}}_\ell \cong \dot{\mathrm{M}}_1(2^n,\ell)/\sim$ freely, while not on $\mathrm{P}_\ell$, due to a collapse of the fibered structure on the boundary.
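The projection (38) can be examined concretely with a few lines of NumPy. In the sketch below (ours, not from the chapter) we assume points of $\mathrm{M}_1(2^n,\ell)$ are normalized so that $\mathrm{trace}\,\Phi^\dagger\Phi = \ell$, which makes $\pi^{(n,l)}(\Phi)$ have unit trace; the check confirms that $\pi^{(n,l)}(\Phi)$ is a density matrix and is constant along each $\mathrm{U}(2^n)$ orbit, as Lemma 3.1 requires.

```python
import numpy as np

rng = np.random.default_rng(2)
n, ell = 3, 2
N = 2 ** n

# A point of M1(2^n, ell), assumed normalized so that trace(Phi† Phi) = ell
Phi = rng.normal(size=(N, ell)) + 1j * rng.normal(size=(N, ell))
Phi *= np.sqrt(ell / np.trace(Phi.conj().T @ Phi).real)

def pi(Phi):
    # the projection (38): pi(Phi) = (1/ell) Phi† Phi
    return Phi.conj().T @ Phi / ell

rho = pi(Phi)
# rho is a density matrix: Hermitian, unit trace, positive semidefinite
herm_ok = np.allclose(rho, rho.conj().T)
trace_ok = np.isclose(np.trace(rho).real, 1.0)
psd_ok = np.linalg.eigvalsh(rho).min() >= -1e-12

# U(2^n)-invariance: pi(g Phi) = pi(Phi), so pi descends to the quotient
g, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
invariant = np.allclose(pi(g @ Phi), rho)
```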

What is an intuitive interpretation of the quotient spaces, $\mathrm{M}_1(2^n,\ell)/\sim$ and $\dot{\mathrm{M}}_1(2^n,\ell)/\sim$? Let us consider any pair of points $\Phi$ and $\Phi' = g\Phi$ ($g \in \mathrm{U}(2^n)$) in $\mathrm{M}$ $(\mathrm{M} = \mathrm{M}_1(2^n,\ell), \dot{\mathrm{M}}_1(2^n,\ell))$. Then, since $g \in \mathrm{U}(2^n)$, the inner products between column vectors in $\Phi$ (see (6)) are kept invariant under $\alpha_g$;

$$
\langle \phi'_j, \phi'_k \rangle = \langle g\phi_j, g\phi_k \rangle = \langle \phi_j, \phi_k \rangle \quad (j, k = 1, 2, \cdots, \ell). \tag{39}
$$

This implies that the relative configuration of column vectors (namely, multi-qubits) is kept invariant under the $\mathrm{U}(2^n)$ action. Hence each of the quotient spaces of $\mathrm{M}_1(2^n,\ell)$ and of $\dot{\mathrm{M}}_1(2^n,\ell)$ is understood to be a space of relative configurations of multi-qubits [19]. We wish to explain the relative configurations in more detail in a very simple setting with $n = 6$ and $\ell = 6$. Let us consider the set $S = \{A, B, \cdots, Z, a, \cdots, z, 0, 1, \cdots, 9, \text{','}, \text{'.'}\}$, consisting of the capital Roman letters, the small ones, the arabic digits, a comma and a period. The correspondence of the $2^6$ computational basis vectors, $e(x)$ $(x = 1, \cdots, 2^6 = 64)$ (see subsubsec. 2.2.2), to the elements of $S$ starts from $e(1) \mapsto A$ in ascending order. Then, under the equivalence relation $\sim$ defined by (35), the word 'Search' is identified with 'Vhdufk', since the latter can be obtained from the former through $\alpha_g$ with the three-step shift matrix $g$. On choosing $g \in \mathrm{U}(2^n)$ to exchange the capital letters for the small ones, 'Search' is identified with 'sEARCH'.
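The letter-level identification can be played with directly. The sketch below (ours) realizes the ordered 64-symbol set $S$ together with the action of a cyclic shift and of a case exchange on six-letter words; each such map corresponds to a permutation (hence unitary) matrix $g$ acting on the computational basis.

```python
import string

# The ordered 64-symbol set S: capitals, smalls, digits, comma, period
S = list(string.ascii_uppercase + string.ascii_lowercase + string.digits) + [",", "."]

def shift(word, k):
    """Apply the k-step cyclic shift permutation to each letter of the word."""
    return "".join(S[(S.index(c) + k) % len(S)] for c in word)

def exchange_case(word):
    """Apply the permutation exchanging capital and small letters."""
    return word.swapcase()

shifted = shift("Search", 3)          # the three-step shift of 'Search'
exchanged = exchange_case("Search")   # the case exchange of 'Search'
```

Here `shift("Search", 3)` yields the six-letter word obtained by advancing each symbol three positions in $S$, and `exchange_case("Search")` yields `'sEARCH'`.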

We are now in a position to show that the QIS $(\dot{\mathrm{P}}_\ell, ((\cdot,\cdot))^{QF})$ is a very natural outcome of the reduction of $(\dot{\mathrm{M}}_1(2^n,\ell), ((\cdot,\cdot))^{ESOT})$. Note here that the Riemannian metric $((\cdot,\cdot))^{ESOT}$ of $\mathrm{M}_1(2^n,\ell)$ naturally turns out to be a metric of $\dot{\mathrm{M}}_1(2^n,\ell)$ under the restriction $\Phi \in \dot{\mathrm{M}}_1(2^n,\ell)$, so that we apply the same symbol, $((\cdot,\cdot))^{ESOT}$, to the metric of $\dot{\mathrm{M}}_1(2^n,\ell)$. A crucial key is the direct-sum decomposition of the tangent space,

$$T\_{\Phi} \dot{\mathbf{M}}\_1(2^n, \ell) = \{ X \in \mathbf{M}(2^n, \ell) \mid \Re(\text{trace}\,\Phi^\dagger X) = 0 \} \quad (\Phi \in \dot{\mathbf{M}}\_1(2^n, \ell)), \tag{40}$$

of $\dot{\mathrm{M}}_1(2^n,\ell)$ at $\Phi$, which is associated with the fibered-manifold structure of $\dot{\mathrm{M}}_1(2^n,\ell)$ mentioned in Lemma 3.1. Note that $T_\Phi\dot{\mathrm{M}}_1(2^n,\ell)$ is identical with $T_\Phi\mathrm{M}_1(2^n,\ell)$ if $\Phi \in \dot{\mathrm{M}}_1(2^n,\ell)$.

Let us consider the pair of subspaces, $\mathrm{Ver}(\Phi)$ and $\mathrm{Hor}(\Phi)$, of $T_\Phi\dot{\mathrm{M}}_1(2^n,\ell)$, which are defined by

$$\text{Ver}(\Phi) = \{ X \in T\_{\Phi} \dot{\mathbf{M}}\_1(2^n, \ell) \mid X = \xi \Phi, \,\xi \in \mathfrak{u}(2^n) \}\tag{41}$$


and

$$\operatorname{Hor}(\Phi) = \{ X \in T_{\Phi} \dot{\mathbf{M}}_1(2^{n}, \ell) \,|\, ((X', X))_{\Phi}^{ESOT} = 0,\ X' \in \operatorname{Ver}(\Phi) \}. \tag{42}$$

The $\mathfrak{u}(2^n)$ is the Lie algebra of $\mathrm{U}(2^n)$ consisting of all the $2^n \times 2^n$ anti-Hermitian matrices,

$$\mathfrak{u}(2^{n}) = \{ \xi \in \mathbf{M}(2^{n}, 2^{n}) \,|\, \xi^{\dagger} = -\xi \}. \tag{43}$$

The $\mathrm{Ver}(\Phi)$ and $\mathrm{Hor}(\Phi)$ are often called the vertical subspace and the horizontal subspace of $T_\Phi\dot{\mathrm{M}}_1(2^n,\ell)$, respectively. The $\mathrm{Ver}(\Phi)$ is understood to be the tangent space at $\Phi$ of the fiber space,

$$\mathrm{U}(2^{n}) \cdot \Phi = \{ \Phi' \in \dot{\mathbf{M}}_1(2^{n}, \ell) \,|\, \Phi' = \alpha_g(\Phi),\ g \in \mathrm{U}(2^{n}) \}, \tag{44}$$

passing through $\Phi$, and $\mathrm{Hor}(\Phi)$ to be the subspace of $T_\Phi\dot{\mathrm{M}}_1(2^n,\ell)$ normal to $\mathrm{Ver}(\Phi)$ with respect to $((\cdot,\cdot))^{ESOT}_\Phi$. Thus the orthogonal direct-sum decomposition

$$T_{\Phi} \dot{\mathbf{M}}_1(2^{n}, \ell) = \operatorname{Ver}(\Phi) \oplus \operatorname{Hor}(\Phi) \quad (\Phi \in \dot{\mathbf{M}}_1(2^{n}, \ell)) \tag{45}$$

with respect to the inner product $((\cdot,\cdot))^{ESOT}_\Phi$ holds for the tangent space $T_\Phi\dot{\mathrm{M}}_1(2^n,\ell)$.
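A quick sanity check (ours, in NumPy, assuming points of $\mathrm{M}_1$ are normalized so that $\mathrm{trace}\,\Phi^\dagger\Phi = \ell$): every vertical vector $X = \xi\Phi$ of (41) is indeed tangent in the sense of (40), because $\Phi^\dagger\xi\Phi$ is anti-Hermitian and hence has purely imaginary trace.

```python
import numpy as np

rng = np.random.default_rng(3)
n, ell = 3, 2
N = 2 ** n

# A point of M1(2^n, ell), assumed normalized so that trace(Phi† Phi) = ell
Phi = rng.normal(size=(N, ell)) + 1j * rng.normal(size=(N, ell))
Phi *= np.sqrt(ell / np.trace(Phi.conj().T @ Phi).real)

# A random element xi of the Lie algebra u(2^n): anti-Hermitian, cf. (43)
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
xi = (A - A.conj().T) / 2

X = xi @ Phi                                 # a vertical vector, cf. (41)
tangency = np.trace(Phi.conj().T @ X).real   # Re trace(Phi† X), cf. (40)
```

The quantity `tangency` vanishes (up to rounding) for every choice of $\xi$, confirming $\mathrm{Ver}(\Phi) \subset T_\Phi\dot{\mathrm{M}}_1(2^n,\ell)$.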

On using (45), the horizontal lift of any tangent vector of the QIS is given as follows: Let us fix $\rho \in \dot{\mathrm{P}}_\ell$ arbitrarily and any $\Phi \in \dot{\mathrm{M}}_1(2^n,\ell)$ subject to $\pi^{(n,l)}(\Phi) = \rho$. For any tangent vector $\Xi \in T_\rho\dot{\mathrm{P}}_\ell$ (see (23)), the horizontal lift of $\Xi$ at $\Phi$ is the unique tangent vector, denoted by $\Xi^*$, in $T_\Phi\dot{\mathrm{M}}_1(2^n,\ell)$ that satisfies

$$(\pi^{(n,l)})\_{\*\Phi}(\Xi^\*) = \Xi \quad \text{and} \quad \Xi^\* \in \text{Hor}(\Phi), \tag{46}$$

where $(\pi^{(n,l)})_{*\Phi} : T_\Phi\dot{\mathrm{M}}_1(2^n,\ell) \to T_{\pi^{(n,l)}(\Phi)}\dot{\mathrm{P}}_\ell = T_\rho\dot{\mathrm{P}}_\ell$ is the differential map of $\pi^{(n,l)}$ at $\Phi$. For details of differential maps, see Appendix 2. Recalling, further, the orthogonal direct-sum decomposition (45), we can understand that the horizontal lift $\Xi^*$ of $\Xi \in T_\rho\dot{\mathrm{P}}_\ell$ is of minimum length among the vectors $X \in T_\Phi\dot{\mathrm{M}}_1(2^n,\ell)$ subject to $(\pi^{(n,l)})_{*\Phi}(X) = \Xi$.

Accordingly, the horizontal lift (46) and the Riemannian metric $((\cdot,\cdot))^{ESOT}$ are put together to give rise to the Riemannian metric, denoted by $((\cdot,\cdot))^{RS}$, of $\dot{\mathrm{P}}_\ell$, which is defined to satisfy

$$((\Xi, \Xi'))^{RS}_{\rho} = ((\Xi^*, \Xi'^*))^{ESOT}_{\Phi} \quad \text{with} \quad \pi^{(n,l)}(\Phi) = \rho \quad (\Xi, \Xi' \in T_{\rho}\dot{\mathbf{P}}_{\ell},\ \rho \in \dot{\mathbf{P}}_{\ell}). \tag{47}$$

The $\Xi^*$ and $\Xi'^*$ are the horizontal lifts at $\Phi$ of $\Xi$ and $\Xi'$, respectively, and the superscript $RS$ indicates that $((\cdot,\cdot))^{RS}$ is the Riemannian metric of $\dot{\mathrm{P}}_\ell$ looked upon as the reduced space. Note here that the rhs of (47) is well-defined owing to the invariance,

$$(((\alpha_g)_{*\Phi}(X), (\alpha_g)_{*\Phi}(X')))_{\alpha_g(\Phi)}^{ESOT} = ((X, X'))_{\Phi}^{ESOT} \quad (X, X' \in T_{\Phi}\dot{\mathbf{M}}_1(2^n, \ell),\ \Phi \in \dot{\mathbf{M}}_1(2^n, \ell)), \tag{48}$$

of $((\cdot,\cdot))^{ESOT}$ and the equivariance,


$$\operatorname{Hor}(\alpha_g(\Phi)) = (\alpha_g)_{*\Phi}(\operatorname{Hor}(\Phi)) \quad (\Phi \in \dot{\mathbf{M}}_1(2^n, \ell)), \tag{49}$$

of $\mathrm{Hor}(\Phi)$ under the $\mathrm{U}(2^n)$ action, where $(\alpha_g)_{*\Phi}$ is the differential map of $\alpha_g$ at $\Phi$ (see (33) and Appendix 2). In view of (47), we say that the projection $\pi^{(n,l)} : \dot{\mathrm{M}}_1(2^n,\ell) \to \dot{\mathrm{P}}_\ell$ is a Riemannian submersion [29].

We have the following on the coincidence of ((·, ·))*RS* and ((·, ·))*QF* [19].

**Theorem 3.2.** *The Riemannian metric* ((·, ·))*RS defined by (47), which makes π*(*n*,*l*) *a Riemannian submersion, coincides with the SLD-Fisher metric defined by (25) up to the constant multiple* 4*; namely,* 4((·, ·))*RS* = ((·, ·))*QF.*

On closing this subsection, a comparison between the reduction here and the one by Miyake and Wadati is made. The reduction methods are essentially different, since our reduction is made under the 'left' U(2*n*) action while the 'right' U(1) action is dealt with in [15]. The resultant reduced spaces are, of course, mutually different.

## **3.3. Geodesic property of the reduced search sequence**

We are now in a position to show that the reduced search sequence {*π*(*n*,*<sup>l</sup>*)(*I<sup>k</sup> <sup>G</sup>*(*A*))} in the QIS is on an *m*-geodesic, a geodesic with respect to the *m*-parallel transport, of the QIS.

#### *3.3.1. Intuitive example of parallel transports: The Euclidean case*

Let us start by thinking of parallel transport in the 3-dimensional Euclidean space **R**3, the conventional model space not only for basic mathematics and physics but also for our daily life. In **R**3, the notion of *parallel* seems to be a trivial one, which is usually not presented in a differential-geometric framework to those who are not familiar with differential geometry. As the minimum geometric knowledge necessary in this subsection, we introduce below a coordinate expression of tangent vectors. As is well known, **R**3 is endowed with the Cartesian coordinates *y* = (*y*1, *y*2, *y*3)*<sup>T</sup>* valid globally in **R**3. The tangent vectors at any point *p* ∈ **R**3 can be understood as infinitesimal limits of displacements from *p*. The tangent vector understood as the limit of the displacement *p* + *εe*(*j*) (*ε* → 0) is then written as $\left(\frac{\partial}{\partial y_j}\right)_p$, where the *e*(*j*)s are the orthonormal vectors along the *j*-th axis (*j* = 1, 2, 3). The account for this expression is that we have $\lim_{\varepsilon \to 0} \frac{F(p + \varepsilon e^{(j)}) - F(p)}{\varepsilon} = \frac{\partial F}{\partial y_j}(p)$ for any differentiable function *F*. A parallel transport is a rule for transferring the tangent vectors at *p* ∈ **R**3 to those at another point *p*′ ∈ **R**3, which of course has to satisfy several mathematical requirements not detailed here. The well-known parallel transport in the conventional Euclidean space **R**3 is clearly expressed as

$$\sum\_{j=1}^{3} v\_j \left( \frac{\partial}{\partial y\_j} \right)\_p \in T\_p \mathbf{R}^3 \mapsto \sum\_{j=1}^{3} v\_j \left( \frac{\partial}{\partial y\_j} \right)\_{p'} \in T\_{p'} \mathbf{R}^3 \quad (v\_j \in \mathbf{R}).\tag{50}$$
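The component rule (50) and the directional-derivative reading of $\partial/\partial y_j$ can be checked numerically. The following is a minimal illustrative sketch (not part of the original chapter); the test function `F` and the points `p`, `pp` are arbitrary choices:

```python
import numpy as np

# Parallel transport (50) in R^3: a tangent vector is represented by its
# components (v1, v2, v3); moving the base point p -> p' leaves them unchanged.
def transport(v, p, p_prime):
    """Euclidean parallel transport: components are simply carried over."""
    return v.copy()  # independent of p and p'

# Tangent vectors act on functions as directional derivatives:
# (sum_j v_j d/dy_j)|_p F  ~  (F(p + eps*v) - F(p)) / eps.
def directional_derivative(F, p, v, eps=1e-6):
    return (F(p + eps * v) - F(p)) / eps

F  = lambda y: y[0]**2 + 3.0 * y[1] * y[2]    # a smooth test function on R^3
p  = np.array([1.0, 2.0, -1.0])
pp = np.array([0.5, 0.0,  4.0])
v  = np.array([1.0, -1.0, 2.0])

w = transport(v, p, pp)                        # same components at the new point
num  = directional_derivative(F, pp, w)
grad = np.array([2*pp[0], 3*pp[2], 3*pp[1]])   # analytic gradient of F at p'
print(abs(num - grad @ w) < 1e-4)
```

The finite-difference value agrees with the analytic directional derivative, confirming that the transported components act consistently at the new base point.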

Geometry and Dynamics of a Quantum Search Algorithm for an Ordered Tuple of Multi-Qubits, http://dx.doi.org/10.5772/53187

An important note is that parallel transports in general differentiable manifolds (including the familiar sphere *S*2) are defined in terms of curves specifying the way of point-translations (see Appendix 3 for the case of *S*2). The Euclidean case (50) is hence understood to be a curve-free case.

Once the parallel transport (50) is given to **R**3, geodesics in **R**3 are defined to be autoparallel curves: Let *γ*(*t*) (*t* ∈ [*t*0, *t*1] ⊂ **R**) be a curve in **R**3, whose tangent vector at *t* = *τ* is given by $\frac{d\gamma}{dt}(\tau)$. The curve *γ*(*t*) is autoparallel if the tangent vector at each point is equal to the parallel transport of the initial tangent vector $\frac{d\gamma}{dt}(t_0)$. Accordingly, every autoparallel curve turns out to be a straight line or a segment thereof, as is widely known. The geodesics in **R**3 discussed here have the shortest-path property with respect to the Euclidean metric, since the parallel transport (50) leaves the metric invariant. Note that parallel transports other than (50) can exist, whose geodesics of course lose the shortest-path property with respect to the Euclidean metric.
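As a small numerical illustration of the autoparallel property (an addition for illustration, not part of the chapter), the sketch below steps a curve forward while keeping its tangent equal to the transported initial tangent, and checks that the resulting points are collinear, i.e. lie on a straight line:

```python
import numpy as np

# An autoparallel curve in R^3: at each step the tangent is the parallel
# transport (50) of the initial tangent, i.e. its components never change.
def autoparallel(p0, v0, steps=100, dt=0.01):
    pts = [np.asarray(p0, dtype=float)]
    for _ in range(steps):
        v = v0  # transport (50): the components of the tangent stay constant
        pts.append(pts[-1] + dt * v)
    return np.array(pts)

v0 = np.array([1.0, -2.0, 0.5])
curve = autoparallel([0.0, 1.0, 2.0], v0)

# Every point lies on the straight line p0 + t*v0: check collinearity
# via the vanishing of the cross product of displacements with v0.
d = curve - curve[0]
colinear = np.allclose(np.cross(d, v0), 0.0)
print(colinear)
```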

#### *3.3.2. The m-parallel transport in the QIS*

We move on to the *m*-parallel transport in the QIS. Fortunately, the *m*-parallel transport can be described in a setting similar to that for the transport (50). Let us start with the space of ℓ × ℓ complex matrices, M(ℓ, ℓ), which includes the QIS, P˙ℓ, as a subset. The M(ℓ, ℓ) admits the matrix entries as global (complex) coordinates, like the Cartesian coordinates of **R**3. The tangent space *Tρ*P˙ℓ at *ρ* ∈ P˙ℓ can be identified with the set of ℓ × ℓ traceless Hermitian matrices (see (23)), which can be dealt with in a similar way to the Euclidean parallel-transport setting. Indeed, in view of the definition (22), P˙ℓ is understood to be a fragment of an affine subspace of M(ℓ, ℓ). Hence the tangent space *Tρ*P˙ℓ at every *ρ* ∈ P˙ℓ admits the structure (23), which is looked upon as a linear subspace of M(ℓ, ℓ).
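The identification of *Tρ*P˙ℓ with the traceless Hermitian matrices can be seen concretely: a difference of two density matrices is a chord of the affine fragment, hence in the limit a tangent vector, and it is automatically Hermitian with zero trace, matching (23). A minimal numpy sketch (the random construction is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(2)
l = 3  # matrix size (written "ell" in the text)

def random_density(l):
    """A random l x l positive definite density matrix (unit trace)."""
    B = rng.normal(size=(l, l)) + 1j * rng.normal(size=(l, l))
    rho = B @ B.conj().T
    return rho / np.trace(rho).real

# A difference of two density matrices: traceless and Hermitian, as in (23).
Xi = random_density(l) - random_density(l)
print(abs(np.trace(Xi)) < 1e-12, np.allclose(Xi, Xi.conj().T))
```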

According to quantum information theory [25, 26], the *m*-parallel transport is written in a simple form

$$
\Xi \in T\_{\rho} \dot{\mathbf{P}}\_{\ell} \mapsto \Xi \in T\_{\rho'} \dot{\mathbf{P}}\_{\ell}.\tag{51}
$$

The geodesic from *ρ*<sup>0</sup> to *ρ*<sup>1</sup> with respect to the *m*-parallel transport is therefore characterized as an autoparallel curve,

$$
\rho^{mg}(t) = (1 - t)\rho_0 + t\rho_1 \quad (0 \le t \le 1), \tag{52}
$$

which takes a very similar form to the Euclidean case. The parameter *t* in (52) can be chosen arbitrarily up to affine transformations; *t* → *at* + *b* (*a*, *b* ∈ **R**).
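The affine form (52) can be illustrated numerically: a convex combination of two regular density matrices again has unit trace and is positive definite, so the *m*-geodesic stays inside the QIS. A minimal numpy sketch (the random construction of ρ0, ρ1 is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
l = 4  # matrix size (written "ell" in the text)

def random_density(l):
    """A random l x l positive definite density matrix (unit trace)."""
    B = rng.normal(size=(l, l)) + 1j * rng.normal(size=(l, l))
    rho = B @ B.conj().T
    return rho / np.trace(rho).real

rho0, rho1 = random_density(l), random_density(l)
for t in np.linspace(0.0, 1.0, 11):
    rho_t = (1 - t) * rho0 + t * rho1                 # the m-geodesic (52)
    assert abs(np.trace(rho_t).real - 1.0) < 1e-12   # unit trace
    assert np.min(np.linalg.eigvalsh(rho_t)) > 0     # positive definite
print("the mixture stays in the QIS for all t in [0, 1]")
```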

A very important remark should be made here. From a naive viewpoint, the geodesic *ρmg*(*t*) in (52) looks 'straight'. This is *not true*, however, since the QIS is not Euclidean because of the SLD-Fisher metric ((·, ·))*QF* with which the QIS is endowed. Precisely speaking, *ρmg*(*t*) has to be understood to be 'curved' in the QIS.

#### *3.3.3. The reduced search sequence is on a geodesic*

We are at the final stage of showing that the search sequence {*I<sup>k</sup><sub>G</sub>*(*A*)} is reduced through *π*(*n*,*l*) onto an *m*-geodesic, a geodesic with respect to the *m*-transport, of the QIS. We start by calculating the reduced sequence {*π*(*n*,*l*)(*I<sup>k</sup><sub>G</sub>*(*A*))} explicitly. Though the initial state *A* for the search sequence {*I<sup>k</sup><sub>G</sub>*(*A*)} is out of the range Ṁ1(2*n*, ℓ), we apply *π*(*n*,*l*) to {*I<sup>k</sup><sub>G</sub>*(*A*)} in the manner of (38). Since we have

$$(W^{\dagger}W)_{jh} = \delta_{jh} \qquad (j, h = 1, 2, \cdots, \ell), \tag{53}$$

$$(R^{\dagger}R)_{jh} = \frac{2^n - 2 + \delta_{jh}}{2^n - 1} \qquad (j, h = 1, 2, \cdots, \ell), \tag{54}$$

$$(W^{\dagger}R)_{jh} = (R^{\dagger}W)_{jh} = \frac{1 - \delta_{jh}}{\sqrt{2^n - 1}} \qquad (j, h = 1, 2, \cdots, \ell), \tag{55}$$

$$(A^{\dagger}A)_{jh} = 1 \qquad (j, h = 1, 2, \cdots, \ell), \tag{56}$$

the reduced search sequence {*π*(*n*,*<sup>l</sup>*)(*I<sup>k</sup> <sup>G</sup>*(*A*))} takes the form

$$\begin{aligned} \pi^{(n,l)}(I_G^k(A)) &= \frac{1}{\ell}\left(\sin(k + \tfrac{1}{2})\theta\, W + \cos(k + \tfrac{1}{2})\theta\, R\right)^{\dagger}\left(\sin(k + \tfrac{1}{2})\theta\, W + \cos(k + \tfrac{1}{2})\theta\, R\right) \\ &= \frac{1}{\ell}\sin^2(k + \tfrac{1}{2})\theta\, W^{\dagger}W + \frac{1}{\ell}\cos^2(k + \tfrac{1}{2})\theta\, R^{\dagger}R + \frac{1}{\ell}\cos(k + \tfrac{1}{2})\theta\,\sin(k + \tfrac{1}{2})\theta\,(R^{\dagger}W + W^{\dagger}R) \\ &= (1 - \tau_k)\,\frac{1}{\ell}A^{\dagger}A + \tau_k\,\frac{1}{\ell}I_{\ell} \qquad (k = 1, 2, \cdots) \end{aligned} \tag{57}$$

with


$$\begin{aligned} \tau_k &= 1 - \frac{2^n - 2}{2^n - 1} \cos^2\!\left((k + \tfrac{1}{2})\theta\right) \\ &\quad - \frac{2}{\sqrt{2^n - 1}} \cos\!\left((k + \tfrac{1}{2})\theta\right) \sin\!\left((k + \tfrac{1}{2})\theta\right) \qquad (k = 1, 2, \cdots), \tag{58} \end{aligned}$$

where *<sup>I</sup>*<sup>ℓ</sup> stands for the ℓ × ℓ identity matrix. The *<sup>δ</sup>jh* in (53)-(55) indicates Kronecker's delta and *θ* is defined already to satisfy (15).
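The passage from the second to the third line of (57), with *τk* given by (58), can be verified entrywise by symbolic computation. The sketch below (an illustration using sympy, with `N` standing for 2<sup>*n*</sup> and the common factor 1/ℓ dropped from both sides) substitutes the entry values (53)-(56) for the diagonal (δ = 1) and off-diagonal (δ = 0) cases:

```python
import sympy as sp

N, th, k = sp.symbols('N theta k', positive=True)  # N stands for 2**n
s = sp.sin((k + sp.Rational(1, 2)) * th)
c = sp.cos((k + sp.Rational(1, 2)) * th)

# tau_k as in (58)
tau = 1 - (N - 2)/(N - 1) * c**2 - 2/sp.sqrt(N - 1) * c * s

for delta in (0, 1):  # off-diagonal / diagonal entries
    # entry (j,h) of (sin W + cos R)^dagger (sin W + cos R), using (53)-(55)
    lhs = (s**2 * delta
           + c**2 * (N - 2 + delta)/(N - 1)
           + 2 * s * c * (1 - delta)/sp.sqrt(N - 1))
    # entry (j,h) of (1 - tau_k) A^dagger A + tau_k I, using (56)
    rhs = (1 - tau) * 1 + tau * delta
    assert sp.simplify(lhs - rhs) == 0
print("identity (57) with tau_k from (58) verified entrywise")
```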

The expressions (52) and (57), put together, inspire us to consider the *m*-geodesic in the QIS of the form

$$\rho^{G}(t) = (1 - t)\left(\frac{1}{\ell}A^{\dagger}A\right) + t\left(\frac{1}{\ell}I_{\ell}\right) \qquad (\varepsilon \le t \le 1), \tag{59}$$

where *ε* is a sufficiently small positive number subject to 0 < *ε* < *τ*1 (see (58) with *k* = 1 for *τ*1). Note here that the reduction <sup>1</sup>⁄ℓ *A*†*A* ∈ Pℓ of the initial state *A* ∈ M1(2*n*, ℓ) turns out to be placed as the limit point of the geodesic *ρ<sup>G</sup>*(*t*) in the sense that

$$\lim\_{\varepsilon \to +0} \rho^G(\varepsilon) = \frac{1}{\ell} A^\dagger A. \tag{60}$$
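A small numerical check (illustrative only) makes the role of *ε* visible: with (*A*†*A*)*jh* = 1 from (56), the endpoint <sup>1</sup>⁄ℓ *A*†*A* is a rank-one, hence singular, density matrix on the boundary of the QIS, while *ρ<sup>G</sup>*(*t*) is positive definite for every *t* > 0 and tends to <sup>1</sup>⁄ℓ *A*†*A* as *t* → +0, as in (60):

```python
import numpy as np

l = 4                    # ell, the number of columns, as an example size
J = np.ones((l, l))      # A^dagger A: all entries equal 1 by (56)
I = np.eye(l)

def rho_G(t):
    return (1 - t) * (J / l) + t * (I / l)   # the m-geodesic (59)

# At t = 0 the matrix (1/l) A^dagger A is rank one, hence singular: it lies
# on the boundary of the QIS, which is why (59) is restricted to t >= eps > 0.
assert np.linalg.matrix_rank(J / l) == 1
for t in (0.01, 0.5, 1.0):
    assert np.min(np.linalg.eigvalsh(rho_G(t))) > 0    # inside the QIS
    assert abs(np.trace(rho_G(t)) - 1.0) < 1e-12       # unit trace
# The limit (60): rho_G(eps) -> (1/l) A^dagger A as eps -> +0.
assert np.allclose(rho_G(1e-9), J / l, atol=1e-8)
print("rho_G(t) is a regular density matrix for t > 0, with limit (1/l) A^dagger A")
```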

Combining (59) with (57) and (58), we have

$$
\pi^{(n,l)}(I_G^k(A)) = \rho^G(\tau_k) \qquad (k = 1, 2, \cdots, K_n) \tag{61}
$$

where *K<sub>n</sub>* is the integer nearest to $\frac{\pi}{4}\sqrt{2^n - 1} - \frac{1}{2}$. The reduction of the initial state is placed at the limit point of the *m*-geodesic *ρ<sup>G</sup>*(*t*) in the sense (60). Thus we have the following, outlined as Main Theorem in Sec. 1:

**Theorem 3.3.** *Through the reduction of the regular part,* Ṁ1(2*n*, ℓ)*, of the extended space of ordered tuples of multi-qubits (ESOT) to the quantum information space (QIS),* P˙ℓ*, the reduced search sequence* {*π*(*n*,*l*)(*I<sup>k</sup><sub>G</sub>*(*A*))}*k*=1,2,··· ,*Kn* *is on the m-geodesic ρ<sup>G</sup>*(*t*) *of the QIS given by (59).*

## **4. Concluding remarks**

We have studied the Grover-type search sequence for an ordered tuple of multi-qubits. The search sequence itself is shown to be on a geodesic with respect to the Levi-Civita parallel transport in the ESOT. Further, the reduced search sequence in the QIS is shown to be on a geodesic with respect to the *m*-parallel transport in the QIS. The *m*-geodesics do not have the shortest-path property, but they are very important geodesics in the QIS, together with those with respect to the *e*-parallel transport. The geometric reduction method applied in this chapter is entirely different from the method in Miyake and Wadati [15].

A significance of this chapter is the discovery of a novel geometric pathway that directly connects the search sequence in the ESOT with an *m*-geodesic in the QIS. Owing to the crucial role of the *m*-geodesics and the *e*-geodesics together with their mutual duality, the pathway will be a key to further studies on the search in the ESOT from the quantum-information-geometry viewpoint. Further, since the QIS is well known to be the stage for describing the dynamics of quantum-state ensembles of quantum systems [2, 3], the pathway shown in this chapter will be of good use in connecting the search in the ESOT with the dynamics of a certain quantum system.

A direct application of the search in the ESOT has not yet been found; however, if a problem with a strong relation to the relative ordering of data (see around Eq. (39)) exists, our search will be worth applying to the problem.

On closing this section, three questions are posed below, which would be of interest from the viewpoint of the expected benefits listed in Sec. 1.

1. In view of the results in this chapter, we are able to clarify that the 'Grover search orbit' given by a continuous-time version of (16) is an *m*-geodesic. A question thereby arises as to 'Is it possible to characterize the *m*-geodesics by orbits of a certain dynamical system on M1(2*n*, ℓ)?'. In this direction, a variation of the free-particle system on M1(2*n*, ℓ) would be a candidate (Benefits 1 and 2).
2. Accordingly, another question would be worth posing: 'Is it possible to characterize the *e*-geodesics by orbits of a certain dynamical system on M1(2*n*, ℓ)?' (Benefits 1 and 2).
3. The celebrated fact on the duality between the *m*-transport and the *e*-transport (see [25] and [26]) may provide us with a further question: 'If there exists a pair of dynamical systems on M1(2*n*, ℓ) whose reduced orbits characterize the *m*-geodesics and the *e*-geodesics respectively, what kind of relation exists between those systems?' (Benefit 3).

## **Acknowledgements**

The author would like to thank the editor for his valuable comments on improving the earlier manuscript of this chapter. This work is partly supported by Special Fund for Strategic Research No. 4 (2011) in Future University Hakodate.

## **Appendices**

## **Appendix 1. Glossary of symbols and notation**

## **Acronyms**

• ESOT: The abbreviation of the extended space of ordered tuples of multi-qubits, which is denoted by M1(2*n*, ℓ) (see Eq. (5)).
• QIS: The abbreviation of the quantum information space, which is realized as P˙ℓ, the set of ℓ × ℓ positive definite Hermitian matrices with unit trace, endowed with the quantum SLD-Fisher metric ((·, ·))*QF* (see Eq. (22) for P˙ℓ, and (24)-(31) for ((·, ·))*QF*).
• SLD: The abbreviation of the symmetric logarithmic derivative (see Eq. (24)).



## **Sets and spaces**

• M(ℓ, ℓ): The set of ℓ × ℓ complex matrices.
• M(2*n*, ℓ): The set of 2*<sup>n</sup>* × ℓ complex matrices.
• M(2*n*, 2*n*): The set of 2*<sup>n</sup>* × 2*<sup>n</sup>* complex matrices.
• M(2*<sup>n</sup>* − ℓ, 2*<sup>n</sup>* − ℓ): The set of (2*<sup>n</sup>* − ℓ) × (2*<sup>n</sup>* − ℓ) complex matrices.
• M1(2*n*, ℓ): The subset of M(2*n*, ℓ) consisting of 2*<sup>n</sup>* × ℓ complex matrices with unit norm, referred to as the extended space of ordered tuples of multi-qubits (see Eq. (5)), which is abbreviated to the ESOT.
• M*<sup>OT</sup>*<sub>1</sub>(2*n*, ℓ): The subset of M1(2*n*, ℓ) consisting of 2*<sup>n</sup>* × ℓ complex matrices whose columns are of unit length (see Eq. (7)).
• Ṁ1(2*n*, ℓ): The subset of M1(2*n*, ℓ) consisting of the elements of M1(2*n*, ℓ) with the maximum rank equal to ℓ (see Eq. (32)).
• Pℓ: The set of ℓ × ℓ positive semidefinite Hermitian matrices with unit trace; the space of ℓ × ℓ density matrices (see Eq. (21)).
• P˙ℓ: The set of ℓ × ℓ positive definite Hermitian matrices with unit trace; the space of ℓ × ℓ regular density matrices (see Eq. (22)).
• *T*ΦM1(2*n*, ℓ): The tangent space of M1(2*n*, ℓ) at Φ ∈ M1(2*n*, ℓ) (see Eq. (17)).
• *T*ΦṀ1(2*n*, ℓ): The tangent space of Ṁ1(2*n*, ℓ) at Φ ∈ Ṁ1(2*n*, ℓ) (see Eq. (40)), which is identical with *T*ΦM1(2*n*, ℓ) if Φ ∈ Ṁ1(2*n*, ℓ).
• *Tρ*P˙ℓ: The tangent space of P˙ℓ at a point ρ ∈ P˙ℓ (see Eq. (23)).
• Hor(Φ): The horizontal subspace of *T*ΦṀ1(2*n*, ℓ) (see Eq. (42) with (41)).
• Ver(Φ): The vertical subspace of *T*ΦṀ1(2*n*, ℓ) (see Eq. (41)).
• U(*l*): The group of ℓ × ℓ unitary matrices (see Eq. (27)).
• U(2*n*): The group of 2*<sup>n</sup>* × 2*<sup>n</sup>* unitary matrices (see Eq. (34)).
• U(2*<sup>n</sup>* − ℓ): The group of (2*<sup>n</sup>* − ℓ) × (2*<sup>n</sup>* − ℓ) unitary matrices (see Eq. (37)).
• *u*(2*n*): The Lie algebra of the group U(2*n*) (see Eq. (43)).

## **Maps, operators and transformations**

• *IA*: The unitary transformation of M1(2*n*, ℓ) defined by (11).
• *IW*: The unitary transformation of M1(2*n*, ℓ) defined by (12).
• *IG*: The unitary transformation composed of −*IA* and *IW* (see Eq. (10)).
• *αg*: The unitary transformation of M1(2*n*, ℓ) associated with *g* ∈ U(2*n*) (see Eq. (33)).
• (*αg*)∗Φ: The differential map of *αg* at Φ ∈ Ṁ1(2*n*, ℓ). See also Appendix 2 for the definition.
• *π*(*n*,*l*): The projection of Ṁ1(2*n*, ℓ) to P˙ℓ (the QIS) (see Eq. (38)).
• (*π*(*n*,*l*))∗Φ: The differential map of *π*(*n*,*l*) at Φ ∈ Ṁ1(2*n*, ℓ). See also Appendix 2 for the definition.
• L*ρ*: The symmetric logarithmic derivative (SLD) (see Eqs. (24) and (29)).
• †: The Hermitian conjugate operation on vectors and matrices.
• *<sup>T</sup>*: The transpose operation on vectors and matrices.

## **Metrics**

• ((·, ·))*ESOT*: The Riemannian metric of the ESOT and of its regular part (see Eq. (18)).
• ((·, ·))*QF*: The quantum SLD-Fisher metric of the QIS (see Eqs. (25) and (30)).
• ((·, ·))*RS*: The Riemannian metric of the QIS other than ((·, ·))*QF*, which makes the projection *π*(*n*,*l*) a Riemannian submersion (see Eq. (47) with (46)).

## **Others**

• *A*: The matrix expressing the initial state for the Grover-type search in the ESOT (see Eq. (9)).
• *W*: The matrix expressing the target state (namely the marked state) for the Grover-type search in the ESOT (see Eq. (9)).
• *R*: The matrix which, together with *W*, forms an orthonormal basis of the subspace consisting of all the superpositions of *A* and *W* (see Eq. (13)).
• ∼: The equivalence relation both on M1(2*n*, ℓ) and on Ṁ1(2*n*, ℓ) (see Eq. (35)).


## **Appendix 2. Differential maps**

We here give a detailed explanation of the differential maps (*π*(*n*,*l*))∗Φ and (*αg*)∗Φ. For any *X* ∈ *T*ΦM1(2*n*, ℓ), we can always find a curve *γ*(*t*) (−*τ* < *t* < *τ*, *τ* > 0) on M1(2*n*, ℓ) subject to *γ*(0) = Φ and $\frac{d\gamma}{dt}(0) = X$. The differential map (*π*(*n*,*l*))∗Φ is defined to be

$$(\pi^{(n,l)})_{*\Phi}(X) = \left. \frac{d}{dt} \right|_{t=0} \pi^{(n,l)}(\gamma(t)), \tag{62}$$

which turns out to take the explicit form

$$(\pi^{(n,l)})\_{\*\Phi}(X) = \frac{1}{\ell}(X^{\dagger}\Phi + \Phi^{\dagger}X). \tag{63}$$

The differential map (*αg*)∗Φ of *αg* at Φ is defined in the same way: with the same setting-up of the curve *γ*(*t*) with *X* ∈ *T*ΦM1(2*n*, ℓ), the (*αg*)∗Φ is defined by

$$(\alpha_g)_{*\Phi}(X) = \left. \frac{d}{dt} \right|_{t=0} \alpha_g(\gamma(t)), \tag{64}$$

which yields

$$(\alpha_g)_{*\Phi}(X) = gX. \tag{65}$$
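Formula (63) can be checked against a finite-difference derivative. The sketch below (illustrative) assumes *π*(*n*,*l*)(Φ) = <sup>1</sup>⁄ℓ Φ†Φ, which is the form consistent with (63); the straight curve γ(*t*) = Φ + *tX* is used only to differentiate along the direction *X*, since (63) is linear in *X*:

```python
import numpy as np

rng = np.random.default_rng(1)
n, l = 2, 2
dim = 2**n

Phi = rng.normal(size=(dim, l)) + 1j * rng.normal(size=(dim, l))
X   = rng.normal(size=(dim, l)) + 1j * rng.normal(size=(dim, l))

def pi(M):
    """The projection, assumed to be pi(Phi) = (1/l) Phi^dagger Phi,
    which is the form consistent with (63)."""
    return (M.conj().T @ M) / l

# Finite-difference derivative of pi along the curve gamma(t) = Phi + t X
eps = 1e-6
numeric  = (pi(Phi + eps * X) - pi(Phi)) / eps
analytic = (X.conj().T @ Phi + Phi.conj().T @ X) / l   # formula (63)
print(np.allclose(numeric, analytic, atol=1e-4))
```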

## **Appendix 3. The standard parallel transport in** *S*<sup>2</sup>

In this appendix, the standard parallel transport is concisely reviewed. In particular, we present the fact that the transport depends on the choice of the paths connecting a pair of points in *S*2. Below, *S*<sup>2</sup> is realized as the set,

$$\mathbf{S}^2 = \{ \mathbf{y} \in \mathbf{R}^3 \, | \, \mathbf{y}^T \mathbf{y} = 1 \},\tag{66}$$

in the 3-dimensional Euclidean space **R**3.

Let us fix a pair of distinct points, *y*0 and *y*1, in *S*2 arbitrarily, and connect them by a smooth curve *γ*(*s*), where *s* is the arc-length parameter. Namely, the *γ*(*s*) satisfies

$$
\gamma(0) = y\_0, \quad \gamma(L) = y\_1 \quad (L:\text{the full curve length}).\tag{67}
$$

Again, we remark that *γ*(*s*) takes 3-dimensional vector form. To express tangent vectors of *<sup>S</sup>*<sup>2</sup> at *<sup>γ</sup>*(*s*), we prepare the orthonormal basis {*v*1(*s*), *<sup>v</sup>*2(*s*)} of *<sup>T</sup>γ*(*s*)*S*<sup>2</sup> subject to

$$v\_1(\mathbf{s}) = \dot{\gamma}(\mathbf{s}), \quad v\_2(\mathbf{s}) = \gamma(\mathbf{s}) \times \dot{\gamma}(\mathbf{s}) \tag{68}$$
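The orthonormality of the basis (68) can be confirmed on a concrete curve. The sketch below (an illustration, not part of the chapter) takes a great circle parameterized by arc length and checks that *v*1, *v*2 are of unit length, mutually orthogonal, and tangent to *S*2:

```python
import numpy as np

# A great circle through y0 = (1, 0, 0), parameterized by arc length s.
def gamma(s):
    return np.array([np.cos(s), np.sin(s), 0.0])

def gamma_dot(s):
    return np.array([-np.sin(s), np.cos(s), 0.0])

for s in np.linspace(0.0, 2.0, 5):
    v1 = gamma_dot(s)                        # (68): v1 = gamma'
    v2 = np.cross(gamma(s), gamma_dot(s))    # (68): v2 = gamma x gamma'
    # {v1, v2} is an orthonormal basis of the tangent plane at gamma(s)
    assert abs(v1 @ v1 - 1.0) < 1e-12
    assert abs(v2 @ v2 - 1.0) < 1e-12
    assert abs(v1 @ v2) < 1e-12
    assert abs(gamma(s) @ v1) < 1e-12        # both tangent to S^2
    assert abs(gamma(s) @ v2) < 1e-12
print("orthonormal tangent basis along the curve confirmed")
```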

Geometry and Dynamics of a Quantum Search Algorithm for an Ordered Tuple of Multi-Qubits

http://dx.doi.org/10.5772/53187

where the overdot ˙ stands for differentiation with respect to *s*, and × for the vector product. We note here that the vector *γ*˙(*s*) tangent to *γ*(*s*) is always of unit length since *s* is the arc-length parameter; this ensures that the basis {*v*1(*s*), *v*2(*s*)} is orthonormal. In terms of this basis, any tangent vector at *γ*(*s*) can be expressed as a linear combination, *c*1*v*1(*s*) + *c*2*v*2(*s*) (*c*1, *c*2 ∈ **R**). Accordingly, the parallel transport along the curve *γ*(*s*) is understood as the way of connecting {*v*1(*L*), *v*2(*L*)} at *y*<sup>1</sup> = *γ*(*L*) to {*v*1(0), *v*2(0)} at *y*<sup>0</sup> = *γ*(0).
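As a concrete check (our own sketch, not from the text), the frame (68) can be evaluated along a sample unit-speed curve, here the equatorial great circle, and verified to be an orthonormal basis of the tangent plane:

```python
import numpy as np

def gamma(s):
    """A sample unit-speed curve on S^2: the equatorial great circle."""
    return np.array([np.cos(s), np.sin(s), 0.0])

def gamma_dot(s):
    return np.array([-np.sin(s), np.cos(s), 0.0])

def frame(s):
    """Orthonormal basis {v1, v2} of the tangent plane at gamma(s), Eq. (68)."""
    v1 = gamma_dot(s)
    v2 = np.cross(gamma(s), gamma_dot(s))
    return v1, v2

s = 0.7
v1, v2 = frame(s)
assert np.isclose(v1 @ v1, 1.0) and np.isclose(v2 @ v2, 1.0)  # unit length
assert np.isclose(v1 @ v2, 0.0)        # mutually orthogonal
assert np.isclose(gamma(s) @ v1, 0.0)  # v1 is tangent to S^2
assert np.isclose(gamma(s) @ v2, 0.0)  # v2 is tangent to S^2
```

Unit speed of *γ* is what makes *v*1 automatically of unit length, matching the remark above.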

To express the parallel transport concisely, it is useful to introduce the one-parameter family of rotation matrices,

$$
\Gamma(s) = (v\_0(s), v\_1(s), v\_2(s)), \quad v\_0(s) = \gamma(s) \quad (0 \le s \le L). \tag{69}
$$

On denoting by *Pγ*(*v*1(0), *v*2(0)) the parallel transport of (*v*1(0), *v*2(0)) along the curve *γ*(*s*), we have

$$(v_1(L), v_2(L)) = P_\gamma(v_1(0), v_2(0)) \begin{pmatrix} \cos a & \sin a \\ -\sin a & \cos a \end{pmatrix} \tag{70}$$

with

$$a = -\int_0^L \frac{1}{2} \text{trace} \left( N \, \Gamma(s)^T \dot{\Gamma}(s) \right) ds, \quad N = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}. \tag{71}$$

If we choose *γ*(*s*) to be a great circle or a segment of one, then *γ*(*s*) is autoparallel, since *a* in (70) and (71) vanishes; great circles and their segments therefore turn out to be geodesics.
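The path dependence announced at the start of this appendix can be exhibited numerically. The sketch below (our own illustration) integrates the frame component ⟨*v*˙1(*s*), *v*2(*s*)⟩, the geodesic curvature, along a unit-speed curve; the resulting rotation angle between the moving frame and the parallel-transported frame vanishes for a great circle but equals 2*π* cos *θ* ≠ 0 for a latitude circle at colatitude *θ*:

```python
import numpy as np

def rotation_angle(gamma, L, n=20000):
    """Integrate <dv1/ds, v2> along a unit-speed curve gamma on S^2:
    the net angle by which the frame {v1, v2} of (68) rotates relative
    to the parallel-transported frame (hand-rolled numerical sketch)."""
    s, ds = np.linspace(0.0, L, n, retstep=True)
    pts = np.array([gamma(t) for t in s])
    v1 = np.gradient(pts, ds, axis=0)      # gamma_dot(s)
    v2 = np.cross(pts, v1)                 # gamma(s) x gamma_dot(s)
    dv1 = np.gradient(v1, ds, axis=0)
    f = np.einsum('ij,ij->i', dv1, v2)     # integrand <dv1/ds, v2>
    return float(np.sum(0.5 * (f[:-1] + f[1:])) * ds)  # trapezoid rule

# Great circle: the angle vanishes, so great circles are geodesics.
a_great = rotation_angle(lambda t: np.array([np.cos(t), np.sin(t), 0.0]),
                         2 * np.pi)

# Latitude circle at colatitude theta: the angle is 2*pi*cos(theta) != 0,
# exhibiting the path dependence of parallel transport.
theta = np.pi / 3
a_lat = rotation_angle(
    lambda t: np.array([np.sin(theta) * np.cos(t / np.sin(theta)),
                        np.sin(theta) * np.sin(t / np.sin(theta)),
                        np.cos(theta)]),
    2 * np.pi * np.sin(theta))

assert abs(a_great) < 1e-3                             # geodesic: a = 0
assert abs(a_lat - 2 * np.pi * np.cos(theta)) < 1e-3   # nonzero holonomy
```

The nonzero angle for the latitude circle agrees with the Gauss-Bonnet prediction: 2*π* minus the spherical area enclosed by the loop.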

## **Author details**

#### Yoshio Uwano

Department of Complex and Intelligent Systems, Faculty of Systems Information Science, Future University Hakodate, Kameda Nakano-cho, Hakodate, Japan



## *Edited by Taufik Abrão*

Heuristic search is an important sub-discipline of optimization theory and finds applications in a vast variety of fields, including life science and engineering. Search methods have been useful in solving tough engineering-oriented problems that either could not be solved any other way or whose solutions would take a very long time to compute. This book explores a variety of applications of search methods and techniques in different fields of electrical engineering. By organizing relevant results and applications, this book will serve as a useful resource for students, researchers and practitioners seeking to further exploit the potential of search methods in solving hard optimization problems that arise in advanced engineering technologies, such as image and video processing, detection and resource allocation in telecommunication systems, security and harmonic reduction in power generation systems, as well as redundancy optimization problems and search-fuzzy learning mechanisms in industrial applications.


Search Algorithms for Engineering Optimization
