**Numerical Simulation of the Frank-Kamenetskii PDE: GPU vs. CPU Computing**

Charis Harley

MATLAB – A Fundamental Tool for Scientific Computing and Engineering Applications – Volume 3


Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/46463

## **1. Introduction**

The efficient solution of the Frank-Kamenetskii partial differential equation through the implementation of parallelized numerical algorithms or GPUs (Graphics Processing Units) in MATLAB is a natural progression of the work which has been conducted in an area of practical import. There is an on-going interest in the mathematics describing thermal explosions due to the significance of the applications of such models - one example is the chemical processes which occur in grain silos. Solutions which pertain to the different geometries of such a physical process have different physical interpretations, however in this chapter we will consider the Frank-Kamenetskii partial differential equation within the context of the mathematical theory of combustion which according to Frank-Kamenetskii [16] deals with the combined systems of equations of chemical kinetics and of heat transfer and diffusion. A physical explanation of such a system is often a gas confined within a vessel which then reacts chemically, heating up until it either attains a steady state or explodes.

The focus of this chapter is to investigate the performance of the parallelization power of the GPU vs. the computing power of the CPU within the context of the solution of the Frank-Kamenetskii partial differential equation. GPU computing is the use of a GPU as a co-processor to accelerate CPUs (Central Processing Units) for general purpose scientific and engineering computing. The GPU accelerates applications running on the CPU by offloading some of the compute-intensive and time consuming portions of the code. The rest of the application still runs on the CPU. The reason why the application is seen to run faster is because it is using the extreme parallel processing power of the GPU to boost performance. A CPU consists of 4 to 8 CPU cores while the GPU consists of 100s of smaller cores. Together they operate to crunch through the data in the application and as such it is this massive parallel architecture which gives the GPU its high compute performance.

The methods which will be investigated in this research are implicit methods, such as the Crank-Nicolson method (*CN*) and the Crank-Nicolson method incorporating the Newton method (*CNN*) [26]. These algorithms pose a serious challenge to the implementation of a parallelized architecture, as we shall later discuss. We will also consider Rosenbrock methods, which are iterative in nature and, as indicated in Harley [22], showed drastically increased running times as the time period over which the problem was considered increased. Further pitfalls arise when trying to obtain a solution for the partial differential equation in question using numerical techniques: the first is the singularity which exists at *x* = 0. This complexity may be dealt with through the use of a Maclaurin expansion which splits the problem into two cases: *x* = 0 and *x* ≠ 0.

©2012 Harley, licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


The second hurdle is the nonlinear source term which may be dealt with using different techniques. In this chapter we will implement the Newton method which acts as an updating mechanism for the nonlinear source term and in so doing maintains the implicit nature of the scheme in a consistent fashion. While the incorporation of the Newton method leads to an increase in the computation time for the Crank-Nicolson difference scheme (see [22]) there is also an increase in the accuracy and stability of the solution. As such we find that the algorithms we are attempting to employ in the solution of this partial differential equation would benefit from the processing power of a GPU.

In this chapter we will focus on the implementation of the Crank-Nicolson implicit method, employed with and without the Newton method, and two Rosenbrock methods, namely ROS2 and ROWDA3. We consider the effectiveness of running the algorithms on the GPU rather than the CPU and discuss whether these algorithms can in fact be parallelized effectively.

### **2. Model**

The steady state formulation of the equation to be considered in this chapter was described by Frank-Kamenetskii [16] who later also considered the time development of such a reaction. The reaction rate depends on the temperature in a nonlinear fashion, generally given by Arrhenius' law. This nonlinearity is an important characteristic of the combustion phenomena since without it the critical condition for inflammation would disappear causing the idea of combustion to lose its meaning [16]. Thus, in the case of a thermal explosion, the Arrhenius law is maintained by the introduction of the exponential term which acts as a source for the heat generated by the chemical reaction. As such we are able to write an equation modelling the dimensionless temperature distribution in a vessel as

$$\frac{\partial u}{\partial t} = \nabla^2 u + \delta e^{u/(1+\epsilon u)} \tag{1}$$

where *u* is a function of the spatial variable *x* and time *t* and the Frank-Kamenetskii parameter *δ* is given by

$$\delta = \frac{Q}{\lambda} \frac{E}{RT\_0^2}\, r^2 \kappa \alpha\, e^{-\frac{E}{RT\_0}}.\tag{2}$$

The value of the Frank-Kamenetskii parameter [16] *δ* is related to the critical temperature at which ignition of a thermal explosion takes place and is thus also referred to as the critical value. At values below its critical value *δ*<sub>cr</sub> a steady state is reached for a given geometry and set of boundary conditions, whereas an explosion ensues for values above it. The Laplacian operator takes the form

$$
\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{k}{x} \frac{\partial}{\partial x}, \qquad \qquad 0 < x < 1 \tag{3}
$$

where *k* is indicative of the shape of the vessel within which the chemical reaction takes place: *k* = 0, 1 and 2 represent an infinite slab, an infinite circular cylinder and a sphere, respectively. The dimensionless parameter *ε* = *RT*<sub>0</sub>/*E* is introduced as the critical activation parameter. To be able to speak of combustion the condition *ε* ≪ 1 must be satisfied, due to the fact that the ambient temperature can normally be seen as much smaller in magnitude than the ignition temperature [16]. Equation (1) for *ε* = 0 was derived by Frank-Kamenetskii [16]. Further work was done by Steggerda [31] on Frank-Kamenetskii's original criterion for a thermal explosion, showing that a more detailed consideration of the situation is possible. For small *x* a solution was derived for the cylindrical system by Rice [27], Bodington and Gray [6] and Chambré [10]. While the steady state case - often termed the Lane-Emden equation of the second kind - has been considered extensively, the time dependent case is also of import and has been studied in [2], [32] and [33].
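The Maclaurin treatment of the singular point can be sketched in MATLAB: as *x* → 0 the term (*k*/*x*)*∂u*/*∂x* tends to *k∂*<sup>2</sup>*u*/*∂x*<sup>2</sup>, so the Laplacian at the origin becomes (1 + *k*)*∂*<sup>2</sup>*u*/*∂x*<sup>2</sup>. The function below is an illustrative sketch only; the grid layout, function name and ghost-node symmetry argument are assumptions, not the chapter's code.

```matlab
% fkLaplacian: discrete Laplacian (3) on a uniform grid x = 0:dx:1,
% splitting off the singular point x = 0, where by the Maclaurin limit
% (k/x)*u_x -> k*u_xx, so that Lap(u)|_{x=0} = (1 + k)*u_xx.
function Lu = fkLaplacian(u, dx, k)
    n  = numel(u);
    Lu = zeros(size(u));
    for i = 2:n-1                              % interior nodes, x > 0
        x     = (i - 1)*dx;
        uxx   = (u(i+1) - 2*u(i) + u(i-1)) / dx^2;
        ux    = (u(i+1) - u(i-1)) / (2*dx);
        Lu(i) = uxx + (k/x)*ux;
    end
    % x = 0: symmetry u_x(0,t) = 0 gives the ghost node u_{-1} = u_1,
    % so u_xx is approximated by 2*(u(2) - u(1))/dx^2
    Lu(1) = (1 + k) * 2*(u(2) - u(1)) / dx^2;
    Lu(n) = 0;                                 % wall node fixed by (5b)
end
```

For *u* = *x*<sup>2</sup> and *k* = 1 the sketch reproduces the exact value ∇<sup>2</sup>*u* = 4 at every node, a convenient sanity check.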


In this chapter we consider numerical solutions for equation (1) modelling a thermal explosion within a cylindrical vessel, i.e. *k* = 1. A thermal explosion occurs when the heat generated by a chemical reaction is far greater than the heat lost to the walls of the vessel in which the reaction is taking place. As such this equation is subject to certain boundary conditions given at the walls of the vessel. The appropriate boundary conditions for this problem are

$$
u(x, 0) = 0, \tag{4}
$$

$$\frac{\partial u}{\partial x}(0,t) = 0, \qquad (a) \qquad \qquad u(R,t) = 0 \qquad (b) \tag{5}$$

where *R* = 1 is the radius of the cylinder. The boundary conditions (4) and (5) imply that the temperature at the vessel walls is kept fixed and the solution is symmetric about the origin.

Frank-Kamenetskii [16] obtained a steady state solution to this problem with *�* = 0. Zeldovich et al. [34] considered similarity solutions admitted by (1) for *k* = 1 that exhibit blow-up in finite time. These kinds of solutions, while noteworthy, have limited significance due to the restricted form of the initial profiles compatible with the similarity solutions. These solutions correspond to very special initial conditions for the temperature evolution profile, limiting the degree to which results obtained in this manner are applicable. This disadvantage has been noted by Anderson et al. [3] while analytically investigating the time evolution of the one-dimensional temperature profile in a fusion reactor plasma. A solution which also models blow-up in finite time has been obtained by Harley and Momoniat [18] via nonlocal symmetries of the steady-state equation.

In Harley [21] a Crank-Nicolson- and hopscotch scheme were implemented for equation (1) subject to (4) and (5) where *δ* = 1 and *ε* = 0. The nonlinear source term was kept explicit when the Crank-Nicolson method was employed; this was commented on by Britz et al. [9], in whose work the nonlinear term was incorporated implicitly, in a style more consistent with the Crank-Nicolson method. Britz et al. [9] implemented the Crank-Nicolson scheme with the Newton iteration and showed that it outperformed the explicit implementation of the nonlinearity as in [21] in terms of accuracy. However, as would be expected, it does require more computer time.

In recent work (see [22]) the Crank-Nicolson method was implemented with the Newton iteration as done by Britz et al. [9] by computing a correction set in each iteration to obtain approximate values of the dependent variable at the next time step. The efficiency of the Crank-Nicolson scheme, hopscotch scheme (both of these methods were implemented with an explicit and then an implicit discretisation of the source term) and two versions of the Rosenbrock method were compared [22]. Using the pdepe function in MATLAB and the steady state solution obtained by Frank-Kamenetskii [16] as a means of comparison, it was found that the incorporation of the Newton method for the Crank-Nicolson- and hopscotch scheme led to increased running times as *T*, where 0 ≤ *t* ≤ *T*, increased.
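For reference, a minimal pdepe call of the kind used for comparison might look as follows for *k* = 1 (pdepe's *m* = 1 selects cylindrical symmetry), *δ* = 1 and *ε* = 0, subject to (4) and (5); the mesh sizes and variable names are illustrative assumptions, not the chapter's code.

```matlab
% Sketch: solving eq. (1) with MATLAB's pdepe for k = 1 (m = 1),
% delta = 1, epsilon = 0, subject to u(x,0) = 0, u_x(0,t) = 0, u(1,t) = 0.
delta = 1;  eps0 = 0;
m = 1;                                        % cylindrical geometry, k = 1
x = linspace(0, 1, 101);
t = linspace(0, 0.5, 51);

pdefun = @(x,t,u,dudx) deal(1, dudx, delta*exp(u./(1 + eps0*u)));
icfun  = @(x) 0;                              % initial condition (4)
bcfun  = @(xl,ul,xr,ur,t) deal(0, 1, ur, 0);  % zero flux at x = 0, u = 0 at x = 1

sol = pdepe(m, pdefun, icfun, bcfun, x, t);
u   = sol(:, :, 1);                           % u(i,j) ~ u(x_j, t_i)
```

The final profile u(end, :) can then be compared against the steady state solution obtained by Frank-Kamenetskii [16].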


Furthermore, it was shown that while the Crank-Nicolson- and hopscotch method (with or without the implementation of the Newton method) performed well in terms of accuracy for *T* = 0.3 and 0.5, they were in fact able to outperform pdepe at *T* = 4. The Rosenbrock methods employed (ROS2 and ROWDA3) performed similarly with regards to accuracy, but showed an almost exponential increase in their running times as *T* increased, indicating that using the Crank-Nicolson- or hopscotch scheme may be more efficient. Thus, given that the Rosenbrock methods performed even more poorly with regards to running time, it seems reasonable to suggest that implementing the Crank-Nicolson- or hopscotch scheme with a Newton iteration is the most suitable approach. The Crank-Nicolson method using the Newton method as a means of maintaining the implicit nature of the source term in the difference scheme has been used by Anderson and Zienkiewicz [2]. In Harley [21] and Abd El-Salam and Shehata [1] the discretisation of the exponential terms was kept explicit, thereby removing the nonlinearity.

As a consequence of these findings, and due to the complexity created by the nonlinear source term which serves a critical function in the model, further work regarding faster algorithms for the solution of such an equation is of interest. This chapter will not consider the hopscotch scheme directly as an appropriate method for the solution of the Frank-Kamenetskii partial differential equation due to work done by Feldberg [15], which indicated that for large values of *β* = Δ*t*/Δ*x*<sup>2</sup> the algorithm produces the problem of propagational inadequacy, which leads to inaccuracies - similar results were obtained in [22]. Given the improved accuracy of the Crank-Nicolson method incorporating the Newton method [22] - the order of the error for this method is *O*(Δ*t*<sup>2</sup>), which is only approximately the case for the Crank-Nicolson method without the Newton iteration incorporated [9] - it seems more fitting to consider an improvement in the computing time of this method. Hence a consideration of such an improvement on the algorithm's current running time will be the focus of this chapter. The means by which we wish to accomplish this is through the use of the Parallel Computing Toolbox in MATLAB. It is hoped that this is the next step towards creating fast and effective numerical algorithms for the solution of a partial differential equation such as the one originating from the work of Frank-Kamenetskii [16].

### **3. Executing MATLAB on a GPU**

The advantage of using the Parallel Computing Toolbox in MATLAB is the fact that it allows one to solve computationally and data-intensive problems using multicore processors, GPUs, and computer clusters. In this manner one can parallelize numerical algorithms, and in so doing MATLAB applications, without CUDA or MPI programming. Parallel constructs such as parfor, used within the context of what is usually a for loop, allow you to offload work from one MATLAB session (the client) to other MATLAB sessions, called workers. You can use multiple workers to take advantage of parallel processing and in this way improve the performance of such loop execution by allowing several MATLAB workers to execute individual loop iterations simultaneously. In this context, however, we are not able to implement in-built MATLAB functions such as parfor due to the numerical algorithms which we have chosen to consider. The *CN*- and *CNN* method, both implicit, loop through the index *m* until *t*<sub>0</sub> + *m*Δ*t* = *T*. These iterative steps are not independent of each other, i.e. to obtain data at the (*m* + 1)th step the data at the *m*th step is required. In a similar fashion the ROS2 and ROWDA3 methods also iterate through dependent loops to obtain a solution. As such we attempt to run the code directly on the GPU instead of the CPU in order to decrease the running time of the algorithms.
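The distinction can be made concrete with a toy example: a sweep over independent parameter values parallelizes with parfor, whereas the time march of the *CN*/*CNN* schemes cannot, since step *m* + 1 reads the result of step *m*. The sweep below is purely illustrative, not the chapter's experiment.

```matlab
% Independent iterations: each pass writes only its own element of
% 'results', so the iterations may be distributed across workers.
deltas  = linspace(0.5, 2, 8);
results = zeros(size(deltas));
parfor j = 1:numel(deltas)
    results(j) = deltas(j)^2;   % stand-in for one independent solver run
end

% Dependent iterations: the CN/CNN time march is inherently serial,
% because u at step m+1 requires u at step m. Schematically:
%   for m = 1:M
%       u = cnStep(u);          % hypothetical one-step update
%   end
```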

The Parallel Computing Toolbox in MATLAB allows one to create data on and transfer it to the GPU so that the resulting GPU array can then be used as an input to enhance built-in functions that support them. The first thing to consider when implementing computations on the GPU is *keeping* the data on the GPU so that we do not have to transfer it back and forth for each operation - this can be done through the use of the gpuArray command. In this manner computations with such input arguments run on the GPU because the input arguments are already in the GPU memory. One then retrieves the results from the GPU to the MATLAB workspace via the gather command. Having to recall the results from the GPU is costly in terms of computing time and can in certain instances make the implementation of an algorithm on the GPU less efficient than one would expect. Furthermore, the manner in which one codes algorithms for GPUs is of vital importance given certain limitations to the manner in which functions of the Toolbox may be implemented (see [25]). More importantly however, is whether the method employed can allow for the necessary adjustments in order to improve its performance. In this chapter we will see that there are some problems with implementing the kind of algorithms considered here on the GPU.
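The pattern described above - move the data to the device once, keep it there across operations, and gather once at the end - can be sketched as follows (the matrices are random placeholders):

```matlab
% Sketch: transfer once, compute on the device, gather once.
n = 2048;
A = gpuArray(rand(n));         % one host-to-device transfer
b = gpuArray(rand(n, 1));
x = A \ b;                     % runs on the GPU: the inputs are gpuArrays
r = b - A*x;                   % residual also stays in GPU memory
res = gather(norm(r));         % single transfer back to the workspace
```

Calling gather inside a loop, by contrast, forces a device-to-host transfer on every pass and can erase the expected speed-up.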

In this chapter we are employing MATLAB under Windows 7 (64 bits) on a PC equipped with an i7 2.2 GHz processor with 32 GB of RAM.

#### **3.1. Crank-Nicolson implicit scheme**


We will implement the Crank-Nicolson method while maintaining the explicit nature of the nonlinear source term, and also apply the method by computing a correction set in each iteration to obtain approximate values of the dependent variable at the next time step through the use of the Newton method [26]. The methodology will be explained briefly here; the reader is referred to [7–9] for clarification.

When implementing the Crank-Nicolson method we employ the following central-difference approximations for the second- and first-order spatial derivatives, respectively,

$$\frac{\partial^2 u}{\partial x^2} \approx \frac{u_{n+1}^m - 2u_n^m + u_{n-1}^m}{\triangle x^2} \tag{6}$$

$$\frac{\partial u}{\partial x} \approx \frac{u_{n+1}^m - u_{n-1}^m}{2\triangle x} \tag{7}$$


while a forward-difference approximation

$$\frac{\partial u}{\partial t} \approx \frac{u_n^{m+1} - u_n^m}{\triangle t} \tag{8}$$



is used for the time derivative. We implement a Crank-Nicolson scheme by approximating the second-derivative on the right-hand side of (1) by the implicit Crank-Nicolson [12] approximation

$$\frac{\partial^2 u}{\partial x^2} \approx \frac{u_{n+1}^{m+1} - 2u_n^{m+1} + u_{n-1}^{m+1}}{2\triangle x^2} + \frac{u_{n+1}^m - 2u_n^m + u_{n-1}^m}{2\triangle x^2}.\tag{9}$$

In a similar fashion the first-derivative on the right-hand side becomes

$$\frac{\partial u}{\partial x} \approx \frac{u_{n+1}^{m+1} - u_{n-1}^{m+1}}{4\triangle x} + \frac{u_{n+1}^m - u_{n-1}^m}{4\triangle x}.\tag{10}$$

To impose zero-shear boundary conditions at the edges we approximate the spatial first-derivative by the central-difference approximation (7) which leads to the following condition

$$
u_{-1}^m = u_1^m.\tag{11}
$$

As mentioned before, the boundary condition (5a) at *x*<sub>0</sub> = 0 can pose a problem for the solution of equation (1). One could discretise it directly as a forward difference formula, such as the three-point approximation −3*u*<sup>m</sup><sub>0</sub> + 4*u*<sup>m</sup><sub>1</sub> − *u*<sup>m</sup><sub>2</sub> = 0, and add this to the set of equations to solve when using the Crank-Nicolson scheme. Alternatively one could use the more accurate symmetric approximation, *u*<sup>m</sup><sub>−1</sub> = *u*<sup>m</sup><sub>1</sub>, which introduces a 'fictitious point' at *x* = −Δ*x*. This, however, would lead to another problem due to the singularity in the differential equation at *x*<sub>0</sub> = 0. Instead we choose to overcome this difficulty by using the Maclaurin expansion

$$\lim_{x \to 0} \frac{1}{x} \frac{\partial u}{\partial x} = \left. \frac{\partial^2 u}{\partial x^2} \right|_{x=0} \tag{12}$$

which simplifies the equation for the case *x*<sub>0</sub> = 0. It has been noted by Britz et al. [9] that using (12) turns out to be more convenient and accurate. Due to the fact that the point *x*<sub>0</sub> = 0 would lead to a singularity in equation (1), we structure the code to account for two instances: *x* = 0 and *x* ≠ 0. Using (12) for equation (1) we attain the following approximation

$$\frac{\partial u}{\partial t} = 2 \frac{\partial^2 u}{\partial x^2} + e^u \tag{13}$$

to equation (1) at *x*<sub>0</sub> = 0. This approximation has been taken into account in the system given by (16) below. Such an approximation has been used in many numerical algorithms. In Crank and Furzeland [13], for instance, a modified finite-difference method is presented which eliminates inaccuracies that occur in the standard numerical solution near singularities. The approximation has also been used by Harley and Momoniat [19] to generate a consistency criterion for initial values at *x*<sub>0</sub> = 0 for a Lane-Emden equation of the second kind. From the equation under consideration (1) an initial condition for *u*(*x*, *t*) is obtained at *x*<sub>0</sub> = 0, giving the following


$$(1+2\beta)\, u_0^{m+1} - 2\beta u_1^{m+1} - \triangle t\delta e^{u_0^{m+1}} = (1-2\beta)\, u_0^m + 2\beta u_1^m + \triangle t\delta e^{u_0^m} \tag{14}$$

as the initial difference scheme, with *β* = Δ*t*/Δ*x*<sup>2</sup>. Implementing the difference approximations discussed above we obtain the general numerical scheme

$$-\frac{\gamma_n}{2}u_{n-1}^{m+1} + (1+\beta)\, u_n^{m+1} - \frac{\lambda_n}{2}u_{n+1}^{m+1} - \triangle t \delta e^{u_n^{m+1}} = \frac{\gamma_n}{2}u_{n-1}^m + (1-\beta)\, u_n^m + \frac{\lambda_n}{2}u_{n+1}^m + \triangle t \delta e^{u_n^m} \tag{15}$$

where *xn* = *n*Δ*x* and *β* = Δ*t*/Δ*x*<sup>2</sup>, such that *γn* = *β*(1 − 1/(2*n*)) and *λn* = *β*(1 + 1/(2*n*)). The difference scheme (15), together with the initial difference condition (14), forms a system of equations which are to be solved iteratively.

As indicated by the boundary conditions (5a) and (5b) we consider the problem for *x* ∈ [0, 1] and *t* ∈ [0, *T*]. The domain [0, 1] is sub-divided into *N* equidistant intervals of length Δ*x*, i.e. 0 = *x*<sub>0</sub> < *x*<sub>1</sub> < *x*<sub>2</sub> < ··· < *xN*−<sub>1</sub> < *xN*, where *xn*+1 = *xn* + Δ*x*. In a similar fashion the domain [0, *T*] is sub-divided into *M* intervals of equal length, Δ*t*, through which the scheme iterates. The system will iterate until *tm* + Δ*t* = *T*, i.e. for *M* = *T*/Δ*t* steps. The system generated by (15) can be written in compact form as

$$A\mathbf{u}^{m+1} = B\mathbf{u}^m + \triangle t\delta e^{\mathbf{u}^m} + \triangle t\delta e^{\mathbf{u}^{m+1}} \tag{16}$$

and is solved as follows

$$\mathbf{u}^{m+1} = A^{-1} \left( B\mathbf{u}^m + \triangle t \delta e^{\mathbf{u}^m} + \triangle t \delta e^{\mathbf{u}^{m+1}} \right). \tag{17}$$

The inverse of *A* is applied using the \ operator in MATLAB, which is more efficient than the inv function. The nonlinear term on the (*m* + 1)th level is dealt with through an implementation of the Newton method [26] in an iterative fashion, as done by Britz et al. [9] and discussed in [8]. The system **J***δ***u** = −**F**(**u**) is solved, where **F** is the set of difference equations created as per (16) such that **F**(**u**) = 0. The starting vector at *t* = 0 is chosen as per the initial condition (4), such that **u** = 0. The Newton iteration converges within 2-3 steps given that the changes are usually relatively small.
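As a brief illustration, one time step of (16) with the Newton correction described above might be sketched in MATLAB as follows. This is a sketch only: the assembly of *A* and *B* from (14) and (15) is assumed to have been done already, and all variable names are illustrative.

```
% One Crank-Nicolson step of (16) with a Newton correction on the
% nonlinear source term; A, B, dt and delta are assumed given.
u   = u0;                              % initial guess: previous time level
rhs = B*u0 + dt*delta*exp(u0);         % explicit part of (16)
for k = 1:3                            % Newton converges within 2-3 steps
    F  = A*u - rhs - dt*delta*exp(u);  % residual of (16), F(u) = 0
    J  = A - dt*delta*diag(exp(u));    % Jacobian of the residual
    du = -J\F;                         % solve J*du = -F with backslash
    u  = u + du;                       % correction step
end
u1 = u;                                % solution at the (m+1)th level
```

Note that the linear part *A***u** − *rhs* is fixed within the iteration; only the exponential source term is re-evaluated at each Newton step.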

#### **3.2. Rosenbrock method**

We now consider two particular Rosenbrock methods, ROS2 and ROWDA3, as a means of comparison for the effectiveness of the methods discussed in the previous section. The Rosenbrock methods belong to the class of linearly implicit Runge-Kutta methods [11, 17]. They have been used successfully for the numerical solution of stiff partial differential equations, both non-electrochemical and of interest to electrochemistry. For further information regarding the particulars of such methods interested readers are referred to the numerical literature of [17, 28–30].

The reason for the use of the Rosenbrock methods in this chapter is the ease with which they deal with the nonlinear source term and the fact that no Newton iterations are necessary. The advantages of these methods are their efficiency, stability and smooth error response when ROS2 or ROWDA3 are used (see [4] for instance), and the ease with which they handle time-dependent and/or nonlinear systems.

We consider equation (1) as the following system

$$\frac{du_n}{dt} = \begin{cases} \frac{(1+k)}{\triangle x^2}\,(2u_1 - 2u_0) + \delta e^{u_0} & \text{if } n = 0\\[4pt] \frac{\gamma_n}{\triangle t}\,u_{n-1} - \frac{2}{\triangle x^2}\,u_n + \frac{\lambda_n}{\triangle t}\,u_{n+1} + \delta e^{u_n} & \text{if } n = 1, 2, \dots, N-2\\[4pt] \frac{\gamma_{N-1}}{\triangle t}\,u_{N-2} - \frac{2}{\triangle x^2}\,u_{N-1} + \delta e^{u_{N-1}} & \text{if } n = N-1 \end{cases} \tag{18}$$

which along with 0 = *uN* can be written in the compact form

$$\mathbf{S}\frac{d\mathbf{u}}{dt} = \mathbf{F}(t, \mathbf{u})\tag{19}$$


where **S** = diag(1, 1, 1, ..., 1, 0) is the selection matrix containing zeros in those positions where the set of differential algebraic equations has an algebraic equation (i.e. zero on the left-hand side of (18)) and unity in those positions corresponding to the ordinary differential equations. The function **F**(*t*, **u**) can be written as: **F**(*t*, **u**) = **Ju** + **s** where the matrix **J** is the Jacobian and the vector **s** arises from the constant terms of the set of differential algebraic equations. We can thus write **F**(*t*, **u**) = **Ju** + **s** as

$$\mathbf{F}(t, \mathbf{u}) = \frac{1}{\triangle t}\begin{bmatrix} -2(1+k)\beta & 2(1+k)\beta & 0 & 0 & \dots & 0 & 0 & 0 \\ \gamma_1 & -2\beta & \lambda_1 & 0 & \dots & 0 & 0 & 0 \\ 0 & \gamma_2 & -2\beta & \lambda_2 & \dots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \dots & \gamma_{N-1} & -2\beta & 0 \\ 0 & 0 & 0 & 0 & \dots & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u_0 \\ u_1 \\ u_2 \\ \vdots \\ u_{N-1} \\ u_N \end{bmatrix} + \begin{bmatrix} \delta e^{u_0} \\ \delta e^{u_1} \\ \delta e^{u_2} \\ \vdots \\ \delta e^{u_{N-1}} \\ 0 \end{bmatrix} \tag{20}$$

such that

$$\mathbf{F}(t, \mathbf{u}) = \begin{bmatrix} \frac{(1+k)\beta}{\triangle t}\,(2u_1 - 2u_0) + \delta e^{u_0} \\ \frac{1}{\triangle t}\left(\gamma_1 u_0 - 2\beta u_1 + \lambda_1 u_2\right) + \delta e^{u_1} \\ \frac{1}{\triangle t}\left(\gamma_2 u_1 - 2\beta u_2 + \lambda_2 u_3\right) + \delta e^{u_2} \\ \vdots \\ \frac{1}{\triangle t}\left(\gamma_{N-2} u_{N-3} - 2\beta u_{N-2} + \lambda_{N-2} u_{N-1}\right) + \delta e^{u_{N-2}} \\ \frac{1}{\triangle t}\left(\gamma_{N-1} u_{N-2} - 2\beta u_{N-1}\right) + \delta e^{u_{N-1}} \\ u_N \end{bmatrix}. \tag{21}$$

In order to implement the Rosenbrock methods a number *s* of **k***<sup>i</sup>* vectors are computed with *s* the order chosen. The general equation given by (22) is solved iteratively to obtain each vector **k***<sup>i</sup>* for all *i* specified which will then be used to update the vector **u** for the next time step. We use the notation employed in [7] for the general equation to be used to obtain the values for **k***i*

$$-\frac{\mathbf{M}}{\beta}\mathbf{k}_i = -\frac{\triangle t}{\beta}\mathbf{F}\left(t + \varphi_i\triangle t,\; \mathbf{u} + \sum_{j=1}^{i-1} a_{ij}\mathbf{k}_j\right) - \frac{\mathbf{S}}{\beta}\sum_{j=1}^{i-1} c_{ij}\mathbf{k}_j - \frac{\kappa_i}{\beta}\triangle t^2\,\mathbf{F}_t(t, \mathbf{u}) \tag{22}$$

where we define **M** = **S**/*κ* − Δ*t***F***u*; the function **F** is applied at partly augmented *t* and **u** values, and the time derivative **F***t* is zero in this case since the system does not include explicit functions of time. Having calculated the *s* **k***i* vectors the solution is obtained from

$$\mathbf{u}^{m+1} = \mathbf{u}^m + \sum_{i=1}^{s} m_i \mathbf{k}_i \tag{23}$$

where the *mi* are weighting factors included in the tables of constants specified for each method (see [4] and [7]).
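Assuming the coefficient tables for the chosen method are at hand, the stage loop defined by (22) and (23) might be sketched in MATLAB as follows. The names a, c, phi, mw and kap are illustrative placeholders for the method-specific constants of ROS2 or ROWDA3 (see [7]), which are not reproduced here; since **F***t* = 0 the last term of (22) drops away.

```
% Generic s-stage Rosenbrock step per (22)-(23), with F_t = 0.
% a, c, phi, mw, kap: method-specific constants (placeholders here);
% F: function handle for (21); Fu: Jacobian of F at (t, u).
S = diag([ones(N, 1); 0]);        % selection matrix of (19)
M = S/kap - dt*Fu;                % as defined below (22)
k = zeros(N + 1, s);
for i = 1:s
    uaug = u + k(:, 1:i-1)*a(i, 1:i-1)';    % partly augmented u
    rhs  = dt*F(t + phi(i)*dt, uaug) + S*(k(:, 1:i-1)*c(i, 1:i-1)');
    k(:, i) = M\rhs;              % stage vector k_i from (22)
end
u = u + k*mw(:);                  % update (23)
```

The loop body is obtained by multiplying both sides of (22) by −*β*, so that each stage reduces to a single linear solve with the same matrix **M**.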

In this chapter we implement the ROS2 and ROWDA3 methods, though there are other variants of the Rosenbrock methods. Lang [24] described an L-stable second-order formula called ROS2. A third-order variant thereof is called ROWDA3, described by Roche [28] and later made more efficient by Lang [23]. The latter is a method favoured by Bieniasz, who introduced Rosenbrock methods to electrochemical digital simulation [4, 5]. For a more detailed explanation and discussion regarding the method and its advantages refer to [7]. The focus of the work done here is whether the Rosenbrock algorithms lend themselves toward parallelized implementation. It has already been noted that functions such as the parfor command cannot be used in this instance. It now remains to consider the method's performance when run on a GPU via the MATLAB Parallel Computing Toolbox.

#### **4. Discussion of numerical results**

The results, as per Table 1, indicate the extent to which implementing the code on the GPU slows down the overall performance of the *CN*, *CNN*, ROS2 and ROWDA3 methods. The question is why this would be the case. In Table 1 the results for the different methods run on a CPU were obtained by running the code on one CPU only, instead of all of those available to MATLAB on the computer employed. This was done to get a better understanding of the one-on-one performance between the processing units, and yet implementing the code on the GPU still led to poor performance.

To gain a better understanding of these results we consider the baseline structure for our *CN* code:

A = gpuArray(A); B = gpuArray(B);


```
u0 = gpuArray(u0);
for m = 1 : T
    b = delta .* dt .* exp((1 + eps.*u0).\u0);
    u0 = mldivide(A, (B*u0 + b));
end
```
In doing so, we realise that the main components thereof are matrix and elementwise vector operations. In order to understand why we are achieving the results we do (see Table 1) we run a few simple 'test'-codes to consider the speed taken by the CPU vs. the GPU to perform such elementary operations as C\d and d.*f, where *C* is a matrix and *d* and *f* are vectors. In Figure 1 we see the speed of the CPU relative to the GPU, computed as the ratio *CPU running time*/*GPU running time*. You will notice that as the size of the matrix and corresponding vector increases, so too does the speed at which the GPU is able to compute C\d, allowing it to overtake the CPU. This is what one would expect given that the GPU will only 'kick in' once the CPU is overloaded with data structures too large for it to compute effectively. Thus the efficiency in terms of running time of the code provided above is heavily dependent upon the size of the matrices *A* and *B*. At this juncture it is important to remember that we are considering the range *x* ∈ [0, 1] with Δ*x* = 0.1, which means that our *A* matrix is a 10 × 10 matrix and as such not large enough to give the GPU the chance to expose its ability to improve the performance of the algorithm. The reason for the choice in the size of the matrix for the problem considered is twofold: (1) the influence of the ratio *β* = Δ*t*/Δ*x*<sup>2</sup>, which one usually tries to keep close to 1 for reasons of stability, and (2) the limitations of memory of the PC being used.
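The kind of 'test'-code referred to above can be sketched as follows; this is an illustrative benchmark only, assuming the Parallel Computing Toolbox and a supported GPU device are available.

```
% Rough benchmark of backslash on CPU vs. GPU for growing system sizes.
for n = [10 100 1000 5000]
    C = rand(n); d = rand(n, 1);
    tic; C\d; tCPU = toc;
    Cg = gpuArray(C); dg = gpuArray(d);
    tic; Cg\dg; wait(gpuDevice); tGPU = toc;   % wait: GPU work is asynchronous
    fprintf('n = %4d: CPU/GPU time ratio = %.3f\n', n, tCPU/tGPU);
end
```

The call to wait(gpuDevice) matters here: GPU operations return control to MATLAB before the device has finished, so timing without it would understate the GPU's running time.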

| | *CN* | *CNN* | *ROS*2 | *ROWDA*3 |
|---|---|---|---|---|
| *β* = 0.01, *T* = 0.3 (CPU) | 7.8449e-05 | 2.7737e-04 | 2.3002e-05 | 9.1288e-05 |
| *β* = 0.01, *T* = 0.3 (GPU) | 0.0014 | 0.0960 | 0.0033 | 0.0063 |
| *β* = 0.01, *T* = 5 (CPU) | 1.4783e-05 | 2.0018e-04 | 1.5354e-05 | 6.3574e-05 |
| *β* = 0.01, *T* = 5 (GPU) | 8.0940e-04 | 0.0047 | 0.0028 | 0.0047 |
| *β* = 2, *T* = 5 (CPU) | 2.4573e-05 | 0.0033 | 1.7146e-05 | 6.8119e-05 |
| *β* = 2, *T* = 5 (GPU) | 0.0048 | 0.4731 | 0.0042 | 0.0073 |

**Table 1.** Running times per iteration of *t* for the relevant methods implemented for Δ*t* = 0.0001, Δ*x* = 0.1, *δ* = 1 and *ε* = 0.

| *ε* | *CN* | *CNN* | *ROS*2 | *ROWDA*3 |
|---|---|---|---|---|
| 0.01 | 0.0137 | 0.0025 | 0.0059 | 0.0138 |
| 0.05 | 0.0133 | 0.0024 | 0.0067 | 0.0149 |
| 0.1 | 0.0146 | 0.0025 | 0.0057 | 0.0128 |
| 0.25 | 0.0145 | 0.0024 | 0.0059 | 0.0133 |

**Table 2.** The ratio *CPU running time*/*GPU running time* for the relevant methods implemented for Δ*t* = 0.0001, Δ*x* = 0.1, *δ* = 1 and *T* = 0.3.

| *δ* | *CN* | *CNN* | *ROS*2 | *ROWDA*3 |
|---|---|---|---|---|
| 0.5 | 0.0146 | 0.0012 | 0.0064 | 0.0150 |
| 1 | 0.0275 | 0.0026 | 0.0061 | 0.0160 |
| 2 | 0.0151 | 0.0030 | 0.0062 | 0.0151 |
| 3 | 0.0160 | 0.0042 | 0.0063 | 0.0140 |

**Table 3.** The ratio *CPU running time*/*GPU running time* for the relevant methods implemented for Δ*t* = 0.0001, Δ*x* = 0.1, *ε* = 0 and *T* = 0.3.

The next step in this evaluative process is to consider the speed at which vector operations are performed. This was done in a similar fashion to the previous case, by considering the time taken by the CPU and GPU to perform the elementwise operation d.*f, where *d* and *f* are vectors. The ratio *CPU running time*/*GPU running time* was also considered for the in-built function arrayfun, which performs elementwise operations on all its inputs. It can clearly be seen in Figure 2 that the in-built function outperforms the normal .* operation. What is interesting in this case is that the size of the vector required for the GPU to outperform the CPU is very large - we considered vectors of sizes between 200 000 and 201 000 as indicated. For smaller vector lengths the GPU is completely outperformed by the speed at which calculations are done on the CPU. As such, to improve the speed at which these vector calculations are performed we would either (1) have to diminish Δ*x* to the degree needed to obtain vectors of the required length, or (2) be required to move the vectors from the GPU memory to the CPU memory every time calculations need to be made. The first approach would require a memory capacity beyond that of the computer used here and the second would greatly increase the running time of the algorithm, and as such is not worth implementing.
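A minimal sketch of this elementwise comparison, for vector lengths of the order considered in Figure 2, might look as follows (illustrative only; timings will vary with hardware):

```
% Elementwise d.*f on the GPU: the plain operator versus arrayfun.
d = gpuArray(rand(200000, 1));
f = gpuArray(rand(200000, 1));
tic; g1 = d.*f;                   wait(gpuDevice); t1 = toc;
tic; g2 = arrayfun(@times, d, f); wait(gpuDevice); t2 = toc;
fprintf('.*: %.4f s, arrayfun: %.4f s\n', t1, t2);
```

When called on gpuArray inputs, arrayfun compiles the elementwise operation into a single fused kernel, which is why it can outperform the plain .* operator.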

As a means of further investigation we consider the *CN* code as a test case for the use of the arrayfun function. Obviously implementing this in-built function as follows

A = gpuArray(A); B = gpuArray(B); u0 = gpuArray(u0); u0 = arrayfun(@myCrank, u0, A, B)


10 Will-be-set-by-IN-TECH

In doing so, we realise that the main components thereof are matrix and elementwise vector operations. In order to understand why we are achieving the results we do (see Table 1) we run a few simple 'test'-codes to consider the speed taken by the CPU vs. the GPU to perform such elementary operations as C\d and d. ∗ f where *C* is a matrix and *d* and *f* are vectors. In

notice that as the size of the matrix and corresponding vector increases so too does the speed at which the GPU is able to compute C\d allowing it to overtake the CPU. This is what one would expect given that the GPU will only 'kick in' once the CPU is overloaded with data structures too large for it to compute effectively. Thus the efficiency in terms of running time of the code provided above is heavily dependent upon the size of the matrices *A* and *B*. At this juncture it is important to remember that we are considering the range *x* ∈ [0, 1] with �*x* = 0.1 which means that our *A* matrix is a 10 × 10 matrix and as such not large enough to give the GPU the chance to expose its ability to improve the performance of the algorithm. The reason for the choice in the size of the matrix for the problem considered is twofold: (1) it is due to the influence of the ratio *<sup>β</sup>* <sup>=</sup> �*t*/�*x*<sup>2</sup> which one usually tries to keep close to 1 for

The next step in this evaluative process is to now consider the speed at which vector operations are performed. This was done in a similar fashion to the previous case by considering the speed taken by the CPU and GPU to perform the elementwise operation d. ∗ f

the in-built function arrayfun which performs elementwise operations on all its inputs. It can clearly be seen in Figure 2 that the in-built function outperforms the normal .∗ operation. What is interesting in this case is that the size of the vector required for the GPU to outperform the CPU is very large - we considered vectors of sizes between 200 000 and 201 000 as indicated. For smaller vector lengths the GPU is completely outperformed by the speed at which calculations are done on the CPU. As such, to improve the speed at which these vector calculations are performed we would either (1) have to diminish �*x* to the degree needed to obtain vectors of the required length (2) or be required to move the vectors from the GPU memory to the CPU memory every time calculations need to be made. The first approach would require a memory capacity beyond that of the computer used here and the second would greatly increase the running time of the algorithm and as such is not worth

As a means of further investigation we consider the *CN* code as a test case for the use of the

arrayfun function. Obviously implementing this in-built function as follows

*GPU running time*. You will

*GPU running time* were also considered for

Figure 1 we see the speed of the CPU over the GPU computed as *CPU running time*

reasons of stability, and (2) the limitations of memory of the PC being used.

where *d* and *f* are vectors. The ratios of the speeds *CPU running time*

u0 = gpuArray(u0); for m = 1 : T

implementing.

A = gpuArray(A); B = gpuArray(B); u0 = gpuArray(u0);

u0 = arrayfun(@myCrank, u0, A, B)

end

b = delta. ∗ dt. ∗ exp((1 + eps. ∗ u0).\(u0));

u0 = mldivide(A, (B ∗ u0 + b));

**Table 1.** Running times per iteration of *t* for the relevant methods implemented for *t* = 0.0001, *x* = 0.1, *δ* = 1 and *�* = 0.


**Table 2.** The ratio *CPU running time GPU running time* for the relevant methods implemented for *t* = 0.0001, *x* = 0.1, *δ* = 1 and *T* = 0.3.


**Table 3.** The ratio *CPU running time GPU running time* for the relevant methods implemented for *t* = 0.0001, *x* = 0.1, *�* = 0 and *T* = 0.3.

all GPU array objects. We found for *T* = 10 with *t* = 0.0001 as we decreased *x* that computing on the GPU was faster than doing so on the CPU: for *x* = 0.1 and 0.01 the ratios

sense given that smaller values of *x* would increase the sizes of the matrices *A* and *B* and the vectors *b* and *u*0. As such, it seems likely that using a PC with a greater memory capacity

**4.1. Influence of changing parameter values on the running time of the algorithms** Just a few brief comments upon the results obtained for *CN*, *CNN*, ROS2 and ROWDA3 for varying values of *�* and *δ* will be made in this section. Firstly we considered the schemes for *δ* = 1 and *�* = 0.01, 0.05, 0.1 and 0.25 and then we also considered the case for *�* = 0 with *δ* = 0.1, 1, 2 and 3. The reader will notice considering Tables 2 and 3 that there does not seem to be any noticeable trend to the results obtained. As such the values of *�* and *δ* do not seem

The implementation of numerical algorithms such as those considered in this chapter are widely used for the solution of many differential equations which model physical processes and applications. As such it is of vital importance that we be able to perform such calculations at high speed given the requirement of fine grids to improve accuracy. It is in this context that the use of GPUs becomes of prime importance. However it is not simply a matter of running the algorithm on the GPU - the method employed needs to lend itself to being adjusted in the required manner so that the parallel processing power of the GPU may be taken advantage of. Though we found that the numerical methods considered here were not entirely suited to being implemented on the GPU as we would have hoped we were able to explain why this

This work has investigated the effectiveness of matrix and elementwise operations when run on a GPU vs. a CPU and found that the speed taken to do such operations heavily relies on the choice of *x*. It was discovered that the introduction of the nonlinear source term is problematic due to the length of time taken to do elementwise calculations on the GPU. While matrix operations were also shown to be slow it was more specifically this aspect of the code

We also discovered the power of the in-built function arrayfun which was able to improve upon the performance of the GPU with regards to computing time to the degree that it outperformed the CPU even for a grid with 'large' *x*, i.e. small matrices and vectors within the computations. As the grid became finer the performance of the GPU over the CPU improved, indicating the impact of the size of the matrices upon which computations are being performed and the degree to which arrayfun is able to improve computations occurring on the GPU. Thus, the manner in which arrayfun computes elementwise is extremely efficient and if such a structure could be developed for matrix operations then that

would truly allow the performance of the GPU to overtake that of CPU computing.

would lead to the GPU outperforming the CPU by a large margin as *x* decreases.

to have a meaningful impact on the speed at which the algorithms compute.

*GPU running time* = 6.5906 respectively. This makes

Numerical Simulation of the Frank-Kamenetskii PDE: GPU vs. CPU Computing 129

*GPU running time* <sup>=</sup> 1.5709 and *CPU running time*

were *CPU running time*

**5. Concluding remarks**

which increased the running time.

was the case.

**Figure 1.** Plot showing the CPU Running Time/GPU Running Time for matrices and corresponding vectors of sizes 100 to 1000.

**Figure 2.** Plot showing the CPU Running Time/GPU Running Time for vectors of sizes 200 000 to 201 000.

where the @myCrank function performs the loop through *m*, instead of the code presented previously, produces incorrect results. The results obtained do however support our findings that the arrayfun function is able to increase the speed with which elementwise operations are performed. In this instance arrayfun is computing on the GPU since the inputs are all GPU array objects. We found for *T* = 10 with Δ*t* = 0.0001 that, as we decreased Δ*x*, computing on the GPU was faster than doing so on the CPU: for Δ*x* = 0.1 and 0.01 the ratios *CPU running time*/*GPU running time* were 1.5709 and 6.5906 respectively. This makes sense given that smaller values of Δ*x* increase the sizes of the matrices *A* and *B* and the vectors *b* and *u*0. As such, it seems likely that using a PC with a greater memory capacity would lead to the GPU outperforming the CPU by a large margin as Δ*x* decreases.
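For readers following along without MATLAB or a GPU, the Crank-Nicolson loop discussed here can be transliterated into NumPy. The following is an illustrative CPU-only sketch: the function name, grid size and zero Dirichlet boundary treatment are our own choices, while the update u0 = A\(B*u0 + b) with b = δΔt·exp(u/(1 + εu)) mirrors the MATLAB listing.

```python
import numpy as np

def crank_nicolson_fk(u0, dt=1e-4, dx=0.1, delta=1.0, eps=0.0, steps=3000):
    """Crank-Nicolson stepping for u_t = u_xx + delta*exp(u/(1 + eps*u)),
    mirroring the MATLAB loop u0 = mldivide(A, B*u0 + b)."""
    n = u0.size
    beta = dt / dx**2
    # Tridiagonal second-difference (Laplacian) stencil; truncation of the
    # stencil at the ends imposes zero Dirichlet boundary values.
    D = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    A = np.eye(n) - 0.5 * beta * D   # implicit (left-hand) matrix
    B = np.eye(n) + 0.5 * beta * D   # explicit (right-hand) matrix
    u = u0.astype(float)
    for _ in range(steps):
        b = delta * dt * np.exp(u / (1.0 + eps * u))  # nonlinear source term
        u = np.linalg.solve(A, B @ u + b)             # A \ (B*u + b)
    return u

# delta = 1, eps = 0, dt = 1e-4 as in Table 1; 3000 steps reaches T = 0.3
u = crank_nicolson_fk(np.zeros(9))
```

Since *A* does not change between steps, a production version would factorise it once (e.g. with scipy.linalg.lu_factor) rather than calling a dense solve inside the loop.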

## **4.1. Influence of changing parameter values on the running time of the algorithms**

Just a few brief comments will be made in this section upon the results obtained for *CN*, *CNN*, *ROS*2 and *ROWDA*3 for varying values of *ε* and *δ*. Firstly we considered the schemes for *δ* = 1 and *ε* = 0.01, 0.05, 0.1 and 0.25, and then we also considered the case of *ε* = 0 with *δ* = 0.1, 1, 2 and 3. The reader will notice, considering Tables 2 and 3, that there does not seem to be any noticeable trend in the results obtained. As such, the values of *ε* and *δ* do not seem to have a meaningful impact on the speed at which the algorithms compute.
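This is what one would expect from the structure of the schemes: changing *δ* or *ε* changes only the values fed into the same elementwise operations, not the number of operations performed per step. A minimal NumPy check of the source term b = δΔt·exp(u/(1 + εu)) (the helper name here is ours):

```python
import numpy as np

def source_term(u, delta, eps, dt=1e-4):
    """Frank-Kamenetskii source term b = delta*dt*exp(u/(1 + eps*u)).
    The same elementwise operations run whatever delta and eps are."""
    return delta * dt * np.exp(u / (1.0 + eps * u))

u = np.linspace(0.0, 1.0, 11)
b1 = source_term(u, delta=1.0, eps=0.0)
b2 = source_term(u, delta=3.0, eps=0.0)   # delta only rescales b linearly
b3 = source_term(u, delta=1.0, eps=0.25)  # eps damps the exponent's argument
```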

## **5. Concluding remarks**


Numerical algorithms such as those considered in this chapter are widely used for the solution of many differential equations which model physical processes and applications. As such it is of vital importance that we be able to perform such calculations at high speed, given the requirement of fine grids to improve accuracy. It is in this context that the use of GPUs becomes of prime importance. However, it is not simply a matter of running the algorithm on the GPU: the method employed needs to lend itself to being adjusted in the required manner so that the parallel processing power of the GPU may be taken advantage of. Though we found that the numerical methods considered here were not as well suited to being implemented on the GPU as we would have hoped, we were able to explain why this was the case.

This work has investigated the effectiveness of matrix and elementwise operations when run on a GPU vs. a CPU and found that the speed of such operations relies heavily on the choice of Δ*x*. It was discovered that the introduction of the nonlinear source term is problematic due to the length of time taken to do elementwise calculations on the GPU. While matrix operations were also shown to be slow, it was more specifically this aspect of the code which increased the running time.
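The kind of micro-benchmark underlying these observations is easy to reproduce on the CPU side. The sketch below uses NumPy in place of MATLAB, with sizes chosen purely for illustration, contrasting the O(n³) linear solve C\d with the O(n) elementwise product d·f:

```python
import numpy as np
from timeit import timeit

def bench(n, repeats=5):
    """Average time of a linear solve C\\d versus an elementwise product d*f."""
    rng = np.random.default_rng(0)
    C = rng.random((n, n)) + n * np.eye(n)  # diagonally dominant, well conditioned
    d = rng.random(n)
    f = rng.random(n)
    t_solve = timeit(lambda: np.linalg.solve(C, d), number=repeats) / repeats
    t_mult = timeit(lambda: d * f, number=repeats) / repeats
    return t_solve, t_mult

t_solve, t_mult = bench(500)  # the solve dominates at this size
```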

We also discovered the power of the in-built function arrayfun, which was able to improve the GPU's computing time to the degree that it outperformed the CPU even for a grid with 'large' Δ*x*, i.e. small matrices and vectors within the computations. As the grid became finer the performance of the GPU over the CPU improved, indicating the impact of the size of the matrices upon which computations are being performed and the degree to which arrayfun is able to improve computations occurring on the GPU. Thus, the manner in which arrayfun computes elementwise is extremely efficient, and if such a structure could be developed for matrix operations then that would truly allow the performance of the GPU to overtake that of CPU computing.

What the work in this chapter has shown is that the structures of the GPU and the Parallel Computing Toolbox in MATLAB are such that while certain algorithms have the ability to be adjusted for improved performance not all methods do. In particular it seems clear that implicit methods with matrix and vector operations will in fact run much slower on the GPU than the CPU. Thus whether GPU computing is able to improve the performance of a numerical scheme is very much dependent upon the type of computations which need to be done. In our case we discovered that the implicit and nonlinear nature of our numerical schemes do not lend themselves towards improved performance via the implementation of the parallel processing power of a GPU.

## **Acknowledgements**

I would like to thank Mr. Dario Fanucchi for invaluable discussions.

## **Author details**

Charis Harley

*Faculty of Science, University of the Witwatersrand, School of Computational and Applied Mathematics, Centre for Differential Equations, Continuum Mechanics and Applications, South Africa*

## **6. References**

[1] Abd El-Salam, M. R. & Shehata, M. H. (2005). The numerical solution for reaction diffusion combustion with fuel consumption, *Appl. Math. Comp.*, 160:423–435
[2] Anderson, C. A. & Zienkiewicz, O. C. (1974). Spontaneous ignition: finite element solutions for steady and transient conditions, *J. Heat Transfer*, 96(3):398–404
[3] Anderson, D.; Hamnén, H.; Lisak, M.; Elevant, T. & Persson, H. (1991). Transition to thermonuclear burn in fusion plasmas, *Plasma Physics and Controlled Fusion*, 33(10):1145–1159
[4] Bieniasz, L. K. (1999). Finite-difference electrochemical kinetic simulations using the Rosenbrock time integration scheme, *Journal of Electroanalytical Chemistry*, 469:97–115
[5] Bieniasz, L. K. & Britz, D. (2001). Chronopotentiometry at a Microband Electrode: Simulation Study Using a Rosenbrock Time Integration Scheme for Differential-Algebraic Equations, and a Direct Sparse Solver, *Journal of Electroanalytical Chemistry*, 503:141–152
[6] Boddington, T. & Gray, P. (1970). Temperature profiles in endothermic and exothermic reactions and interpretation of experimental rate data, *Proc. Roy. Soc. Lond. Ser. A - Mat. Phys. Sci.*, 320(1540):71–100
[7] Britz, D. (2005). *Digital Simulation in Electrochemistry*, 3rd Edition, Lecture Notes in Physics, Springer, 3-540-23979-0, Berlin Heidelberg
[8] Britz, D.; Baronas, R.; Gaidamauskaitė, E. & Ivanauskas, F. (2009). Further Comparisons of Finite Difference Schemes for Computational Modelling of Biosensors, *Nonlinear Analysis: Modelling and Control*, 14(4):419–433
[9] Britz, D.; Strutwolf, J. & Østerby, O. (2011). Digital simulation of thermal reactions, *Appl. Math. and Comp.*, 218(4):1280–1290
[10] Chambré, P. L. (1952). On the solution of the Poisson-Boltzmann equation with application to the theory of thermal explosions, *J. Chem. Phys.*, 20:1795–1797
[11] Chan, Y. N. I.; Birnbaum, I. & Lapidus, L. (1978). Solution of Stiff Differential Equations and the Use of Imbedding Techniques, *Ind. Eng. Chem. Fundam.*, 17(3):133–148
[12] Crank, J. & Nicolson, P. (1947). A practical method for numerical evaluation of solutions of partial differential equations of the heat-conduction type, *Proc. Camb. Phil. Soc.*, 43:50–67
[13] Crank, J. & Furzeland, R. M. (1977). The treatment of boundary singularities in axially symmetric problems containing discs, *J. Inst. Math. Appl.*, 20(3):355–370
[14] Evans, D. J. & Danaee, A. (1982). A new group Hopscotch method for the numerical solution of partial differential equations, *SIAM J. Numer. Anal.*, 19(3):588–598
[15] Feldberg, S. W. (1987). Propagational inadequacy of the hopscotch finite difference algorithm: the enhancement of performance when used with an exponentially expanding grid for simulation of electrochemical diffusion problems, *J. Electroanal. Chem.*, 222:101–106
[16] Frank-Kamenetskii, D. A. (1969). *Diffusion and Heat Transfer in Chemical Kinetics*, Plenum Press, New York
[17] Hairer, E. & Wanner, G. (1991). *Solving Ordinary Differential Equations II, Stiff and Differential-Algebraic Problems*, Springer-Verlag, 3-540-60452-9, Berlin
[18] Harley, C. & Momoniat, E. (2007). Steady state solutions for a thermal explosion in a cylindrical vessel, *Modern Physics Letters B (MPLB)*, 21(14):831–841
[19] Harley, C. & Momoniat, E. (2008). Instability of invariant boundary conditions of a generalized Lane-Emden equation of the second-kind, *Applied Mathematics and Computation*, 198:621–633
[20] Harley, C. & Momoniat, E. (2008). Alternate derivation of the critical value of the Frank-Kamenetskii parameter in the cylindrical geometry, *Journal of Nonlinear Mathematical Physics*, 15(1):69–76
[21] Harley, C. (2010). Explicit-implicit Hopscotch method: The numerical solution of the Frank-Kamenetskii partial differential equation, *Applied Mathematics and Computation*, 217(8):4065–4075
[22] Harley, C. (2011). Crank-Nicolson and Hopscotch method: An emphasis on maintaining the implicit discretisation of the source term as a means of investigating critical parameters. Special Issue on 'Nonlinear Problems: Analytical and Computational Approach with Applications', *Abstract and Applied Analysis*, Submitted
[23] Lang, J. (1996). High-resolution self-adaptive computations on chemical reaction-diffusion problems with internal boundaries, *Chemical Engineering Science*, 51(7):1055–1070
[24] Lang, J. (2001). *Adaptive Multilevel Solution of Nonlinear Parabolic PDE Systems*, Springer, 9783540679004, Berlin
[25] The MathWorks, Inc. ©1994-2012. Parallel Computing Toolbox: Perform parallel computations on multicore computers, GPUs, and computer clusters, http://www.mathworks.com/products/parallel-computing/
[26] Press, W. H.; Teukolsky, S. A.; Vetterling, W. T. & Flannery, B. P. (1986). *Numerical Recipes in Fortran*, 2nd Edition, Cambridge University Press, 0-521-43064-X, Cambridge
[27] Rice, O. K. (1940). The role of heat conduction in thermal gaseous explosions, *J. Chem. Phys.*, 8(9):727–733
[28] Roche, M. (1988). Rosenbrock methods for differential algebraic equations, *Numerische Mathematik*, 52:45–63
[29] Rosenbrock, H. H. (1963). Some general implicit processes for the numerical solution of differential equations, *The Computer Journal*, 5(4):329–330
[30] Sandu, A.; Verwer, J. G.; Blom, J. G.; Spee, E. J. & Carmichael, G. R. (1997). Benchmarking Stiff ODE Solvers for Atmospheric Chemistry Problems II: Rosenbrock Solvers, *Atmospheric Environment*, 31:3459–3472
[31] Steggerda, J. J. (1965). Thermal stability: an extension of Frank-Kamenetskii's theory, *J. Chem. Phys.*, 43:4446–4448
[32] Zhang, G.; Merkin, J. H. & Scott, S. K. (1991). Reaction-diffusion model for combustion with fuel consumption: II. Robin boundary conditions, *IMA J. Appl. Math.*, 51:69–93
[33] Zhang, G.; Merkin, J. H. & Scott, S. K. (1991). Reaction-diffusion model for combustion with fuel consumption: I. Dirichlet boundary conditions, *IMA J. Appl. Math.*, 47:33–60
[34] Zeldovich, Y. B.; Barenblatt, G. I.; Librovich, V. B. & Makhviladze, G. M. (1985). *The Mathematical Theory of Combustion and Explosions*, Consultants Bureau, New York

© 2012 Zhou, licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


## **Fuzzy Analytical Network Process Implementation with Matlab**

## Xiaoguang Zhou


Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/46466

## **1. Introduction**

In many complex decision-making problems, the decision information provided by the decision makers is often imprecise or uncertain due to time limits, lack of data, or the decision makers' limited attention and information-processing capabilities. Decision-making in a fuzzy environment means a decision process in which the goals and/or the constraints, but not necessarily the system under control, are fuzzy in nature. This means that the goals and/or the constraints constitute classes of alternatives whose boundaries are not sharply defined. The fuzzy set, whose basic component is a membership function, was introduced by Zadeh (1965), and in the following several decades fuzzy set theory has been applied successfully in the decision-making field.

Matlab is a suitable tool for solving fuzzy decision-making problems. This chapter focuses on how to solve a specific class of fuzzy decision-making problem, namely the Fuzzy Analytical Network Process (FANP), with Matlab. Project selection is chosen as an example to illustrate the proposed method, because project selection is a complex decision-making process: it involves a search of the environment for opportunities, the generation of project options, and the evaluation of multiple attributes, both qualitative and quantitative, by different stakeholders.

There are various mathematical techniques for selecting an optimal project. Mathematical programming models can be used to support this decision: for example, R&D project selection can be formulated using linear, non-linear, dynamic, goal, and stochastic mathematical programming (Wang & Hwang, 2007). Based on the idea of the moments approximation method via linear programming, Fang et al. (2008) proposed a scenario generation approach for the mixed single-stage Research and Development (R&D) project and multi-stage securities portfolio selection problem.


Project selection has also been considered with regard to multiple objectives. For instance, Gabriel et al. (2006) developed a multiobjective, integer-constrained optimization model with competing objectives for project selection, in which the subjective rank is determined via the Analytic Hierarchy Process (AHP). Ghorbani & Rabbani (2009) proposed a multi-objective algorithm for project selection; two objective functions were considered, to maximize the total expected benefit of the selected projects and to minimize the summation of the absolute variation of allotted resources between successive time periods. Gutjahr et al. (2010) presented a multi-objective optimization model for project portfolio selection taking employee competencies and their evolution into account, and so on.

Fuzzy Analytical Network Process Implementation with Matlab 135

In this study, a fuzzy logic is introduced for the pairwise comparison of ANP to make up the deficiency of the conventional AHP/ANP, referred to as FANP. The objective of this chapter is to present a FANP-based approach for the construction project selection problem using triangular fuzzy numbers. According to the fuzzy preference programming (FPP) method, local weights of fuzzy pairwise comparison matrices can be achieved. Then an unweighted and weighted supermatrix based on its network structure can be formed. For FANP, the key steps are to calculate the local weights and the limit supermatrix. Both of them can be solved by Matlab. A case will be given by the proposed method, and Matlab codes will be provided

The ANP, introduced by Saaty, is a generalization of the AHP (Saaty, 1996). ANP is the first mathematical theory that makes it possible to deal with all kinds of dependences and feedbacks by replacing hierarchies with networks (Saaty, 1996). ANP allows for complex inter-relationships among decision dimensions and attributes. The ANP feedback approach replaces hierarchies with networks in which the relationships between dimensions are not easily represented as higher or lower, dominant or subordinate, direct or indirect. For instance, not only does the importance of the criteria determine the importance of the attributes, as in a hierarchy, but also the importance of the attributes may have impact on the importance of the criteria. A hierarchical structure with a linear top-to-bottom form is

As we know, AHP/ANP has been proposed as a suitable multi-criteria decision analysis tool for project selection and evaluation. However, the conventional AHP/ANP-based decision model seems to be ineffective in dealing with the inherent fuzziness or uncertainty in judgment during the pairwise comparison process. Although the use of the discrete scale of 1-9 to represent the verbal judgment in pairwise comparisons has the advantage of simplicity, it does not take into account the uncertainty associated with the mapping of one's perception or judgment to a number. In real-life decision-making situations, the decision makers or stakeholders could be uncertain about their own level of preference, due to incomplete information or knowledge, complexity and uncertainty within the decision environment. Such situations will occur when selecting and evaluating an optimal project. Therefore, it's better to make project selection and assessment under fuzzy conditions. This chapter will focus on FANP in fuzzy decision-making based on triangular fuzzy numbers. Actually, some researchers have focused on decision-making based on FANP (Promentilla et al., 2008 ; Ayağ & Özdemir, 2009 ; Boran & Goztepe, 2010; Dağdeviren & Yüksel, 2010;

A fuzzy set is a class of objects with a continuum of grades of membership. Such a set is characterized by a membership function, which assigns to each object a grade of

as well.

**2. What's FANP?** 

not suitable for a complex system.

Pires et al., 2011 ; Ju et al., 2012 ; etc. ).

membership ranging between zero and one.

**2.1. Triangular fuzzy number** 

Project selection has also been treated with regard to multiple objectives. For instance, Gabriel et al. (2006) developed a multiobjective, integer-constrained optimization model with competing objectives for project selection, in which the subjective rank is determined via the Analytic Hierarchy Process (AHP). Ghorbani & Rabbani (2009) proposed a multi-objective algorithm for project selection; two objective functions maximize the total expected benefit of the selected projects and minimize the summation of the absolute variation of allotted resources between successive time periods. Gutjahr et al. (2010) presented a multi-objective optimization model for project portfolio selection that takes employee competencies and their evolution into account, and so on.

Fuzzy sets theory has been utilized to cover the vagueness inherent in the nature of the project selection problem as well. For example, Huang et al. (2008) presented a fuzzy analytic hierarchy process method and utilized a crisp judgment matrix to evaluate subjective expert judgments. Bhattacharyya et al. (2011) developed a fuzzy multi-objective programming approach to facilitate decision making in the selection of R&D projects. Ebrahimnejad et al. (2011) considered a two-phase decision-making approach, combining a modified version of the Analytic Network Process (ANP) method and an improved compromise ranking method under uncertainty. Chang & Lee (2012) proposed an integrated model combining Data Envelopment Analysis (DEA), a knapsack formulation and fuzzy set theory to deal with project selection, etc.

Some researchers have approached the project selection problem in other ways. Dey (2006) proposed a decision support system which analyses projects with respect to market, technicalities, and social and environmental impact in an integrated framework using the analytic hierarchy process. Liesiö et al. (2007) developed the Robust Portfolio Modeling methodology, which extends Preference Programming methods to portfolio problems in which a subset of project proposals is funded in view of multiple evaluation criteria. Smith-Perera et al. (2010) proposed an approach to prioritize a project portfolio in an efficient and reliable way, based on the analytic network process method and the information obtained from the experts during the decision-making process. Shakhsi-Niaei et al. (2011) presented a procedure which links the PROMETHEE method to a Monte Carlo simulation in order to account for, and possibly reduce, all kinds of uncertainties of the project selection problem at an acceptable level of complexity, and so on.

As mentioned above, AHP or ANP has been widely used in the process of selecting or evaluating an optimal project (Gabriel et al., 2006; Dey, 2006; Smith-Perera et al., 2010; Ebrahimnejad et al., 2011), and further studies (Cheng & Li, 2005; Amiri, 2010; Aragonés-Beltrán et al., 2010; Jung & Seo, 2010; etc.) also address project selection based on AHP/ANP. In addition, AHP/ANP has been widely applied in other decision-making fields as well (Kahraman et al., 2006; Lee et al., 2009; Arunraj & Maiti, 2010; Huang et al., 2011; etc.). The main reason is that AHP/ANP can deal with qualitative and quantitative information at the same time, and ANP can take into account the interaction and feedback relationships between criteria and/or indices. However, due to the vagueness and uncertainty in the judgments of decision-makers, the crisp pairwise comparison in the conventional AHP/ANP seems insufficient and imprecise to capture the right judgments of decision-makers.

In this study, fuzzy logic is introduced into the pairwise comparison of ANP to make up for the deficiency of the conventional AHP/ANP; the resulting method is referred to as FANP. The objective of this chapter is to present a FANP-based approach for the construction project selection problem using triangular fuzzy numbers. According to the fuzzy preference programming (FPP) method, local weights of fuzzy pairwise comparison matrices can be obtained. Then an unweighted and a weighted supermatrix based on the network structure can be formed. For FANP, the key steps are calculating the local weights and the limit supermatrix; both can be solved with Matlab. A case study solved with the proposed method will be given, and Matlab code will be provided as well.

## **2. What's FANP?**

The ANP, introduced by Saaty, is a generalization of the AHP (Saaty, 1996). ANP is the first mathematical theory that makes it possible to deal with all kinds of dependences and feedbacks by replacing hierarchies with networks (Saaty, 1996). ANP allows for complex inter-relationships among decision dimensions and attributes. The ANP feedback approach replaces hierarchies with networks in which the relationships between dimensions are not easily represented as higher or lower, dominant or subordinate, direct or indirect. For instance, not only does the importance of the criteria determine the importance of the attributes, as in a hierarchy, but the importance of the attributes may also have an impact on the importance of the criteria. A hierarchical structure with a linear top-to-bottom form is not suitable for a complex system.

As we know, AHP/ANP has been proposed as a suitable multi-criteria decision analysis tool for project selection and evaluation. However, the conventional AHP/ANP-based decision model seems to be ineffective in dealing with the inherent fuzziness or uncertainty in judgment during the pairwise comparison process. Although the use of the discrete 1–9 scale to represent verbal judgments in pairwise comparisons has the advantage of simplicity, it does not take into account the uncertainty associated with mapping one's perception or judgment to a number. In real-life decision-making situations, the decision makers or stakeholders can be uncertain about their own level of preference, due to incomplete information or knowledge, and to complexity and uncertainty within the decision environment. Such situations occur when selecting and evaluating an optimal project. Therefore, it is better to perform project selection and assessment under fuzzy conditions. This chapter will focus on FANP in fuzzy decision-making based on triangular fuzzy numbers. Several researchers have already studied decision-making based on FANP (Promentilla et al., 2008; Ayağ & Özdemir, 2009; Boran & Goztepe, 2010; Dağdeviren & Yüksel, 2010; Pires et al., 2011; Ju et al., 2012; etc.).

## **2.1. Triangular fuzzy number**

A fuzzy set is a class of objects with a continuum of grades of membership. Such a set is characterized by a membership function, which assigns to each object a grade of membership ranging between zero and one.

A triangular fuzzy number (TFN) *M* is shown in Fig. 1. A TFN is denoted simply as (*l*, *m*, *u*). The parameters *l*, *m* and *u*, respectively, denote the smallest possible value, the most promising value, and the largest possible value that describe a fuzzy event. Each TFN has linear representations on its left and right side such that its membership function can be defined as

$$\mu\_M(x) = \begin{cases} (x - l)/(m - l), & l \le x \le m, \\ (u - x)/(u - m), & m \le x \le u, \\ 0, & \text{otherwise}, \end{cases} \tag{1}$$
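The piecewise definition in (1) is straightforward to evaluate directly. The following is a small illustrative sketch in Python (the chapter's own code is in Matlab, and the function name here is ours, not the chapter's):

```python
# Membership function of a triangular fuzzy number (l, m, u), eq. (1).
# Illustrative sketch only; the chapter itself works in Matlab.

def tfn_membership(x, l, m, u):
    """Degree to which x belongs to the TFN (l, m, u)."""
    if l <= x <= m:
        # Guard against a degenerate left side (l == m).
        return 1.0 if m == l else (x - l) / (m - l)
    if m <= x <= u:
        # Guard against a degenerate right side (m == u).
        return 1.0 if u == m else (u - x) / (u - m)
    return 0.0

# The "moderately important" judgment (2, 3, 4) from the chapter's scale:
print(tfn_membership(3.0, 2, 3, 4))  # peak of the triangle: 1.0
print(tfn_membership(2.5, 2, 3, 4))  # halfway up the left side: 0.5
print(tfn_membership(5.0, 2, 3, 4))  # outside the support: 0.0
```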

Fuzzy Analytical Network Process Implementation with Matlab 137


**Figure 1.** A triangular fuzzy number *M*

#### **2.2. Fuzzy preference programming method**

A number of methods have been developed to handle fuzzy comparison matrices. For instance, Van Laarhoven & Pedrycz (1983) suggested a fuzzy logarithmic least squares method to obtain the fuzzy weights from a triangular fuzzy comparison matrix. Buckley (1985) utilized the geometric mean method to calculate fuzzy weights. Chang (1996) proposed an extent analysis method, which derives crisp weights for fuzzy comparison matrices. Xu (2000) brought forward a fuzzy least squares priority method (LSM). Csutora & Buckley (2001) came up with a Lambda-Max method, which is the direct fuzzification of the well-known *λ*max method. Mikhailov (2003, 2004) developed a fuzzy preference programming method, which also derives crisp weights from fuzzy comparison matrices. Srdjevic (2005) proposed a multicriteria approach for combining prioritization methods within the AHP, including additive normalization, eigenvector, weighted least-squares, logarithmic least-squares, logarithmic goal programming and fuzzy preference programming. Wang et al. (2006) presented a modified fuzzy logarithmic least square method. Yu & Cheng (2007) developed a multiple objective programming approach for the ANP to obtain all local priorities for crisp or interval judgments at one time, even in an inconsistent situation. Huo et al. (2011) proposed new parametric prioritization methods (PPMs) to determine a family of priority vectors in AHP.

FPP method, as a reasonable and effective means, is adopted in this study. This method can acquire the consistency ratios of fuzzy pairwise comparison matrices without additional study, and the local weights can be easily solved by Matlab software. The stages of Mikhailov's fuzzy prioritization approach are given as follows (Mikhailov, 2003).

Consider a prioritization problem with *n* elements, where the pairwise comparison judgements are represented by normal fuzzy sets or fuzzy numbers. Suppose that the decision-maker can provide a set *F* = {*ãij*} of *m* ≤ *n*(*n*−1)/2 fuzzy comparison judgements, *i* = 1, 2, …, *n*−1; *j* = 2, 3, …, *n*; *j* > *i*, represented as triangular fuzzy numbers *ãij* = (*lij*, *mij*, *uij*). The problem is to derive a crisp priority vector *w* = (*w*1, *w*2, …, *wn*)T, such that the priority ratios *wi*/*wj* lie approximately within the scopes of the initial fuzzy judgments, or

$$l\_{ij} \,\widetilde{\le}\, \frac{w\_i}{w\_j} \,\widetilde{\le}\, u\_{ij} \tag{2}$$

where the symbol $\widetilde{\le}$ denotes the statement "fuzzy less or equal to".


Each crisp priority vector *w* satisfies the double-side inequality (2) with some degree, which can be measured by a membership function, linear with respect to the unknown ratio *wi*/*wj*

$$\mu\_{ij}\left(\frac{w\_i}{w\_j}\right) = \begin{cases} \dfrac{w\_i/w\_j - l\_{ij}}{m\_{ij} - l\_{ij}}, & \dfrac{w\_i}{w\_j} \le m\_{ij}, \\[2ex] \dfrac{u\_{ij} - w\_i/w\_j}{u\_{ij} - m\_{ij}}, & \dfrac{w\_i}{w\_j} \ge m\_{ij}. \end{cases} \tag{3}$$

The membership function (3) is linearly increasing over the interval (−∞, *mij*) and linearly decreasing over the interval (*mij*, ∞). The function takes negative values when *wi*/*wj* < *lij* or *wi*/*wj* > *uij*, and has its maximum value *μij* = 1 at *wi*/*wj* = *mij*. Over the range (*lij*, *uij*), the membership function (3) coincides with the fuzzy triangular judgment (*lij*, *mij*, *uij*).

The solution to the prioritization problem by the FPP method is based on two main assumptions. The first one requires the existence of a non-empty fuzzy feasible area *P* on the (*n*−1)-dimensional simplex $Q^{n-1}$

$$Q^{n-1} = \left\{ (w\_1, w\_2, \dots, w\_n) \;\middle|\; w\_i > 0,\ \sum\_{i=1}^{n} w\_i = 1 \right\} \tag{4}$$

defined as the intersection of the membership functions (3) and the simplex hyperplane (4). The membership function of the fuzzy feasible area is given by

$$\mu\_P(w) = \min\_{ij} \left\{ \mu\_{ij}(w) \;\middle|\; i = 1, 2, \dots, n-1;\ j = 2, 3, \dots, n;\ j > i \right\}. \tag{5}$$

If the fuzzy judgements are very inconsistent, then $\mu\_P(w)$ could take negative values for all normalized priority vectors $w \in Q^{n-1}$.

The second assumption of the FPP method specifies a selection rule, which determines a priority vector having the highest degree of membership in the aggregated membership function (5). It can easily be proved that the fuzzy feasible area *P* is a convex set, so there is always a priority vector $w^\* \in Q^{n-1}$ that has a maximum degree of membership

$$\lambda^\* = \mu\_P(w^\*) = \max\_{w \in Q^{n-1}} \min\_{ij} \left\{ \mu\_{ij}(w) \right\}. \tag{6}$$



The maximum prioritization problem (6) can be represented in the following way:

$$\begin{aligned} \max\ & \lambda \\ \text{s.t.}\ & \lambda \le \mu\_{ij}(w),\quad i = 1, 2, \dots, n-1;\ j = 2, 3, \dots, n;\ j > i, \\ & \sum\_{k=1}^{n} w\_k = 1,\quad w\_k > 0,\ k = 1, 2, \dots, n. \end{aligned} \tag{7}$$

Taking the specific form of the membership functions (3) into consideration, the problem (7) can be further transformed into a bilinear program of the type

$$\begin{aligned} \max\ & \lambda \\ \text{s.t.}\ & (m\_{ij} - l\_{ij})\,\lambda w\_j - w\_i + l\_{ij} w\_j \le 0, \\ & (u\_{ij} - m\_{ij})\,\lambda w\_j + w\_i - u\_{ij} w\_j \le 0, \\ & \sum\_{k=1}^{n} w\_k = 1,\quad w\_k > 0,\ k = 1, 2, \dots, n, \\ & i = 1, 2, \dots, n-1;\ j = 2, 3, \dots, n;\ j > i. \end{aligned} \tag{8}$$

The optimal solution to the non-linear problem above, ($w^\*$, $\lambda^\*$), can be obtained by employing an appropriate numerical method for non-linear optimization. If the optimal value $\lambda^\*$ is positive (its maximum value is one), all solution ratios satisfy the fuzzy judgments completely, which means that the initial set of fuzzy judgments is rather consistent. A negative value of $\lambda^\*$ shows that the solution ratios only approximately satisfy all double-side inequalities (2). Therefore, the optimal value $\lambda^\*$ can be used for measuring the consistency of the initial set of fuzzy judgments.
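As one such "appropriate numerical method", the bilinear program (8) can be handed to a general constrained solver. Below is a sketch using Python's SciPy rather than the chapter's Matlab; the function name and the small judgment set are illustrative assumptions, not the chapter's case data:

```python
# Numerical solution of the FPP bilinear program (8) with SciPy's SLSQP.
# A sketch only; the chapter solves this step in Matlab.
import numpy as np
from scipy.optimize import minimize

def fpp_local_weights(judgments, n):
    """judgments: dict {(i, j): (l, m, u)} with i < j, 0-based indices.
    Returns (w, lam): crisp priority vector and consistency index lambda*."""
    def neg_lambda(x):          # maximize lambda = x[n]
        return -x[n]
    cons = [{"type": "eq", "fun": lambda x: np.sum(x[:n]) - 1.0}]
    for (i, j), (l, m, u) in judgments.items():
        # (m-l)*lam*w_j - w_i + l*w_j <= 0  and  (u-m)*lam*w_j + w_i - u*w_j <= 0,
        # rewritten as g(x) >= 0 for SLSQP's "ineq" convention.
        cons.append({"type": "ineq",
                     "fun": lambda x, i=i, j=j, l=l, m=m: x[i] - l*x[j] - (m - l)*x[n]*x[j]})
        cons.append({"type": "ineq",
                     "fun": lambda x, i=i, j=j, m=m, u=u: u*x[j] - x[i] - (u - m)*x[n]*x[j]})
    x0 = np.append(np.full(n, 1.0 / n), 0.0)      # uniform weights, lambda = 0
    bounds = [(1e-6, 1.0)] * n + [(None, 1.0)]    # lambda* never exceeds 1
    res = minimize(neg_lambda, x0, bounds=bounds, constraints=cons, method="SLSQP")
    return res.x[:n], res.x[n]

# A perfectly consistent illustrative judgment set (priority ratios 4 : 2 : 1):
judgments = {(0, 1): (1, 2, 3), (1, 2): (1, 2, 3), (0, 2): (3, 4, 5)}
w, lam = fpp_local_weights(judgments, 3)
print(np.round(w, 3), round(lam, 2))
```

Since the ratios here can match every *mij* exactly, the solver should report a consistency index near its maximum of one; an inconsistent judgment set would drive it down or negative.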

#### **3. Proposed project selection and evaluation framework**

This study proposes an analytic approach based on the fuzzy ANP to assist in project selection and evaluation. We first identify the selection and evaluation criteria. Then we present the evaluation model in the following subsections.

#### **3.1. Criteria of project selection**

Some researchers have investigated which factors have an effect on project selection. For example, Mohanty (1992) pointed out that a potential project should have four important features: minimum investment, low complexity of technology, a short period and high returns. Lin & Chen (2004) suggested that contractors should consider the essence of bidding, competition, the value of tendering opportunities, resources, and corporate reputation. Kumar (2006) pointed out that project selection belongs to the strategic level for any organization, and today is seriously influenced by environmental and social factors. Wang et al. (2009) suggested that during project selection the evaluation index system can be divided into two parts, bid/no-bid and which project to bid, before making a comprehensive evaluation, etc.

With the interaction and feedback relationships between dimensions and/or attributeenablers being considered, a four-level evaluation index system is presented, as shown in Fig. 2. The first level is the objective, which is to find out an optimal project; the second level is the dimensions including project profitability, project risk, project owners and project bidding competition; the third level is attribute-enablers, 15 indicators included; and the lowest level is the alternatives.

**Figure 2.** The ANP structure for construction project selection

[Figure 2, recoverable from the source only in outline: goal "To find out the optimal project"; dimensions Profitability (S1), Risk (S2), Owners (S3) and Bidding competition (S4); attribute-enablers including Project budget (S11), Rate of return (S12), Investment recovery period (S13), Funds arrival rate (S21), Technical complexity (S22), Ratio of completion time and average (S23), Cost over budget (S24), Cooperation intention and Compliance reputation (S31, S32), Public relations (S33), Communication skills, Number of competitors (S41), Size of competitors (S42) and Competitor's strategy (S43); alternatives Project 1, Project 2, …, Project *n*. One risk-related label is illegible in the source.]

Generally, if project profitability (S1) has an effect on project bidding competition (S4), then an arrowed line from S4 to S1 is added. If the sub-criteria of project risk (S2) interact with one another, then S2 is inner-dependent, and an arrowed arc looping back to S2 is added.

## **3.2. FANP-based approach**


The fuzzy ANP-based approach is presented step-by-step as follows:

**Step 1.** Model construction and problem structuring. With the relationships among dimensions and attribute-enablers being considered, a four-level selection and evaluation index system is proposed, as shown in Fig. 2.

During the process of project selection and evaluation, experts tend to specify their preferences in the form of natural-language expressions. A fuzzy linguistic variable is a variable whose values are words or sentences in a natural or artificial language, reflecting different aspects of human judgment; its values can be divided into linguistic grades such as equally important, moderately important, important, very important and absolutely important. For the purposes of the present study, a 9-point scale is presented for the relative importance of pairwise comparisons, as shown in Table 1.
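Such a linguistic scale is naturally encoded as a lookup from each term to its triangular fuzzy number and reciprocal. A Python sketch with our own naming (the TFN values follow the chapter's 9-point scale in Table 1):

```python
# The 9-point linguistic scale as a lookup table; abbreviations as in Table 1.
from fractions import Fraction as F

SCALE = {
    "EI":  (1, 1, 1),   # equally important
    "IM1": (1, 2, 3),   # intermediate values between two adjacent grades
    "MI":  (2, 3, 4),   # moderately important
    "IM2": (3, 4, 5),
    "I":   (4, 5, 6),   # important
    "IM3": (5, 6, 7),
    "VI":  (6, 7, 8),   # very important
    "IM4": (7, 8, 9),
    "AI":  (9, 9, 9),   # absolutely important
}

def reciprocal(tfn):
    """Reciprocal TFN: (l, m, u) -> (1/u, 1/m, 1/l)."""
    l, m, u = tfn
    return (F(1, u), F(1, m), F(1, l))

# "Moderately important" and its reciprocal, as exact fractions:
print(SCALE["MI"], reciprocal(SCALE["MI"]))
```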

**Step 2.** Establishing pairwise comparison matrices by the decision committee, using the linguistic scales for relative importance given in Table 1. For example, complication of technique (S11) and maturity of technique (S12) are compared using the question "How important is the complication of technique when it is compared with the maturity of technique at the dimension of technique risk?"; the answer is "moderately important (MI)", so this linguistic scale is placed in the relevant cell against the triangular fuzzy numbers (2, 3, 4). All the fuzzy evaluation matrices are produced in the same manner.

| Linguistic scale for importance | Triangular fuzzy scale | Triangular fuzzy reciprocal scale |
|---|---|---|
| Equally important (EI) | (1, 1, 1) | (1, 1, 1) |
| Intermediate 1 (IM1) | (1, 2, 3) | (1/3, 1/2, 1) |
| Moderately important (MI) | (2, 3, 4) | (1/4, 1/3, 1/2) |
| Intermediate 2 (IM2) | (3, 4, 5) | (1/5, 1/4, 1/3) |
| Important (I) | (4, 5, 6) | (1/6, 1/5, 1/4) |
| Intermediate 3 (IM3) | (5, 6, 7) | (1/7, 1/6, 1/5) |
| Very important (VI) | (6, 7, 8) | (1/8, 1/7, 1/6) |
| Intermediate 4 (IM4) | (7, 8, 9) | (1/9, 1/8, 1/7) |
| Absolutely important (AI) | (9, 9, 9) | (1/9, 1/9, 1/9) |

**Table 1.** Linguistic scales for relative importance

**Step 3.** Calculating the local weights of the factors and sub-factors taking part in the second and third levels of the ANP model by the FPP method, according to formulation (8).

**Step 4.** Constructing an unweighted supermatrix based on the interdependencies in the network. The supermatrix is a partitioned matrix, in which each submatrix is composed of a set of relationships between dimensions and attribute-enablers in the graphical model.

**Step 5.** Acquiring the weighted supermatrix. Because each column of the unweighted supermatrix consists of several eigenvectors, each of which sums to one, an entire column of the matrix may sum to an integer greater than one; the unweighted supermatrix therefore needs to be made column-stochastic to derive the weighted supermatrix.

**Step 6.** Obtaining the limit supermatrix. The weighted supermatrix is not in a steady state until the values in each row converge to the same value across every column of the matrix; the limit supermatrix is then achieved:

$$\overline{W} = \lim\_{t \to \infty} W^t. \tag{9}$$

**Step 7.** Calculating the comprehensive weights. A comprehensive weight of each index can be obtained by multiplying the weight of the criterion-level indicator, the weight of the independent sub-criterion and the weight of the interdependent sub-index:

$$w\_{ij} = P\_i \, A\_{ij}^{D} \, A\_{ij}^{I}, \tag{10}$$

where $w\_{ij}$ is the comprehensive weight of each factor; $P\_i$ is the relative importance weight of dimension *i* on the final goal; $A\_{ij}^{D}$ is the relative importance weight for attribute-enabler *j* of dimension *i*, for the dependency (*D*) relationships within the attribute-enablers' component level; $A\_{ij}^{I}$ is the stabilized relative importance weight for attribute-enabler *j* of dimension *i*, for the independency (*I*) relationships within the attribute-enablers' component level; *i* = 1, 2, …, *m*; *j* = 1, 2, …, *n*.

**Step 8.** Selecting an optimal construction project. The desirability index $D\_k$ for alternative *k* is calculated using the following equation:

$$D\_k = \sum\_{i=1}^{m} \sum\_{j=1}^{n} w\_{ij} S\_{ikj}, \tag{11}$$

where $w\_{ij}$ is the comprehensive weight and $S\_{ikj}$ is the relative impact of construction project alternative *k* on attribute-enabler *j* of dimension *i* of the selection network.

## **4. How to use Matlab during the process of decision-making with FANP?**

Two key steps in the process of FANP will be solved with Matlab. One is acquiring the local weights of the fuzzy pairwise comparison matrices (step 3); the other is obtaining the limit supermatrix (step 6). Both are associated with matrix calculation, and Matlab is selected for its excellent performance in matrix operation and data processing.

#### **4.1. Acquiring local weights of fuzzy pairwise comparison matrices**

$$w\_{ij} = P\_i \times A\_{ij}^D \times A\_{ij}^I \tag{10}$$

where *wij* is the comprehensive weight of each factor; *Pi* is the relative importance weight of dimension *i* on the final goal; *Aij<sup>D</sup>* is the relative importance weight for attribute-enabler *j* of dimension *i*, for the dependency (*D*) relationships within the attribute-enabler's component level; *Aij<sup>I</sup>* is the stabilized relative importance weight for attribute-enabler *j* of dimension *i*, for the independency (*I*) relationships within the attribute-enabler's component level; *i* = 1, 2, …, *m*; *j* = 1, 2, …, *n*.

**Step 8.** Selecting an optimal construction project. The equation of desirability index, *Dk* for alternative *k* is calculated using the following equation:

$$D\_k = \sum\_{i=1}^{m} \sum\_{j=1}^{n} w\_{ij} S\_{kij} \tag{11}$$

where *wij* is the comprehensive weight and *Skij* is the relative impact of construction project alternative *k* on attribute-enabler *j* of dimension *i* of the selection network.
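Steps 7 and 8 reduce to elementwise products and sums, so they can be sketched numerically. The snippet below is an illustrative Python sketch of Eq. (10) and Eq. (11); the data are hypothetical toy values (two dimensions, two attribute-enablers each, two alternatives), not taken from this chapter.

```python
import numpy as np

# Hypothetical toy data: 2 dimensions, 2 attribute-enablers per dimension,
# 2 candidate construction projects (alternatives).
P = np.array([0.6, 0.4])                  # P_i: weight of dimension i on the goal
A_D = np.array([[0.7, 0.3],               # A^D_ij: dependent weights
                [0.5, 0.5]])
A_I = np.array([[0.8, 0.2],               # A^I_ij: stabilized interdependent weights
                [0.4, 0.6]])
S = np.array([[[0.9, 0.1], [0.6, 0.4]],   # S[k, i, j]: impact of alternative k on
              [[0.2, 0.8], [0.5, 0.5]]])  # attribute-enabler j of dimension i

w = P[:, None] * A_D * A_I                # Eq. (10): w_ij = P_i * A^D_ij * A^I_ij
D = (w[None, :, :] * S).sum(axis=(1, 2))  # Eq. (11): D_k = sum_ij w_ij * S_kij
best = int(np.argmax(D))                  # Step 8: pick the most desirable project
```

With these toy numbers the comprehensive weights concentrate on the first attribute-enabler of the first dimension, so the first alternative wins.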

#### **4. How to use Matlab during the process of decision-making with FANP?**

Two key steps in the FANP process will be solved with Matlab: one is acquiring the local weights of the fuzzy pairwise comparison matrices (Step 3); the other is obtaining the limit supermatrix (Step 6). Both involve matrix calculation, and Matlab is selected for its excellent performance in matrix operations and data processing.

#### **4.1. Acquiring local weights of fuzzy pairwise comparison matrices**

$$\begin{aligned} \text{Max } \lambda \\ (m\_{ij} - l\_{ij}) \lambda \varpi\_j - \varpi\_i + l\_{ij} \varpi\_j &\le 0, \\ (u\_{ij} - m\_{ij}) \lambda \varpi\_j + \varpi\_i - u\_{ij} \varpi\_j &\le 0, \\ \sum\_{k=1}^n \varpi\_k = 1, \quad \varpi\_k &> 0, \; k = 1, 2, \dots, n, \\ i = 1, 2, \dots, n - 1; \; j &= 2, 3, \dots, n; \; j > i. \end{aligned} \tag{8}$$

As mentioned before, the local weights of the fuzzy pairwise comparison matrices are achieved by the FPP method. That is, the local weights are obtained by solving the non-linear problem (8) above.

Fuzzy Analytical Network Process Implementation with Matlab 143

As the criteria and sub-criteria differ in number, there are fuzzy pairwise comparison matrices of different orders, such as (2×2), (3×3), …, (*n*×*n*). Therefore, the local weights and consistency index may be derived from matrices of different orders. Function definition and non-linear program calculation are addressed first in this section; then some examples are given to illustrate the proposed method.

## *4.1.1. Function definition*

To acquire the local weights of fuzzy comparison matrices, some functions need to be defined. A procedure can be saved as an ".m" file in Matlab, and its name can then be used directly whenever it needs to be called.

To obtain the local weights, a main program file is developed, named "networkmain.m". The local weights can be easily acquired by entering "networkmain" in the command window. As there are different forms of comparison matrices, the main program file needs to be changed slightly for matrices of different orders.

Matlab contains many functions for solving linear and non-linear problems. For instance, non-linear problems can be solved by the function "fmincon", which is also the key function of the file "networkmain.m". The full expression of the function is "fmincon(fun, *x*0, *A*, *b*, Aeq, beq, VLB, VUB, nonlcon)". During the process of decision-making with FANP, in accordance with the FPP method, the fuzzy pairwise comparison matrices first need to be transformed into non-linear programming format; the problem can then be solved by the function "fmincon". A detailed description of the function's arguments is as follows:

"Fun" is the objective function of the non-linear problem. A variable in the objective function is written as *x*(*i*); if the total number of variables is *n*, they are correspondingly named *x*(1), *x*(2), …, *x*(*n*). There are (*n*+1) variables for an (*n*×*n*) comparison matrix, including *n* local weights and a consistency index. The objective function in the FPP method is to be maximised, but the default objective of "fmincon" in Matlab is to be minimised, so it is necessary to convert *x*(*n*+1) into -*x*(*n*+1) in the function "fmincon". Of course, this could equally be handled by changing the constraints instead of the objective function.

"*x*0" is the initial value of the non-linear problem, and it has the same length as the number of variables. Every local weight *x*(*i*) takes values in the range [0, 1], and their sum satisfies *x*(1)+*x*(2)+…+*x*(*n*)=1. The consistency index *x*(*n*+1) takes values in the range (-∞, 1].

"*A* and *b*" are the coefficients of the linear inequality constraint *A*\**x* ≤ *b*. As there are no linear inequality constraints in FANP, *A* and *b* can be ignored or replaced with two empty arrays.

"Aeq and beq" are both the coefficients of linear equality constraint Aeq\**x*=beq. According to FANP, the sum of local weights should be one, then

$$
x(1) + x(2) + \dots + x(n) + 0 \cdot x(n+1) = 1,
$$

where *x*(1), *x*(2), …, *x*(*n*) are the first, second, …, *n*th local weight respectively, and *x*(*n*+1) is the consistency index. Then we have Aeq=[1 1 … 1 0] and beq=[1].

"VLB and VUB" are the lower and upper bounds of the variables. According to the FPP method and FANP, all the local weights have a lower bound of zero, and the lower bound of the consistency index is negative infinity. Since all the upper bounds are already implied by the constraints, they can be replaced with empty arrays. In Matlab, positive and negative infinity are denoted inf and -inf respectively.

"nonlcon" specifies the non-linear constraints, including the non-linear inequality constraints *c* and the non-linear equality constraints ceq. As there are no non-linear equality constraints in the FPP method, we can let ceq = [ ].

To solve the non-linear problem (8), the following main program file "networkmain.m" is developed.

```
Aeq=[1 1 … 1 0]; 
beq=[1]; 
VLB = [0; 0;…; 0; -inf]; 
VUB = [ ]; 
x0 = [0.1; 0.2; …; 1]; 
OPT = optimset('LargeScale', 'off'); 
[x, fval] = fmincon('networkf', x0, [ ], [ ],Aeq, beq, VLB, VUB, 'networknonlcon', OPT)
```
The LargeScale option specifies a preference for which algorithm to use. It is only a preference because certain conditions must be met to use the large-scale algorithm. For the function "fmincon" we choose the medium-scale algorithm: "fmincon" uses the medium-scale algorithm when LargeScale is set to 'off'. Two files, "networkf.m" and "networknonlcon.m", are called in the procedure above; they define the objective function and the non-linear constraints respectively.

The program of "networkf.m" is developed as follows:

```
function f = networkf(x); 
f = -x(n+1);
```


where the value of function *f* is related to *n*. For example, for a matrix of order 3, *n*=3, *f*=-*x*(4); for a matrix of order 4, *n*=4, *f*=-*x*(5). Therefore, objective function *f* has different formats as a result of different sizes of matrices.

According to the FPP method, every triangular fuzzy number (*lij*, *mij*, *uij*) needs to be transformed into the following inequality constraints.

(*mij*-*lij*)\**x*(*n*+1)\**x*(*j*) - *x*(*i*) + (*lij*)\**x*(*j*) ≤ 0;

(*uij*-*mij*)\**x*(*n*+1)\**x*(*j*) + *x*(*i*) - (*uij*)\**x*(*j*) ≤ 0.

As we know, a triangular fuzzy comparison matrix is symmetric. Therefore, for the following matrix, we only need to consider the constraints above the diagonal.

$$\begin{pmatrix} (l\_{11}, m\_{11}, u\_{11}) & (l\_{12}, m\_{12}, u\_{12}) & \dots & (l\_{1n}, m\_{1n}, u\_{1n}) \\ (l\_{21}, m\_{21}, u\_{21}) & (l\_{22}, m\_{22}, u\_{22}) & \dots & (l\_{2n}, m\_{2n}, u\_{2n}) \\ \vdots & \vdots & \ddots & \vdots \\ (l\_{n1}, m\_{n1}, u\_{n1}) & (l\_{n2}, m\_{n2}, u\_{n2}) & \dots & (l\_{nn}, m\_{nn}, u\_{nn}) \end{pmatrix}$$
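The upper-diagonal transformation can also be sketched generically. The Python helper below is illustrative only (the chapter's own code is Matlab): `fpp_constraints` is a hypothetical name, and the variable vector is taken as x = [*w*1, …, *wn*, λ], matching the x(1), …, x(n+1) layout used above.

```python
def fpp_constraints(fuzzy):
    """Build the FPP inequality constraints g(x) <= 0 from the entries
    (l, m, u) above the diagonal of a triangular fuzzy comparison matrix.
    fuzzy is an n x n nested list; x = [w_1, ..., w_n, lambda]."""
    n = len(fuzzy)
    cons = []
    for i in range(n):
        for j in range(i + 1, n):
            l, m, u = fuzzy[i][j]
            # (m_ij - l_ij)*lambda*w_j - w_i + l_ij*w_j <= 0
            cons.append(lambda x, i=i, j=j, l=l, m=m:
                        (m - l) * x[n] * x[j] - x[i] + l * x[j])
            # (u_ij - m_ij)*lambda*w_j + w_i - u_ij*w_j <= 0
            cons.append(lambda x, i=i, j=j, m=m, u=u:
                        (u - m) * x[n] * x[j] + x[i] - u * x[j])
    return cons

# The 3x3 matrix of Example 1 later in this section yields the six
# constraints of "networknonlcon3.m".
example1 = [[(1, 1, 1), (1/6, 1/5, 1/4), (1/4, 1/3, 1/2)],
            [None,      (1, 1, 1),       (2, 3, 4)],
            [None,      None,            (1, 1, 1)]]
cons = fpp_constraints(example1)
```

Only the entries above the diagonal are visited, so an (n×n) matrix produces n(n-1) inequality constraints, two per fuzzy entry.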


For instance, for a 3-by-3 fuzzy comparison matrix, only three elements need to be taken into account, namely (*l*12, *m*12, *u*12), (*l*13, *m*13, *u*13) and (*l*23, *m*23, *u*23). The corresponding "networknonlcon.m" file is then given by

```
function [c, ceq] = networknonlcon3(x); 
c = [ 
(m12-l12)*x(4)*x(2)-x(1)+(l12)*x(2); 
     (u12-m12)*x(4)*x(2)+x(1)-(u12)*x(2); 
     (m13-l13)*x(4)*x(3)-x(1)+(l13)*x(3); 
     (u13-m13)*x(4)*x(3)+x(1)-(u13)*x(3); 
     (m23-l23)*x(4)*x(3)-x(2)+(l23)*x(3); 
     (u23-m23)*x(4)*x(3)+x(2)-(u23)*x(3); 
     ]; 
ceq = [ ];
```
The programs for (4×4), (5×5), …, (*n*×*n*) matrices can be developed in the same manner. Several examples of acquiring the local weights are given below.

## *4.1.2. Local weights of a fuzzy pairwise comparison matrix of order 3*

**Example 1.** How to solve the local weights of the following fuzzy pairwise comparison matrix of order 3?

$$\begin{pmatrix} (1,1,1) & (1/6,1/5,1/4) & (1/4,1/3,1/2) \\ (4,5,6) & (1,1,1) & (2,3,4) \\ (2,3,4) & (1/4,1/3,1/2) & (1,1,1) \end{pmatrix}$$
First, the initial parameters of the main function "networkmain" need to be set for this (3×3) matrix, which has three local weights, named *x*(1), *x*(2) and *x*(3), and a consistency index, referred to as *x*(4). That means there are four variables in the linear equality constraint. The corresponding "networkmain.m" file is as follows:

```
Aeq=[1 1 1 0]; 
beq=[1]; 
VLB = [0; 0; 0; -inf]; 
VUB = [ ]; 
x0 = [0.2; 0.5; 0.3; 1]; 
OPT = optimset('LargeScale', 'off'); 
[x, fval] = fmincon('networkf', x0, [ ], [ ],Aeq, beq, VLB, VUB, 'networknonlcon3', OPT)
```
As mentioned before, the negative of the consistency index is taken as the objective. The corresponding objective function file "networkf.m" is as follows:

```
function f = networkf(x); 
f = -x(4);
```

Since the triangular fuzzy comparison matrix is symmetric, only three constraints above the diagonal need to be considered, that is (1/6, 1/5, 1/4), (1/4, 1/3, 1/2) and (2, 3, 4). The "networknonlcon3.m" file for this matrix is as follows:

```
function [c,ceq] = networknonlcon3(x); 
c = [ 
     (1/5-1/6)*x(4)*x(2)-x(1)+(1/6)*x(2); 
     (1/4-1/5)*x(4)*x(2)+x(1)-(1/4)*x(2); 
     (1/3-1/4)*x(4)*x(3)-x(1)+(1/4)*x(3); 
     (1/2-1/3)*x(4)*x(3)+x(1)-(1/2)*x(3); 
     (3-2)*x(4)*x(3)-x(2)+(2)*x(3); 
     (4-3)*x(4)*x(3)+x(2)-(4)*x(3); 
     ]; 
ceq = [ ];
```
**Figure 3.** The local weights and consistency index of Example 1

In Fig. 3, *x* is the optimal solution and fval is the optimal value. The first three values of *x* are the local weights of the matrix, and the last one is the consistency index.


The final step is to run "networkmain" in the command window to acquire the local weights. The local weights *x*(1), *x*(2), *x*(3) are 0.1128, 0.6265 and 0.2607 respectively, as shown in Figure 3. The consistency index *x*(4) is 0.4031 > 0, which means the fuzzy comparison matrix has good consistency and the results are acceptable.
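For readers without Matlab, Example 1 can be cross-checked with scipy's SLSQP solver as an assumed stand-in for "fmincon". This is an illustrative sketch of problem (8) for the 3×3 matrix; the inequality signs are flipped because SLSQP expects constraints of the form g(x) ≥ 0.

```python
import numpy as np
from scipy.optimize import minimize

n = 3
# Entries above the diagonal of the Example 1 matrix: (l, m, u) for (i, j).
fuzzy = {(0, 1): (1/6, 1/5, 1/4), (0, 2): (1/4, 1/3, 1/2), (1, 2): (2, 3, 4)}

cons = [{"type": "eq", "fun": lambda x: x[:n].sum() - 1.0}]  # weights sum to one
for (i, j), (l, m, u) in fuzzy.items():
    # SLSQP expects g(x) >= 0, so each FPP inequality g(x) <= 0 is negated.
    cons.append({"type": "ineq", "fun": lambda x, i=i, j=j, l=l, m=m:
                 -((m - l) * x[n] * x[j] - x[i] + l * x[j])})
    cons.append({"type": "ineq", "fun": lambda x, i=i, j=j, m=m, u=u:
                 -((u - m) * x[n] * x[j] + x[i] - u * x[j])})

res = minimize(lambda x: -x[n],                    # maximise consistency index
               x0=np.array([0.1, 0.6, 0.3, 0.0]),  # feasible start, lambda = 0
               bounds=[(0, None)] * n + [(None, None)],
               constraints=cons, method="SLSQP")
w, lam = res.x[:n], res.x[n]
```

Starting from an x0 that satisfies all constraints with λ = 0, the solver should drive the consistency index positive, mirroring the sign test used in the chapter.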

## *4.1.3. Local weights of a fuzzy pairwise comparison matrix of order 4*

**Example 2.** How to solve the local weights of the following fuzzy pairwise comparison matrix of order 4?

$$\begin{pmatrix} (1,1,1) & (3/2,2,5/2) & (1,3/2,2) & (5/2,3,7/2) \\ (2/5,1/2,2/3) & (1,1,1) & (2/3,1,2) & (3/2,2,5/2) \\ (1/2,2/3,1) & (1/2,1,3/2) & (1,1,1) & (2,5/2,3) \\ (2/7,1/3,2/5) & (2/5,1/2,2/3) & (1/3,2/5,1/2) & (1,1,1) \end{pmatrix}$$
The calculation process for a matrix of order 4 is similar to that of a matrix of order 3. A matrix of order 4 has four local weights, named *x*(1), *x*(2), *x*(3) and *x*(4), and a consistency index, referred to as *x*(5). The following program is the corresponding "networkmain.m" file for this matrix.

```
Aeq=[1 1 1 1 0]; 
beq=[1]; 
VLB = [0; 0; 0; 0; -inf]; 
VUB = [ ]; 
x0 = [0.1; 0.4; 0.2; 0.3; -1]; 
OPT = optimset('LargeScale', 'off'); 
[x, fval] = fmincon('networkf', x0, [ ], [ ],Aeq, beq, VLB, VUB, 'networknonlcon4', OPT)
```
The corresponding objective function file for this matrix is as follows:

```
function f = networkf(x); 
f = -x(5);
```

Six constraints for this fuzzy pairwise comparison matrix need to be calculated, that is, (3/2, 2, 5/2), (1, 3/2, 2), (5/2, 3, 7/2), (2/3, 1, 2), (3/2, 2, 5/2) and (2, 5/2, 3). The "networknonlcon4.m" file for this matrix of order 4 is as follows:

```
function [c, ceq] = networknonlcon4(x); 
c = [ 
(2-3/2)*x(5)*x(2) - x(1) + (3/2)*x(2); 
(5/2 - 2)*x(5)*x(2) + x(1) - (5/2)*x(2); 
(3/2-1)*x(5)*x(3) - x(1) + (1)*x(3); 
(2 - 3/2)*x(5)*x(3) + x(1) - (2)*x(3); 
(3-5/2)*x(5)*x(4) - x(1) + (5/2)*x(4); 
(7/2-3)*x(5)*x(4) + x(1) - (7/2)*x(4); 
(1-2/3)*x(5)*x(3) - x(2) + (2/3)*x(3); 
(2-1)*x(5)*x(3) + x(2) - (2)*x(3); 
(2-3/2)*x(5)*x(4) - x(2) + (3/2)*x(4); 
(5/2-2)*x(5)*x(4) + x(2) - (5/2)*x(4); 
(5/2 - 2)*x(5)*x(4) - x(3) + (2)*x(4); 
(3 - 5/2 )*x(5)*x(4) + x(3) - (3)*x(4); 
]; 
ceq = [ ];
```

Finally, "networkmain" is run in the command window to obtain the local weights. The optimal solutions are *x*(1)=0.3891, *x*(2)=0.2229, *x*(3)=0.2685, *x*(4)=0.1196, *x*(5)=0.4910, as shown in Figure 4. The consistency index *x*(5) is 0.4910 > 0, which means the fuzzy comparison matrix has good consistency and the results are acceptable.

**Figure 4.** The local weights and consistency index of Example 2

where *x* is the optimal solution and fval is the optimal value. The first four values of *x* are the local weights of the matrix, and the last one is the consistency index.
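As a quick plausibility check (an illustrative Python sketch, not part of the chapter's code), the reported optimum can be substituted back into the twelve inequalities of "networknonlcon4.m": every left-hand side should come out non-positive up to the rounding of the printed figures.

```python
# Reported optimum of Example 2, rounded as printed in the chapter.
w1, w2, w3, w4, lam = 0.3891, 0.2229, 0.2685, 0.1196, 0.4910

# Left-hand sides of the twelve inequalities in "networknonlcon4.m".
lhs = [
    (2 - 3/2)*lam*w2 - w1 + (3/2)*w2,  (5/2 - 2)*lam*w2 + w1 - (5/2)*w2,
    (3/2 - 1)*lam*w3 - w1 + (1)*w3,    (2 - 3/2)*lam*w3 + w1 - (2)*w3,
    (3 - 5/2)*lam*w4 - w1 + (5/2)*w4,  (7/2 - 3)*lam*w4 + w1 - (7/2)*w4,
    (1 - 2/3)*lam*w3 - w2 + (2/3)*w3,  (2 - 1)*lam*w3 + w2 - (2)*w3,
    (2 - 3/2)*lam*w4 - w2 + (3/2)*w4,  (5/2 - 2)*lam*w4 + w2 - (5/2)*w4,
    (5/2 - 2)*lam*w4 - w3 + (2)*w4,    (3 - 5/2)*lam*w4 + w3 - (3)*w4,
]
worst = max(lhs)  # ~0: the binding constraints are active at the optimum
```

Several left-hand sides come out within about 1e-4 of zero, which identifies the constraints that are active at the optimum.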

## *4.1.4. Local weights of a fuzzy pairwise comparison matrix of order n*

**Example 3.** How to solve the local weights of the following fuzzy pairwise comparison matrix of order *n*?


$$\begin{pmatrix} (l\_{11}, m\_{11}, u\_{11}) & (l\_{12}, m\_{12}, u\_{12}) & \dots & (l\_{1n}, m\_{1n}, u\_{1n}) \\ (l\_{21}, m\_{21}, u\_{21}) & (l\_{22}, m\_{22}, u\_{22}) & \dots & (l\_{2n}, m\_{2n}, u\_{2n}) \\ \vdots & \vdots & \ddots & \vdots \\ (l\_{n1}, m\_{n1}, u\_{n1}) & (l\_{n2}, m\_{n2}, u\_{n2}) & \dots & (l\_{nn}, m\_{nn}, u\_{nn}) \end{pmatrix}$$

An (*n*×*n*) matrix has (*n*+1) variables, including *n* local weights and one consistency index, named *x*(1), *x*(2), …, *x*(*n*+1). The linear equality constraint is as follows:

$$x(1) + x(2) + \dots + x(n) + 0 \cdot x(n+1) = 1$$

Then, Aeq = [1 1 … 1 0], beq = [1].

The lower bounds of the local weights are zero, and the lower bound of the consistency index is negative infinity. There is no specific upper bound. Then we have

VLB = [0; 0; …; 0; -inf]; VUB = [ ].

The initial values of the variables can be chosen arbitrarily within the feasible region. If different initial values lead to different results, the non-linear problem has multiple optimal solutions. Then, we have

*x*0 = [1; 1; …; 1], [*x*, fval] = fmincon('networkf', *x*0, [ ], [ ],Aeq, beq, VLB, VUB, 'networknonlcon').

The corresponding objective function file networkf.m is as follows:

```
function f = networkf(x); 
f = -x(n+1);
```
For the non-linear constraints, only those elements above the diagonal need to be considered. That is, these triangular fuzzy numbers (*lij*, *mij*, *uij*) need to be taken into account, *i*<*j*; *i*=1, 2, …, *n*; *j*= 2, 3,…, *n*.

The "networknonlcon*n*.m" file in this case is as follows:

```
function [c,ceq] = networknonlconn(x)
% template: substitute the numeric TFN bounds (lij, mij, uij) of the
% elements above the diagonal before running
c = [ 
     (m12-l12)*x(n+1)*x(2)-x(1)+(l12)*x(2); 
     (u12-m12)*x(n+1)*x(2)+x(1)-(u12)*x(2); 
     (m13-l13)*x(n+1)*x(3)-x(1)+(l13)*x(3); 
     (u13-m13)*x(n+1)*x(3)+x(1)-(u13)*x(3); 
      ... 
     (m1n-l1n)*x(n+1)*x(n)-x(1)+(l1n)*x(n);
     (u1n-m1n)*x(n+1)*x(n)+x(1)-(u1n)*x(n); 
     (m23-l23)*x(n+1)*x(3)-x(2)+(l23)*x(3); 
     (u23-m23)*x(n+1)*x(3)+x(2)-(u23)*x(3); 
      ... 
     (m(n-1)n-l(n-1)n)*x(n+1)*x(n)-x(n-1)+(l(n-1)n)*x(n); 
     (u(n-1)n-m(n-1)n)*x(n+1)*x(n)+x(n-1)-(u(n-1)n)*x(n); 
]; 
ceq = [ ];
```
Finally, we run "networkmain" in the command window to obtain the local weights. If the optimal consistency index is positive, the fuzzy comparison matrix has good consistency and the results are acceptable. Otherwise, we need to modify the fuzzy pairwise comparison matrix until it satisfies the consistency requirement.
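The idea behind the FPP objective can be cross-checked by hand for the smallest case. The following Python sketch (standard library only; the names are illustrative, not from the chapter) scans the feasible weights of a 2×2 fuzzy comparison matrix with the single TFN (*l*, *m*, *u*) = (1, 2, 3) above the diagonal: the attainable consistency index is the minimum of the two membership terms, and it peaks where *w*1/*w*2 equals the modal value *m*.

```python
# hand-check of the FPP consistency index for a single TFN (l, m, u) = (1, 2, 3)
l, m, u = 1.0, 2.0, 3.0

def consistency(w1):
    """Consistency index attainable at weights (w1, 1 - w1)."""
    w2 = 1.0 - w1
    r = w1 / w2
    return min((r - l) / (m - l),   # rising side of the TFN
               (u - r) / (u - m))   # falling side of the TFN

# coarse scan over the feasible weights (fmincon would solve this directly)
best_w1 = max((k / 1000.0 for k in range(1, 1000)), key=consistency)
print(best_w1)  # close to m / (1 + m) = 2/3
```

At the optimum the index approaches 1, which is the FPP signature of a perfectly consistent judgement.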

## **4.2. Obtaining limit supermatrix**



A limit supermatrix is a weighted supermatrix in a stable state. The weighted supermatrix may be convergent or not. If it is convergent, the limit supermatrix can be achieved. Unfortunately, it is usually not convergent, and then a periodic result is obtained. Under this condition, the limit supermatrix will be achieved only after the periodicity of the supermatrix is determined.
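A two-cluster toy case makes the periodic behaviour concrete (a minimal Python sketch, not from the chapter): a supermatrix that simply swaps two clusters never converges, its powers alternate with period 2, and averaging over one full period yields the stable limit.

```python
# Tiny illustration of a non-convergent (periodic) weighted supermatrix:
# W swaps the two clusters, so its powers alternate between W and the identity.
W = [[0.0, 1.0], [1.0, 0.0]]
W2 = [[sum(W[i][k] * W[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
print(W2)  # [[1.0, 0.0], [0.0, 1.0]] -- the identity: period 2

# averaging over one full period gives the stable limit supermatrix
limit = [[(W[i][j] + W2[i][j]) / 2 for j in range(2)] for i in range(2)]
print(limit)  # [[0.5, 0.5], [0.5, 0.5]]
```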

#### *4.2.1. The program for acquiring stable limit supermatrix*

To obtain a limit supermatrix, four functions named "fanp\_limitedsupermatrix.m", "fanp\_multiMatrix.m", "fanp\_circulantCheck.m" and "fanp\_equal.m" are developed. The first file is the main program by which the limit supermatrix is computed; supermatrix multiplication is implemented by the second file; the third file determines whether a supermatrix is stable or periodic; and the last one tests whether a supermatrix after an iteration is the same as it was before. The second and third files are called in the main procedure, and the last one is used in the third. File "fanp\_limitedsupermatrix.m" is given as follows:

```
function B = fanp_limitedsupermatrix( ) 
weightedsupermatrix = [0.00000,0.00000,0.00000,0.63400,0.25000,0.40000; 
 0.00000,0.00000,0.00000,0.19200,0.25000,0.20000; 
 0.00000,0.00000,0.00000,0.17400,0.50000,0.40000; 
 0.63700,0.58200,0.13600,0.00000,0.00000,0.00000; 
 0.10500,0.10900,0.65400,0.00000,0.00000,0.00000; 
 0.25800,0.30900,0.21000,0.00000,0.00000,0.00000]; 
newMatrix = fanp_multiMatrix(weightedsupermatrix,weightedsupermatrix); 
matrixRecord = weightedsupermatrix; 
times = 1; 
matrixRecord(:, :, 2) = newMatrix; 
circulantCheckResult = fanp_circulantCheck(matrixRecord, newMatrix); 
while circulantCheckResult(1) == 0 
    times = times + 1; 
    disp(times) 
    newMatrix = fanp_multiMatrix(newMatrix,weightedsupermatrix); 
    matrixRecord(:, :, times+1) = newMatrix; 
    circulantCheckResult = fanp_circulantCheck(matrixRecord, newMatrix); 
end 
disp('total multied :') 
disp(times) 
disp('times')
```

Fuzzy Analytical Network Process Implementation with Matlab 151
where the "weightedsupermatrix" can be any weighted supermatrix derived from an unweighted supermatrix; a semicolon separates each row of the matrix. "newMatrix" is the new matrix after an iteration, the variable "times" counts the iterations, and "matrixRecord" is a three-dimensional array used to record the output of each iteration.

After each iteration, "matrixRecord" is checked to determine whether "newMatrix" is stable or periodic. Whenever "newMatrix" reaches a stable or periodic state, the program terminates; otherwise it keeps iterating. Finally, the total number of iterations is displayed.
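The same iterate-and-check loop can be sketched in a few lines of Python (standard library only; the names are illustrative, not from the chapter). The MATLAB code compares matrices exactly; a small tolerance is used here instead, which is the more robust choice in floating point.

```python
# iterate-and-check loop for the 6x6 weighted supermatrix used above
W = [[0.000, 0.000, 0.000, 0.634, 0.250, 0.400],
     [0.000, 0.000, 0.000, 0.192, 0.250, 0.200],
     [0.000, 0.000, 0.000, 0.174, 0.500, 0.400],
     [0.637, 0.582, 0.136, 0.000, 0.000, 0.000],
     [0.105, 0.109, 0.654, 0.000, 0.000, 0.000],
     [0.258, 0.309, 0.210, 0.000, 0.000, 0.000]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def close(A, B, tol=1e-12):
    return all(abs(x - y) < tol for ra, rb in zip(A, B) for x, y in zip(ra, rb))

record = [W]                 # record[k] holds W^(k+1)
new = matmul(W, W)
period = None
while period is None:
    for i, old in enumerate(record):
        if close(old, new):  # the new power repeats an earlier one: cycle found
            period = len(record) - i
            cycle = record[i:]
            break
    else:
        record.append(new)
        new = matmul(new, W)

# the limit supermatrix is the average of the matrices inside the cycle
n = len(W)
limit = [[sum(M[i][j] for M in cycle) / period for j in range(n)] for i in range(n)]
print('period:', period)
```

Because this supermatrix is block anti-diagonal (the two clusters only influence each other), its powers end up alternating between two matrices and the detected period is 2; for a primitive supermatrix the same loop stops with a period of 1.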

The program for a matrix or supermatrix multiplication is as follows:

```
function array3 = fanp_multiMatrix(array1, array2) 
n = size(array1); 
for i = 1: n
    for j = 1: n
        sum = 0; 
        for m = 1: n
            sum = sum + array1(i, m)*array2(m, j); 
        end 
        array3(i, j) = sum; 
    end 
end
```
where the parameters "array1" and "array2" are the matrices of the current iteration, and the return value "array3" records the result of the new iteration.

The following function "fanp\_circulantCheck" is for determining whether the new supermatrix is stable or periodic. Function "fanp\_equal" will be called in this procedure.

```
function [circulantFlag,rLength,limitedMatrix,cycles] = fanp_circulantCheck(matrixRecord, newMatrix) 
circulantFlag = 0; 
limitedMatrix = [ ]; 
cycles = 1; 
a = size(matrixRecord); 
rLength = a(3); 
for i = rLength-1 : -1 : 1 
    if fanp_equal(matrixRecord(:, :, i),newMatrix) == 1 
        disp('cycle started...') 
        circulantFlag = 1; 
        limitedMatrix = matrixRecord(:,:,i); 
        for j = i + 1 : rLength - 1 
            limitedMatrix = limitedMatrix + matrixRecord(:, :, j); 
            cycles = cycles + 1; 
        end 
        limitedMatrix = limitedMatrix / cycles; 
        disp('stable matrix :') 
        limitedMatrix 
        disp('cycle:') 
        cycles 
        disp('cycle start times:') 
        i 
        return 
    end 
end
```

In the program above, "matrixRecord" and "newMatrix" are the input parameters, and "circulantFlag" is the output and the sign of a cycle. If the supermatrix has a cycle, "1" is returned. The limit supermatrix, the period ("cycles") and the iteration at which the cycle starts will be displayed.

Comparisons are made between the iterative output and the existing results one by one. If any two of them are equal, a cycle of the supermatrix has occurred. In this case, the final limit supermatrix is acquired by dividing the sum of all the supermatrices in the cycle by the value of the period. For a supermatrix without a cycle, we can regard its period as one. Therefore, whatever the result of the comparison, we can use the procedure above to obtain the ultimate limit supermatrix.

The following function "fanp\_equal" is used to test whether the new supermatrix is the same as the former one or not.

```
function equalFlag = fanp_equal(array1,array2) 
n = size(array1); 
for i=1: n
    for j=1: n
        if array1(i, j) ~= array2(i, j) 
            disp('the two matrix are not equal') 
            equalFlag = 0; 
            return 
        end 
    end
end 
disp('the two matrix are equal') 
equalFlag = 1; 
return
```
where elements of "array1" and "array2" will be compared one by one. If they are equal, return 1; otherwise, return 0.
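One caveat worth noting: the comparison is exact, so the loop only stops once the iterates reach a bit-for-bit numerical fixed point. Floating-point arithmetic makes exact equality fragile in general, as a one-line Python check shows; a tolerance-based comparison is the safer variant.

```python
# exact float equality is fragile: 0.1 + 0.2 is not bit-for-bit 0.3
a = 0.1 + 0.2
print(a == 0.3)             # False
print(abs(a - 0.3) < 1e-9)  # True: compare with a tolerance instead
```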

#### *4.2.2. Acquiring the limit supermatrix from a supermatrix without a cycle*

**Example 4.** Acquiring the limit supermatrix from the following weighted supermatrix:

$$\begin{pmatrix} 0.0000 & 0.2000 & 0.0000 & 0.0000 \\ 0.2000 & 0.0000 & 0.5000 & 0.0000 \\ 0.4000 & 0.4000 & 0.0000 & 1.0000 \\ 0.4000 & 0.4000 & 0.5000 & 0.0000 \end{pmatrix}$$

As mentioned before, the weighted supermatrix can be specified in the function "fanp\_limitedsupermatrix". The result obtained by running it in the command window is shown in Fig. 5.

**Figure 5.** The operation result of Example 4

According to the result, the limit supermatrix is

$$\begin{pmatrix} 0.0439 & 0.0439 & 0.0439 & 0.0439 \\ 0.2193 & 0.2193 & 0.2193 & 0.2193 \\ 0.4211 & 0.4211 & 0.4211 & 0.4211 \\ 0.3158 & 0.3158 & 0.3158 & 0.3158 \end{pmatrix}$$

where "cycles" is 1, meaning the limit supermatrix is not periodic, and "cycle start times" is 90, meaning the fixed point appears at the ninetieth iteration. The total number of iterations is 91: the limit supermatrix is already obtained at the end of the ninetieth iteration, though the program runs one more time. The local weights are (0.0439, 0.2193, 0.4211, 0.3158) for this supermatrix.

#### *4.2.3. Acquiring the limit supermatrix from a supermatrix with a cycle*

**Example 5.** Acquiring the limit supermatrix from the following weighted supermatrix:

```
0.000 0.250 0.266 0.086 0.269 0.100 0.250 0.269 0.000 0.000 0.000 0.000 0.133 0.133 0.083
0.083 0.000 0.067 0.268 0.085 0.200 0.125 0.085 0.000 0.000 0.000 0.000 0.133 0.133 0.167
0.250 0.083 0.000 0.146 0.146 0.200 0.125 0.146 0.000 0.000 0.000 0.000 0.067 0.067 0.083
0.123 0.106 0.104 0.000 0.263 0.132 0.230 0.206 0.290 0.315 0.298 0.250 0.000 0.000 0.000
0.052 0.039 0.033 0.233 0.000 0.084 0.128 0.115 0.032 0.035 0.033 0.028 0.000 0.000 0.000
0.052 0.058 0.104 0.087 0.093 0.000 0.071 0.115 0.065 0.057 0.081 0.120 0.000 0.000 0.000
0.075 0.106 0.059 0.127 0.093 0.230 0.000 0.064 0.081 0.057 0.055 0.074 0.000 0.000 0.000
0.032 0.024 0.033 0.053 0.051 0.054 0.071 0.000 0.032 0.035 0.033 0.028 0.000 0.000 0.000
0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.288 0.116 0.086 0.043 0.088 0.058
0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.286 0.000 0.321 0.268 0.150 0.154 0.156
0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.143 0.157 0.000 0.146 0.097 0.056 0.084
0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.071 0.056 0.063 0.000 0.043 0.036 0.036
0.177 0.102 0.102 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.083 0.111
0.100 0.056 0.176 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.111 0.000 0.222
0.056 0.176 0.056 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.223 0.250 0.000
```

**Figure 6.** The operation result of Example 5

Specify the weighted supermatrix in the function "fanp\_limitedsupermatrix" and run the program in the command window; the result is shown in Fig. 6. Here "cycles" is 2, meaning the limit supermatrix is periodic with period 2, and "cycle start times" is 72, meaning the cycle appears at the 72nd iteration; the limit supermatrix is obtained at the end of the 73rd iteration. According to the FPP method, the local weights are (0.1372, 0.1064, 0.1091, 0.1231, 0.0655, 0.0588, 0.0729, 0.0327, 0.0292, 0.0478, 0.0276, 0.0132, 0.0578, 0.0586, 0.0601). This example is actually the weighted supermatrix of the following case in section 5, and the result is the limit supermatrix of the case.
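The convergence claim of Example 4 can be cross-checked with a short, standard-library-only Python sketch: repeated multiplication drives every column of the weighted supermatrix to the vector of local weights. Note one assumption: the `W[3][2]` entry is taken as 0.5 so that every column sums to one, which also reproduces the published local weights.

```python
# power iteration on Example 4's weighted supermatrix (W[3][2] = 0.5 assumed)
W = [[0.0, 0.2, 0.0, 0.0],
     [0.2, 0.0, 0.5, 0.0],
     [0.4, 0.4, 0.0, 1.0],
     [0.4, 0.4, 0.5, 0.0]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

M = W
for _ in range(200):
    M = matmul(M, W)

# every column now approximates the local weights (0.0439, 0.2193, 0.4211, 0.3158)
for j in range(4):
    print([round(M[i][j], 4) for i in range(4)])
```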

```
     S11   S12   S13   S21   S22   S23   S24   S25   S31   S32   S33   S34   S41   S42   S43
S11  0.000 0.750 0.800 0.171 0.538 0.200 0.500 0.538 0.000 0.000 0.000 0.000 0.400 0.400 0.250
S12  0.250 0.000 0.200 0.536 0.170 0.400 0.250 0.170 0.000 0.000 0.000 0.000 0.400 0.400 0.500
S13  0.750 0.250 0.000 0.293 0.293 0.400 0.250 0.293 0.000 0.000 0.000 0.000 0.200 0.200 0.250
S21  0.368 0.318 0.313 0.000 0.526 0.263 0.458 0.412 0.581 0.631 0.595 0.500 0.000 0.000 0.000
S22  0.155 0.118 0.099 0.467 0.000 0.169 0.256 0.230 0.065 0.070 0.066 0.056 0.000 0.000 0.000
S23  0.155 0.173 0.313 0.174 0.186 0.000 0.143 0.230 0.129 0.115 0.162 0.241 0.000 0.000 0.000
S24  0.226 0.318 0.176 0.253 0.186 0.460 0.000 0.128 0.161 0.115 0.110 0.148 0.000 0.000 0.000
S25  0.095 0.073 0.099 0.107 0.102 0.108 0.143 0.000 0.065 0.070 0.066 0.056 0.000 0.000 0.000
S31  0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.575 0.231 0.171 0.130 0.263 0.174
S32  0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.571 0.000 0.644 0.536 0.450 0.460 0.467
S33  0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.286 0.314 0.000 0.293 0.290 0.169 0.253
S34  0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.143 0.111 0.125 0.000 0.130 0.108 0.107
S41  0.535 0.307 0.306 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.250 0.333
S42  0.299 0.168 0.527 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.333 0.000 0.667
S43  0.167 0.525 0.167 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.667 0.750 0.000
```

**Table 4.** The unweighted supermatrix

## **5. Case study**

Suppose that a company has the opportunity to select an optimal project from a number of alternatives. Through a pre-test, three projects, named D1, D2 and D3, need further evaluation. A cross-functional project team consisting of various departments works to select the best project. First, the selection criteria are identified; then, according to the FANP method, the optimal alternative is derived. The decision-making process of choosing an optimal project based on FANP is as follows:

**Step 1.** Model construction and problem structuring. Taking the interaction among dimensions and attribute-enablers into account, a four-level evaluation index system is proposed, as shown in Fig. 2.

**Step 2.** Pairwise comparison matrices among dimensions/attributes are formed by the decision committee using the scales given in Table 1, and the scores of the three projects are determined as well. For instance, Table 2 is the pairwise comparison matrix for profitability (S1), risk (S2), owners (S3) and bidding competition (S4) at the dimension of choosing an optimal project. All the fuzzy comparison matrices are produced in the same way.

| optimal project | S1 | S2 | S3 | S4 |
|---|---|---|---|---|
| S1 | EI | 1/IM2 | 1/MI | 1/IM1 |
| S2 | IM2 | EI | IM1 | IM1 |
| S3 | MI | 1/IM1 | EI | 1/IM1 |
| S4 | IM1 | 1/IM1 | IM1 | EI |

**Table 2.** The comparison matrix using linguistic scales at the dimension of optimal project

Expert opinions will be converted into the corresponding triangular fuzzy numbers, as shown in Table 3.

| optimal project | S1 | S2 | S3 | S4 | Local weights |
|---|---|---|---|---|---|
| S1 | (1, 1, 1) | (1/5, 1/4, 1/3) | (1/4, 1/3, 1/2) | (1/3, 1/2, 1) | 0.0989 |
| S2 | (3, 4, 5) | (1, 1, 1) | (1, 2, 3) | (1, 2, 3) | 0.4240 |
| S3 | (2, 3, 4) | (1/3, 1/2, 1) | (1, 1, 1) | (1/3, 1/2, 1) | 0.2544 |
| S4 | (1, 2, 3) | (1/3, 1/2, 1) | (1, 2, 3) | (1, 1, 1) | 0.2226 |

**Table 3.** The comparison matrix using TFNs at the dimension of optimal project

**Step 3.** Local weights of the factors and sub-factors of the second and third levels of the ANP model, provided in Fig. 2, are calculated by the FPP method. For example, according to equation (8), the local weights of Table 3 can be achieved by solving the following non-linear programming problem:

$$\begin{aligned}
\max\ & \lambda \\
\text{s.t.}\ & (1/20)\lambda w_2 - w_1 + (1/5)w_2 \leq 0; \\
& (1/12)\lambda w_2 + w_1 - (1/3)w_2 \leq 0; \\
& (1/12)\lambda w_3 - w_1 + (1/4)w_3 \leq 0; \\
& (1/6)\lambda w_3 + w_1 - (1/2)w_3 \leq 0; \\
& (1/6)\lambda w_4 - w_1 + (1/3)w_4 \leq 0; \\
& (1/2)\lambda w_4 + w_1 - w_4 \leq 0; \\
& \lambda w_3 - w_2 + w_3 \leq 0; \\
& \lambda w_3 + w_2 - 3w_3 \leq 0; \\
& \lambda w_4 - w_2 + w_4 \leq 0; \\
& \lambda w_4 + w_2 - 3w_4 \leq 0; \\
& (1/6)\lambda w_4 - w_3 + (1/3)w_4 \leq 0; \\
& (1/2)\lambda w_4 + w_3 - w_4 \leq 0; \\
& w_1 + w_2 + w_3 + w_4 = 1; \\
& w_1, w_2, w_3, w_4 \geq 0.
\end{aligned}$$

As mentioned before, this non-linear programming problem can be solved by Matlab. The optimal solutions are *w*1=0.0989, *w*2=0.4240, *w*3=0.2544, *w*4=0.2226, and λ=0.6667, which shows that the experts' opinions are of good consistency and the comparison result is acceptable, as shown in Table 3. All the local weights are acquired in the same manner.

**Step 4.** According to the interdependencies among dimensions and attribute-enablers, an unweighted supermatrix is built, as shown in Table 4.

**Step 5.** Randomize the unweighted supermatrix to derive the weighted supermatrix.

**Step 6.** Multiply the weighted supermatrix by itself until the values of each row converge to the same value in every column of the supermatrix. Then choose any column of the stable limit supermatrix as the local weights of the interdependent indicators, as shown in Table 5. This can be solved by Matlab; the process of calculation is the same as Example 5 in the former section. Every column of the limit supermatrix is identical; the common column is

```
S11    S12    S13    S21    S22    S23    S24    S25    S31    S32    S33    S34    S41    S42    S43
0.137  0.106  0.109  0.123  0.066  0.059  0.073  0.033  0.029  0.048  0.028  0.013  0.058  0.059  0.060
```

**Table 5.** The limit supermatrix

**Step 7.** Calculate the comprehensive weights of each index, as shown in Table 6.

**Step 8.** According to equation (11), the scores of the alternatives can be calculated, D1=0.31, D2=0.33, D3=0.36, as shown in Table 6. Therefore, we can choose project D3 as the best one.

```
index  w      w'     A      w*w'*A  P      S1ij   S2ij   S3ij   d1     d2     d3
S11    0.099  0.538  0.137  0.007   0.104  0.381  0.333  0.286  0.039  0.035  0.030
S12    0.099  0.170  0.106  0.002   0.025  0.300  0.300  0.400  0.008  0.008  0.010
S13    0.099  0.293  0.109  0.003   0.045  0.263  0.368  0.368  0.012  0.017  0.017
S21    0.424  0.361  0.123  0.019   0.268  0.286  0.333  0.381  0.076  0.089  0.102
S22    0.424  0.243  0.066  0.007   0.096  0.267  0.400  0.333  0.026  0.038  0.032
S23    0.424  0.147  0.059  0.004   0.052  0.304  0.304  0.391  0.016  0.016  0.020
S24    0.424  0.147  0.073  0.005   0.065  0.214  0.357  0.429  0.014  0.023  0.028
S25    0.424  0.102  0.033  0.001   0.020  0.375  0.250  0.375  0.008  0.005  0.008
S31    0.254  0.228  0.029  0.002   0.024  0.273  0.318  0.409  0.007  0.008  0.010
S32    0.254  0.571  0.048  0.007   0.099  0.364  0.318  0.318  0.036  0.031  0.031
S33    0.254  0.124  0.028  0.001   0.012  0.286  0.333  0.381  0.004  0.004  0.005
S34    0.254  0.077  0.013  0.000   0.004  0.333  0.286  0.381  0.001  0.001  0.001
S41    0.223  0.170  0.058  0.002   0.031  0.400  0.250  0.350  0.012  0.008  0.011
S42    0.223  0.300  0.059  0.004   0.056  0.421  0.263  0.316  0.023  0.015  0.018
S43    0.223  0.529  0.060  0.007   0.101  0.286  0.333  0.381  0.029  0.034  0.038
The score of alternative Dk                                      0.31   0.33   0.36
```

**Table 6.** The comprehensive weights and the ranking of the alternatives (*w* is the dimension weight, *w*′ the independent local weight, *A* the weight from the limit supermatrix, *P* the normalized comprehensive weight, S*k*ij the score of alternative D*k* on the index, and *dk* = *P*·S*k*ij; the score of each alternative is the column sum of *dk*)

## **6. Conclusions**

Taking the interaction and feedback relationships between criteria and/or indicators into account, an evaluation index system for selecting a construction project is proposed. With the uncertain and inaccurate information arising during the evaluation process taken into consideration, an evaluation and selection model based on the fuzzy analytic network process method is presented. The weights of the indices, including the weights of the criterion-level indicators, the weights of independent sub-indices and the weights of dependent sub-indices, are determined by the fuzzy preference programming method. Meanwhile, an unweighted supermatrix based on the network structure is built for interactional indicators, and the convergent limit supermatrix is calculated after randomizing the unweighted supermatrix. Accordingly, the comprehensive weight of each index and the final score of each alternative can be calculated, and the optimal alternative chosen. A numerical example solved by the proposed method is given as well.

Two key steps in the process of decision-making with FANP are solved by Matlab: one is acquiring the local weights of the fuzzy pairwise comparison matrices; the other is obtaining the limit supermatrix. Matlab is selected for its excellent performance on data processing and matrix operations. Compared with existing research results, the proposed method fully takes into consideration the interaction and feedback relationships between the dimensions and/or attributes, and it uses triangular fuzzy numbers to represent the preference opinions of experts. It helps to make a more accurate and scientific decision.

## **Author details**

Xiaoguang Zhou

*Dongling School of Economics and Management, University of Science and Technology Beijing, Beijing, China*

## **Acknowledgement**

The author is very grateful to the editor, Dr. Vasilios Katsikis, for his valuable suggestions, and to Robert Ulbrich, who gave constructive comments and suggestions on English grammar, which have been very helpful in improving the book. This work was supported by the Fundamental Research Funds for Chinese Central Universities (No. FRF-BR-11-009A).

Fuzzy Analytical Network Process Implementation with Matlab 159

## **7. References**

Amiri, Morteza Pakdin. (2010). Project selection for oil-fields development by using the AHP and fuzzy TOPSIS methods. *Expert Systems with Applications*, vol. 37, pp. 6218-6224

Aragonés-Beltrán, P.; Chaparro-González, F.; Pastor-Ferrando, J. P. & Rodríguez-Pozo, F. (2010). An ANP-based approach for the selection of photovoltaic solar power plant investment projects. *Renewable and Sustainable Energy Reviews*, vol. 14, pp. 249-264

Arunraj, N. S. & Maiti, J. (2010). Risk-based maintenance policy selection using AHP and goal programming. *Safety Science*, vol. 48, pp. 238-247

Ayağ, Z. & Özdemir, R. G. (2009). A hybrid approach to concept selection through fuzzy analytic network process. *Computers & Industrial Engineering*, vol. 56, pp. 368-379

Bhattacharyya, Rupak.; Kumar, Pankaj. & Kar, Samarjit. (2011). Fuzzy R&D portfolio selection of interdependent projects. *Computers and Mathematics with Applications*, vol. 62, pp. 3857-3870

Boran, Semra. & Goztepe, Kerim. (2010). Development of a fuzzy decision support system for commodity acquisition using fuzzy analytic network process. *Expert Systems with Applications*, vol. 37, pp. 1005-1014

Buckley, J. J. (1985). Fuzzy hierarchical analysis. *Fuzzy Sets and Systems*, vol. 17, pp. 233-247

Chang, D. Y. (1996). Applications of the extent analysis method on fuzzy AHP. *European Journal of Operational Research*, vol. 95, pp. 649-655

Chang, P. T. & Lee, J. H. (2012). A fuzzy DEA and knapsack formulation integrated model for project selection. *Computers & Operations Research*, vol. 39, pp. 112-125

Cheng, E. W. L. & Li, H. (2005). Analytic network process applied to project selection. *Journal of Construction Engineering and Management*, vol. 131 (4), pp. 459-466

Csutora, R. & Buckley, J. J. (2001). Fuzzy hierarchical analysis: The Lambda-Max method. *Fuzzy Sets and Systems*, vol. 120, pp. 181-195

Dağdeviren, Metin. & Yüksel, İhsan. (2010). A fuzzy analytic network process (ANP) model for measurement of the sectoral competition level (SCL). *Expert Systems with Applications*, vol. 37, pp. 1939-1945

Dey, Prasanta Kumar. (2006). Integrated project evaluation and selection using multiple-attribute decision-making technique. *International Journal of Production Economics*, vol. 103, pp. 90-103

Ebrahimnejad, S.; Mousavi, S. M.; Tavakkoli-Moghaddam, R.; Hashemi, H. & Vahdani, B. (2011). A novel two-phase group decision-making approach for construction project selection in a fuzzy environment. *Applied Mathematical Modelling*. doi: 10.1016/j.apm.2011.11.050

Fang, Yong.; Chen, Lihua. & Fukushima, Masao. (2008). A mixed R&D projects and securities portfolio selection model. *European Journal of Operational Research*, vol. 185, pp. 700-715

Gabriel, Steven A.; Kumar, Satheesh.; Ordóňez, Javier. & Nasserian, Amirali. (2006). A multiobjective optimization model for project selection with probabilistic considerations. *Socio-Economic Planning Sciences*, vol. 40, pp. 297-313

Ghorbani, S. & Rabbani, M. (2009). A new multi-objective algorithm for a project selection problem. *Advances in Engineering Software*, vol. 40, pp. 9-14

Gutjahr, Walter J.; Katzensteiner, Stefan.; Reiter, Peter.; Stummer, Christian. & Denk, Michaela. (2010). Multi-objective decision analysis for competence-oriented project portfolio selection. *European Journal of Operational Research*, vol. 205, pp. 670-679

Huang, Chi-Cheng.; Chu, Pin-Yu. & Chiang, Yu-Hsiu. (2008). A fuzzy AHP application in government-sponsored R&D project selection. *Omega*, vol. 36, pp. 1038-1052

Huang, Ivy B.; Keisler, Jeffrey. & Linkov, Igor. (2011). Multi-criteria decision analysis in environmental sciences: Ten years of applications and trends. *Science of the Total Environment*, vol. 409, pp. 3578-3594

Huo, Liang-an.; Lan, Jibin. & Wang, Zhongxing. (2011). New parametric prioritization methods for an analytical hierarchy process based on a pairwise comparison matrix. *Mathematical and Computer Modelling*, vol. 54, pp. 2736-2749

Ju, Yanbing.; Wang, Aihua. & Liu, Xiaoyue. (2012). Evaluating emergency response capacity by fuzzy AHP and 2-tuple fuzzy linguistic approach. *Expert Systems with Applications*, vol. 39, pp. 6972-6981

Jung, U. & Seo, D. W. (2010). An ANP approach for R&D project evaluation based on interdependencies between research objectives and evaluation criteria. *Decision Support Systems*, vol. 49, pp. 335-342

Kahraman, Cengiz.; Ertay, Tijen. & Büyüközkan, Gülçin. (2006). A fuzzy optimization model for QFD planning process using analytic network approach. *European Journal of Operational Research*, vol. 171, pp. 390-411

Lee, Hakyeon.; Kim, Chulhyun.; Cho, Hyunmyung. & Park, Yongtae. (2009). An ANP-based technology network for identification of core technologies: A case of telecommunication technologies. *Expert Systems with Applications*, vol. 36, pp. 894-908

Liesiö, Juuso.; Mild, Pekka. & Salo, Ahti. (2007). Preference programming for robust portfolio modeling and project selection. *European Journal of Operational Research*, vol. 181, pp. 1488-1505

Lin, C. T. & Chen, Y. T. (2004). Bid/no-bid decision-making: a fuzzy linguistic approach. *International Journal of Project Management*, vol. 22, pp. 585-593

Mikhailov, L. (2003). Deriving priorities from fuzzy pairwise comparison judgements. *Fuzzy Sets and Systems*, vol. 134, pp. 365-385

Mikhailov, L. (2004). Group prioritization in the AHP by fuzzy preference programming method. *Computers & Operations Research*, vol. 31, pp. 293-301

Mohanty, R. P. (1992). Project selection by a multiple-criteria decision making method: An example from a developing country. *International Journal of Project Management*, vol. 10, pp. 31-38

Pires, Ana.; Chang, Ni-Bin. & Martinho, Graça. (2011). An AHP-based fuzzy interval TOPSIS assessment for sustainable expansion of the solid waste management system in Setúbal Peninsula, Portugal. *Resources, Conservation and Recycling*, vol. 56, pp. 7-21

**Chapter 7**

**Fractal Dimension Estimation Methods for Biomedical Images**

Antonio Napolitano, Sara Ungania and Vittorio Cannata

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/48760

> ©2012 Napolitano et al., licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

**1. Introduction**

The use of medical images has as its main aim the detection of potential abnormalities. This goal is achieved through the synergy between the ability to recognize unique image patterns and the ability to relate those patterns to possible diagnoses. One of the methods used to aid this process is the extraction of an important image feature called texture; texture is an important source of visual information and a key component in image analysis.

The joint evolution of texture analysis algorithms and computer technology has boosted the development of new algorithms for quantifying the textural properties of an image, particularly in medical imaging, in recent years. Promising results have shown the ability of texture analysis methods to extract diagnostically meaningful information from medical images obtained with various imaging modalities, such as positron emission tomography (PET) and magnetic resonance imaging (MRI). Among the texture analysis techniques, fractal geometry has become a tool in medical image analysis. In fact, the concept of fractal dimension can be used in a large number of applications, such as shape analysis [1] and image segmentation [2]. Interestingly, even though self-similarity can hardly be verified in biological objects imaged with a finite resolution, certain similarities at different spatial scales are quite evident. The fractal dimension thus offers a way to describe and characterize the complexity of images, or more precisely of their texture composition.

**2. Fractals**

**2.1. Fractal geometry**

A fractal is a geometrical object characterized by two fundamental properties: *self-similarity* and its *Hausdorff-Besicovitch dimension*. A self-similar object is exactly or approximately similar to a part of itself and can be continuously subdivided into parts, each of which is (at least approximately) a reduced-scale copy of the whole. Furthermore, a fractal generally

