170 Petri Nets – Manufacturing and Computer Science

As shown in Figure 13, the free textual description of the service, the WSDL description, and the operation and port information are used as the input vector of the learning algorithm, while the service classification, the WSDL address, the service operation names and the service SOAP messages are used as the output vector. Because the training data are keywords, the learning algorithm of the LFPN is developed into a learning algorithm of the LFPNSD. The learning algorithm of the learning fuzzy Petri net for Web service discovery is shown in Table 5.

**Learning Algorithm of LFPNSD:**

**Step 1**. Make all weights on arcs be ∅;

**Step 2**. For every service in the training data set, repeat:

2.1 Get the free textual description; draw out the WSDL description and the operation and port names from the WSDL;

2.2 Set the service textual description, the WSDL description, and the operation and port information as the input vector;

2.3 Compare the input with the keywords on the weight of each input arc:

If every keyword in a weight is in the input data, then compute *α*(*dij*, *trk*) according to formula (16); else set *α*(*dij*, *trk*) = 0.

If each of *α*(*dij*, *tr*1), *α*(*dij*, *tr*2), …, *α*(*dij*, *trm-*1) equals 0 and the weight of *trm* is ∅, then set the weight of the arcs and *α*(*dij*, *trm*) = 1 (if the training time is *t* and the weight is ∅, *t* keywords in the service description are gotten and added into the weight).

If each of *α*(*dij*, *tr*1), *α*(*dij*, *tr*2), …, *α*(*dij*, *trm-*1) equals 0 and *trm* doesn't exist, a new transition *trm* and the arcs which connect *trm* with the input and output places are constituted, and *α*(*dij*, *trm*) = 1.

2.4 If *α*(*d*11,*<sup>a</sup>*, *trk*) • *α*(*d*12,*<sup>b</sup>*, *trk*) • *α*(*d*13,*<sup>c</sup>*, *trk*) = max((*α*(*d*11,*<sup>a</sup>*, *tri*) • *α*(*d*12,*<sup>b</sup>*, *tri*) • *α*(*d*13,*<sup>c</sup>*, *tri*)), 1 ≤ *i* ≤ *m*), then *trk* fires.

2.5 If *trk* fired, get a keyword which is in the service description but not in the weight, and add it into the weight.

2.6 If *trk* fired, compare the output training data (service classification, WSDL address, service operation and message) with the weights *wk,*21, *wk,*22, *wk,*23, *wk,*24, and calculate and record the correct rate of the output.

2.7 Update *wk,*21, *wk,*22, *wk,*23, *wk,*24 according to the output of the training data.

**Step 3**. Repeat Step 2 until each *α*(*d*11,*<sup>a</sup>*, *trk*), *α*(*d*12,*<sup>b</sup>*, *trk*), *α*(*d*13,*<sup>c</sup>*, *trk*) meets the requirement value *thk*.

**Table 5.** Learning algorithm of the learning fuzzy Petri net for Web service discovery
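The keyword-matching core of one training episode (Steps 2.3 to 2.5) can be sketched in Python as follows. This is an illustrative reading, not the authors' implementation: the function and variable names are hypothetical, and `alpha` is approximated as a binary weight-subset test standing in for formula (16), with the ∅-weight initialization simplified to learning one keyword per place per episode.

```python
import math

def alpha(weight_kws, input_kws):
    # Step 2.3: alpha is positive only if every arc-weight keyword
    # occurs in the input data (a binary stand-in for formula (16)).
    return 1.0 if weight_kws and weight_kws <= input_kws else 0.0

def train_episode(transitions, inputs):
    """One pass of Step 2 for a single service.

    inputs: dict mapping the three input places ('desc', 'wsdl',
    'op_port') to keyword sets; transitions: list of dicts holding
    the arc-weight keyword sets of each transition."""
    if all(alpha(tr[k], inputs[k]) == 0.0
           for tr in transitions for k in inputs):
        # Step 2.3 (second and third cases): reuse a transition whose
        # weights are still empty, or constitute a new transition.
        fired = next((tr for tr in transitions
                      if all(not tr[k] for k in inputs)), None)
        if fired is None:
            fired = {k: set() for k in inputs}
            transitions.append(fired)
    else:
        # Step 2.4: fire the transition with the maximal product
        # alpha(d11, tr) * alpha(d12, tr) * alpha(d13, tr).
        fired = max(transitions, key=lambda tr: math.prod(
            alpha(tr[k], inputs[k]) for k in inputs))
    # Step 2.5: the fired transition learns one new keyword per place.
    for k in inputs:
        unseen = sorted(inputs[k] - fired[k])
        if unseen:
            fired[k].add(unseen[0])
    return fired
```

Repeated calls with the same service gradually move its keywords onto the fired transition's arc weights, which is the one-by-one learning behavior discussed below.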

Discussion:

1. We discuss the learning rate *γ* in the learning algorithm of LFPNSD. In the algorithm, keywords are learned and added into the weights one by one. Hereby, *Xj* − *W<sup>k</sup>*(new) = 1, and *Xj* − *W<sup>k</sup>*(old) equals the difference between the number of keywords in the input data and the number of keywords on the arc weight. Because *Xj* − *W<sup>k</sup>*(old) is not constant, the learning rate *γ* differs at each learning episode. For example, when the input data has 10 keywords and the arc weight initially has 6 keywords, one keyword is learned from the input data and added into the weight; in this case, the learning rate is 1/(10−6) = 0.25.

2. If keywords were not learned one by one, the keywords on *W*1, *W*2, …, *Wk*, …, *Wm* would not be balanced at the beginning stage of training, and similar but different descriptions could then be learned onto the same weights.

**5.2. The result of simulation**

Two simulations are carried out. One is a more efficient service selection through QoS prediction using the LFPN. The other is a service selection for an appropriate function using the LFPNSD.

#### *Simulation for more efficient Web service selection*

During the process of Web service discovery, there may be several services which have the same function. The one service which has the best QoS needs to be selected. Hereby, the service performance context is used to predict the QoS value for the next execution of a service. If the prediction is precise enough, an appropriate service can be selected.

In this simulation, the LFPN is used as a learning model for predicting service execution time, which is the main part of QoS. There are 11 inputs and 1 output in this model. The 11 inputs comprise the last 10 execution times of a service and the reliability of the service. The output is a prediction of the execution time of the service's next execution. Ten transitions of the LFPN are set at initialization.

A Web service performance dataset is employed for the simulation. This dataset includes 100 publicly available Web services located in more than 20 countries. 150 service users executed about 100 invocations on each Web service, and each service user recorded the execution times and invocation failures in the dataset [27]. We selected one user's invocation data as training data. The last 10 execution times and the reliability of each service were set as the input, and the next execution time was set as the output. 20 sets of training data were selected for each of the 100 services.
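The construction of input/output vectors from one user's invocation record can be sketched as below. The function name is hypothetical; it simply slides a 10-step window over the recorded execution times and appends the service's reliability, as the paragraph above describes.

```python
def make_training_pairs(exec_times, reliability, window=10):
    """Build (input, output) pairs from one user's invocation record:
    each input is the last `window` execution times plus the service's
    reliability (11 values in total), and the output is the execution
    time of the next invocation."""
    pairs = []
    for i in range(len(exec_times) - window):
        x = exec_times[i:i + window] + [reliability]  # 11-element input
        y = exec_times[i + window]                    # next execution time
        pairs.append((x, y))
    return pairs
```

A record of 30 invocations, for instance, yields the 20 training pairs per service mentioned above.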

The initial threshold is set to 0.2, and the threshold is increased by 0.001 at every training episode. The initial learning rate is set to 1/1.1 for every transition; the learning rate is 1/(0.1 + *t*) when a transition has fired *t* times. The prediction result and the training output data are denoted *Outputpredict* and *Outputtraining*, and the prediction precision probability *Prepro* used to evaluate the prediction is computed as:

*Prepro* = 1 − |*Outputpredict* − *Outputtraining*| / *Outputtraining*.
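The two quantities above translate directly into code; the function names are illustrative.

```python
def precision_probability(output_predict, output_training):
    """Prepro = 1 - |Outputpredict - Outputtraining| / Outputtraining."""
    return 1.0 - abs(output_predict - output_training) / output_training

def learning_rate(t):
    """Learning rate after a transition has fired t times: 1/(0.1 + t).
    For t = 1 this gives the initial rate 1/1.1."""
    return 1.0 / (0.1 + t)
```

Note that *Prepro* approaches 1 as the predicted execution time approaches the recorded one, and the learning rate decays as a transition fires more often.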

Three different training stop conditions are set, with threshold values equal to 0.7, 0.8, and 0.9. The simulation results are listed in Table 6. The number of services whose execution time is precisely predicted increases as the training threshold value increases.


In the paper [3], the authors improved the traditional BP algorithm with a three-term method consisting of a learning rate, a momentum factor and a proportional factor, for predicting service performance according to service context information. Here, that model is used to predict service execution time. The training data are the same as the LFPN's, with a learning rate of 0.6, a momentum factor of 0.9, a proportional factor of 1, and 10,000 training iterations. We compare the simulation results of the method of [3], i.e. the conventional method, with those of the LFPN in Table 7. Table 7 shows that the number of Web services predicted with high precision by the LFPN is larger than the number for the BP algorithm, and the number predicted with low precision by the LFPN is smaller than that of the BP algorithm. Hereby, the result of the LFPN is better than the result of the three-term BP algorithm.
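The counts reported in Tables 6 and 7 group services by their final *Prepro* value. A sketch of that bucketing, with the bin edges taken from the tables' header row:

```python
# Precision ranges used in Tables 6 and 7, as lower bin edges:
# 0.99~1, 0.98~0.99, 0.95~0.98, 0.9~0.95, 0.8~0.9, 0.7~0.8, 0.6~0.7, 0~0.6.
EDGES = [0.99, 0.98, 0.95, 0.9, 0.8, 0.7, 0.6, 0.0]

def bucket_counts(precisions):
    """Count how many per-service precision values fall in each range."""
    counts = [0] * len(EDGES)
    for p in precisions:
        for i, lo in enumerate(EDGES):
            if p >= lo:
                counts[i] += 1
                break
    return counts
```

Applying this to the 100 per-service precision values of each method would reproduce one row of Table 7.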

Construction and Application of Learning Petri Net 173

| Precision | 0.99~1 | 0.98~0.99 | 0.95~0.98 | 0.9~0.95 | 0.8~0.9 | 0.7~0.8 | 0.6~0.7 | 0~0.6 |
|---|---|---|---|---|---|---|---|---|
| Number of Web services (*th* = 0.9) | 21 | 14 | 17 | 15 | 10 | 8 | 9 | 6 |
| Number of Web services (*th* = 0.8) | 17 | 12 | 14 | 11 | 10 | 12 | 10 | 14 |
| Number of Web services (*th* = 0.7) | 10 | 10 | 16 | 8 | 8 | 11 | 19 | 18 |

**Table 6.** Prediction ability of LFPN

| Precision | 0.99~1 | 0.98~0.99 | 0.95~0.98 | 0.9~0.95 | 0.8~0.9 | 0.7~0.8 | 0.6~0.7 | 0~0.6 |
|---|---|---|---|---|---|---|---|---|
| Number of Web services using the LFPN (*th* = 0.9) | 21 | 14 | 17 | 15 | 10 | 8 | 9 | 6 |
| Number of Web services using the conventional method | 6 | 7 | 15 | 18 | 20 | 12 | 10 | 12 |

**Table 7.** Prediction ability comparison of the two methods

#### *Simulation for selection of Web service's function*

In this simulation, the LFPNSD is used as the learning model. The benchmark Web services listed at www.xmethods.net are used as training data. Each of these 260 services has a textual description and a WSDL address, and the WSDL description and the operation and port parameters can be obtained from the WSDL. We want to classify the Web services into four classes: 1) business, 2) finance, 3) nets and 4) life services. After training, Web services are invoked by natural language requests [14]. The natural language is decomposed into the three inputs of this model. For example, suppose we want to get a short message service (SMS) for sending a message to a mobile phone. The natural language of this discovery request is input and decomposed into three parts: 1) WSDL description: send a message to a mobile phone; 2) free textual service description: sending a message to a mobile phone through the Internet; 3) operation and port parameters, which may include operation names such as send messages or send message multiple recipients, and port names such as send service SOAP.

In this simulation, we first set 100 transitions for the LFPNSD model. The training stop condition is *thk* (1 ≤ *k* ≤ *m*) ≥ 0.6. The service selection precision is recorded after every training episode. As shown in Figures 14 and 15, using the LFPNSD model and its learning algorithm described in Section 5.1, the precision probability of every service class rises to more than 0.9 when the training time reaches 10.

**Figure 14.** The results of simulation using LFPNSD and its learning algorithm: discovery precision probability for total services

**Figure 15.** The results of simulation using LFPNSD and its learning algorithm: discovery precision probability for classification services

A method for evaluating the proximity of services was proposed in [21]. In that method, a WSDL document is represented as *Dwsdl* = {*t1*, *t2*, …, *twsdl*}, and *Ddesc* = {*t1*, *t2*, …, *tdesc*} represents the textual description of the service. Because the LFPNSD model has a further descriptor of the operation and port parameters, we add this descriptor as *Dop&port* = {*t1*, *t2*, …, *top&port*} in order to compare the two methods. Here, *twsdl*, *tdesc* and *top&port* are the last keywords of the WSDL, the textual description and the operation and port parameters, respectively. In the proximity-of-services method, the descriptor of the natural language request provided by a user is *Duser* and the descriptor of an invoked service is *Dinv*. The three Context Overlaps (CO) are defined as the shared keywords between *Dwsdluser*, *Ddescuser*, *Dop&portuser* and *Dwsdlinv*, *Ddescinv*, *Dop&portinv*. The proximity of the user-requested service and an invoked service is defined as the root of the sum of the squares of the three COs. When a user invocation comes, it is compared with all services in the service repository, and the service in *Dinv* with the biggest proximity value to *Duser* is selected. We compared the discovery precision probability of this method (the conventional method) with that of the proposed LFPNSD. The simulation results are shown in Figure 16. The LFPNSD method yields higher precision probabilities than the conventional method of [21]; especially when the number of services in the repository becomes more than 88, the difference is much more significant. Here, a correct service is selected among 14, 24, 37, 54, 88, and 151 services, just as in [21].
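The proximity measure of [21], extended with the operation-and-port descriptor as described above, can be written out as follows; the keyword sets and function names are illustrative.

```python
import math

def context_overlap(d_user, d_inv):
    """CO: number of keywords shared by a user descriptor and the
    corresponding descriptor of an invoked service."""
    return len(set(d_user) & set(d_inv))

def proximity(user, inv):
    """Proximity of [21], extended with the op&port descriptor:
    root of the sum of the squared context overlaps."""
    return math.sqrt(sum(
        context_overlap(user[k], inv[k]) ** 2
        for k in ("wsdl", "desc", "op_port")))

def select_service(user, repository):
    """Compare the user request with every service in the repository
    and pick the one with the biggest proximity value."""
    return max(repository, key=lambda inv: proximity(user, inv))
```

Because every repository service must be compared against the request at invocation time, this method scales linearly with repository size, whereas the trained LFPNSD evaluates the request against its learned transitions.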

**Figure 16.** Comparison of the two discovery methods

#### **6. Conclusion**

In this chapter, the Learning Petri net (LPN) was constructed based on the High-level Time Petri net and reinforcement learning (RL). RL was used for adjusting the parameters of the Petri net. Two kinds of learning algorithms were proposed for the Petri net's discrete and continuous parameter learning, and a verification of the LPN was shown. The LPN model was applied to dynamical system control: we used the LPN in three robot system controls, including the AIBO and the Guide Dog. The LPN models were built for these robot systems and used to control them, and the robot systems could adjust their parameters while running. The correctness and effectiveness of our proposed model were confirmed in these experiments. The LPN model was extended to a hierarchical LPN model, and this hierarchical LPN model was applied to the QoS optimization of Web service composition. The hierarchical LPN model was constructed based on the stochastic Petri net and RL: the Web service composition was modeled with a stochastic Petri net, and a dynamical Web service composing framework was proposed for optimizing the QoS of the composition. The neural network learning method was applied to the fuzzy Petri net, and the learning fuzzy Petri net (LFPN) was proposed. In contrast with the existing FPN, there are three extensions in the new model: a place can possess different tokens which represent different propositions; these propositions have different degrees of truth toward different transitions; and the truth degree of a proposition can be learned through the adjusting of the arc's weight function. The LFPN model obtains the capability of learning fuzzy production rules through truth degree updating. The LFPN learning algorithm, which introduced the network learning method into Petri net updating, was proposed, and the convergence of the algorithm was analyzed. The LFPN model was applied to the discovery of Web services: first, different service functional descriptions are used to evaluate service function and an appropriate service is selected; second, the context of QoS is used to predict QoS and a more efficient service is selected.

In the future, different intelligent computing methods will be introduced into Petri nets for constructing different types of LPN. The different types of LPN will be compared across application areas, and an efficient LPN model for solving various problems will be founded.

**Author details**

Liangbing Feng, Masanao Obayashi, Takashi Kuremoto and Kunikazu Kobayashi
*Division of Computer Science & Design Engineering, Yamaguchi University, Ube, Japan*

Liangbing Feng
*Shenzhen Institutes of Advanced Technology, Shenzhen, China*

**7. References**

[1] Konar A., Chakraborty U. K. and Wang P. P. Supervised Learning on a Fuzzy Petri Net. Information Sciences 2005; Vol. 172, No. 3-4, 397-416.

[2] Hrúz B., Zhou M. C. Modeling and Control of Discrete-event Dynamic Systems: with Petri Nets and Other Tools. Springer Press. London, UK, 2007.

[3] Cai H., Hu X., Lu Q. and Cao Q. A Novel Intelligent Service Selection Algorithm and Application for Ubiquitous Web Services Environment. Expert Systems with Applications 2009; Vol. 36, No. 2, 2200-2212.

[4] Doya K. Reinforcement Learning in Continuous Time and Space. Neural Computation 2000; Vol. 12, No. 1, 219-245.

[5] Feng L. B., Obayashi M., Kuremoto T. and Kobayashi K. A Learning Petri Net Model Based on Reinforcement Learning. Proceedings of the 15th International Symposium on Artificial Life and Robotics (AROB2010); 290-293.

[6] Feng L. B., Obayashi M., Kuremoto T. and Kobayashi K. An Intelligent Control System Construction Using High-Level Time Petri Net and Reinforcement Learning. Proceedings of International Conference on Control, Automation, and Systems (ICCAS 2010); 535-539.

[7] Feng L. B., Obayashi M., Kuremoto T. and Kobayashi K. A Learning Petri Net Model. IEEJ Transactions on Electrical and Electronic Engineering 2012; Volume 7, Issue 3, pages 274-282.

[8] Frederick J. R. Statistical Methods for Speech. The MIT Press. Cambridge, Massachusetts, USA, 1999.
