142 Petri Nets – Manufacturing and Computer Science

semantics. We go no further here in this chapter.

**9. Conclusions and acknowledgement**

Prof. Carl Adam Petri wrote: "In order to apply net theory with success, a user of net theory can just rely on the fact that every net which he can specify explicitly (draw on paper) can be connected by a short (≤ 4) chain of net morphisms to the physical real world; your net is, in a very precise sense, physically implementable."

We have seen this quotation at the very beginning of this chapter. This chapter tries to make clear how to specify a net that is "in a very precise sense physically implementable". A successful application of net theory starts from a full understanding of the application problem. A well-designed business process ((T,<), B, C) leads to the discovery of workflow logic, of case semantics consistent with the logic, and of the two management pairs derivable from the logic and the case semantics: a three-layer model for workflow modeling.

Without theory, a full understanding of application problems becomes hard. A well-structured business process ((T,<), B, C) starts from a clear distinction between business process and workflow, between business tasks and management tasks, and between transition synchronizations and place synchronizations. Business process and workflow are not synonyms of each other.

A full understanding of application problems and a good grasp of the relevant theories of Petri nets build a road to success.

The author is grateful to the editors of this book. It is a great honor to me as well as a good chance for me to exchange ideas on Petri nets with friends outside my country.

**Author details**

Chongyi Yuan

*School of Electronics Engineering and Computer Science, Peking University, Beijing, China*

**10. References**

[1] C.Y. Yuan: *Petri Nets* (in Chinese). Southeast University Press (1989).
[2] W. Brauer (editor): *Net Theory and Applications*. Springer LNCS Vol. 84.
[3] C.A. Petri: *Nonsequential Processes*. GMD-ISF Report 77 (1977).
[4] C.A. Petri: *Concurrency*. Included in [2] (in Theoretical Computer Science).
[5] C.Y. Yuan: *Principles and Applications of Petri Nets*. Beijing, China: Publishing House of Electronics Industry (2005).
[6] W. Aalst, K. Hee: *Workflow Management*. The MIT Press (2000).
[7] C.Y. Yuan, W. Zhao, S.K. Zhang, Y. Huang: A Three-Layer Model for Business Process — Process Logic, Case Semantics and Workflow Management. *Journal of Computer Science & Technology*, Vol. 22, No. 3, 410–425 (2007).
[8] C.Y. Yuan: *Petri Net Application*, to appear.

**Construction and Application of Learning Petri Net**

**1. Introduction**

Petri nets combine a well-defined mathematical theory with a graphical representation of the dynamic behavior of systems. The theoretical aspect of Petri nets allows precise modeling and analysis of system behavior, while the graphical representation enables visualization of the state changes of the modeled system [32]. Petri nets are therefore recognized as one of the most adequate and sound tools for the description and analysis of concurrent, asynchronous, and distributed dynamical systems. However, traditional Petri nets have no learning capability, so all the parameters that describe the characteristics of the system must be set individually and empirically when a dynamic system is modeled. The fuzzy Petri net (FPN), which combines the Petri net approach with fuzzy theory, is a powerful modeling tool for knowledge systems based on fuzzy production rules. However, it too lacks a learning mechanism, which is a significant weakness when modeling uncertain knowledge systems.
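As a rough illustration of how an FPN encodes a fuzzy production rule, consider the following sketch. The rule, threshold, and certainty factor here are hypothetical, and the min/certainty-factor propagation follows a common FPN formulation rather than any model defined in this chapter:

```python
# Sketch: one fuzzy production rule, "IF d1 AND d2 THEN d3 (with certainty mu)",
# modeled as a single FPN transition with two input places and one output place.

def fire_rule(truth_d1, truth_d2, threshold=0.5, mu=0.9):
    """Propagate truth degrees through one FPN transition.

    The transition is enabled only if every input truth degree exceeds
    its threshold; the output truth degree is the minimum of the inputs
    scaled by the rule's certainty factor mu.
    """
    inputs = (truth_d1, truth_d2)
    if all(t > threshold for t in inputs):   # enabling condition
        return min(inputs) * mu              # truth degree placed in d3
    return None                              # transition not enabled

conclusion = fire_rule(0.8, 0.7)   # min(0.8, 0.7) * mu = 0.7 * 0.9
blocked = fire_rule(0.8, 0.4)      # None: second antecedent below threshold
```

The parameters `threshold` and `mu` are exactly the kind of values that, in a plain FPN, must be set by hand; the learning models discussed in this chapter aim to adjust such parameters automatically.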

At the same time, intelligent computing pursues the development and application of artificial intelligence (AI) methods, i.e. tools that exhibit characteristics associated with intelligence in human behaviour. Reinforcement learning (RL) and artificial neural networks have been widely used in pattern recognition, decision making, data clustering, and so on. Thus, if intelligent computing methods are introduced into Petri nets, Petri nets can be given learning capability, and the performance and applicable areas of Petri net models will be widely expanded: a dynamic system can be modeled by a Petri net with learning capability, and the parameters of the system can then be adjusted by online (data-driven) learning. In the same way, if generalized FPNs are extended by adding neural networks and their learning capability, then FPNs are able to realize self-adapting and self-learning functions, achieving automatic knowledge reasoning and the learning of fuzzy production rules.

© 2012 Feng et al., licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Recently, there has been some research on making Petri nets capable of learning and of optimizing themselves. Global variables have been used to record the whole state of a colored Petri net while it is running [22]; the global variables are optimized, and the colored Petri net is updated according to them. A learning Petri net model that combines a Petri net with a neural network was proposed by Hirasawa et al. and applied to nonlinear system control [10]. In our former work [5, 6], a learning Petri net model based on reinforcement learning (RL) was proposed, in which RL is applied to optimize the parameters of the Petri net; this model has been applied to robot system control. Konar gave an algorithm to adjust the thresholds of an FPN through training instances [1]. In [1], the FPN architecture is built on connectionism, just like a neural network, and the model provides semantic justification of its hidden layer; it is capable of approximate reasoning and of learning from noisy training instances. A generalized FPN model was proposed by Pedrycz et al., which can be transformed into neural networks with OR/AND logic neurons, so that the parameters of the corresponding neural networks can be learned (trained) [24]. Victor and Shen have developed a reinforcement learning algorithm for high-level fuzzy Petri net models [23].

This chapter focuses on combining the Petri net and the fuzzy Petri net with intelligent learning methods to construct the learning Petri net (LPN) and the learning fuzzy Petri net (LFPN), respectively. These are applied to dynamic system control and system optimization. The rest of this chapter is organized as follows. Section 2 elaborates on the construction of the learning Petri net and its learning algorithm. Section 3 describes how to use the learning Petri net model in robot systems. Section 4 constructs an LFPN. Section 5 shows how the LFPN is used in the Web service discovery problem. Section 6 summarizes the Petri net models described in this chapter and the results of their applications, and discusses future trends concerning learning Petri nets.

**2.2. Definition of LPN**

**Definition 1**: *HLTPN* = (*NG*, *C*, *W*, *DT*, *M0*) is a High-Level Time Petri Net, where *NG* = (*P*, *Tr*, *F*) and:

i. *P* is a finite set of nodes, called "Places"; *Tr* is a finite set of nodes, called "Transitions", which is disjoint from *P*: *P* ∩ *Tr* = ∅; *ID*: *Tr* → *N* is a function marking *Tr*, and *tr*1, *tr*2, …, *trm* represent the elements of *Tr*, where *m* is the cardinality of the set *Tr*; *F* ⊆ (*P*×*Tr*) ∪ (*Tr*×*P*) is a finite set of directed arcs, known as the flow relation;

ii. *C* is a finite and non-empty color set for describing different types of data;

iii. *W* is a weight function on *F*. For an arc in *P*×*Tr*, the weight function is *Win*, which decides which colored tokens can go through the arc and enable the transition to fire; these colored tokens are consumed when the transition fires. For an arc in *Tr*×*P*, the weight function is *Wout*, which decides which colored tokens will be generated by the transition and put into the place;

iv. *DT*: *Tr* → *R* is a delay time function: a transition has a time delay between being enabled and firing, or the firing of a transition lasts for that time;

v. *M0*: *P* → ∪*p*∈*P* *μC*(*p*), such that ∀*p* ∈ *P*, *M0*(*p*) ∈ *μC*(*p*), is the initial marking function, which associates a multi-set of tokens of the correct type with each place.

In HLTPN, the weight functions of the input and output arcs of a transition decide the input and output tokens of that transition; these weight functions express the input-output mapping of transitions. If these weight functions can be updated according to changes in the system, the modeling power of the Petri net is expanded. The delay time of an HLTPN expresses how long the pre-state lasts; if the delay time can be learnt while the system is running, the representational power of the Petri net is enhanced. RL is a learning method that interacts with a complex, uncertain environment to achieve an optimal policy for the selection of the learner's actions, and it is well suited to updating the parameters of a dynamic system through interaction with the environment [18]. Hence, we use RL to update the weight functions and the transition delay times of the Petri net to construct the LPN. In other words, the LPN is an expanded HLTPN in which some transitions' input-arc weight functions and delay times carry a value item that records the reward from the environment.

**Definition 2**: LPN has a 3-tuple structure, *LPN* = (*HLTPN*, *VW*, *VT*), where

i. *HLTPN* = (*NG*, *C*, *W*, *DT*, *M0*) is a High-Level Time Petri Net with *NG* = (*P*, *Tr*, *F*);

ii. *VW* (value of weight function): *Win* → *R* is a function marking on *Win*. An arc in *P*×*Tr* has a set of weight functions *Win*, and each *Win* has a reward value item *VW* ∈ the real numbers;

iii. *VT* (value of delay time): *DT* → *R* is a function marking on *DT*. A transition has a set of delay times *DT*, and each *DT* has a reward value item *VT* ∈ the real numbers.

**Figure 1.** An example of LPN model
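To make the structure of Definition 2 concrete, here is a small executable sketch. All names, the ε-greedy selection, and the running-average update below are our own illustrative assumptions, not the chapter's actual learning algorithm (which is elaborated in Section 2): each transition keeps candidate input-arc weight functions and candidate delay times, each paired with a reward value item (the roles of *VW* and *VT*), and an RL-style update adjusts those values from environment rewards.

```python
import random

class LearningTransition:
    """Sketch of one LPN transition: candidate input weight functions (Win)
    and candidate delay times (DT), each paired with a reward value item
    (playing the roles of VW and VT in Definition 2)."""

    def __init__(self, win_candidates, dt_candidates, epsilon=0.1, alpha=0.5):
        self.vw = {w: 0.0 for w in win_candidates}   # VW: Win -> R, initially 0
        self.vt = {d: 0.0 for d in dt_candidates}    # VT: DT -> R, initially 0
        self.epsilon = epsilon                       # exploration rate
        self.alpha = alpha                           # learning rate

    def _select(self, values):
        # epsilon-greedy choice over candidates by their reward value items
        if random.random() < self.epsilon:
            return random.choice(list(values))
        return max(values, key=values.get)

    def choose(self):
        """Pick the weight function and delay time for the next firing."""
        return self._select(self.vw), self._select(self.vt)

    def update(self, win, dt, reward):
        """Move the chosen candidates' value items toward the observed
        environment reward (a simple running-average RL update)."""
        self.vw[win] += self.alpha * (reward - self.vw[win])
        self.vt[dt] += self.alpha * (reward - self.vt[dt])

# Toy environment: it rewards weight function "w2" used with delay 1.0.
random.seed(0)
tr = LearningTransition(["w1", "w2"], [1.0, 2.0])
for _ in range(1000):
    w, d = tr.choose()
    reward = 1.0 if (w == "w2" and d == 1.0) else 0.0
    tr.update(w, d, reward)
```

After training, the value items attached to "w2" and to delay 1.0 dominate, so greedy selection fires the transition with the rewarded weight function and delay time, which is the intuition behind attaching *VW* and *VT* to *Win* and *DT*.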
