280 Petri Nets – Manufacturing and Computer Science

The SPN model is derived from the annotated UML model. Initially, there is a token in each of the places *P1* to *P10*. According to rule 2, to define the upper bound on the number of processes executed in parallel by a network node, we introduce three places *PP1*, *PP2* and *PP3* in the SPN model for the three corresponding physical nodes. Initially, each of these places contains *q* (*q* > 0) tokens, where *q* defines the maximum number of processes that a physical node can handle in parallel at any time. To enforce this upper bound for network node *n1*, we introduce arcs from place *PP1* to transitions *t4*, *t7* and *t8*. That means components *C4*, *C7* and *C8* can start their processing only if a token is available in place *PP1*: the firing of transitions *t4*, *t7* and *t8* depends not only on the availability of tokens in places *P4*, *P7* and *P8*, but also on the availability of a token in place *PP1*. Likewise, to enforce the upper bound of the parallel processing of network nodes *n2* and *n3*, we introduce arcs from place *PP2* to transitions *t2*, *t3* and *t5*, and from place *PP3* to transitions *t1*, *t6*, *t9* and *t10*.
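As a minimal sketch of how the rule-2 capacity places bound parallel execution (this is an illustration, not the authors' generation tool; the marking and arc tables are hand-written for node *n1* with an assumed *q* = 2):

```python
# Rule 2 illustration: PP1 bounds how many of C4, C7, C8 run in parallel on n1.
marking = {"P4": 1, "P7": 1, "P8": 1, "PP1": 2}  # assumed q = 2 tokens in PP1

# Each transition consumes a token from its component place AND from PP1.
inputs = {"t4": ["P4", "PP1"], "t7": ["P7", "PP1"], "t8": ["P8", "PP1"]}

def enabled(t):
    """A transition is enabled only if every input place holds a token."""
    return all(marking[p] > 0 for p in inputs[t])

def fire(t):
    """Consume one token from each input place of t."""
    assert enabled(t), f"{t} is not enabled"
    for p in inputs[t]:
        marking[p] -= 1

fire("t4")
fire("t7")
print(enabled("t8"))  # False: PP1 is empty, so at most q = 2 processes run in parallel
```

Once a component finishes, the token would be returned to *PP1*, re-enabling the remaining transition; that completion step is omitted here for brevity.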

To generate the SPN model from the annotated UML model, we first consider the collaboration roles deployed on processor node *n1*, namely *C4*, *C7* and *C8*. Here component *C7* connects to *C4* and *C8*. The communication cost between components on the same node is zero, but there is still a cost for executing the background process. So, according to rule 3, after completion of the transitions from place *P7* to *d7* (places of component *C7*), from *P4* to *d4* (places of component *C4*) and from *P8* to *d8* (places of component *C8*), the place pairs *d7*, *d4* and *d7*, *d8* are connected by the timed transitions *k8* and *k9*. Collaboration roles *C2*, *C3* and *C5* are deployed on processor node *n2*. Likewise, after completion of the transitions from place *P2* to *d2* (places of component *C2*), from *P3* to *d3* (places of component *C3*) and from *P5* to *d5* (places of component *C5*), the place pairs *d2*, *d3* and *d2*, *d5* are connected by the timed transitions *k3* and *k4* according to rule 3. Collaboration roles *C6*, *C1*, *C9* and *C10* are deployed on processor node *n3*. In the same way, after completion of the transitions from place *P1* to *d1* (places of component *C1*), from *P6* to *d6* (places of component *C6*), from *P9* to *d9* (places of component *C9*) and from *P10* to *d10* (places of component *C10*), the place pairs *d1*, *d6*; *d1*, *d9* and *d9*, *d10* are connected by the timed transitions *k11*, *k12* and *k14* following rule 3. To generate the system-level SPN model, we combine the three SPN models generated for the three processor nodes by considering the interconnections among them. To compose the SPN models of processor nodes *n1* and *n2*, places *d4* and *d3* are connected by the timed transition *k1*, and places *d4* and *d5* by the timed transition *k2*, according to rule 3.
Likewise, to compose the SPN models of processor nodes *n2* and *n3*, places *d2* and *d1* are connected by the timed transition *k5*, and places *d5* and *d1* by the timed transition *k6*, according to rule 3. To compose the SPN models of processor nodes *n1* and *n3*, places *d7* and *d1* are connected by the timed transition *k7*, places *d8* and *d6* by the timed transition *k10*, and places *d8* and *d9* by the timed transition *k13*, again according to rule 3. In this way, the system-level SPN model is derived, and all of these steps are performed automatically. The algorithm for automatic generation of the SPN model from the annotated UML model is beyond the scope of this chapter.

The throughput calculated according to equation (8) for the different deployment mappings, including the optimal deployment mapping, is shown in Table 2. The optimal deployment mapping presented in Table 1 (first entry) also ensures optimality in terms of throughput.
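The rule-3 composition steps above can be sketched as a small model builder. The `SPN` class below is hypothetical (the chapter's actual generation algorithm is not shown), but the place pairs and timed transitions *k1*–*k14* are exactly those listed in the text:

```python
# Hedged sketch of the rule-3 composition: 'done' places d1..d10 are joined
# by timed transitions. The SPN class is an assumed helper, not the authors' tool.
from collections import defaultdict

class SPN:
    """Minimal net representation: place -> list of (timed transition, place)."""
    def __init__(self):
        self.arcs = defaultdict(list)

    def connect(self, src, dst, k):
        """Rule 3: join places src and dst by timed transition k."""
        self.arcs[src].append((k, dst))

net = SPN()
# Intra-node connections for nodes n1, n2 and n3.
for src, dst, k in [("d7", "d4", "k8"), ("d7", "d8", "k9"),
                    ("d2", "d3", "k3"), ("d2", "d5", "k4"),
                    ("d1", "d6", "k11"), ("d1", "d9", "k12"), ("d9", "d10", "k14")]:
    net.connect(src, dst, k)
# Inter-node connections composing the three node-level models.
for src, dst, k in [("d4", "d3", "k1"), ("d4", "d5", "k2"),
                    ("d2", "d1", "k5"), ("d5", "d1", "k6"),
                    ("d7", "d1", "k7"), ("d8", "d6", "k10"), ("d8", "d9", "k13")]:
    net.connect(src, dst, k)

print(sum(len(v) for v in net.arcs.values()))  # 14 timed transitions in total
```

Tabulating the connections this way makes it easy to check that each of the fourteen timed transitions appears exactly once in the composed system-level model.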

**5. Conclusion**

The contribution of this chapter is a framework for the performance evaluation of distributed systems using SPN models. The framework captures system dynamics rapidly and efficiently by utilizing reusable specifications of software components, which are used to generate the SPN performance model. The deployment logic presented here is applied to provide an optimal initial mapping of components to hosts, i.e., the network is considered rather static. Performance-related QoS information is taken into account and included in the SPN model, with equivalent timing and probabilistic assumptions, to enable performance prediction at an early stage of the system development process. However, our eventual goal is to support run-time redeployment of components, thereby keeping the service within an allowed region of parameters defined by the requirements. Because our modeling framework supports this, our logic is a prominent candidate for a robust and adaptive service execution platform for assessing the deployment of service components on an existing physical topology. Future work includes providing tool support for the developed performance modeling framework.
