244 Petri Nets – Manufacturing and Computer Science

independent instructions in the window even without value prediction, regardless of the window size. Again, the main reasons for this behavior are the small number of consuming instructions and the large number of mispredicted branches.

**Figure 10.** Speedup achieved by perfect value prediction with varying instruction window size. [Two panels: (a) *W*=4 and (b) *W*=16, both plotting Speedup against Instruction window size, with curves for *λi*=1, *μi*∈{2, 3} in panel (a) and *λi*=4, *μi*∈{8, 14} in panel (b), each with 1−*pBMISi* = 0.95 or 1.]

In order to investigate the influence of the operational environment, we partitioned the input space into several program classes, each of them differing in at least one aspect: branch rate, consuming instruction rate, probability to classify a branch as easy-to-predict, or probability to classify a value as easy-to-predict. We concluded that the set of programs executed on a machine has a considerable influence on the *perceived IPC.* Since the term *program* may be used interchangeably with the term *instruction stream*, these observations give good reason for analyzing the time-varying behavior of programs in order to find simulation points in applications that yield results representative of the program as a whole. From a user perspective, a machine with more sophisticated prediction mechanisms will not always lead to a higher *perceived performance* compared to a machine with more modest prediction mechanisms but a more favorable operational profile [20,21].

**3. Part B: fluid atoms in P2P streaming networks**

**3.1. Model definition**

In P2P live streaming systems every user (peer) maintains connections with other peers, forming an application-level *logical network* on top of the *physical network*. The video stream is divided into small pieces called *chunks*, which are streamed from the source to the peers; every peer acts as a client as well as a server, forwarding the received video chunks to the next peer after some short buffering. The peers are usually organized in one of two basic types of logical topologies: *tree* or *mesh*. The tree topology forms a structured network of a single tree, as in [22], or of multiple multicast trees, as in [23], while the mesh topology is unstructured and does not form any firm logical construction, but organizes peers in *swarming* or gossiping-like environments, as in [24]. To make greater use of their complementary strengths, some protocols combine these two approaches, forming a hybrid network topology, such as [25]. Members are free to join or leave the system at their own free will (*churn*), which leads to a certain user-driven dynamics resulting in

As a base for our modeling we use the work in [29,30], where several important terms are defined. One of them is the maximum achievable rate that can be streamed to any individual peer at a given time, which is presented in Eq. (26).

$$r_{MAX} = \min\left\{ r_{SERVER}, \frac{r_{SERVER} + \sum_{i=1}^{n} r_{P_i}}{n} \right\} \tag{26}$$

where:

- *rMAX* – maximum achievable streaming rate
- *rSERVER* – upload rate of the server
- *rPi* – upload rate of the *i*-th peer
- *n* – number of participating peers.

Clearly, *rMAX* is a function of *rSERVER*, *rPi* and *n,* i.e. *rMAX* = *φ*(*rSERVER, rP, n*)*.* This maximum achievable rate to a single peer is further referred to as the *fluid function*, or *φ*(). The second important definition is of the term *Universal Streaming*. Universal Streaming refers to the streaming situations when each participating peer receives the video stream with bitrate no less than the video rate, and in [29] it is achievable if and only if:

$$\varphi() \ge r_{VIDEO} \tag{27}$$

Fluid Stochastic Petri Nets: From Fluid Atoms in ILP Processor Pipelines to Fluid Atoms in P2P Streaming Networks 247


where *rVIDEO* is the rate of the streamed video content.
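Eqs. (26) and (27) can be sketched as small helpers. This is a minimal illustration assuming rates in kbps; the function names `r_max` and `universal_streaming` are ours, not from [29,30]:

```python
def r_max(r_server, r_peers):
    """Maximum achievable streaming rate to any single peer, Eq. (26)."""
    n = len(r_peers)  # number of participating peers
    return min(r_server, (r_server + sum(r_peers)) / n)

def universal_streaming(r_server, r_peers, r_video):
    """Universal Streaming condition of Eq. (27): phi() >= r_VIDEO."""
    return r_max(r_server, r_peers) >= r_video
```

For example, with a 1000 kbps server, four peers uploading 300 kbps each and a 500 kbps video, `r_max` is min(1000, 2200/4) = 550 kbps, so Universal Streaming holds.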

Hence, the performance measures of the system are easily obtained by calculating the *Probability for Universal Streaming* (*PUS*).

Now, we add one more function to those previously mentioned to fulfill the requirements of our model. We define the *stream function ψ*() which, instead of the maximum, represents the *actual* streaming rate to any individual peer at any given time, and *ψ*() satisfies:

$$\psi() \le \varphi() \tag{28}$$

#### **3.2. FSPN representation**

The FSPN representation of the P2P live streaming system model that accounts for: network topology, peer churn, scalability, peer average group size, peer upload bandwidth heterogeneity, video buffering, control traffic overhead and admission control for lesser contributing peers, is given in Figure 11. We assume asymmetric network settings where peers have infinite download bandwidths, while stream delay, peer selection strategies and chunk size are not taken into account.

Similarly to [29], we assume two types of peers: high contributing peers (HP) with upload bitrate higher than the video rate, and low contributing peers (LP) with upload bitrate lower than the video rate. Unlike the fluid function *φ*(), which depends on *rSERVER*, *rP,* and *n*, the stream function *ψ*() depends on the level of fluid in the unique fluid place *PB* as well:

$$\psi() = f\left(r_{SERVER}, \#P_{HP}, \#P_{LP}, r_{HP}, r_{LP}, Z_B\right) \tag{29}$$

where *ZB* represents the level of fluid in *PB*.


**Figure 11.** FSPN model of a P2P live video streaming system


The FSPN model in Figure 11 comprises two main parts: the discrete part and the continuous (fluid) part of the net. Single line circles represent discrete places that can contain discrete tokens. The tokens, which represent peers, move via single line arcs to and out of the discrete places. Fluid arcs, through which fluid is pumped, are drawn as double lines to suggest a pipe. The fluid is pumped through fluid arcs and is streamed to and out of the unique fluid place *PB* which represents a single peer buffer. The rectangles represent timed transitions with exponentially distributed firing times, and the thin short lines are immediate transitions. Peer arrival, in general, is described as a stochastic process with exponentially distributed interarrival times, with mean 1/*λ*, where *λ* represents the arrival rate. We make another assumption that after joining the system peers' sojourn times (*T*) are also exponentially distributed. Clearly, since each peer is immediately served after joining the system, we have a queuing network model with an infinite number of servers and exponentially distributed joining and leaving rates. Hence, the mean service time *T* is equal to 1/*μ*, which transferred to FSPN notation leads to the definition of the departure rate as *μ* multiplied by the number of peers that are concurrently being served. Now, *λ* represents peer arrival in general, but the different types of peers do not share the same occurrence probability (*pH* and *pL*). This occurrence distribution is defined by immediate transitions *TAHP* and *TALP* and their weight functions *pH* and *pL*. Hence, HP arrive with rate *λH* = *pH·λ*, and LP arrive with rate *λL* = *pL·λ*, where *pH + pL* = 1. In this particular case *pH = pL = 0.5,* but, if needed, these occurrence probabilities can be altered. This way the model with peer churn is represented by two independent *M/M/∞* Poisson processes, one for each of the different types of peers. The average number of peers that are concurrently being served defines the size of the system as a whole (*SSIZE*) and is derived from queuing theory:

$$S_{SIZE} = \lambda / \mu \tag{30}$$
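Eq. (30) can be checked against a direct event-driven simulation of the *M/M/∞* churn process. The sketch below is stand-alone (it is not the SimPy model used later, and the helper name `mm_inf_mean_population` is ours):

```python
import random

def mm_inf_mean_population(lam, mu, t_end, seed=1):
    """Time-average number of concurrently served peers in an M/M/inf system:
    Poisson arrivals at rate lam, exponential sojourn times with rate mu."""
    rng = random.Random(seed)
    t, n, area = 0.0, 0, 0.0
    next_arrival = rng.expovariate(lam)
    departures = []  # scheduled departure times of peers in service
    while True:
        next_departure = min(departures) if departures else float("inf")
        t_next = min(next_arrival, next_departure, t_end)
        area += n * (t_next - t)  # accumulate population over elapsed time
        t = t_next
        if t >= t_end:
            break
        if next_arrival <= next_departure:
            n += 1  # a peer joins; schedule its departure
            departures.append(t + rng.expovariate(mu))
            next_arrival = t + rng.expovariate(lam)
        else:
            n -= 1  # a peer leaves
            departures.remove(next_departure)
    return area / t_end

# For lam = 2, mu = 1 the long-run average is close to S_SIZE = lam/mu = 2.
```

Note that the departure rate at any instant is *μ·n*, exactly the marking-dependent firing rate used in the FSPN.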

*TA* is a timed transition with exponentially distributed firing times that represents peer arrival; upon firing (with rate *λ*) it puts a token in *PCS*. *PCS* (representing the control server) checks the type of the token and immediately forwards it to one of the discrete places *PHP* or *QLP*. Places *PHP* and *PLP* accommodate the different types of peers in our P2P live streaming system model. *QLP*, on the other hand, represents a queuing station for the LP, which is connected to the place *PLP* through the immediate transition *TI* that is guarded by a *Guard function* G.

The Guard function G is a Boolean function whose value is based on a given condition. The expression of that condition is the argument of the Guard function and serves as the enabling condition for the transition *TI*. If the argument of G evaluates to true, *TI* is enabled; otherwise *TI* is disabled. For the model that does not take admission control into account G is always true, but when we want to evaluate the performance of a system that incorporates admission control we set the argument of the guard function as in Eq. (31):

$$\left\{ G \,\middle|\, \frac{r_{SERVER} + \#P_{HP} \cdot r_{HP} + (\#P_{LP} + 1) \cdot r_{LP}}{\#P_{HP} + \#P_{LP}} \ge r_{VIDEO} + r_{CONTROL} \right\} \tag{31}$$
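In simulation the guard of Eq. (31) is simply a Boolean predicate over the current marking. The sketch below follows Eq. (31) literally (including its denominator *#PHP + #PLP*); the function name, parameter names, and the behavior for an empty system are our own assumptions:

```python
def admission_guard(r_server, n_hp, n_lp, r_hp, r_lp, r_video, r_control):
    """Guard G of Eq. (31): enable T_I (admit one queued LP) only if the
    per-peer rate after the admission still supports universal streaming."""
    if n_hp + n_lp == 0:
        return True  # assumption: an empty system admits the first peer
    rate = (r_server + n_hp * r_hp + (n_lp + 1) * r_lp) / (n_hp + n_lp)
    return rate >= r_video + r_control
```

For instance, with *rSERVER* = 1000, five HP at 600 kbps, four LP at 200 kbps, *rVIDEO* = 500 and *rCONTROL* = 2.4, the left-hand side is 5000/9 ≈ 555.6 kbps, so the waiting LP is admitted; at *rLP* = 100 kbps it drops to 500 kbps and the LP stays queued.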

Transitions *TDHP* and *TDLP* are enabled only when there are tokens in the discrete places *PHP* and *PLP*. These are marking dependent transitions which, when enabled, have exponentially distributed firing times with rates *μ·#PHP* and *μ·#PLP* respectively, where *#PHP* and *#PLP* represent the number of tokens in each discrete place. Upon firing they take one token out of the discrete place to which they are connected.

Concerning the fluid part of the model, we represent bits as atoms of fluid that travel through fluid pipes (the network infrastructure) with rates dependent on the system's state (marking). Besides the stream function as a derivative of several parameters, we identify three separate fluid flows (streams) that travel through the network with different bitrates. The main video stream represents the video data that is streamed from the source to the peers, which we refer to as the *video rate* (*rVIDEO*). The second stream is the play stream, the rate at which each peer plays the streamed video data, referred to as the *play rate* (*rPLAY*). The third stream is the control traffic overhead, referred to as the *control rate* (*rCONTROL*), which describes the exchange of control messages needed for the construction and management of the logical network. As mentioned earlier, transitions *TDHP* and *TDLP* are enabled only when there are tokens in the discrete places *PHP* and *PLP* respectively, and besides consuming tokens when firing, while enabled they constantly pump fluid through the fluid arc to the fluid place. Flow rates of *ψ*() are piecewise constant and depend on the number of tokens in the discrete places and their upload capabilities. Continuous place *PB* represents a single peer's buffer, which is constantly filled with rate *ψ*() and drained with rate (*rPLAY* + *rCONTROL*). *ZB* is the amount of fluid in *PB* and *ZBMAX* is the buffer's maximum capacity. Transition *TSERVER* represents the functioning of the server; it is always enabled (except when there are no tokens in any of the discrete places) and constantly pumps fluid toward the continuous place *PB* with a maximum upload rate of *rSERVER*. Transition *TPLAY* represents the video play rate; it is also always enabled and constantly drains fluid from the continuous place *PB*, with rate *rPLAY*. *TCONTROL*, which represents the exchange of control messages among neighboring peers, is the third transition that is always enabled; it has priority over *TPLAY* and constantly drains fluid from *PB* with rate *rCONTROL*. For further analysis we derived the value of *rCONTROL* from [31], where it is reported that it depends *linearly* on the number of peers in the neighborhood, and that for an *rVIDEO* of 128 kbps the protocol overhead is 2% for a group of 64 users, which leads to a bitrate of 2.56 kbps. Thus, for our performance analysis we assume that peers are organized in neighborhoods with an average size of 60 members, where *rCONTROL* is 2.4 kbps. For the sake of convenience and chart plotting we also define the average upload rate of the participating peers as *rAVERAGE*, which is given in Eq. (32):

$$r_{AVERAGE} = \frac{\#P_{HP} \cdot r_{HP} + \#P_{LP} \cdot r_{LP}}{\#P_{HP} + \#P_{LP}} \tag{32}$$

Since in our model of a P2P live video streaming system we take *rCONTROL* into consideration as well, Universal Streaming is achievable if and only if:

$$\varphi() \ge r_{VIDEO} + r_{CONTROL} \tag{33}$$

#### **3.3. Discrete-event simulation**

The FSPN model of a P2P live video streaming system accurately describes the behavior of the system, but it suffers from state space explosion, and therefore an analytic/numeric solution is infeasible. Hence, we provide a solution to the presented model using a *process-based discrete-event simulation* (DES) language. The simulations are performed using SimPy, a DES package based on the standard Python programming language. It is a simple yet extremely powerful DES package that provides the modeler with simulation processes that can be used for active model components (such as customers, messages or vehicles), and resource facilities (resources, levels and stores) which are used for passive simulation components that form limited capacity congestion points like servers, counters and tunnels. SimPy also provides monitor variables that help in gathering statistics, and the random variables are provided by the standard Python random module.

Now, although we deal with a vast state space, we provide the solution by identifying four distinct cases of state types. These cases are combinations of states of the discrete part and the continuous part of the FSPN, and are presented in Table 2. The rates at which fluid builds up in the fluid place *PB* in each of these four cases can be described with linear differential equations that are given in Eq. (34).

| Case | Condition | *ψ*() | *rPLAY* |
|------|-----------|-------|---------|
| 1 | *ZB* = *ZBMAX* and *φ*() ≥ *rVIDEO* + *rCONTROL* | *rVIDEO* + *rCONTROL* | *rVIDEO* |
| 2 | 0 < *ZB* ≤ *ZBMAX* and *φ*() < *rVIDEO* + *rCONTROL* | *φ*() | *rVIDEO* |
| 3 | 0 ≤ *ZB* < *ZBMAX* and *φ*() ≥ *rVIDEO* + *rCONTROL* | *φ*() | *rVIDEO* |
| 4 | *ZB* = 0 and *φ*() < *rVIDEO* + *rCONTROL* | *φ*() | < *rVIDEO* |

**Table 2.** Cases of state types
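The four cases in Table 2 translate directly into a piecewise definition of *ψ*() and *rPLAY*. The sketch below is our own reading of the table (function and parameter names are ours), with *φ*() supplied as a precomputed value; for case 4 the table only states *rPLAY* < *rVIDEO*, so we assume the peer plays at whatever rate remains after the control traffic:

```python
def stream_rates(phi, z_b, z_b_max, r_video, r_control):
    """Return (psi, r_play) for the four state-type cases of Table 2."""
    demand = r_video + r_control
    if z_b >= z_b_max and phi >= demand:
        return demand, r_video      # case 1: full buffer, surplus capacity
    if phi >= demand:
        return phi, r_video         # case 3: buffer filling up
    if z_b > 0:
        return phi, r_video         # case 2: deficit covered from the buffer
    # case 4: empty buffer, degraded playback (assumed phi - r_control)
    return phi, max(phi - r_control, 0.0)
```

The buffer drift in each case is then *ψ*() − (*rPLAY* + *rCONTROL*), which is the quantity the linear differential equations of Eq. (34) describe.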

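Putting the pieces together, the *Probability for Universal Streaming* can be estimated by sampling the churn process and measuring the fraction of time the condition of Eq. (33) holds. This is a deliberately simplified stand-alone sketch, not the full SimPy model: it ignores buffer dynamics and admission control, treats an empty system as vacuously universal, and all names are our own:

```python
import random

def estimate_pus(lam, mu, p_h, r_server, r_hp, r_lp,
                 r_video, r_control, t_end, seed=1):
    """Fraction of time phi() >= r_video + r_control, with HP/LP peers
    joining (rate lam, split p_h / 1-p_h) and leaving (rate mu each)."""
    rng = random.Random(seed)
    t, good = 0.0, 0.0
    hp, lp = 0, 0
    while t < t_end:
        n = hp + lp
        rate_out = n * mu                      # total departure rate
        dt = rng.expovariate(lam + rate_out)   # time to the next event
        dt = min(dt, t_end - t)
        if n > 0:
            phi = min(r_server, (r_server + hp * r_hp + lp * r_lp) / n)
            if phi >= r_video + r_control:
                good += dt
        else:
            good += dt  # assumption: no peers means no one is underserved
        t += dt
        if t >= t_end:
            break
        if rng.random() < lam / (lam + rate_out):
            if rng.random() < p_h:             # arrival: pick the peer type
                hp += 1
            else:
                lp += 1
        elif rng.random() < hp / n:            # departure: pick a random peer
            hp -= 1
        else:
            lp -= 1
    return good / t_end
```

With only HP (*pH* = 1) whose upload rate exceeds *rVIDEO* + *rCONTROL*, the estimate is 1, as expected; mixing in LP pushes it below 1, which is exactly the effect the admission control of Eq. (31) is meant to counter.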
