#### **4. Analytical study of the model**

This section presents the analytical study of the model. It can be directly addressed by analytical calculation, assuming Poisson arrivals and exponential service times. Perhaps the greatest difficulty lies in determining the abstractions that are necessary to adapt the model to the actual characteristics of the traffic monitoring system. Likewise, we propose a method of calculation based on mean value analysis, which allows us to solve systems with more elements, where the analytical solution may be more complex to develop.

#### **4.1 Equations of the general model**

Reviewing the simplifications developed so far, we can observe that, in the study of this model, one topology is repeated at different levels of abstraction. This topology corresponds to a closed network with two queues in series: first, a simple queue and, second, a queue with multiple servers, as shown in Fig. 6. This structure occurs in every processing stage. Processing at Kernel level is usually not parallelizable and is therefore represented as a simple queue. On the other hand, user-level processing is usually parallelizable and is represented by a multiple queue with p servers, p being the number of processors of the platform. The recurrence of this topology allows us to define a simple model that we can solve analytically.

Fig. 6. Closed queue network simplified for the general model.

In order to get the total throughput of the system, we first calculate the state probabilities of the network, putting N packets in circulation through the closed network and assuming that the upper multiple queue can have at most p packets in service, with the rest waiting in the queue. We also assume that the service capacity of the multiple queue is not necessarily proportional to the number of packets in every state; thus, we denote by μi the service capacity in state i. The state diagram for this topology is presented in Fig. 7. In this model we represent the state i of the multiple queue: N packets flow through the closed network, and the network is in state i when there are i packets in the multiple queue and the remaining N−i in the simple queue. The probability of that state is denoted pi. Finally, the simple queue with rate λ is the packet injection queue.

Fig. 7. State diagram for the multiple queue.

This model adapts perfectly to Ksensor, because we identify a non-parallelizable process, which corresponds to the packet capture, and parallelizable processes, which are related to the analysis. Both μS and μM (in packets per second) can be measured in the laboratory.


It is possible to deduce the balance equations from the state diagram and, subsequently, the expression of the probability of any state i as a function of the probability of the zero state, p0:

$$\forall i = 1,\dots,p \Rightarrow \begin{cases} p_0 \cdot \lambda = p_1 \cdot \mu_1 \\ p_1 \cdot \lambda = p_2 \cdot \mu_2 \\ \quad\vdots \\ p_{p-1} \cdot \lambda = p_p \cdot \mu_p \end{cases} \Rightarrow p_i = \frac{\lambda}{\mu_i} \cdot p_{i-1} \tag{3}$$

$$p_i = \overbrace{\frac{\lambda}{\mu_i} \cdot \frac{\lambda}{\mu_{i-1}} \cdots \frac{\lambda}{\mu_1}}^{i \text{ terms}} \cdot p_0 = \frac{\lambda^i}{\prod_{j=1}^{i} \mu_j} \cdot p_0 \tag{4}$$

From this equation, we deduce pp, the probability of the state p:

$$p_p = \frac{\lambda^p}{\prod_{j=1}^{p} \mu_j} \cdot p_0 \tag{5}$$

For the states with i>p, their probabilities can be expressed as:

$$\forall i = p+1,\dots,N \Rightarrow \begin{cases} p_p \cdot \lambda = p_{p+1} \cdot \mu_p \\ p_{p+1} \cdot \lambda = p_{p+2} \cdot \mu_p \\ \quad\vdots \\ p_{N-1} \cdot \lambda = p_N \cdot \mu_p \end{cases} \Rightarrow p_i = p_{i-1} \cdot \frac{\lambda}{\mu_p} \tag{6}$$

$$p_i = \overbrace{\frac{\lambda}{\mu_p} \cdot \frac{\lambda}{\mu_p} \cdots \frac{\lambda}{\mu_p}}^{(i-p) \text{ terms}} \cdot p_p = \left(\frac{\lambda}{\mu_p}\right)^{i-p} \cdot p_p \tag{7}$$

From this equation we can also derive the expression of the probability pN, which is interesting because it indicates the probability of having all the packets in the multiple queue and none in the simple queue. This probability defines the blocking probability (PB) of the simple queue.


$$p_N = P_B = \frac{\lambda^N}{\mu_p^{N-p} \cdot \prod_{j=1}^{p} \mu_j} \cdot p_0 \tag{8}$$

Applying the normalization condition (the sum of all probabilities must be equal to 1), we can obtain the general expression for p0 and, from it, every state probability.

$$\sum_{i=0}^{N} p_i = 1 = p_0 + \sum_{i=1}^{p} p_i + \sum_{i=p+1}^{N} p_i \tag{9}$$

$$1 = p_0 + p_0 \sum_{i=1}^{p} \frac{\lambda^i}{\prod_{j=1}^{i} \mu_j} + p_0 \frac{\lambda^p}{\prod_{j=1}^{p} \mu_j} \cdot \sum_{i=p+1}^{N} \frac{\lambda^{i-p}}{\mu_p^{i-p}} \tag{10}$$

$$p_0 = \left( 1 + \sum_{i=1}^{p} \frac{\lambda^i}{\prod_{j=1}^{i} \mu_j} + \frac{\lambda^p}{\prod_{j=1}^{p} \mu_j} \cdot \sum_{i=p+1}^{N} \frac{\lambda^{i-p}}{\mu_p^{i-p}} \right)^{-1} \tag{11}$$

Considering equations (8) and (11), we have the following blocking probability pN.

$$p_N = \frac{\lambda^N / \mu_p^{N-p}}{\prod_{j=1}^{p} \mu_j + \sum_{i=1}^{p} \left( \lambda^i \cdot \prod_{j=i+1}^{p} \mu_j \right) + \lambda^p \cdot \sum_{i=p+1}^{N} \frac{\lambda^{i-p}}{\mu_p^{i-p}}} \tag{12}$$

pN is the probability of having the N packets in the multiple queue (the traffic analysis system queue) of Fig. 6, so that there is no packet in the injection queue. This situation describes the losses of the system. In order to calculate the throughput γ of the system, (13) is used.

$$\gamma = \lambda \cdot (1 - p_N) \tag{13}$$
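As a numerical illustration, the following minimal sketch evaluates Eqs. (4), (7), (11) and (13) to obtain the throughput of the general model; all rate values are assumptions chosen for the example, not measurements.

```python
# Minimal sketch of the general model of Fig. 6, following Eqs. (3)-(13).
# lam: injection rate of the simple queue; mu: state-dependent service
# capacities mu_1..mu_p of the multiple queue; N: packets in circulation.

def throughput(lam, mu, N):
    p = len(mu)
    # Unnormalized state probabilities q_i = p_i / p_0, Eqs. (4) and (7).
    q = [1.0]
    for i in range(1, N + 1):
        rate = mu[i - 1] if i <= p else mu[-1]  # mu_i up to p, mu_p beyond
        q.append(q[-1] * lam / rate)
    p0 = 1.0 / sum(q)          # normalization condition, Eq. (11)
    pN = q[N] * p0             # blocking probability, Eq. (8)
    return lam * (1.0 - pN)    # throughput, Eq. (13)

# Hypothetical example: p = 4 processors, N = 8 packets in the network.
print(throughput(lam=100e3, mu=[60e3, 110e3, 150e3, 180e3], N=8))
```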

Taking into account these expressions, which are valid for the general case, we can develop the equations of the model for some particular cases that will be detailed below: the calculation of the equivalence for the traffic monitoring system and the solution for the closed network with incoming traffic load.

#### **4.2 Calculation of the equivalence for the traffic monitoring system**

In general, multiprocessor platforms that implement traffic monitoring systems have certain limitations to parallelize some parts of the processing they do. In particular, Kernel services are not usually parallelizable. This means that, despite having a multiprocessor architecture with p processors that can work in parallel, some services will be performed sequentially and we will lose some of the potential of the platform. For all these reasons, in order to calculate the Norton equivalence for a traffic monitoring system, one must begin with a model that contains a simple queue and a multi-server queue. This is a particular case of the general model studied before.


Fig. 8. Equivalence for the traffic monitoring system.

The simple queue with service rate μK models the non-parallelizable Kernel services, whereas the multiple queue with p servers and service rate μU models the capacity of the system to parallelize certain services. The particularity of this model with regard to the general model is that at most p packets can circulate in the closed network. We are interested in solving this model to work out the equivalent service rate of the traffic monitoring system for every state in the network.

Fig. 9. State diagram for the traffic monitoring system equivalence.

The state diagram makes sense for values of N less than or equal to the number of available processors. The service rate of the traffic monitoring system will be different for every value of N and, given that some services are not parallelizable, in general it does not follow a linear evolution. Following a similar approach to the general case, we can calculate the probability of the highest state, pN, which is useful to estimate the effective service rate of the equivalence.

$$\begin{aligned} \mathbf{p}\_{\rm 0} \cdot \boldsymbol{\mu}\_{\rm K} &= \mathbf{p}\_{1} \cdot \boldsymbol{\mu}\_{\rm U} \\ \mathbf{p}\_{1} \cdot \boldsymbol{\mu}\_{\rm K} &= \mathbf{p}\_{2} \cdot 2 \boldsymbol{\mu}\_{\rm U} \\ &\cdots \\ \mathbf{p}\_{i-1} \cdot \boldsymbol{\mu}\_{\rm K} &= \mathbf{p}\_{i} \cdot \mathbf{i} \cdot \boldsymbol{\mu}\_{\rm U} \end{aligned} \Longrightarrow \mathbf{p}\_{i} = \frac{\boldsymbol{\mu}\_{\rm K}}{\mathbf{i} \cdot \boldsymbol{\mu}\_{\rm U}} \cdot \mathbf{p}\_{i-1} \tag{14}$$

$$\mathbf{p}\_{\mathbf{i}} = \frac{\mu\_{\mathbf{K}}}{\mathbf{i} \cdot \mu\_{\mathbf{U}}} \cdot \mathbf{p}\_{\mathbf{i}-1} = \frac{\mu\_{\mathbf{K}}^2}{\mu\_{\mathbf{U}}^2 \cdot \mathbf{i} \cdot (\mathbf{i}-1)} \cdot \mathbf{p}\_{\mathbf{i}-2} = \dots = \frac{\mu\_{\mathbf{K}}^i}{\mu\_{\mathbf{U}}^i \cdot \mathbf{i}!} \cdot \mathbf{p}\_0 \tag{15}$$


After considering the normalization condition, we can determine the expression for pN:

$$p_0 + \sum_{i=1}^{N} p_i = 1 = p_0 + \sum_{i=1}^{N} \frac{\mu_K^i}{\mu_U^i \cdot i!} \cdot p_0 = p_0 \cdot \left( 1 + \sum_{i=1}^{N} \frac{\rho^i}{i!} \right) \tag{16}$$

$$p_0 = \frac{1}{1 + \sum_{i=1}^{N} \frac{\rho^i}{i!}} \tag{17}$$

$$p_N = \frac{\mu_K^N}{\mu_U^N \cdot N!} \cdot \frac{1}{1 + \sum_{i=1}^{N} \frac{\rho^i}{i!}} = \frac{\rho^N}{N! + \sum_{i=1}^{N} \frac{N! \cdot \rho^i}{i!}} = \frac{\rho^N}{\sum_{i=0}^{N} \frac{N! \cdot \rho^i}{i!}} \tag{18}$$

Thus, taking into account that the throughput of the closed network is the equivalent service rate, we have the following expression:

$$\mu_{eq}(n) = \mu_K \cdot (1 - p_n) \tag{19}$$

$$\mu_{eq}(n) = \mu_K \cdot \left( 1 - \frac{\rho^n}{\sum_{i=0}^{n} \frac{n! \cdot \rho^i}{i!}} \right) \quad \text{with} \quad \rho = \frac{\mu_K}{\mu_U} \tag{20}$$

Note that this case is really a particular case of the general case where λ= μK and μi=i⋅μU.
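As an illustration, the following sketch evaluates Eq. (20) numerically; μK and μU are assumed example values, to be replaced by laboratory measurements.

```python
from math import factorial

# Sketch of Eq. (20): Norton-equivalent service rate with n packets in
# the closed network, given the Kernel rate mu_K and the per-processor
# user rate mu_U (both in packets/s; the values used below are assumed).

def mu_eq(n, mu_K, mu_U):
    rho = mu_K / mu_U
    denom = sum(factorial(n) * rho**i / factorial(i) for i in range(n + 1))
    return mu_K * (1.0 - rho**n / denom)   # Eqs. (18)-(20)

# Equivalent rates for n = 1..p, reused as mu_1..mu_p in Section 4.3.
rates = [mu_eq(n, mu_K=120e3, mu_U=80e3) for n in range(1, 5)]
```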

#### **4.3 Solution for the closed network model with incoming traffic**

The previously explained Norton equivalence takes into consideration the internal problems of the traffic monitoring system related to the non-parallelizable tasks. Now we will complete the model by adding the traffic injection queue to the equivalent system calculated before.

Fig. 10. General model with incoming traffic.

The entire system under traffic load is modelled as a closed network with an upper multiple queue, which is the Norton equivalent queue of the traffic analysis system, and a lower simple queue, simulating the injection of network traffic with rate λ. In this closed network, a finite number N of packets circulate. In general, this number N is greater than p, the number of available processors.

The analytical solution of this model is similar to that proposed for the general model taking into account that the service rates μ1, μ2..., μp will correspond with the calculation of the Norton equivalent model μeq(n, qa) with values of n from 1 to p. This model allows us to calculate the theoretical throughput of the traffic monitoring system for different loads of network traffic.

$$\gamma = \lambda \cdot (1 - p_N) \tag{21}$$

The value of N will allow us to estimate the system losses. There will be losses when the N packets of the closed network are located in the upper queue. At that moment, the traffic injection queue will be empty and, therefore, it will simulate the blocking of the incoming traffic. This situation becomes less likely as the value of N grows.
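Combining the two previous sketches, the theoretical throughput under load can be estimated by feeding the equivalent rates of Section 4.2 into the general solver of Section 4.1, again with assumed, hypothetical rates:

```python
# Continuation of the earlier sketches: throughput of the model of
# Fig. 10, using mu_eq() for the upper queue and throughput() for the
# closed network. p, N and all rates are assumptions for illustration.
p, N = 4, 8
mu = [mu_eq(n, mu_K=120e3, mu_U=80e3) for n in range(1, p + 1)]
for lam in (50e3, 100e3, 150e3):        # offered loads in packets/s
    print(lam, throughput(lam, mu, N))  # gamma, Eq. (21)
```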

#### **4.4 Mean value analysis**


Apart from the analytic solution explained above, we have also considered an iterative method based on the mean value analysis (MVA), in order to simplify the calculations even more. This theorem states that 'when one customer in an N-customer closed system arrives at a service facility he/she observes the rest of the system to be in the equilibrium state for a system with N−1 customers' [Reiser & Lavenberg, 1980]. The application of this theorem to our case requires taking into account the dependencies between some states and others in a complex state diagram, where the state transitions can also be performed with different probabilities, because there are state-dependent service rates.

#### **4.4.1 Probability flows between adjacent states**

The mean value analysis is based on the iterative dependency between the probability of a certain state and the probabilities of the closest states. State transitions are not possible between any two arbitrary states; they can only occur between adjacent states.

$$\mathbf{p(i,j)} = \mathbf{f(p(i-1,j),p(i,j-1))}\tag{22}$$

It is necessary to do a balance of probability flows between states considering the service rates that are dependent on the state of each queue.

Fig. 11. General model for the closed queue network.


To begin with, we consider the general model for the closed queue network. We refer to the simple queue of the model as queue i, and we assume that it is in state i with service rate μi. Likewise, we refer to the multi-server queue as queue j, which is in state j with a state-dependent equivalent service rate μj. A fixed number of packets (N) circulates in the closed network, so that there is a dependence between the states i and j.

Fig. 12. Probability flows between adjacent states with two processors.

Fig. 12 shows the dependencies of the probability of a given state with regard to the adjacent states in the previous stage, which has one packet less.

#### **4.4.2 Iterative calculation method**

Little's law [Little, 1961] can help us to interpret the relationship between the state probabilities at different stages of the closed queue network.

$$E(T) = \frac{E(n)}{\gamma} \tag{23}$$

This formula applies to any queueing system in equilibrium in which users enter the system, spend some time being served and then leave. In the formula, γ can be understood as the throughput of the system, E(T) as the average time spent in the system and E(n) as the average number of users.
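For instance, if the throughput is γ = 100,000 packets per second and the average number of packets in the system is E(n) = 5, then the average time spent in the system is E(T) = 5/100,000 s = 50 μs; the figures are merely illustrative.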

The iterative method applied to the closed queue network is based on solving certain interesting statistics of the network at every stage, using the data obtained in the previous stage. We go from one stage with N packets to the next with N+1 packets, adding one packet to the closed queue network once the system is in a stable condition. Knowing the state probability distribution in stage N, we can calculate the average number of users on each server.


$$\mathbf{E(n\_i)} = \sum\_{i=1}^{N} \mathbf{i} \cdot \mathbf{p\_N(i, N - i)} \qquad \qquad \mathbf{E(n\_j)} = \sum\_{j=1}^{N} \mathbf{j} \cdot \mathbf{p\_N(N - j, j)} \tag{24}$$

We can calculate every state probability in stage N as the ratio between the average stay time in this state, tN(i, j), and the total time for that stage, TTOTAL,N. The total time TTOTAL,N can be calculated as the sum of all the partial times tN(i, j) of each state at that stage.

$$\mathbf{p}\_{\rm N} \left( \mathbf{i}, \mathbf{j} \right) = \frac{\mathbf{t}\_{\rm N} \left( \mathbf{i}, \mathbf{j} \right)}{\mathbf{T}\_{\rm TOTAL, N}} \tag{25}$$

$$\mathbf{T}\_{\text{TOTAL},\text{N}} = \sum\_{\mathbf{i}=\mathbf{0}}^{\text{N}} \mathbf{t}\_{\text{N}} \left(\mathbf{i}, \mathbf{N} - \mathbf{i}\right) \tag{26}$$

If we consider Reiser's theorem [Reiser, 1981], it is possible to set a relation between the state probabilities of a certain state with regard to the ones which are adjacent in the previous stage. In particular, in equilibrium, when we have N packets, the state probability distribution is equal to the distribution at the moment of a new packet arrival at the closed network. In the state diagram of our model, in general, every state depends on two states of the previous stage. We will have the following probability flows:

Transition (i−1, j) → (i, j): a new packet arrives at queue i

$$\mathbf{p}\_{\rm N}^{\prime}(\mathbf{i}, \mathbf{j}) = \mathbf{p}\_{\rm N-1}(\mathbf{i} - \mathbf{1}, \mathbf{j}) \tag{27}$$

Transition (i, j−1) → (i, j): a new packet arrives at queue j

$$p''_N(i, j) = p_{N-1}(i, j-1) \tag{28}$$

Knowing the iterative relations of the probabilities between different stages and relying on Little's formula, we can calculate the average stay time tN(i, j) in the system in a given state, accumulating the average time in queue i, tiN(i, j), and the average time in queue j, tjN(i, j).

$$\mathbf{t}\_{\rm N} \left( \mathbf{i}, \mathbf{j} \right) = \mathbf{t}\_{\rm N}^{\rm i} \left( \mathbf{i}, \mathbf{j} \right) + \mathbf{t}\_{\rm N}^{\rm j} \left( \mathbf{i}, \mathbf{j} \right) \tag{29}$$

Applying Little's law:

$$\mathbf{t}\_{\rm N}^{i} \left( \mathbf{i}, \mathbf{j} \right) = \frac{\mathbf{E}\_{\rm N}^{i} \left( \mathbf{i} \right)}{\mu\_{\rm i} \left( \mathbf{i} \right)} = \frac{\mathbf{p}\_{\rm N}^{\prime} \left( \mathbf{i}, \mathbf{j} \right) \cdot \mathbf{i}}{\mu\_{\rm i} \left( \mathbf{i} \right)} = \frac{\mathbf{p}\_{\rm N-1} \left( \mathbf{i} - \mathbf{1}, \mathbf{j} \right) \cdot \mathbf{i}}{\mu\_{\rm i} \left( \mathbf{i} \right)} \tag{30}$$

$$t^j_N(i, j) = \frac{E^j_N(j)}{\mu_j(j)} = \frac{p''_N(i, j) \cdot j}{\mu_j(j)} = \frac{p_{N-1}(i, j-1) \cdot j}{\mu_j(j)} \tag{31}$$

Considering the probability distribution of the previous stage:

$$t_N(i, j) = \frac{p_{N-1}(i-1, j) \cdot i}{\mu_i(i)} + \frac{p_{N-1}(i, j-1) \cdot j}{\mu_j(j)} \tag{32}$$



Taking into account that, for a given state (i, j), the average stay time of a packet in the queues i and j is given by ti and tj respectively, we can express the probability of that state as:

$$t_i = \frac{i}{\mu_i(i)} \qquad t_j = \frac{j}{\mu_j(j)} \tag{33}$$

$$t_N(i, j) = \frac{p_{N-1}(i-1, j) \cdot i}{\mu_i(i)} + \frac{p_{N-1}(i, j-1) \cdot j}{\mu_j(j)} \tag{34}$$

$$p_N(i, j) = \frac{t_N(i, j)}{T_{TN}} = p_{N-1}(i-1, j) \cdot \frac{t_i}{T_{TN}} + p_{N-1}(i, j-1) \cdot \frac{t_j}{T_{TN}} \tag{35}$$

Eq. 35 allows us to calculate a certain state probability of the stage with N packets from the probabilities of the adjacent states in stage N−1. Using this equation, we can iteratively calculate the state probability distribution for every stage.
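The recursion can be sketched directly in a few lines of code; the state-dependent rate functions below are assumptions for illustration, not measured values:

```python
# Sketch of the iterative MVA of Eqs. (25)-(35) for the closed network of
# Fig. 11. mu_i(i), mu_j(j): state-dependent rates of the simple queue and
# of the multi-server (Norton equivalent) queue, in packets/s.

def mva_distribution(N, mu_i, mu_j):
    """Return the list [p_N(i, N - i) for i = 0..N], built stage by stage."""
    probs = [1.0]                          # stage 0: p_0(0, 0) = 1
    for n in range(1, N + 1):
        def prev(i, j):                    # p_{n-1}(i, j); 0 outside the stage
            return probs[i] if 0 <= i <= n - 1 and i + j == n - 1 else 0.0
        t = []                             # average stay times, Eq. (34)
        for i in range(n + 1):
            j = n - i
            term = prev(i - 1, j) * i / mu_i(i) if i > 0 else 0.0
            if j > 0:
                term += prev(i, j - 1) * j / mu_j(j)
            t.append(term)
        total = sum(t)                     # T_TOTAL,n, Eq. (26)
        probs = [x / total for x in t]     # p_n(i, j), Eq. (25)
    return probs

# Hypothetical rates: constant injection queue, user queue linear up to p = 4.
dist = mva_distribution(8, mu_i=lambda i: 120e3,
                        mu_j=lambda j: min(j, 4) * 80e3)
```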

#### **4.4.3 Adjusting losses depending on N**

The losses of the traffic monitoring system can be measured by assessing the blocking probability of the injection queue. If we consider the general model with an incoming traffic of λ, we can calculate (Eq. 21) the volume of traffic processed by the traffic monitoring system (γ) and also the losses caused (δ).

$$\gamma = \lambda \cdot \left(1 - \mathbf{p}(0, \mathbf{N})\right) \tag{36}$$

$$\delta = \lambda - \gamma = \lambda \cdot p(0, N) \tag{37}$$

If we look at the evolution of the blocking probability of the injection queue as the number of packets N in the closed network increases, we can see how that probability is reduced at each stage. The same conclusion can be derived from Eq. 18.
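Reusing the MVA sketch above, with the same assumed rates, this evolution can be checked numerically:

```python
# Blocking probability p(0, N) of the injection queue as N grows; it
# decreases with N, as discussed above and as Eq. (18) also suggests.
for N in range(1, 11):
    pb = mva_distribution(N, mu_i=lambda i: 120e3,
                          mu_j=lambda j: min(j, 4) * 80e3)[0]
    print(N, pb)
```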

Fig. 13. Evolution of probability flows as a function of N.

A parameter that can be difficult to assess is N, the number of packets that are circulating in the closed network. In general, this parameter depends on specific features of the platform, such as the number of available processors and the ability of the Kernel to accept packets in transit regardless of whether there are processors available at that time.

One conclusion to be drawn from the model is that it is possible to estimate the value of the parameter N by adjusting the losses of the model to those which actually occur in a traffic monitoring system.

#### **5. Model validation**

This section presents the validation tests to verify the correctness of our analytical model. The aim is to compare theoretical results with those obtained by direct measurement in a real traffic monitoring system, in particular, in the Ksensor prototype developed by NQaS, which is integrated into a testing architecture. It is also worth mentioning that, prior to obtaining the theoretical performance results, it is necessary to introduce some input parameters for the model. These initial values will also be extracted from experimental measurements in Ksensor and the testing platform, making use of an appropriate methodology. With all this, we report experimental and analysis results of the traffic monitoring system in terms of two key measures, which are the mean throughput and the CPU utilization. These measures are plotted against the incoming packet arrival rate. Finally, we discuss the results obtained.

#### **5.1 Test setup**

In this section, we describe the hardware and software setup that we use for our evaluation. Our hardware setup (see Fig. 14) consists of four computers: one for traffic generation (injector), a second one for capturing and analysing the traffic (sensor or Ksensor), a third one for packet reception (receiver) and the last one for managing, configuring and launching the tests (manager). They are all physically connected to the same Gigabit Ethernet switch.

Fig. 14. Hardware setup for validation tests.