310 Telecommunications Networks – Current Status and Future Trends

• Analysis stage (analysis queues): it consists of two queues with service rates μAk and μAu. This stage simulates the analysis treatment that the system applies to packets that need further analysis. Not all packets need to be analysed in this stage; for this reason, a rate called qa has been defined to represent the proportion of received packets that have to be analysed.

• Traffic injection stage (injection queue): it is a simple queue of capacity λ. This stage simulates the arrival of packets to the system at rate λ. Since the number of packets in the closed network is fixed to N, the traffic injection queue can be empty. This situation simulates blocking: new packets will not be introduced into the system.

Each service queue has p servers that represent the p processors of a multiprocessor system. The multiple-server representation has been chosen to emphasize the possibility of parallelizing every stage of processing. However, not all stages are necessarily parallelizable. For example, only one processor can access the NIC at a time, so the packet capturing process will not be parallelizable in different instances.

Another aspect to consider is that packets cannot flow freely in the closed network, because the sum of packets attended in the servers that represent the traffic monitoring system can never exceed the maximum number of processors available. Therefore, we have to ensure that, at any time, the maximum number of packets in the upper queues of Fig. 3 is not greater than p (the number of processors).

Considering an arrival rate of λ packets per second, the traffic analysis system will be able to keep pace with only a part of that traffic, defined as q⋅λ. The remaining traffic, (1-q)⋅λ, will be lost because the platform is not capable of dealing with all the packets. The captured traffic, q⋅λ, goes through the system's capture and basic treatment stages. Nevertheless, not all of this traffic will be subject to further analysis, owing to the features of the modelled system. For example, a system in charge of calculating QoS parameters for all connections arriving at a server will discard packets with other destination addresses; monitoring systems that use sampling techniques will discard a percentage of packets; and an intrusion detection system will apply further detection techniques only to suspicious packets. Therefore, the coefficient qa has been defined to represent the proportion of captured packets liable to be further analysed (analysis stage) rather than only treated (treatment stage). Thus, qa⋅q⋅λ of the initial flow will go through the analysis stage.

**3.2 Simplifications of the model**

The model presented in Fig. 3 is very general, but on inspection some simplifications are possible. Simplifications allow us to group different service rates in order to identify parameters that can be analysed easily. Among the possible simplifications, we highlight two: one related to CPU consumption and another related to the equivalent traffic monitoring system.

### **3.2.1 Model of CPU consumption**

This simplification proposes to group all the kernel consumption in a simple queue, whereas user process consumption is represented in a multi-queue. It considers that kernel services are hardly parallelizable.

Fig. 4. Model of CPU consumption.
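As a minimal illustration of the traffic split described above, the following sketch computes the lost, treated-only and analysed flows. The numeric values of λ, q and qa are assumptions chosen for illustration, not measurements from the chapter.

```python
# Illustrative traffic split of the model: all numbers are assumed.
lam = 100_000.0   # offered load λ, in packets per second
q = 0.8           # fraction of traffic the platform manages to capture
qa = 0.25         # fraction of captured packets needing deep analysis

lost = (1.0 - q) * lam        # (1-q)·λ, dropped by the platform
captured = q * lam            # q·λ, goes through capture and treatment
analysed = qa * captured      # qa·q·λ, also traverses the analysis stage
treated_only = (1.0 - qa) * captured
```

The two splits conserve flow: the captured and lost rates add up to λ, and the analysed and treated-only rates add up to the captured rate.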

The equivalent service rates can be calculated as follows.

$$\frac{1}{\mu_{k}} = \frac{1}{\mu_{pk}} + \frac{1}{\mu_{kk}} = \frac{q_{a}}{\mu_{Ak}} + \frac{1}{\mu_{Tk}} + \frac{1}{\mu_{kk}} \tag{1}$$

$$\frac{1}{\mu_{u}} = \frac{1}{\mu_{pu}} = \frac{q_{a}}{\mu_{Au}} + \frac{1}{\mu_{Tu}} \tag{2}$$
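A short sketch of Eqs. (1) and (2) may help: it composes the per-stage rates into the equivalent kernel rate μk and user rate μu. The numeric stage rates below are assumptions chosen only to exercise the formulas.

```python
# Eqs. (1) and (2): equivalent service rates from per-stage rates.
# All numeric rates are illustrative assumptions (packets/second).
def equivalent_rate(qa, mu_A, mu_T, mu_extra=None):
    """1/μeq = qa/μA + 1/μT, plus an optional extra 1/μkk term
    for the kernel queue of Eq. (1)."""
    inv = qa / mu_A + 1.0 / mu_T
    if mu_extra is not None:
        inv += 1.0 / mu_extra       # the 1/μkk term of Eq. (1)
    return 1.0 / inv

qa = 0.25                           # fraction of packets analysed
mu_k = equivalent_rate(qa, mu_A=50_000.0, mu_T=80_000.0, mu_extra=120_000.0)
mu_u = equivalent_rate(qa, mu_A=40_000.0, mu_T=60_000.0)
```

Because the mean service times (the inverse rates) add, μk always ends up below each of μTk and μkk individually.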

#### **3.2.2 Model of the equivalent traffic monitoring system**

The main feasible simplification that preserves the identity of the system is to replace the whole system with an equivalent multi-server queue by applying the Norton equivalence [Chandy et al., 1975]. The Norton theorem establishes that, in networks with a product-form solution, any subnetwork can be replaced by a queue with a state-dependent service capacity. Our theoretical model has exponential service rates in all stages, so, applying the Norton equivalence, the new equivalent queue will have a state-dependent service capacity μeq(n,qa).

The simple queue μS of Fig. 5 represents the non-parallelizable processes of the system, and the multiple queue μM represents the parallelizable ones.

Fig. 5. Traffic monitoring system to which the Norton equivalence is applied.
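As an illustration of how the equivalent queue could be obtained, the sketch below computes the throughput of the closed two-station network of Fig. 5 (simple queue μS, p-server queue μM) with n circulating packets, using its product-form solution; by the Norton theorem, that throughput is the state-dependent rate μeq(n). The rates mu_S and mu_M and the values of p and n are assumptions for illustration, not measurements.

```python
# Norton equivalence sketch: μeq(n) as the throughput of the closed
# subnetwork of Fig. 5.  mu_S, mu_M and p are illustrative assumptions.
def norton_mu_eq(n, mu_S, mu_M, p):
    """Closed network with n packets: a single-server queue (rate mu_S)
    and a p-server queue (per-server rate mu_M).  Returns its
    throughput, i.e. the equivalent state-dependent rate μeq(n)."""
    # Product-form weight of k packets at the multi-server station
    # (scaled by mu_S**n so that only ratios matter).
    weights = []
    for k in range(n + 1):
        w = mu_S ** k
        for j in range(1, k + 1):
            w /= min(j, p) * mu_M   # departure rate with j packets present
        weights.append(w)
    total = sum(weights)
    # The simple queue is busy unless all n packets sit at the multi queue.
    busy = 1.0 - weights[-1] / total
    return mu_S * busy

mu_eq = [norton_mu_eq(n, mu_S=1.5, mu_M=1.0, p=2) for n in range(1, 6)]
```

With these example values, μeq(n) grows with n and stays below μS, which is exactly the saturating, state-dependent service capacity the Norton theorem predicts for the equivalent queue.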

Modelling a Network Traffic Probe Over a Multiprocessor Architecture 313

Fig. 7. State diagram for the multiple queue (states 0, 1, …, p-1, p, p+1, …, N-1, N, with arrival rate λ and service rates μ1, μ2, …, μp-1, μp; from state p onwards the service rate remains μp).

It is possible to deduce the balance equations from the diagram of states and, subsequently, the expression of the probability of any state i as a function of the probability of the zero state p0:

$$p_0\cdot\lambda = p_1\cdot\mu_1 \qquad p_1\cdot\lambda = p_2\cdot\mu_2 \qquad \dots \qquad p_{i-1}\cdot\lambda = p_i\cdot\mu_i \quad \forall i = 1,\dots,p \tag{3}$$

$$p_i = \underbrace{\frac{\lambda}{\mu_i}\cdot\frac{\lambda}{\mu_{i-1}}\cdot\dots\cdot\frac{\lambda}{\mu_1}}_{i\ \text{terms}}\cdot p_0 = \frac{\lambda^i}{\prod_{j=1}^{i}\mu_j}\cdot p_0 \quad \forall i = 1,\dots,p \tag{4}$$

From this equation, we deduce pp, the probability of the state p:

$$p_p = \frac{\lambda^p}{\prod_{j=1}^{p}\mu_j}\cdot p_0 \tag{5}$$

For the states with i>p, their probabilities can be expressed as:

$$p_p\cdot\lambda = p_{p+1}\cdot\mu_p \qquad p_{p+1}\cdot\lambda = p_{p+2}\cdot\mu_p \qquad \dots \qquad p_{i-1}\cdot\lambda = p_i\cdot\mu_p \quad \forall i = p+1,\dots,N \tag{6}$$

$$p_i = \underbrace{\frac{\lambda}{\mu_p}\cdot\dots\cdot\frac{\lambda}{\mu_p}}_{(i-p)\ \text{terms}}\cdot p_p = \left(\frac{\lambda}{\mu_p}\right)^{i-p}\cdot p_p \quad \forall i = p+1,\dots,N \tag{7}$$

From this equation we can also derive the expression of the probability pN, which is interesting because it indicates the probability of having all the packets in the multiple queue and none in the simple queue. This probability defines the blocking probability (PB) of the simple queue: $p_N = \left(\lambda/\mu_p\right)^{N-p}\cdot p_p$.
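Equations (3)-(7) can be checked numerically. The sketch below builds the state probabilities of the multiple queue directly from the balance recurrence and normalises them; the rates λ and μ1..μp and the sizes p and N are illustrative assumptions, not values from the chapter.

```python
# Numerical check of Eqs. (3)-(7): state probabilities of the multiple
# queue.  λ, μ1..μp, p and N below are illustrative assumptions.
def state_probabilities(lam, mu, p, N):
    """mu[i-1] is the service rate μi in state i for i = 1..p;
    states above p keep the rate μp.  Returns [p0, p1, ..., pN]."""
    unnorm = [1.0]                              # p0, up to normalisation
    for i in range(1, N + 1):
        rate = mu[i - 1] if i <= p else mu[p - 1]
        unnorm.append(unnorm[-1] * lam / rate)  # balance: p(i-1)·λ = p(i)·μ
    total = sum(unnorm)
    return [x / total for x in unnorm]

p, N = 4, 10
lam = 30_000.0                                  # packets per second
mu = [12_000.0 * i for i in range(1, p + 1)]    # e.g. μi = i·μ1
probs = state_probabilities(lam, mu, p, N)
PB = probs[N]   # blocking probability of the simple queue, from Eq. (7)
```

The ratio probs[N] / probs[p] reproduces the geometric factor (λ/μp)^(N-p) of Eq. (7), and the probabilities sum to one by construction.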

This model adapts perfectly to Ksensor, because we can identify a non-parallelizable process, which corresponds to packet capture, and parallelizable processes, which are related to analysis. Both μS and μM (in packets per second) can be measured in the laboratory.
