308 Telecommunications Networks – Current Status and Future Trends

**2.3 Network interfaces polling** 

The softIRQ handler takes packets out of the ring buffer. In Ksensor, after taking a packet out of the ring buffer, the handler stores it in a special queue called the packet queue, as we can see in Fig. 2.

The system decides when a softIRQ handler is executed. When its execution starts, the handler polls the first interface in the poll list and starts taking packets out of its ring buffer. In each poll, the softIRQ handler can only pull out packets up to a maximum number called the quota. When it reaches the quota, it has to poll the next interface in the poll list. If an interface has no more packets, it is deleted from the poll list. In addition, within a single softIRQ, the handler can only take out a maximum number of packets called the budget. When the handler reaches this maximum, the softIRQ finishes; if there are interfaces left in the poll list, a new softIRQ is scheduled. Furthermore, a softIRQ may take one jiffy (4 ms) at most. If it consumes this time and there are still packets to pull out, the softIRQ finishes and a new one is scheduled.

There is only one poll list in each processor. When the hardIRQ handler is called, it registers the network interface in the poll list of the processor that is executing the handler. The softIRQ handler is executed on the same processor. At any given time, a network interface can only be registered in one poll list.

Ksensor has a mechanism to improve performance in case of congestion. When the packet queue reaches a maximum number of stored packets, this mechanism forces NAPI to stop capturing packets. This means that all the resources of all the processors are dedicated to analysing instances. When the number of packets in the packet queue falls to a fixed threshold value, the system starts capturing again.

**3. Model for a traffic monitoring system** 

This section introduces an analytical model which works out some characteristics of network traffic analysis systems. There are several alternatives for modelling this type of system theoretically: for example, queuing theory models, Petri nets and even mixed models. The ultimate goal is to have a theoretical model that allows us to study the performance of a network traffic analysis system, considering its most representative parameters: throughput, number of processors, analysis load and so on.

**3.1 Description of the model**

We have chosen a theoretical model based on closed queuing networks. It is able to represent accurately the behaviour of a system in charge of analysing network traffic running on a multiprocessor architecture. Queuing theory allows us to develop models in order to study the performance of computer systems [Kobayashi, 1978]. The proposed model consists of a closed queuing network where CPU consumption is related to the service capacity of the queues.

It is worth mentioning that both the flowing traffic and the processing capacity at the nodes are modelled by Poisson arrival rates and exponential service rates. Poisson distributions are considered acceptable for modelling incoming traffic [Barakat et al., 2002]. This assumption can be relaxed to more general processes such as MAPs (Markov Arrival Processes) [Altman et al., 2000], or non-homogeneous Poisson processes, but we will keep working with it for simplicity of the analysis. Regarding service rate modelling, although

The proposed queuing network for modelling a traffic monitoring system is showed in Fig. 3. It consists of two parts; the upper one has a set of multi-server queues which represents the processing ability of the traffic analysis system. The lower part models the injection of network traffic with λ rate with a simple queue. The number of packets that are permitted in the closed queue network is fixed and its value is N.

Fig. 3. General model for the traffic analysis system.
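To make the closed-network idea concrete, the sketch below reduces the model of Fig. 3 to its simplest case: the injection queue (rate λ) feeding a single aggregated p-server analysis station (rate μ per server), with a fixed population of N packets. This collapses the chapter's several processing stages into one station, so it is only an illustration of the closed structure, not the full model; the function name and parameter values are ours. The state n (packets at the analysis station) then forms a birth-death chain whose steady state gives the throughput.

```python
def closed_network_throughput(lam, mu, p, N):
    """Throughput of a closed two-station cycle: an injection queue of
    rate lam and a p-server analysis station of rate mu per server,
    with a fixed population of N packets. State n = packets at the
    analysis station; birth rate lam (while n < N), death rate
    min(n, p) * mu."""
    # Unnormalised birth-death steady-state weights.
    w = [1.0]
    for n in range(1, N + 1):
        w.append(w[-1] * lam / (min(n, p) * mu))
    z = sum(w)
    pi = [x / z for x in w]
    # Packets flowing through the analysis station per second.
    return sum(pi[n] * min(n, p) * mu for n in range(N + 1))

# Illustrative numbers: 1000 pkt/s offered, 4 processors at 300 pkt/s each.
print(closed_network_throughput(lam=1000.0, mu=300.0, p=4, N=8))
```

Note that the throughput is always below both λ and p·μ: the fixed population N is what couples the injection rate to the processing capacity.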

Some stages are divided into multiple queues, due to the need to differentiate the processing done in the Kernel and the processing done at user level. Although the process code is usually running on the user level, system calls that require Kernel services are also used.
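The kernel/user split above can be observed directly on a Unix-like system. The sketch below (illustrative only; the function name and workload sizes are ours) uses Python's `os.times()` to measure how much CPU time a mixed workload spends at user level (pure computation) versus kernel level (system calls):

```python
import os

def cpu_split(work_iters=200_000):
    """Return (user_delta, system_delta) CPU seconds for a mixed load:
    pure computation runs at user level, while system calls spend
    time at kernel level."""
    before = os.times()
    total = sum(i * i for i in range(work_iters))  # user-level work
    for _ in range(2_000):
        os.getpid()                                # cheap system calls
    after = os.times()
    return after.user - before.user, after.system - before.system

user_t, sys_t = cpu_split()
print(user_t >= 0.0 and sys_t >= 0.0)  # True
```

This is exactly the distinction the separate kernel and user queues capture in the model.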

Four different stages have been distinguished for the closed network, each one with a specific function:


Modelling a Network Traffic Probe Over a Multiprocessor Architecture 311

Each service queue has p servers, which represent the p processors of a multiprocessor system. The multiple-server representation has been chosen to emphasize the possibility of parallelizing every stage of processing. However, not all stages are necessarily parallelizable. For example, only one processor can access the NIC at a time, so the packet capturing process cannot be parallelized in different instances.

Another aspect to consider is that packets cannot flow freely in the closed network, because the number of packets being attended in the servers that represent the traffic monitoring system can never exceed the number of processors available. Therefore, we have to ensure that, at any time, the number of packets in the upper queues of Fig. 3 is not greater than p (the number of processors).

Considering an arrival rate of λ packets per second, the traffic analysis system will be able to keep pace with a part of that traffic, defined as q⋅λ. The remaining traffic, (1-q)⋅λ, will be lost because the platform is not capable of dealing with all the packets. The captured traffic, q⋅λ, goes through the system and basic treatment stages. Nevertheless, not all of this traffic will be subject to further analysis, because of features of the modelled system. For example, a system in charge of calculating QoS parameters of all connections arriving at a server will discard the packets with other destination addresses; monitoring systems which use sampling techniques will discard a percentage of packets; and intrusion detection systems will apply further detection techniques only to suspicious packets. Therefore, the coefficient qa has been defined to represent the rate of captured packets liable to be further analysed (analysis stage) rather than only treated (treatment stage). Thus, qa⋅q⋅λ of the initial flow will go through the analysis stage.

Fig. 4. Model of CPU consumption. (The figure shows the input rate λ, the output rate γ, and the kernel and user queues μK and μU, each with p servers and a fixed population W = N.)

The mean service times of the kernel and user stages follow from the per-stage consumptions, where the subscripts k and u denote kernel- and user-level processing, A and T denote the analysis and treatment stages, pk is the packet capturing stage, μKk and μpu aggregate the remaining kernel- and user-level processing, and qa is the fraction of captured packets that are analysed:

1/μK = 1/μpk + 1/μKk = 1/μpk + qa/μAk + 1/μTk    (1)

1/μU = 1/μpu = qa/μAu + 1/μTu    (2)

**3.2.2 Model of the equivalent traffic monitoring system** 

The main feasible simplification preserving the identity of the system is to replace the whole system with an equivalent multi-server queue, applying the Norton equivalence [Chandy et al., 1975]. The Norton theorem establishes that, in networks with a product-form solution, any subnetwork can be replaced by a queue with a state-dependent service capacity. Our theoretical model has exponential service rates in all stages, so, applying the Norton equivalence, the new equivalent queue will have a state-dependent service capacity μeq(n,qa).

The simple queue μS of Fig. 5 represents the non-parallelizable processes of the system and the multiple queue μM represents the parallelizable ones.

Fig. 5. Traffic monitoring system that the Norton equivalence is applied to.

The equivalent service rates can be calculated as follows.

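As a numeric illustration of equations (1) and (2) and of the traffic split, the sketch below plugs in made-up per-stage service rates (these numbers are ours, not measurements from Ksensor) and computes the effective kernel and user service rates and the offered, carried and analysed loads:

```python
# Illustrative per-stage service rates (packets/s); these values are
# invented for the example, not taken from the chapter.
mu_pk = 50_000.0   # kernel packet capturing stage
mu_Ak = 8_000.0    # kernel-level analysis
mu_Tk = 20_000.0   # kernel-level treatment
mu_Au = 5_000.0    # user-level analysis
mu_Tu = 12_000.0   # user-level treatment
qa = 0.25          # fraction of captured packets that are analysed

# Equation (1): mean kernel service time per captured packet.
inv_mu_K = 1.0 / mu_pk + qa / mu_Ak + 1.0 / mu_Tk
# Equation (2): mean user-level service time per captured packet.
inv_mu_U = qa / mu_Au + 1.0 / mu_Tu

mu_K = 1.0 / inv_mu_K
mu_U = 1.0 / inv_mu_U
print(mu_K, mu_U)

# Traffic split: with arrival rate lam and capture ratio q, q*lam
# packets enter the system and qa*q*lam reach the analysis stage.
lam, q = 100_000.0, 0.8
print(q * lam, qa * q * lam)  # 80000.0 20000.0
```

Because only the fraction qa of packets pays the analysis cost, the effective service rates μK and μU can exceed the raw analysis rates μAk and μAu.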