320 Telecommunications Networks – Current Status and Future Trends

Taking into account that, for a given state (i, j), the average stay time of a packet in queues i and j is given by τi(i) and τj(j) respectively,

$$\tau_i(i) = \frac{1}{i\,\mu_i} \qquad (33)$$

$$\tau_j(j) = \frac{1}{j\,\mu_j} \qquad (34)$$

we can express the probability of that state as:

$$p_{T_N}(i,j) = t_{T_N}(i,j)\left[\frac{p_{T_{N-1}}(i-1,j)}{\tau_i(i)} + \frac{p_{T_{N-1}}(i,j-1)}{\tau_j(j)}\right] = t_{T_N}(i,j)\left[p_{T_{N-1}}(i-1,j)\cdot i\,\mu_i + p_{T_{N-1}}(i,j-1)\cdot j\,\mu_j\right] \qquad (35)$$

Eq. 35 allows us to calculate a state probability of the stage with N packets from the probabilities of the adjacent states in the stage N−1. Using this equation, we can iteratively calculate the state probability distribution for every stage.

Fig. 13. Evolution of probability flows as a function of N.

**4.4.3 Adjusting losses depending on N**

The losses of the traffic monitoring system can be measured by assessing the blocking probability of the injection queue. If we consider the general model with an incoming traffic λ, we can calculate (Eq. 21) the volume of traffic processed by the traffic monitoring system (γ) and also the losses it causes (δ):

$$\gamma = \lambda\cdot\left(1 - p(0,N)\right) \qquad (36)$$

$$\delta = \lambda - \gamma = \lambda\cdot p(0,N) \qquad (37)$$

If we look at the evolution of the blocking probability of the injection queue as the number of packets N in the closed network increases, we can see how that probability is reduced in each stage. The same conclusion can be derived from Eq. 18.

**5. Model validation**
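The stage-by-stage construction and the loss expressions can be turned into a short numerical sketch. Everything here is illustrative: the mean stay time is taken as t(i, j) = 1/(i·μi + j·μj), an assumption consistent with state-dependent departure rates i·μi and j·μj but not stated explicitly in the chapter, and the rates and offered load are made-up numbers, not measured values.

```python
# Numerical sketch of the iterative stage construction (Eq. 35) and the
# loss expressions (Eqs. 36-37). ASSUMPTION: the mean stay time is
# t(i, j) = 1 / (i*mu_i + j*mu_j); all numeric values are illustrative.

def stage_distribution(N, mu_i, mu_j):
    """Build p_{T_n}(i, j) stage by stage, for states with i + j = n."""
    p = {(0, 0): 1.0}                         # stage 0: empty network
    for n in range(1, N + 1):
        q = {}
        for i in range(n + 1):
            j = n - i
            t = 1.0 / (i * mu_i + j * mu_j)   # assumed mean stay time
            flow = 0.0
            if i >= 1:                        # from adjacent state (i-1, j)
                flow += p.get((i - 1, j), 0.0) * i * mu_i
            if j >= 1:                        # from adjacent state (i, j-1)
                flow += p.get((i, j - 1), 0.0) * j * mu_j
            q[(i, j)] = t * flow
        total = sum(q.values())
        p = {s: v / total for s, v in q.items()}  # renormalise the stage
    return p

lam = 150_000.0                               # offered load, packets/s
N = 8                                         # packets in the closed network
p = stage_distribution(N, mu_i=100_000.0, mu_j=80_000.0)
p_block = p[(0, N)]                           # blocking state of Eq. 36
gamma = lam * (1.0 - p_block)                 # carried traffic (Eq. 36)
delta = lam - gamma                           # losses, = lam * p_block (Eq. 37)
```

Each stage is renormalised so its probabilities sum to one; the probability of the blocking state (0, N) then yields the carried traffic γ and the losses δ directly.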

This section presents the validation tests used to verify the correctness of our analytical model. The aim is to compare theoretical results with those obtained by direct measurement in a real traffic monitoring system; in particular, in the Ksensor prototype developed by NQaS, which is integrated into a testing architecture. It is also worth mentioning that, prior to obtaining the theoretical performance results, it is necessary to introduce some input parameters for the model. These initial values will also be extracted from experimental measurements in Ksensor and the testing platform, using an appropriate methodology. With all this, we report experimental and analytical results for the traffic monitoring system in terms of two key measures: the mean throughput and the CPU utilization. These measures are plotted against the incoming packet arrival rate. Finally, we discuss the results obtained.

#### **5.1 Test setup**

In this section, we describe the hardware and software setup used for our evaluation. Our hardware setup (see Fig. 14) consists of four computers: one for traffic generation (injector), a second for capturing and analysing the traffic (sensor or Ksensor), a third for packet reception (receiver) and the last for managing, configuring and launching the tests (manager). They are all physically connected to the same Gigabit Ethernet switch.

Fig. 14. Hardware setup for validation tests.

Modelling a Network Traffic Probe Over a Multiprocessor Architecture 323

However, two virtual networks are distinguished: the first is the capturing network, which connects the elements that play some role during the tests; the second is the management network, which contains the elements responsible for the management tasks needed before or after the tests. The use of two separate networks is necessary so that the information exchange between the management elements does not interfere with the test results.

The basic idea is to overwhelm Ksensor (the sensor) with high traffic generated from the injector. Despite the fact that we do not have 10 Gigabit Ethernet hardware available for our tests, we can achieve our goal of studying the behaviour of the traffic capturing and analysis software at high rates. In addition, we can compare the results with the analytical model and also identify the possible bottlenecks of all the analysed systems.

Regarding software, we use a testing architecture [Beaumont et al., 2005] designed by NQaS that allows the automation of tasks such as configuring, running and gathering the results of validation tests. The manager, the injector and the sensor that appear in Fig. 14 are part of this testing architecture. They have the necessary software installed to perform the functions of manager, agent, daemon or formatter, as we will explain in the next subsection. On the other hand, the receiver is simply the destination of the traffic injected into the network by the injector and it does not have any other purpose.

#### **5.2 Architecture to automatically test a traffic capturing and analysis system**

As mentioned previously, in this section we use a testing architecture for experimental performance measurements and also to estimate the values of certain input parameters required by the analytical model. It is, therefore, advisable to explain, albeit briefly, the main elements of this platform.

Fig. 15. Logical elements of the testing architecture used in validation tests.

The testing architecture consists of four types of logical elements, as Fig. 15 shows. Each of them implements a well-defined function:

• Manager is the interface with the user. This element, in the infrastructure shown in Fig. 14, is located on the machine with the same name. It is in charge of managing the rest of the logical elements (agents, daemons and formatters) according to the configuration received from the administrator. After the test setup is introduced, it is distributed from the manager to the other elements and the test is launched when the manager sends the start command. At the end of every test, the manager receives and stores the results obtained by the rest of the elements.

• Agents are responsible for attending to the manager's requests and acting on the different devices. Agents are always listening, and they have to start and stop the daemons, as well as collect the performance results. During a test in the infrastructure, one agent is executed in the injector and another one in the sensor.

• Daemons are in charge of acting on the different physical elements involved in each test. Their function varies widely: for example, injecting network traffic according to the desired parameterization, configuring the capturing buffers, executing control programs in the sensor, acquiring information or some element's statistics, etc. Depending on their relationship with the agent, two different types of daemons can be distinguished: master and slave. Master daemons have some intelligence: the agent starts them, but they indicate when their work has finished. On the other hand, slave daemons do not determine the end of their own execution. In each test, as many daemons as necessary are executed in the injector and in the sensor to carry out all the tasks.

• Formatters are the programs that select and translate the information stored by the manager into formats more appropriate for its representation. They are executed in the machine called manager at the end of every test.

#### **5.3 Experimental estimation for certain parameters of the model**

In section 3, we defined an analytical model that functionally corresponds to a traffic monitoring system. In order to assess the model, we first need values for certain input parameters. We are referring to some service rates that appear in the model based on closed queueing networks and are necessary to obtain theoretical performance results. Then we can compare these analytical results with those obtained in the laboratory. In general, we talk about μ service rates but, in this subsection, it is easier to talk about mean service times. For this reason, we use a nomenclature based on average processing times, in which an average time tij can be expressed as the inverse of its service rate, 1/μij.

We want to adapt the theoretical model to Ksensor, a real network traffic probe. The best approach is to consider the model of the equivalent traffic monitoring system (see Fig. 5), where we distinguish a non-parallelizable process and a parallelizable one. In Ksensor, this separation corresponds to the packet capturing process and the analysis process.

The packet capturing process is not parallelizable because the softIRQ is responsible for the capture and it only runs on one CPU. Fig. 16 shows experimental measurements of average packet capturing times. They have been obtained by running tests with Ksensor under different conditions: variable packet injection rate, in packets per second, and traffic analysis load, in number of cycles (null, 1K, 5K or 25K). The inverse of the average softIRQ times shown in Fig. 16 will be the service rate μS that appears in the model.

On the other hand, the analysis process is parallelizable in Ksensor. In the same way that the softIRQ times were obtained, we experimentally measure the average analysis processing times shown in Fig. 17. The inverse of these average times will be the service rate μM that appears in the multi-queue of the model. It is worth noting that, in Fig. 16, the average softIRQ times are not constant. This is because neither all the injected packets are captured by the system, nor all the captured packets analysed, and this causes different computational flow balances.
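The parameter extraction described in section 5.3 boils down to inverting measured mean processing times (μ = 1/t). A minimal sketch follows; the times are illustrative placeholders, not the measurements plotted in Figs. 16 and 17.

```python
# Model parameters from measured mean processing times (mu = 1/t).
# The times below are illustrative placeholders, NOT the measured
# values plotted in Figs. 16 and 17.

mean_softirq_time = 2.4e-6    # s, average packet-capture (softIRQ) time
mean_analysis_time = 9.1e-6   # s, average per-packet analysis time

mu_s = 1.0 / mean_softirq_time     # capture service rate, packets/s
mu_m = 1.0 / mean_analysis_time    # per-CPU analysis service rate, packets/s

# Averages can also be kept per analysis-load condition (hypothetical values):
analysis_times = {"null": 1.1e-6, "1K": 2.0e-6, "5K": 9.1e-6, "25K": 4.4e-5}
mu_m_by_load = {load: 1.0 / t for load, t in analysis_times.items()}
```

Each test condition (null, 1K, 5K, 25K cycles) yields its own mean time, and therefore its own μM, which is then fed into the closed-network model for that scenario.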


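Returning to the testing architecture of section 5.2, the distinction between master and slave daemons can be sketched as follows. This is a hypothetical illustration of the described control flow, not the real NQaS software: all class and daemon names are invented.

```python
# Hypothetical sketch of the master/slave daemon semantics of section
# 5.2: a master daemon decides itself when its work is done, while a
# slave daemon runs until the agent stops it. Names are illustrative.
import threading
import time

class TestDaemon:
    def __init__(self, name, master, work_s=0.05):
        self.name = name
        self.master = master
        self.work_s = work_s
        self.done = threading.Event()       # set by a master daemon itself
        self.stop_flag = threading.Event()  # set by the agent for slaves
        self.thread = threading.Thread(target=self.run)

    def start(self):                        # called by the agent
        self.thread.start()

    def stop(self):                         # agent-driven end (slave daemons)
        self.stop_flag.set()
        self.thread.join()

    def run(self):
        if self.master:
            time.sleep(self.work_s)         # e.g. inject the test traffic
            self.done.set()                 # signal the agent: work finished
        else:
            while not self.stop_flag.is_set():
                time.sleep(0.01)            # e.g. keep collecting statistics

# The agent starts one daemon of each kind, waits for the master to
# finish, then stops the slave -- mirroring the end-of-test sequence.
injector_daemon = TestDaemon("traffic-injector", master=True)
stats_daemon = TestDaemon("stats-collector", master=False)
injector_daemon.start()
stats_daemon.start()
injector_daemon.done.wait()
injector_daemon.thread.join()
stats_daemon.stop()
```

The agent never needs to know how long a master daemon will run; it only waits on the completion event, while slave daemons are terminated explicitly once the test ends.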