**5.3 Experimental estimation of certain parameters of the model**

In section 3 we defined an analytical model that functionally corresponds to a traffic monitoring system. In order to assess the model, we first need values for certain input parameters, namely some of the service rates that appear in the closed queueing network model and that are required to obtain theoretical performance results. These analytical results can then be compared with those obtained in the laboratory.

In general we speak of service rates μ, but in this subsection it is more convenient to work with mean service times. For this reason we use the nomenclature based on average processing times, in which an average time tij is simply the inverse of its service rate, tij = 1/μij.

We want to adapt the theoretical model to Ksensor, a real network traffic probe. The best approach is to consider the model of the equivalent traffic monitoring system (see Fig. 5), in which we distinguish a non-parallelizable process and a parallelizable one. In Ksensor, this separation corresponds to the packet capturing process and the analysis process.

The packet capturing process is not parallelizable because the softIRQ is responsible for the capture and it runs on a single CPU. Fig. 16 shows experimental measurements of the average packet capturing times. They have been obtained by running tests with Ksensor under different conditions: variable packet injection rate, in packets per second, and variable traffic analysis load, in number of cycles (null, 1K, 5K or 25K). The inverse of the average softIRQ times shown in Fig. 16 gives the service rate μs that appears in the model.

On the other hand, the analysis process is parallelizable in Ksensor. In the same way that the softIRQ times were obtained, we experimentally measure the average analysis processing times, which are shown in Fig. 17. The inverse of the average times in Fig. 17 gives the service rate μM that appears in the multi-queue of the model. It should be noted that, in Fig. 16, the average softIRQ times are not constant. This is because not all the injected packets are captured by the system, nor are all the captured packets analysed, and this leads to different computational flow balances.
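As a minimal illustration of how the service rates are obtained from these measurements (the axes of Fig. 16 and Fig. 17 report the averages in nanoseconds), the following sketch converts mean per-packet times into μs and μM. The numeric values are placeholders for illustration, not the measurements plotted in the figures.

```python
# Sketch: converting measured mean per-packet processing times into the
# service rates used by the model. The averages below are placeholders,
# not the values plotted in Fig. 16 and Fig. 17.

NS_PER_SECOND = 1e9

def service_rate(mean_time_ns: float) -> float:
    """Service rate mu (packets/s) as the inverse of a mean service time (ns)."""
    return NS_PER_SECOND / mean_time_ns

avg_softirq_time_ns = 700.0    # hypothetical capture (softIRQ) time per packet
avg_analysis_time_ns = 2500.0  # hypothetical analysis time per packet

mu_s = service_rate(avg_softirq_time_ns)   # capture service rate (packets/s)
mu_M = service_rate(avg_analysis_time_ns)  # per-server analysis rate (packets/s)

print(f"mu_s = {mu_s:.0f} packets/s, mu_M = {mu_M:.0f} packets/s")
```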


The values μs and μM, derived from these experimental measurements, will be used in the performance evaluation of the model explained later. In addition to these two parameters there is a third one, qa, which is always qa=1 in our test configuration.

Fig. 16. Average softIRQ time per captured packet.

Fig. 17. Analysis time per packet.

#### **5.4 Performance measurements - Evaluation and discussion**

The analytical model has been tested against Ksensor under different conditions: the packet injection rate varies between 0 and 1.5 million packets per second, the packet length ranges from 64 to 1500 bytes, and the traffic analysis load ranges from 0 to 25000 cycles (at present we simulate QoS algorithm processing times). The number of processors has been 2 in every test.
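For reference, the sketch below shows one way theoretical throughput curves like those in Fig. 18 to Fig. 21 can be computed from μs and μM. It solves a simplified two-station closed queueing network (a single-server capture stage and a c-server analysis stage) with the convolution algorithm; the two-station layout, the reading of N as the number of packets circulating in the closed network, and the numeric rates are assumptions made for illustration and may differ from the exact model of section 3.

```python
# Sketch (assumed simplification, not the chapter's exact model): throughput of a
# closed queueing network with a single-server capture station (rate mu_s) and a
# c-server analysis station (rate mu_M per server), solved via the convolution
# (normalization-constant) method. N is the circulating packet population.

def closed_network_throughput(mu_s: float, mu_M: float, c: int, N: int) -> float:
    """Return the throughput X(N) = G(N-1) / G(N) in packets per second."""

    def g_capture(k: int) -> float:
        # Unnormalized weight for k packets queued at the capture station.
        return (1.0 / mu_s) ** k

    def g_analysis(k: int) -> float:
        # Unnormalized weight for k packets at the analysis station,
        # which serves at rate min(k, c) * mu_M when k packets are present.
        weight = 1.0
        for j in range(1, k + 1):
            weight /= min(j, c) * mu_M
        return weight

    def G(n: int) -> float:
        # Normalization constant: sum over all splits of n packets.
        return sum(g_capture(k) * g_analysis(n - k) for k in range(n + 1))

    return G(N - 1) / G(N)

# Placeholder rates in packets per second and c = 2 analysis processors.
mu_s, mu_M, c = 1.4e6, 0.4e6, 2
for N in (1, 2, 3, 8, 16, 40):
    print(N, round(closed_network_throughput(mu_s, mu_M, c, N)))
```

In this simplified sketch the computed throughput rises with N and saturates near min(μs, c·μM), which matches the qualitative behaviour discussed below.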


Fig. 18. Theoretical and experimental throughputs without analysis load.

Fig. 19. Theoretical and experimental throughputs with 1Kcycle analysis load.

Fig. 20. Theoretical and experimental throughputs with 5Kcycle analysis load.

Fig. 21. Theoretical and experimental throughputs with 25Kcycle analysis load.

Fig. 18, Fig. 19, Fig. 20 and Fig. 21 compare the throughput predicted by the theoretical model for different values of N (1, 2, 3, 8, 16 and 40) with the throughput of the real probe measured experimentally (marked as LAB in the graphs). Packets of 64 bytes have been used in the laboratory tests, and the corresponding service rates, calculated according to the method explained in subsection 5.3, have been used in the theoretical calculations.

In all cases, the throughput grows until a maximum is reached (the saturation point). These graphs also show that, as N increases, the theoretical throughput approaches the real one, which indicates that the analytical model fits the real system.

Thus, further work is necessary to analyse this type of system with higher precision, to compare its results under specific conditions more accurately, and to avoid developing high-cost prototypes.

This paper has also come in useful to explain the main aspects of Ksensor, a multithreaded kernel-level probe developed by the NQaS research group. It is remarkable that this system introduces performance-improving design proposals into traffic analysis systems for passive QoS monitoring.

As future work, we suggest two main lines. The first one is related to Ksensor and concerns a new hardware-centered approach whose objective is to embed our proposals onto programmable network devices such as FPGAs. The second research line aims at completing and adapting the model to the real system in a more accurate way. We are already making progress on new mathematical scenarios which can represent, in detail, aspects such as the packet capturing process, congestion avoidance mechanisms between the capturing and analysis stages, specific analysis algorithms applied in QoS monitoring, and packet filtering.

Finally, it is worth mentioning that the test setup used to validate the model will be improved by acquiring 10 Gbps network hardware and by installing Ksensor on a server with more than two processors. The model will be tested under these new conditions and we hope to obtain satisfactory results, too.