**3. Measurement campaigns**

As already anticipated, the main concern of this paper is to present a realistic testbed for QoS management; the four measurement campaigns that were carried out are described in the following, with reference to the scenario introduced in Section 2.1 (Fig. 1).

The adopted dynamic priority-based method is twofold. On the one hand, users are assigned the shared bandwidth on the basis of their profiles: the higher the guaranteed bandwidth, the higher the portion of shared bandwidth assigned. On the other hand, not only users but also the services within a profile are prioritized, so each user is aware of how his or her bandwidth is shared among his or her applications.
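As an illustration of this twofold allocation, the sketch below distributes spare bandwidth first among users in proportion to their guaranteed rates, and then within each user among his or her services according to service priority. It is a minimal, hypothetical example: the profile values, service names and proportional-weighting rule are assumptions and do not come from the QoS server described in this chapter.

```python
# Hypothetical sketch of priority-based sharing (not the actual QoS server logic).
# Spare bandwidth is split among users proportionally to their guaranteed rate,
# then each user's extra share is split among his/her services by priority.

def share_proportionally(total, weights):
    """Split `total` proportionally to the given positive weights."""
    s = sum(weights.values())
    return {name: total * w / s for name, w in weights.items()}

# Assumed example profiles: guaranteed bandwidth in Mbit/s per user.
guaranteed = {"user_A": 2.0, "user_B": 1.0, "user_C": 0.5}

# Assumed service priorities within each user's profile (higher = more important).
service_priority = {
    "user_A": {"video": 3, "voip": 2, "web": 1},
    "user_B": {"video": 2, "web": 1},
    "user_C": {"web": 1},
}

spare = 4.0  # Mbit/s of shared bandwidth currently unused on the link (assumed)

user_share = share_proportionally(spare, guaranteed)
for user, extra in user_share.items():
    per_service = share_proportionally(extra, service_priority[user])
    print(user, {svc: round(v, 2) for svc, v in per_service.items()})
```

Under such a rule, user_A obtains twice user_B's extra share, mirroring the "higher guaranteed bandwidth, higher shared bandwidth" principle stated above.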

The QoS management is presented in an incremental way: no QoS in the first campaign; QoS with neither minimum nor maximum bandwidth set in the second; minimum and maximum bandwidth defined in the third; dynamic redistribution also managed in the fourth.

Fig. 5. Statistics about a single user.

**3.1 Throughput on single links (no QoS)**

The first campaign addressed throughput maximization on the single links of the whole infrastructure. Fig. 6 shows the results for Link 1, from the Shelter to the Management Server.

Fig. 6. Throughput in the link from the Shelter to the Management Server (bit/s).

The above measurements were performed using the following tools: (1) Iperf (www.noc.ucf.edu/Tools/Iperf), which allows TCP or UDP data streams to be sent and their throughput to be measured; (2) WireShark (www.wireshark.org), a network analyzer which allows Iperf streams to be captured and plotted.

In this campaign, the Iperf server was installed on an Acer Travelmate laptop running Debian Linux and located at node (B), so as to receive UDP connections. A MacBook Pro running Mac OS X Leopard was used as the client.

Nodes A, C, D hosted laptops for the connection to the Iperf server. In this way, throughput could be measured first in Link 1, then in Link 2 and Link 3, and finally in Link 2 + Link 3; Iperf was moved to the (C) node, so as to check Link 3.

The highest UDP throughput available in the Iperf connections was set to 10 Mbit/s; the results are reported in Tab. 2 and derive from a large number of measurements, properly averaged.
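A minimal sketch of how one such run could be scripted is shown below: it launches repeated Iperf UDP tests toward the server and collects the raw reports for offline averaging. The server address, number of runs and test duration are assumptions, not values from the paper; the Iperf client flags used (-c, -u, -b, -t) are standard options.

```python
# Hypothetical measurement script (assumed values, not from the testbed).
import subprocess

IPERF_SERVER = "192.168.1.10"  # assumed address of the Iperf server at node (B)
RUNS = 20                      # "a large number of measurements" to be averaged

def run_udp_test(offered_mbps=10, duration_s=30):
    """Run one Iperf UDP test with the given offered load and return the raw report."""
    cmd = ["iperf", "-c", IPERF_SERVER,
           "-u",                          # UDP test
           "-b", f"{offered_mbps}M",      # offered load (10 Mbit/s in this campaign)
           "-t", str(duration_s)]         # test duration in seconds
    return subprocess.run(cmd, capture_output=True, text=True).stdout

if __name__ == "__main__":
    for i in range(RUNS):
        print(f"--- run {i + 1} ---")
        print(run_udp_test())
    # The achieved throughput is then read from the reports and averaged offline,
    # as done for the values in Table 2.
```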

Note that the throughput from the Management Server to the Football pitch (Link 2 + Link 3) is almost 2 Mbit/s lower than in single links 2 and 3: this derives from the same device being in charge of both signal reception and transmission.


| Link | Description | Throughput |
| --- | --- | --- |
| Link 1 | From the Shelter to the Management Server | 7.8 Mbit/s |
| Link 2 | Management Server to Waterworks | 6.6 Mbit/s |
| Link 3 | Waterworks to Football pitch | 6.9 Mbit/s |
| Link 2 + Link 3 | Management Server to Football pitch | 5.2 Mbit/s |

Table 2. Results in single links.

**3.2 QoS applied to multimedia TCP flows (no min/max bandwidth set)**

The second campaign aimed at verifying the efficiency of the QoS management server. An Ethernet cable substituted the wireless connection during a PPP connection: delays and packet loss, in fact, are not particularly relevant in this kind of control, attention being mainly focused on flow management.

The following tools were adopted: (1) the QoS Server; (2) server-side Vlc for MMS over HTTP video flow transmission (www.videolan.org/vlc); (3) client-side Vlc for flow reception; (4) WireShark.
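To give a concrete idea of how such a test flow could be generated, the sketch below launches a server-side Vlc instance streaming a file as MMS over HTTP and a client-side Vlc receiving it. It is only an illustrative guess at the setup: the file name, port and host are assumptions, the stream-output module names (mmsh access, asfh mux) may vary with the Vlc build, and the chapter does not report the actual command lines used.

```python
# Hypothetical reconstruction of the streaming setup (not the authors' actual commands).
import subprocess

VIDEO_FILE = "test_video.avi"   # assumed test clip
SERVER_IP = "192.168.1.20"      # assumed address of the streaming laptop
PORT = 8080                     # assumed MMS-over-HTTP port

# Server side: stream the file as MMS over HTTP (mmsh access output, ASF mux).
server = subprocess.Popen([
    "vlc", "--intf", "dummy", VIDEO_FILE,
    f"--sout=#std{{access=mmsh,mux=asfh,dst=:{PORT}}}",
])

# Client side (on the receiving laptop): open the stream for playback.
client = subprocess.Popen(["vlc", f"mmsh://{SERVER_IP}:{PORT}"])
```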

Fig. 7, diagrammed through WireShark, represents the scenario and the first measurements of this campaign. It refers to the following profile: no QoS applied, symmetric upload and download of 1 Mbit/s, no bandwidth guaranteed. The client receives the first video (red line) until second 230; then the second video (green line) starts and the first one is interrupted at second 280.

As expected, in the concurrent period (sec. 230 to 280) both TCP videos are blocked, the bandwidth being inadequate to support both of them.

Fig. 8 represents the second test and involves three videos on three distinct ports; in this case, a QoS profile was enabled which guarantees an increasing priority from the first to the third flow.

The video on the lowest-priority port starts first (blue curve); the intermediate-priority video starts 20 seconds later (green curve) and, as a consequence, the first data flow declines. In the period between sec. 40 and 80 the maximum throughput was increased, so as to emphasize the effect of dark and still scenes in the second video. In this way, the total throughput is constant and bandwidth waste is kept under control.
