**2.2 The QoS architecture adopted**

The QoS scheme is based on the dynamic assignment and redistribution of bandwidth on the basis of priority and users' profiles. The main parameters of each kind of profile are summed up in Tab. 1. In particular, when QoS management is enabled in a profile (QoS flag), different priorities can be assigned to distinct protocols.


| Parameter | Description |
|---|---|
| Name | Profile name |
| Description | Profile description |
| Upload bandwidth | Max upload bandwidth (kbit/s) |
| Download bandwidth | Max download bandwidth (kbit/s) |
| Upload guaranteed | Min upload bandwidth guaranteed (kbit/s) |
| Download guaranteed | Min download bandwidth guaranteed (kbit/s) |
| QoS | Flag for enabling QoS traffic management |

Table 1. main fields of a connection profile.

Fig. 2 shows the graphic interface for the definition and management of each type of profile. The seven panels "Band 1", …, "Band 7" on the right allow priorities to be assigned, 1 being the highest and 7 the lowest. In particular, a protocol can be associated with a specific priority by dragging and dropping its name into the chosen band panel.

The minimum guaranteed bandwidth (as a percentage) and the maximum available bandwidth must also be set for each sub-band. Once a profile has been defined, it can be assigned to many distinct users, so as to tailor service supply easily and quickly.


Fig. 2. priority assignment through drag&drop operations.


As far as dynamic QoS management is concerned, the basic idea is to limit both upload and download operations through the Egress policer (www.egress.com), so as to discard all the packets whose speed exceeds a set maximum value. To this purpose, queuing algorithms are applied and bandwidth can consequently be tuned according to actual availability.
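A packet-discarding rate limit of this kind can be expressed with a tc ingress policer. The sketch below is illustrative only: the device name and the 2 Mbit/s ceiling are assumed values, not the testbed's actual configuration.

```shell
# Sketch: drop a subscriber's incoming packets once they exceed a set rate.
DEV=eth0
MAX_RATE=2mbit

# Attach the special ingress qdisc so incoming traffic can be policed.
tc qdisc add dev "$DEV" handle ffff: ingress

# Match every IP packet and drop whatever exceeds MAX_RATE.
tc filter add dev "$DEV" parent ffff: protocol ip u32 \
   match u32 0 0 \
   police rate "$MAX_RATE" burst 20k drop flowid :1
```

Policing drops excess packets outright, which is why the text pairs it with queuing algorithms for the smoother tuning of bandwidth.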

Some problems had to be solved along the way: shaping could only be applied to outgoing traffic already processed by the kernel, whereas both the upload and download flows should normally be shaped by queuing methods on both the incoming and outgoing interfaces. In this case, though, many independent PPP interfaces were simultaneously active and each PPPoE connection was thus identified through a PPP system interface numbered N (pppN, N = 0, 1, …).

In consequence, two further difficulties had to be faced:

1. too many iptables rules were generated, and so was a further branch in the queuing structures;
2. the htb qdisc bandwidth-sharing capabilities could not be fully exploited and no minimal bandwidth per PPP connection could be guaranteed.
As a matter of fact, since each PPP interface had its own independent queuing, the traffic on the network interface was managed in an unpredictable way: no minimum bandwidth per connection could be assigned and the unexploited bandwidth could not be dynamically and equally redistributed among all the connections.

In order to handle the above situation, a qdisc (common to all connections) and subclasses for each kind of connection (with minimum and maximum bandwidth set) were defined, so as to redistribute unexploited resources among tunnels.

To this purpose, a hierarchical structure based on HTB (Hierarchical Token Bucket, http://luxik.cdi.cz/~devik/qos/htb) queuing was developed, whose nodes specify their own minimum and maximum bandwidth. In this way, the traffic of each tunnel is forwarded to the class it pertains to, so as to achieve the desired result.
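The HTB hierarchy just described can be sketched with tc as follows; the device name, class handles and rate/ceil figures are illustrative assumptions, not the testbed's actual values.

```shell
# Sketch of the HTB structure: one root class capping the link, one child
# class per tunnel with its own minimum (rate) and maximum (ceil) bandwidth.
DEV=eth0

tc qdisc add dev "$DEV" root handle 1: htb default 10
tc class add dev "$DEV" parent 1:  classid 1:1  htb rate 100mbit ceil 100mbit

# Tunnel A: 1 Mbit/s guaranteed, may borrow up to 4 Mbit/s when capacity is idle.
tc class add dev "$DEV" parent 1:1 classid 1:10 htb rate 1mbit ceil 4mbit
# Tunnel B: 2 Mbit/s guaranteed, same 4 Mbit/s ceiling.
tc class add dev "$DEV" parent 1:1 classid 1:20 htb rate 2mbit ceil 4mbit
```

Because both children share the parent class 1:1, bandwidth unused by one tunnel can be borrowed by the other up to its ceil, which is exactly the redistribution behaviour sought.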

Nevertheless, qdiscs can only manage the traffic of their own interface, so it was still impossible to identify a single connection by accessing the network interface of the PPPoE server. Each connection, in fact, is managed as a separate network interface.

An IMQ (Intermediate Queueing Device, www.linuximq.net) interface was thus adopted, which allows qdiscs and the whole traffic to be managed: iptables rules divert packets to this interface, where the traffic can be shaped. Each single PPP interface was assigned a connection identifier and its traffic sent to the IMQ, where the connection classes were defined.
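A minimal sketch of this diversion is shown below, assuming the IMQ kernel and iptables patches are installed; the mark convention (ppp0 → mark 1) is a hypothetical example, not the testbed's actual identifier scheme.

```shell
# Bring up the intermediate queueing device that will host the shared qdisc tree.
ip link set imq0 up

# Divert the traffic of all PPP tunnels to imq0 before routing,
# so a single qdisc hierarchy can shape every connection.
iptables -t mangle -A PREROUTING -i ppp+ -j IMQ --todev 0

# Tag each connection's packets so the classes on imq0 can tell tunnels apart.
iptables -t mangle -A PREROUTING -i ppp0 -j MARK --set-mark 1
```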

In this way, each connection can be monitored, traffic can be classified on the basis of protocols and the most important flows are assigned the highest priority.

Another problem faced was that each packet could only be marked by means of a single identifier, so, theoretically speaking, the simultaneous identification of connections and protocols within a session was impossible.

Several tests demonstrated that the problem could be solved through the joint use of the u32 filter + MARK and the CLASSIFY target. This was done by defining a further HTB class structure in the pppN interfaces.
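The two-stage classification could look roughly as follows. This is a sketch under assumptions: the interface names, class handles and the HTTP port are illustrative, and the exact rule set used in the testbed is not reported in the text.

```shell
# Stage 1: the CLASSIFY target pins every packet leaving ppp0 to that
# connection's class in the shared HTB tree (no tc filter needed).
iptables -t mangle -A POSTROUTING -o ppp0 -j CLASSIFY --set-class 1:10

# Stage 2: MARK tags HTTP packets with a protocol identifier...
iptables -t mangle -A POSTROUTING -o ppp0 -p tcp --dport 80 -j MARK --set-mark 80

# ...and a u32 filter on ppp0's own HTB tree matches that mark and drives
# the packet to the per-protocol subclass.
tc filter add dev ppp0 parent 1: protocol ip u32 \
   match mark 80 0xffff flowid 1:11
```

Separating the two stages is what sidesteps the single-identifier limitation: CLASSIFY selects the connection class directly, while the mark is left free to encode the protocol.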

The bandwidth redistribution problem having been solved, attention could be focused on flow priority within each connection.

Fig. 3 shows how the queuing algorithms were applied. A root node (qdisc) was created on the imq0 interface; class 1:1 was added in order to define the total bandwidth (100 Mbit/s in this case). Subclasses were then defined for the management of single connections, each specifying the minimum guaranteed bandwidth (rate) and the maximum at disposal (ceil).

Note that a higher QoS could be achieved if SFQ (Stochastic Fair Queuing) were applied, so as to manage single flows through a round-robin policy.

In order to manage the priority of single flows, a hierarchical structure was created within each pppN interface.

As in the former case, several subclasses were added to the root node (qdisc), each with a minimum and maximum bandwidth; the SFQ algorithm was applied afterwards. A u32 filter list is defined in the root node of each interface, so as to drive single packets to the class they pertain to on the basis of their protocol. Packets were initially divided by the CLASSIFY target according to the connection; a further division was then made by u32 filter + MARK on the basis of protocols.

Besides, the identifiers range from 1:10000 to 1:65535 in hex, so the highest attention must be paid to the class handles. The following syntax was adopted:

```
iptables -t mangle -A POSTROUTING -j CLASSIFY --set-class x:y
```

In this way, tc filters could be avoided and everything is managed by iptables.

Fig. 3. hierarchical QoS management.

**2.2.1 Statistics**

The platform allows information about the infrastructure and its use to be visualized and, in consequence, statistics to be made about connected users and their traffic volume.

Fig. 4, for instance, refers to all the users' total connection time. For the sake of compactness, only a small excerpt is reported.

Fig. 4. small excerpt from all the users' total connection time.

Diagrams are also available describing the system components, such as CPU load, network load and many others.

As far as each user is concerned (Fig. 5), the following data can be monitored: connections, data volumes exchanged, diagrams about his or her traffic and, as required by law, packet logging. The authentication system adopted is Radius and traffic is encapsulated through the PPPoE protocol. In consequence, a PPP tunnel is active between each user and the server.

**3. Measurements campaigns**

As already anticipated, the main concern of this paper is to present a realistic testbed for QoS management; the four measurement campaigns carried out are described in the following, whose scenario was described in Section 2.1 (Fig. 1).

The adopted dynamic priority-based method is twofold. On the one hand, users are assigned the shared bandwidth on the basis of their profiles: the higher the guaranteed bandwidth, the higher the shared bandwidth assigned. On the other hand, not only users but also services within a profile are prioritized, so each user is aware that his or her bandwidth is accordingly shared among his or her applications.

The QoS management is presented in an incremental way: no QoS in the first campaign; QoS with neither minimum nor maximum set in the second; minimum and maximum bandwidth defined in the third; dynamic redistribution also managed in the fourth.
