## **10.1.1 Response time scale**

Response time scale can be categorized as one of the following: long, medium, and short. **In the long time scale**, expansion of the network capacity is considered. This expansion is based on estimates of future traffic demands and traffic distribution. Because the network elements are expensive, upgrades take place on a long time scale of weeks, months, or years. **In the medium time scale**, network control policies are considered (e.g. adjusting the routing protocol parameters to reroute traffic away from a congested network node). These policies are mostly based on measurements, and the actions are applied over a period of minutes to days. **In the short time scale**, packet-level processing and buffer management functions in routers are considered (e.g. active queue management schemes for TCP traffic using Random Early Detection (RED)).
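As a concrete illustration of a short-time-scale mechanism, the core of RED can be sketched as follows. This is a minimal Python sketch; the thresholds, maximum drop probability, and queue weight are illustrative values, not taken from any particular router implementation.

```python
import random

def update_avg(avg, instant_queue, weight=0.002):
    # Exponentially weighted moving average of the instantaneous queue size;
    # the small weight smooths out bursts so RED reacts to persistent congestion.
    return (1.0 - weight) * avg + weight * instant_queue

def red_drop(avg_queue, min_th=5, max_th=15, max_p=0.1):
    # Below min_th: never drop.  At or above max_th: always drop.
    # In between: drop with a probability rising linearly from 0 to max_p.
    if avg_queue < min_th:
        return False
    if avg_queue >= max_th:
        return True
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p
```

Because drops are probabilistic and begin before the queue is full, sources are signalled early and desynchronized, instead of all losing packets at once when the buffer overflows.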

## **10.1.2 Reactive versus preventive**

In reactive congestion control, congestion recovery takes place to restore the operation of a network to its normal state after congestion has occurred. Control policies react to existing congestion problems to remove or reduce them. In preventive congestion control, keeping the operation of a network at or near the point of maximum power is the main objective, so congestion will never occur. Control policies applied to prevent congestion are based on estimates and predictions of possible congestion appearance.

## **10.1.3 Supply side versus demand side**

Increasing the capacity in the network is called supply side congestion control. Supply side control is achieved by increasing the network capacity or balancing the traffic (e.g. capacity planning to estimate traffic workload). In demand side control, policies are applied to regulate the offered traffic to avoid congestion (e.g. a traffic shaping mechanism is used to regulate the offered load).

## **10.2 Control policies**

Different congestion control policies have been proposed to deal with congestion in networks. Generally speaking, these policies differ in the use of control messages. The following will describe some of them.

**Source Quench:** Source Quench is the current method of congestion control in the Internet. When a network node responds to congestion by dropping packets, it can send an Internet Control Message Protocol (ICMP) Source Quench message to the source, informing it of the packet drop. The drawback of this policy is that it is a family of varied policies: the major gateway manufacturers have implemented different source quench methods. This variation leaves the end-system user, on receiving a Source Quench, uncertain of the cause for which the message was issued (e.g. heavy congestion, approaching congestion, a burst causing massive overload).

**Random Drop:** Random Drop is a congestion control policy intended to give feedback to users whose traffic congests the gateway by dropping packets. In this policy, packets are selected at random from the incoming traffic and dropped. A user generating much traffic will have many more packets dropped than a user who generates little traffic. The selection of packets to drop in this policy is completely uniform. Random Drop can be categorized as congestion recovery or congestion avoidance.
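The uniform selection described above can be sketched as follows (an illustrative Python sketch; representing the buffer as a plain list is an assumption made for brevity):

```python
import random

def random_drop(buffer, new_packet, capacity):
    # Accept the arriving packet, then, if the buffer has overflowed,
    # discard a uniformly chosen packet rather than always the newest one.
    # A heavy sender owns more of the buffered packets, so it loses
    # proportionally more of them -- that is the feedback.
    buffer.append(new_packet)
    if len(buffer) > capacity:
        buffer.pop(random.randrange(len(buffer)))
    return buffer
```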

Congestion recovery tries to restore an operating state when demand has exceeded capacity. Congestion avoidance is preventive in nature: it tries to keep the demand on the network at or near the point of maximum power, so that congestion never occurs.

**Congestion Indication:** The so-called Congestion Indication policy uses a technique similar to the Source Quench policy to inform the source of congestion at the gateway. The information is communicated in a single bit: the Congestion Experienced Bit (CEB) is set in the network header of packets already being forwarded by a gateway. Based on the value of this bit, the end-system user should adjust its sending window. The Congestion Indication policy works based upon the total demand on the gateway. For fairness, the total number of users causing the congestion is not what matters: only users who are sending more than their fair share (allowed bandwidth) should be asked to reduce their load, while others may attempt to increase their load where possible.
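The window adjustment driven by the CEB can be sketched as follows. This is an illustrative sketch: the 50% marking threshold and 0.875 decrease factor follow the published DECbit-style proposals, but treat them here as assumptions rather than something this text specifies.

```python
def adjust_window(window, marked, total, threshold=0.5, decrease=0.875):
    # If at least `threshold` of the packets acknowledged in the last
    # window carried the Congestion Experienced Bit, decrease the window
    # multiplicatively; otherwise probe for bandwidth additively.
    if total and marked / total >= threshold:
        return max(1, int(window * decrease))
    return window + 1
```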

**Fair Queuing:** Fair queuing is a congestion control policy in which separate gateway output queues are maintained for individual end-systems on a source-destination-pair basis. When congestion occurs, packets are dropped from the longest queue. At the gateway, the processing and link resources are distributed to the end-systems on a round-robin basis: round-robin is an arrangement of choosing all elements in a group equally, in a circular order. Equal allocations of resources are thus provided to each source-destination pair.
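The per-pair queues, longest-queue drop, and round-robin service can be sketched as follows (an illustrative Python sketch; the class and method names are invented for the example):

```python
from collections import defaultdict

class FairQueueGateway:
    def __init__(self, capacity):
        self.queues = defaultdict(list)   # one queue per (src, dst) pair
        self.capacity = capacity          # total buffer across all queues

    def enqueue(self, src, dst, packet):
        self.queues[(src, dst)].append(packet)
        if sum(len(q) for q in self.queues.values()) > self.capacity:
            # On congestion, drop from the longest queue.
            longest = max(self.queues.values(), key=len)
            longest.pop()

    def dequeue_round(self):
        # One round-robin pass: serve at most one packet from each queue.
        served = []
        for key in list(self.queues):
            q = self.queues[key]
            if q:
                served.append(q.pop(0))
            if not q:
                del self.queues[key]
        return served
```

A pair that floods the gateway only grows its own queue, which then becomes the drop target, while other pairs keep their round-robin share of service.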

The Bit-Round Fair Queuing algorithm is an improvement over fair queuing. It computes the order of service to packets using their lengths, by means of a technique that emulates a bit-by-bit round-robin discipline. In this way, long packets do not get an advantage over short packets; otherwise the round-robin would be unfair.
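The bit-by-bit emulation amounts to tagging each packet with a virtual finish round and serving packets in increasing finish-round order. A minimal sketch (the function name and round bookkeeping are simplified assumptions):

```python
def finish_time(current_round, prev_finish, length):
    # A packet's virtual finish "time" is counted in bit-rounds: it ends
    # `length` rounds after the later of the current round and the finish
    # of the previous packet on the same conversation.  Serving packets in
    # increasing finish-time order means a short packet overtakes a long
    # one that arrived with it, so packet length confers no advantage.
    return max(current_round, prev_finish) + length
```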

Stochastic Fairness Queuing (SFQ) is a mechanism similar to Fair Queuing. SFQ looks up the source-destination address pair in each incoming packet and locates the queue in which that packet will be placed. It uses a simple hash function to map from the source-destination address pair to a fixed set of queues. The price paid to implement SFQ is that it requires a potentially large number of queues.
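The hash-based queue lookup can be sketched as follows (Python's built-in `hash()` stands in for the simple hash function; the queue count and the perturbation parameter are illustrative assumptions):

```python
def sfq_queue_index(src, dst, num_queues=1024, perturbation=0):
    # Map the source-destination pair onto one of a fixed set of queues.
    # Two pairs may collide in the same queue; changing `perturbation`
    # from time to time re-shuffles the mapping so that no collision,
    # and hence no unfairness, is permanent.
    return hash((src, dst, perturbation)) % num_queues
```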
