**3. Traffic engineering in the wavelength domain**

Notably, at the ingress edge nodes of an OBS network, data bursts are kept in electronic buffers before a wavelength channel is assigned to them and they are transmitted optically towards the egress edge nodes. Clearly, the flexibility of scheduling data bursts in the wavelength channels is considerably higher while the bursts are still buffered at the ingress nodes than after they have been converted to the optical domain. For instance, a data burst can be delayed at one of the ingress buffers by the exact amount of time required for a wavelength channel to become available in the designated output fibre link. This procedure is not possible at the core nodes due to the lack of optical RAM. The capability of delaying data bursts at an ingress node by an arbitrary amount of time not only increases the chances of successfully scheduling bursts at the output fibre link of their ingress nodes, but also enables implementing strategies that proactively reduce the probability of contention at the core nodes.

Optical Burst-Switched Networks Exploiting Traffic Engineering in the Wavelength Domain 291




The Burst Overlap Reduction Algorithm (BORA) proposed in (Li & Qiao, 2004) exploits the additional degree of freedom provided by delaying data bursts at the electronic buffers of the ingress nodes to shape the burst traffic departing from these nodes in such a way that the probability of contention at the core nodes is reduced. The principle underlying BORA is that a decrease in the number of different wavelength channels allocated to the data bursts assembled at an ingress node can smooth the burst traffic at the input fibre links of the core nodes and, as a result, reduce the probability that the number of overlapping data bursts directed to the same output fibre link exceeds the number of wavelength channels. In its simplest implementation, BORA relies on using the same wavelength search ordering at all the ingress nodes of the network and on utilizing the buffers in these nodes to transmit the maximum number of bursts in the first wavelength channels according to that ordering. In order to limit the extra transfer delay incurred by data bursts, as well as the added buffering and processing requirements, the ingress node can impose a maximum ingress burst delay, Δ*t*<sub>max</sub><sup>RAM</sup>, defined as the maximum amount of time a data burst can be kept at an electronic buffer of its ingress node, excluding the time required to assemble the burst and the offset time between the data burst and its corresponding BHP.
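
The shared-ordering behaviour described above can be sketched as follows. This is a minimal illustration, not the implementation from (Li & Qiao, 2004); the function and variable names are hypothetical.

```python
# Illustrative sketch of BORA-style ingress scheduling: every ingress node
# uses the same wavelength search ordering (here simply w = 0, 1, 2, ...),
# and a burst may be held in the electronic buffer for up to max_delay so
# that it can be packed into the lowest-ranked wavelength channel.

def bora_schedule(burst_start, burst_len, horizons, max_delay):
    """Return (wavelength index, scheduled start time) or None if blocked.

    horizons[w] is the time at which wavelength w becomes free; it is
    updated in place when the burst is scheduled.
    """
    for w, free_at in enumerate(horizons):
        delay = max(0.0, free_at - burst_start)
        if delay <= max_delay:
            # The first (lowest-ranked) wavelength reachable within the
            # delay budget wins, concentrating traffic on few channels.
            start = burst_start + delay
            horizons[w] = start + burst_len
            return (w, start)
    return None  # no wavelength usable within the delay budget
```

For example, if wavelength 0 becomes free only slightly after a burst arrives, the burst is delayed and still carried on wavelength 0 rather than occupying a new channel.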

The concept of BORA is appealing mainly in OBS networks with full wavelength conversion, since these algorithms have not been designed to mitigate wavelength contention. Moreover, BORA algorithms do not account for the capacity fragmentation of the wavelength channels, which is also a performance-limiting factor in OBS networks. These limitations have motivated the development of a novel strategy in (Pedro et al., 2009b) that also exploits the electronic buffers of the ingress edge nodes to selectively delay data bursts, while providing a twofold advantage over BORA: enhanced contention minimization at the core nodes and support of core node architectures with relaxed wavelength conversion capabilities.

The first principle of the proposed strategy is related to the availability of RAM at the ingress nodes. In the process of judiciously delaying bursts to schedule them using the smallest number of different wavelength channels, the delayed bursts can be scheduled with minimum voids between them and the preceding bursts already scheduled on the same wavelength channel. This is only possible because the bursts assembled at the node can be delayed by an arbitrary amount of time. The serialization of data bursts not only smooths the burst traffic, with the consequent decrease in the chances of contention at the core nodes, but also reduces the fragmentation of the wavelength capacity at the output fibre links of the ingress nodes. These serialized data bursts traverse the core nodes, where some of them must be converted to other wavelength channels to resolve contention. The wavelength conversions break the series of data bursts and, as a result, create voids between a burst converted to another wavelength channel and the bursts already scheduled on this wavelength. A large number of these voids leads to wasted bandwidth, as the core nodes will not be able to use them to carry data.

In essence, the first key principle consists of serializing data bursts at the ingress nodes to mitigate the voids between them. Notably, if these bursts traverse a set of common fibre links without experiencing wavelength conversion, the formation of unusable voids is reduced at those links. Hence, the second key principle of the proposed strategy consists of improving the probability that serialized bursts routed via the same path are kept in the same wavelength channel for as long as possible. This can reduce the number of unusable voids created in the fibre links traversed before wavelength conversion is used, improving network performance.
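
The notion of unusable voids in the two principles above can be made concrete with a small sketch: given the reservations on one wavelength channel, the wasted capacity is the total duration of the voids too short to carry a minimum-size burst. The function below is a hypothetical illustration, not taken from the cited work.

```python
# Hypothetical illustration of capacity fragmentation: sum the voids between
# consecutive reservations on one wavelength channel that are too short to
# carry a burst of at least min_burst duration and are therefore unusable.

def unusable_void_time(reservations, min_burst):
    """reservations: sorted, non-overlapping (start, end) tuples."""
    wasted = 0.0
    for (_, end_a), (start_b, _) in zip(reservations, reservations[1:]):
        void = start_b - end_a
        if 0 < void < min_burst:
            wasted += void  # void exists but no burst fits in it
    return wasted
```

With reservations (0, 10), (12, 20), (50, 60) and a 5-unit minimum burst, only the 2-unit void is unusable, so the wasted time is 2.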

The task of keeping the data bursts, which are directed to the same routing path and have been serialized at the ingress node, in the same wavelength channel requires minimizing the chances that bursts on overlapping routing paths contend for the same wavelength channel and, as a result, demand wavelength conversion. This objective is the same as that of the HMPI algorithm presented in Section 2. For that reason, the strategy proposed in (Pedro et al., 2009b), which is designated as Traffic Engineering in the wavelength domain with Delayed Burst Scheduling (TE-DBS), combines the wavelength contention minimization capability of HMPI with selectively delaying data bursts at the electronic buffers of their ingress nodes not only to smooth burst traffic, but also to maximize the amount of data bursts carried in the wavelength channels ranked with the highest priorities by HMPI.

The key principles of the TE-DBS strategy can be illustrated with the example of Fig. 4. The OBS network depicted comprises six nodes and five fibre links. Three paths, π1, π2, and π3, are used to transmit bursts between one of the three ingress nodes, *v*1, *v*2, and *v*4, and node *v*6. Contention between bursts from different input fibre links directed to the same output fibre link can occur at core nodes *v*3 and *v*5. Each ingress node uses its own wavelength search ordering and selectively delays bursts with the purpose of transmitting them on the wavelength channels which have been ranked with the highest priorities by an algorithm for minimizing contention in the wavelength domain. As with BORA, a maximum ingress burst delay, Δ*t*<sub>max</sub><sup>RAM</sup>, is imposed at each ingress node.

Fig. 4. Example of using TE-DBS to minimize contention at the core nodes.

As can be seen, *v*1 has assembled three data bursts (DB 1, DB 2, and DB 3), which overlap in time, and *v*2 has assembled two data bursts (DB 4 and DB 5), which also overlap in time. The first two bursts assembled by *v*1 are transmitted in wavelength channel λ1, whereas the third cannot be transmitted in this wavelength without infringing the maximum ingress burst delay and, therefore, has to be transmitted in λ2. The two bursts assembled by *v*2 are transmitted in the wavelength ranked with highest priority, λ3. These bursts traverse *v*3, where contention is avoided since the bursts arrive in different wavelengths. Meanwhile, the ingress node *v*4 has assembled two data bursts (DB 6 and DB 7) and transmits them in the wavelength ranked with highest priority, λ2. All seven data bursts traverse core node *v*5, where DB 7 must be converted to another wavelength in order to resolve contention.

The major observations provided by this example are as follows. As with BORA, the burst traffic is smoothed at the ingress nodes, reducing contention at the core nodes caused by an excessive number of data bursts directed to the same output fibre link. Moreover, since the burst traffic of routing paths π1, π2, and π3 is mostly carried in different wavelengths, contention for the same wavelength channel is also reduced. As a result, the pairs of bursts serialized at the ingress nodes, DB 1 and DB 2 in routing path π1 and DB 4 and DB 5 in routing path π2, can be kept in the same wavelength channel until they reach node *v*6, mitigating the fragmentation of the capacity of wavelengths λ1 and λ3 in the fibre links traversed by routing paths π1 and π2. Since this is accomplished through minimizing the probability of wavelength contention, it can also relax the wavelength conversion capabilities of the core nodes without significantly degrading network performance.

The TE-DBS strategy requires the computation of one wavelength search ordering, {λ1(π*i*), λ2(π*i*), …, λ*W*(π*i*)}, for each routing path π*i*. The HMPI algorithm is used to optimize offline the wavelength search orderings. These orderings are stored at the ingress nodes and the control unit of these nodes uses them for serializing data bursts on the available wavelength channel ranked with the highest priority on the routing path the bursts will follow.
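
Under the description above, the ingress-side behaviour of TE-DBS can be sketched as follows. The per-path orderings would be produced offline by HMPI, which is not reproduced here; all names are illustrative.

```python
# Sketch of TE-DBS ingress scheduling (illustrative, not the authors' code):
# unlike BORA, each routing path has its own HMPI-ranked wavelength search
# ordering, and the ingress node delays a burst (up to max_delay) so that
# it is serialized on the highest-priority free wavelength of its path.

def tedbs_schedule(path, burst_start, burst_len, horizons, orderings, max_delay):
    """orderings[path] is the HMPI ranking, e.g. [2, 0, 1] means wavelength 2
    has the highest priority for this path. Returns (wavelength, start) or None.
    horizons[w] is when wavelength w becomes free; updated in place."""
    for w in orderings[path]:
        delay = max(0.0, horizons[w] - burst_start)
        if delay <= max_delay:
            # Schedule back-to-back with the previous burst on this channel,
            # leaving no void between serialized bursts of the same path.
            horizons[w] = burst_start + delay + burst_len
            return (w, burst_start + delay)
    return None  # blocked: no ranked wavelength usable within the delay budget
```

Two overlapping bursts of the same path are thus both carried on the path's top-ranked wavelength, the second delayed just enough to follow the first without a void.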

**4. Results and discussion**

This section presents a performance analysis of the TE-DBS framework for traffic engineering in the wavelength domain, described in Section 3, assuming the HMPI algorithm, detailed in Section 2, is employed offline to optimize the wavelength search ordering for each routing path in the network.

The results are obtained via network simulation using the event-driven network simulator described in (Pedro et al., 2006a). The network topology used in the performance study is a 10-node ring network. All of the network nodes have the functionalities of both edge and


core nodes and the resource reservation is made using the JET protocol. It is also assumed that all the wavelength channels in a fibre link have a capacity μ = 10 Gb/s, the time required to configure an optical space switch matrix is *t*g = 1.6 μs, each node can process the BHP of a data burst in *t*p = 1 μs and the offset time between BHP and data burst is given by *t*g + *hi·t*p, where *hi* is the number of hops of burst path π*i* ∈ Π. The switch matrix of each node is assumed to be strictly non-blocking. Unless stated otherwise, the simulation results were obtained assuming *W* = 32 wavelength channels per fibre link.

The traffic pattern used in the simulations is uniform, in the sense that a burst generated at an ingress node is randomly destined to one of the remaining nodes. Bursts are always routed via the shortest path. Both the data burst size and the burst interarrival time are negative-exponentially distributed. An average burst size of 100 kB is used, which results in an average burst duration of 80 μs. In the network simulations, the average offered traffic load is increased by reducing the average burst interarrival time. The average offered traffic load normalized to the network capacity is given by

$$\Gamma = \frac{\sum_{\pi_i \in \Pi} \gamma_i \cdot h_i^{\text{SP}}}{L \cdot W \cdot \mu},\tag{12}$$

where γ*i* is the average traffic load offered to routing path π*i* ∈ Π, *hi*<sup>SP</sup> is the number of links traversed between the edge nodes of π*i* (i.e., the number of hops of its shortest path), and *L* is the number of fibre links in the network.
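
Eq. (12) can be evaluated directly; the sketch below uses illustrative values, with the per-path loads γ*i* expressed in b/s.

```python
# Direct evaluation of Eq. (12): normalized offered traffic load.
# gamma[i] is the average load (b/s) offered to routing path i and
# hops_sp[i] the number of links of its shortest path; the network has
# L fibre links with W wavelength channels each, of capacity mu (b/s).

def normalized_load(gamma, hops_sp, L, W, mu):
    # Sum of per-path loads weighted by hop count, over total capacity.
    return sum(g * h for g, h in zip(gamma, hops_sp)) / (L * W * mu)
```

With the chapter's parameters (*W* = 32, μ = 10 Gb/s) and, say, four 2-hop paths of 10 Gb/s each over 5 links, Γ = 80e9 / (5 · 32 · 10e9) = 0.05.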

In OBS networks, the most relevant performance metric is the average burst blocking probability, which measures the average fraction of burst traffic that is discarded by the network. The network performance can also be evaluated via the average offered traffic load that results in an objective average burst blocking probability *B*obj. This metric is estimated by performing simulations with values of Γ spaced by 0.05, determining the load values between which the value with blocking probability *B*obj is located and then using linear interpolation (with a logarithmic scale for the average burst blocking probability). All of the results presented in this section were obtained by running 10 independent simulations to calculate the average value of the performance metric of interest, as well as a 95% confidence interval on this value. However, these confidence intervals were found to be so narrow that they have been omitted from the plots to improve readability.
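
The interpolation procedure just described can be sketched as follows; this is an illustrative reimplementation, assuming the blocking probability grows monotonically with load between consecutive samples.

```python
import math

# Estimate the offered load that yields an objective blocking probability
# b_obj from simulated (load, blocking) pairs, by linear interpolation with
# the blocking probability on a logarithmic scale, as described in the text.

def load_at_objective(samples, b_obj):
    """samples: list of (load, blocking) pairs sorted by increasing load."""
    for (g1, b1), (g2, b2) in zip(samples, samples[1:]):
        if min(b1, b2) <= b_obj <= max(b1, b2):
            # Interpolate linearly in log10(blocking) between the two loads.
            f = (math.log10(b_obj) - math.log10(b1)) / (math.log10(b2) - math.log10(b1))
            return g1 + f * (g2 - g1)
    return None  # objective blocking not bracketed by the simulated loads
```

For instance, with blocking 10<sup>-4</sup> at load 0.50 and 10<sup>-2</sup> at load 0.55, the load for *B*obj = 10<sup>-3</sup> is estimated as 0.525.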

The majority of OBS proposals assumes the utilization of full-range wavelength converters deployed in a dedicated configuration, that is, one full-range wavelength converter is used at each output port of the switch matrix, as illustrated in Fig. 5. Each full-range wavelength converter must be capable of converting any wavelength at its input to a fixed wavelength at its output and if a node has *M* output fibres, its total number of converters is *M·W*.

Fig. 6 plots the average burst blocking probability as a function of the maximum ingress burst delay for different values of the offered traffic load and considering both TE-DBS and the previously described BORA strategy. It also displays the blocking performance that corresponds to delaying bursts at the ingress nodes whenever a free wavelength channel is not immediately found. More precisely, the DBS strategy consists of delaying a data burst at its ingress node by the minimum amount of time, upper-bounded to the maximum ingress burst delay, such that one wavelength becomes available in the output fibre link.
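
The DBS rule just described reduces to a one-line computation; the sketch below is illustrative.

```python
# Sketch of plain DBS (illustrative): delay the burst by the minimum time,
# capped at max_delay, for which at least one wavelength of the output
# fibre link becomes free; no wavelength ranking or traffic shaping is used.

def dbs_delay(burst_start, horizons, max_delay):
    """horizons[w] is when wavelength w becomes free. Return the required
    ingress delay, or None if it would exceed max_delay (burst is blocked)."""
    delay = max(0.0, min(horizons) - burst_start)
    return delay if delay <= max_delay else None
```

Unlike BORA and TE-DBS, this rule reacts to contention instead of preventing it, which is why, as discussed below, it does not reduce losses at the core nodes.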


Fig. 5. OBS core node architecture with dedicated full-range wavelength converters.

Fig. 6. Network performance with dedicated full-range wavelength converters for different values of the average offered traffic load, Γ = 0.60, 0.70, and 0.80: average burst blocking probability as a function of the maximum ingress burst delay, in μs, for DBS, BORA, and TE-DBS (Pedro et al., 2009a).

The curves for DBS show that exploiting the electronic buffers at the ingress nodes only for contention resolution does not improve blocking performance. On the contrary, with both BORA and TE-DBS the average burst blocking probability is decreased as the maximum ingress burst delay is increased, confirming that these strategies proactively reduce the probability of contention by selectively delaying bursts at their ingress nodes.


The results also indicate that TE-DBS is substantially more efficient than BORA in exploiting larger maximum ingress burst delays to reduce the burst blocking probability. The proposed strategy outperforms BORA for the same maximum ingress burst delay or, alternatively, requires a smaller maximum ingress burst delay to attain the same blocking performance as BORA. In particular, the rate at which burst losses decrease with increasing maximum ingress burst delay is considerably larger with TE-DBS than with BORA. In addition, with TE-DBS the slope of the burst blocking probability curves is much steeper for smaller values of the average offered traffic load, a trend less pronounced with BORA.

Table 3 presents the average traffic load that can be offered to the network so as to support an objective average burst blocking probability, *B*obj, of 10<sup>-3</sup> and 10<sup>-4</sup>. The results include two values of the maximum ingress burst delay for BORA and TE-DBS, Δ*t*<sub>max</sub><sup>RAM</sup> = 200 μs and Δ*t*<sub>max</sub><sup>RAM</sup> = 400 μs, and the case of immediate burst scheduling at the ingress nodes, Δ*t*<sub>max</sub><sup>RAM</sup> = 0.


| *B*obj | Δ*t*<sub>max</sub><sup>RAM</sup> = 0 | BORA, 200 μs | TE-DBS, 200 μs | BORA, 400 μs | TE-DBS, 400 μs |
|---|---|---|---|---|---|
| 10<sup>-3</sup> | 0.522 | 0.654 | 0.723 | 0.689 | 0.782 |
| 10<sup>-4</sup> | 0.453 | 0.584 | 0.659 | 0.632 | 0.729 |

Table 3. Average offered traffic load for an objective average burst blocking probability of 10<sup>-3</sup> and 10<sup>-4</sup> (Pedro et al., 2009a).

The OBS network supports more offered traffic load for the same average burst blocking probability when using the TE-DBS and BORA strategies instead of employing immediate burst scheduling. In addition, the former strategy provides the largest improvements in supported offered traffic load. For instance, with *B*obj = 10<sup>-3</sup>, the network supports 32% more offered traffic load when using BORA with a maximum ingress burst delay of 400 μs instead of immediate burst scheduling, whereas the TE-DBS strategy provides a more pronounced improvement, enabling an increase of 50% in offered traffic load.

In order to provide evidence of the principles underlying contention minimization with BORA and TE-DBS, the first set of results differentiates the burst blocking probability at the ingress nodes (ingress bursts) and at the core nodes (transit bursts). Fig. 7 plots the average burst blocking probability, discriminated in terms of ingress bursts and transit bursts, as a function of the maximum ingress burst delay for Γ = 0.70.

The plot shows that without additional delays at the ingress nodes, the blocking probability of ingress bursts and of transit bursts are of the same order of magnitude. However, as the maximum ingress burst delay is increased, the blocking probability of ingress bursts is rapidly reduced, as a result of the enhanced ability of ingress nodes to buffer bursts during longer periods of time. This holds for the three channel scheduling algorithms. Therefore, the average burst blocking probability of transit bursts becomes the dominant source of blocking. Notably, using DBS does not reduce burst losses at the core nodes, rendering this strategy useless, whereas BORA and TE-DBS strategies exploit the selective ingress delay to reduce blocking of transit bursts. Moreover, TE-DBS is increasingly more effective than BORA in reducing these losses, which supports its superior performance displayed in Fig. 6.
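Discriminating the two loss components discussed above requires only simple bookkeeping in a simulator: count offered and dropped bursts separately by where the burst is dropped. The following toy helper illustrates the idea; the event format and names are illustrative, not taken from the cited simulator.

```python
# Toy bookkeeping to separate blocking of ingress bursts (dropped at their
# source node) from transit bursts (dropped at a core node), as in Fig. 7.
from collections import Counter

def blocking_stats(events):
    """events: iterable of (kind, dropped) with kind in {'ingress', 'transit'}.

    Returns the blocking probability per burst kind."""
    offered, dropped = Counter(), Counter()
    for kind, was_dropped in events:
        offered[kind] += 1
        if was_dropped:
            dropped[kind] += 1
    return {kind: dropped[kind] / offered[kind] for kind in offered}
```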



Fig. 7. Average burst blocking probability of ingress and transit bursts (Pedro et al., 2009a). [Plot: blocking probability of ingress and transit bursts versus maximum ingress burst delay [μs] for DBS, BORA, and TE-DBS.]

The major dissimilarity between the TE-DBS and BORA strategies is the order by which free wavelength channels are searched to schedule the data bursts assembled at the ingress nodes. Particularly, the TE-DBS strategy exploits the selective delaying of data bursts at the electronic buffers of these nodes not only to smooth the burst traffic entering the core network, similarly to BORA, but also to proactively reduce the unusable voids formed between consecutive data bursts scheduled in the same wavelength channel. As described in Section 3, complying with the latter objective demands enforcing that the serialized data bursts are kept in the same wavelength for as long as possible along their routing path, which means that contention for the same wavelength among bursts on overlapping paths must be minimized. Intuitively, the success of keeping the serialized data bursts in the same wavelength channel for as long as possible should be visible in the form of a reduced number of bursts experiencing wavelength conversion at the core nodes. In order to observe this effect, Fig. 8 presents the average wavelength conversion probability, defined as the fraction of transit data bursts that undergo wavelength conversion, as a function of the maximum ingress burst delay for different values of the average offered traffic load.
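The distinctive wavelength search order can be sketched as follows. This is an illustrative model only: the ranking is assumed to be a precomputed, per-path HMPI-style priority list, and the horizon-based channel bookkeeping is hypothetical. The key point is that a burst may be delayed so that it stays on the highest-ranked wavelength instead of spilling immediately onto a lower-ranked free one.

```python
# Illustrative sketch of the TE-DBS wavelength search: try wavelengths in
# HMPI priority order, accepting an ingress delay of up to max_delay to
# keep serialized bursts on the highest-ranked wavelength.

def te_dbs_schedule(horizons, ranking, t_arrival, duration, max_delay):
    """Return (wavelength, start time) or None if the burst is blocked."""
    for wl in ranking:                      # highest HMPI priority first
        start = max(t_arrival, horizons[wl])
        if start - t_arrival <= max_delay:  # acceptable ingress delay
            horizons[wl] = start + duration
            return (wl, start)
    return None                             # blocked on every wavelength
```

Note that with a large enough `max_delay` the burst waits for the top-ranked wavelength even when a lower-ranked one is idle, which is precisely what keeps bursts from overlapping paths isolated on different channels.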

The curves for TE-DBS exhibit a declining trend as the maximum ingress burst delay increases, with this behaviour being more pronounced for smaller average offered traffic load values. These observations confirm that the probability of the data bursts serialized at the ingress nodes being kept in the same wavelength channel, as they go through the core nodes, is higher for larger values of the maximum ingress burst delay and smaller values of offered traffic load. Conversely, with BORA the wavelength conversion probability remains insensitive to variations in both the maximum ingress burst delay and offered traffic load, corroborating the fact that it cannot reduce the utilization of wavelength conversion at the core nodes. The reduced wavelength contention characteristic of the TE-DBS strategy, which is absent in BORA, is critical to mitigate the fragmentation of the wavelength channels' capacity, resulting in the smaller transit burst losses reported with TE-DBS in Fig. 7 and ultimately explaining the enhanced blocking performance provided by this strategy.

Fig. 8. Average wavelength conversion probability (Pedro et al., 2009b).

Fig. 9 shows the blocking performance as a function of the maximum ingress burst delay for different numbers of wavelength channels and Γ = 0.80. The results indicate that the slope of the average burst blocking probability curves for TE-DBS increases with the number of wavelength channels, augmenting the performance gain of using this strategy instead of BORA. This behaviour is due to the fact that, when the number of wavelength channels per fibre link increases, the HMPI algorithm becomes more effective at determining appropriate wavelength search orderings, enhancing the isolation of serialized burst traffic from overlapping routing paths on different wavelength channels.

In principle, only a fraction of transit bursts experience wavelength contention and thus demand the use of a wavelength converter. Consequently, the deployment of a smaller number of converters, in a shared configuration, has been proposed in the literature. Converter sharing at the core nodes can be implemented on a per-link or per-node basis, depending on whether each converter can only be used by bursts directed to a specific output link or can be used by bursts directed to any output link of the node (Chai et al., 2002). The latter sharing strategy enables deploying a smaller number of converters. Fig. 10 exemplifies the architecture of a core node with *C* full-range wavelength converters shared per node, where *C* ≤ *M·W*. In this architecture, each wavelength converter must be capable of converting any wavelength channel at its input to any wavelength channel at its output, and the switch matrix has to be augmented with *C* input ports and *C* output ports.
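The contention-handling logic at such a core node can be sketched as a simple decision procedure. This is a hedged sketch under simplifying assumptions (one output link, full-range converters, no void filling); the function and parameter names are illustrative.

```python
# Sketch of per-node shared conversion: a transit burst first tries to stay
# on its incoming wavelength; otherwise it needs both a free wavelength on
# the output link and a free converter from the shared pool of C converters.

def forward_transit_burst(out_link_free, in_wavelength, converters_free):
    """Return (output wavelength, converters left) or None if the burst is dropped."""
    if out_link_free[in_wavelength]:
        return in_wavelength, converters_free      # wavelength continuity, no converter used
    if converters_free > 0:
        for wl, free in enumerate(out_link_free):
            if free:                               # full-range: convert to any free wavelength
                return wl, converters_free - 1
    return None                                    # contention unresolved: burst loss
```

The last branch makes explicit why, for small *C*, converter exhaustion rather than wavelength exhaustion can become the dominant cause of burst loss.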



Fig. 9. Network performance for different numbers of wavelength channels (Pedro et al., 2009b).

Fig. 10. OBS core node architecture with shared full-range wavelength converters.


The minimization of wavelength contention experienced by transit bursts is a key enabler for TE-DBS to improve the loss performance of OBS networks. Particularly, the simulation results presented in Fig. 8 confirm that the utilization of this strategy reduces the probability of wavelength conversion, and consequently the utilization of the wavelength converters, as the maximum ingress burst delay is increased. This attribute can extend the usefulness of TE-DBS to OBS networks with shared full-range wavelength converters because, in this network scenario, the lack of available converters at the core nodes can become the major cause of unresolved contention, especially for small values of *C*.

In order to illustrate the added value of the TE-DBS strategy in OBS networks whose core nodes have shared full-range wavelength converters, consider the 10-node ring network with *W* = 32. When using wavelength converters in a dedicated configuration, each node of this network needs *M·W* = 64 converters. Fig. 11 plots the average burst blocking probability as a function of the number of shared full-range wavelength converters per node, *C*, for different values of the average offered traffic load, using the BORA and TE-DBS strategies with Δ*t*<sub>max</sub><sup>RAM</sup> = 160 μs.

Fig. 11. Network performance with shared full-range wavelength converters for different values of the average offered traffic load (Pedro et al., 2009a).

The blocking performance curves clearly show that an OBS network using TE-DBS benefits not only from enhanced blocking performance, but also from enabling simplified core node architectures. More precisely, the burst loss curves indicate that for very small numbers of shared wavelength converters, TE-DBS yields a burst blocking probability that can be multiple orders of magnitude lower than that obtained with BORA. Furthermore, TE-DBS demands a much smaller number of shared wavelength converters to match the blocking performance of a network using core nodes with dedicated wavelength converters. Particularly, with TE-DBS around 16 shared converters per node are enough to match the loss performance obtained with 64 dedicated converters, whereas with BORA this number more than doubles, since around 36 shared converters are required. The larger savings in the number of wavelength converters enabled by TE-DBS also mean that the expansion of the switch matrix to accommodate the shared converters is smaller, leading to an even more cost-effective network solution.

**References**

IETF (2002). *RFC 3945: Generalized Multi-Protocol Label Switching (GMPLS) Architecture*, Internet Engineering Task Force, September 2002

ITU-T (2006). *Recommendation G.8080: Architecture for the Automatically Switched Optical Network (ASON)*, International Telecommunication Union – Telecommunication Standardization Sector, June 2006

Korotky, S. (2004). Network Global Expectation Model: A Statistical Formalism for Quickly Quantifying Network Needs and Costs. *IEEE/OSA Journal of Lightwave Technology*, Vol. 22, No. 3, (March 2004), pp. 703-722, ISSN 0733-8724

Li, J. & Qiao, C. (2004). Schedule Burst Proactively for Optical Burst Switched Networks. *Computer Networks*, Vol. 44, (2004), pp. 617-629, ISSN 1389-1286

Papadimitriou, G.; Papazoglou, C. & Pomportsis, A. (2003). Optical Switching: Switch Fabrics, Techniques, and Architectures. *IEEE/OSA Journal of Lightwave Technology*, Vol. 21, No. 2, (February 2003), pp. 384-405, ISSN 0733-8724

Pedro, J.; Castro, J.; Monteiro, P. & Pires, J. (2006a). On the Modelling and Performance Evaluation of Optical Burst-Switched Networks, *Proceedings of IEEE CAMAD 2006 11th International Workshop on Computer-Aided Modeling, Analysis and Design of Communication Links and Networks*, pp. 30-37, ISBN 0-7803-9536-0, Trento, Italy, June 8-9, 2006

Pedro, J.; Monteiro, P. & Pires, J. (2006b). Wavelength Contention Minimization Strategies for Optical-Burst Switched Networks, *Proceedings of IEEE GLOBECOM 2006 49th Global Telecommunications Conference*, paper OPNp1-5, ISBN 1-4244-0356-1, San Francisco, USA, November 27-December 1, 2006

Pedro, J.; Monteiro, P. & Pires, J. (2009a). On the Benefits of Selectively Delaying Bursts at the Ingress Edge Nodes of an OBS Network, *Proceedings of IFIP ONDM 2009 13th Conference on Optical Network Design and Modelling*, ISBN 978-1-4244-4187-7, Braunschweig, Germany, February 18-20, 2009

Pedro, J.; Monteiro, P. & Pires, J. (2009b). Contention Minimization in Optical Burst-Switched Networks Combining Traffic Engineering in the Wavelength Domain and Delayed Ingress Burst Scheduling. *IET Communications*, Vol. 3, No. 3, (March 2009), pp. 372-380, ISSN 1751-8628

Pedro, J.; Monteiro, P. & Pires, J. (2009c). Traffic Engineering in the Wavelength Domain for Optical Burst-Switched Networks. *IEEE/OSA Journal of Lightwave Technology*, Vol. 27, No. 15, (August 2009), pp. 3075-3091, ISSN 0733-8724

Poustie, A. (2005). Semiconductor Devices for All-Optical Signal Processing, *Proceedings of ECOC 2005 31st European Conference on Optical Communication*, Vol. 3, pp. 475-478, ISBN 0-86341-543-1, Glasgow, Scotland, September 25-29, 2005

Qiao, C. & Yoo, M. (1999). Optical Burst Switching (OBS) – A New Paradigm for an Optical Internet. *Journal of High Speed Networks*, Vol. 8, No. 1, (January 1999), pp. 69-84, ISSN 0926-6801

Sahara, A.; Shimano, K.; Noguchi, K.; Koga, M. & Takigawa, Y. (2003). Demonstration of Optical Burst Data Switching using Photonic MPLS Routers operated by GMPLS Signalling, *Proceedings of OFC 2003 Optical Fiber Communications Conference*, Vol. 1, pp. 220-222, ISBN 1-55752-746-6, Atlanta, USA, March 23-28, 2003

Sun, Y.; Hashiguchi, T.; Minh, V.; Wang, X.; Morikawa, H. & Aoyama, T. (2005). Design and Implementation of an Optical Burst-Switched Network Testbed. *IEEE*
