**2. Challenges of load balancing**

The load balancer implements several load balancing algorithms to determine a suitable resource for each request. However, it faces several issues while distributing the load across the available resources. The major issues, with their respective solutions, are presented in the following subsections.

**2.1 Increased web traffic**

Over the last few years, web traffic has increased rapidly due to the large number of registered websites and online transactions. As the number of requests grows, server responses slow down because of the limited number of open connections. Incoming requests accumulate against the overall processing capacity of the resources, and when they exceed that capacity, the resource crashes or a fault occurs. Several authors have analyzed this issue and suggested solutions. The first is server upgrading, in which requests are handled by a more powerful server for a while; however, scalability, interruption, and maintenance issues are associated with this solution. Another is outsourcing, in which requests are sent to another suitable server for a speedy response; this approach is costly and offers limited control over QoS. Chen et al. [8] observed that both the web page size and the number of users affect the system response time.
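The saturation behavior described above can be illustrated with a toy, discrete-time simulation of a single finite-buffer server; the arrival and service probabilities below are illustrative assumptions, not figures from the chapter:

```python
import random

def simulate(arrival_rate, capacity, service_rate, steps=10_000, seed=42):
    """Toy discrete-time model of one server with a finite buffer.

    Each step, a request arrives with probability `arrival_rate` and the
    server completes one queued request with probability `service_rate`.
    Requests that find the buffer full are rejected, which corresponds to
    the fault regime the text describes.
    """
    rng = random.Random(seed)
    queue = 0
    accepted = rejected = 0
    for _ in range(steps):
        if rng.random() < arrival_rate:
            if queue < capacity:
                queue += 1
                accepted += 1
            else:
                rejected += 1
        if queue > 0 and rng.random() < service_rate:
            queue -= 1
    return rejected / max(accepted + rejected, 1)

# Rejections stay near zero while arrivals are below the service capacity,
# then climb sharply once traffic exceeds it.
light = simulate(arrival_rate=0.3, capacity=50, service_rate=0.5)
heavy = simulate(arrival_rate=0.9, capacity=50, service_rate=0.5)
print(light, heavy)
```

With arrivals well below the service rate the rejection fraction is essentially zero; once arrivals exceed it, roughly the excess fraction of requests is turned away, regardless of how large the buffer is.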


*Analysis of Effective Load Balancing Techniques in Distributed Environment*

The most favorable solution is to use multiple servers with an efficient load balancer that distributes the load among them. The performance of these servers is analyzed through queueing (waiting line) models. Broadly, two types of load balancing models are used to analyze web server performance. Each approach has its benefits, applications, and limitations.

*2.1.1 Centralized queueing model for load balancing*

In this mechanism, homogeneous servers with finite buffer sizes are used, as shown in **Figure 7**. The load balancer receives each request from the user and redirects it among the servers using one of these routing policies:

• Random policy

• Shortest queue policy

• RR policy

**Figure 7.**

*Centralized queueing model.*

Zhang and Fan [9] compared these policies in terms of rejection rate and system response time. They found that all three algorithms perform well when traffic is light, but when web traffic becomes heavy, the shortest queue policy performs better than the random and RR policies, and the number of rejections under the RR and random policies increases as the traffic grows. Singh and Kumar [10] presented a queueing algorithm for measuring the overloading and serving capacity of a server in a distributed load balancing environment; it performs better in both homogeneous and heterogeneous environments than the remaining capacity (RC) and server content based queue (QSC) algorithms.

*2.1.2 Distributed queueing model for load balancing*

These mechanisms also address the network latency issue, which helps avoid network congestion. The queueing models follow certain arrival and distribution rules to distribute the requests. Zhang and Fan [9] suggested that distributed queueing models perform well in heavy traffic conditions: routing decisions are taken on the basis of the queue length differences of the web servers, and the collected information is used in traffic distribution to improve web server performance. Singh and Kumar [11] suggested that task completion time directly affects the queue length of a web server. They presented a model based on the ratio factor of the task's average completion time; compared with the model presented by Birdwell et al. [12], it performs better on two performance metrics: average queue length and average waiting time of web servers.

Li et al. [13] analyzed network delay and presented a delay-controlled load balancing approach for improving network performance. However, the approach has limited applicability and is suitable only for stable path states.
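As a rough illustration of the three routing policies of the centralized model, the sketch below routes a synthetic request stream to finite-buffer servers under each policy; all parameters (four servers, buffer of ten, Bernoulli service) are assumptions chosen only to make the shortest-queue advantage reported in [9] visible, not values from the cited studies:

```python
import random

def run(policy, n_servers=4, buffer=10, arrivals_per_step=2,
        service_prob=0.6, steps=5_000, seed=1):
    """Return the rejection rate of a routing policy over a synthetic
    stream of requests sent to homogeneous finite-buffer servers."""
    rng = random.Random(seed)
    queues = [0] * n_servers
    rr_next = 0
    total = rejected = 0
    for _ in range(steps):
        for _ in range(arrivals_per_step):
            total += 1
            if policy == "random":
                target = rng.randrange(n_servers)
            elif policy == "shortest_queue":
                target = min(range(n_servers), key=lambda i: queues[i])
            else:  # round robin (RR)
                target = rr_next
                rr_next = (rr_next + 1) % n_servers
            if queues[target] < buffer:
                queues[target] += 1
            else:
                rejected += 1          # buffer full: request is rejected
        for i in range(n_servers):     # each server finishes at most one
            if queues[i] > 0 and rng.random() < service_prob:
                queues[i] -= 1
    return rejected / total

rates = {p: run(p) for p in ("random", "shortest_queue", "rr")}
print(rates)
```

Under light traffic all three policies give near-zero rejections; as traffic approaches capacity, the random policy overflows individual buffers first, while the shortest queue policy keeps rejections lowest, matching the trend reported in [9].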

*DOI: http://dx.doi.org/10.5772/intechopen.91460*



*Linked Open Data - Applications, Trends and Future Developments*

round robin (WRR)-LBA, and random allocation algorithm [7]. However, these algorithms have limited scope due to the dynamic nature of the distributed environment.

**1.2 Dynamic load balancing**

Dynamic load balancing (DLB) differs from the SLB algorithms in that clients' requests are distributed among the available resources at run time. The LB assigns each request based on dynamic information collected from all the resources, as shown in **Figure 6**.

**Figure 6.**

*Dynamic load balancing.*

DLB algorithms can be classified into two categories: distributed and non-distributed. In distributed DLB, all computing resources are equally responsible for balancing the load; the responsibility of load balancing is shared among all the resources. In non-distributed algorithms, by contrast, each resource performs independently to accomplish the common goal. Distributed DLB algorithms generally generate more message overhead than non-distributed DLB because of their interaction with all the resources, but they perform better under fault conditions, since a failure degrades only a section of the system rather than the global system performance. Non-distributed algorithms are further classified into two categories: centralized and semi-centralized. In a centralized algorithm, a central server is responsible for executing the load balancing algorithm; in a semi-centralized one, servers are arranged in clusters, and load balancing within each cluster is managed centrally.
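A minimal sketch of the centralized dynamic approach, in which the balancer collects run-time load information from every resource before each assignment; the `Resource` class and its active-task load metric are hypothetical simplifications, not an interface from the cited works:

```python
class Resource:
    """A compute resource that reports its current load on request."""
    def __init__(self, name):
        self.name = name
        self.active_tasks = 0

    def report_load(self):
        # In a real system this could be CPU utilization, queue length, etc.
        return self.active_tasks

    def assign(self, task):
        self.active_tasks += 1

class DynamicLoadBalancer:
    """Centralized DLB: one balancer gathers load information from every
    resource at assignment time and picks the least-loaded one."""
    def __init__(self, resources):
        self.resources = resources

    def dispatch(self, task):
        # Dynamic information is collected per request at run time,
        # unlike static LB where the schedule is fixed in advance.
        target = min(self.resources, key=lambda r: r.report_load())
        target.assign(task)
        return target.name

resources = [Resource(f"server-{i}") for i in range(3)]
lb = DynamicLoadBalancer(resources)
# Pre-load server-0 so the balancer's run-time information matters.
resources[0].active_tasks = 5
assignments = [lb.dispatch(f"task-{i}") for i in range(6)]
print(assignments)
```

Because the balancer re-reads each resource's load on every dispatch, new work flows around the pre-loaded server; a static scheduler with a fixed rotation would keep sending it requests regardless.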
