Your Guide To Load Balancing

Load Balancers and Application Delivery Controllers are essential network solutions that mitigate the threat of downtime, providing scalability and optimised application performance. The proliferation of application delivery solutions available on the market and ever shifting demands on modern infrastructure can make selecting the right load balancer for your environment a complex process. This site explores factors to consider when evaluating Load Balancers, guiding you through discussions on platform availability, technology feature-sets and budgetary criteria.

Load Balancer FAQ

What is a Hardware Load Balancer?

A hardware load balancer is a dedicated network appliance that distributes application traffic across multiple servers.

Why use a Hardware Load Balancer?

Load balancers make networks more efficient and service delivery more reliable, enabling users to:

  • Provide high availability and resilience for applications
  • Increase system capacity
  • Scale operations to accommodate more users, new applications or peaks in traffic
  • Improve application performance

Based on the benefits that they deliver, load balancers are increasingly deployed as standard practice for business-critical web applications.


How Does Load Balancing Work?

Load balancers sit in front of web servers, intercepting application requests and directing traffic among healthy servers to ensure always-available application service.
Load Balancers also reduce load on backend servers to improve performance and user experience.
Network traffic is sent to a shared IP, often called a virtual IP (VIP) or listening IP. This VIP is an address attached to the load balancer. Once the load balancer receives a request on this VIP, it decides which server to forward it to. This decision is normally determined by a “load balancing method or strategy”, a “server health check” or, in the case of a next generation application delivery controller device, a rule set.

What are Load Balancing Methods and Strategies?

A load balancing method or strategy instructs the load balancer on where to send each request. Many load balancing strategies are available depending on the specific solution; a few common ones are listed below:
Round Robin: The simplest load balancing method, where each server takes a turn to receive a request.

Least Number of Connections: The load balancer keeps track of the number of connections each server has and sends the next request to the server with the fewest connections.

Weighted: Each server is allocated a percentage of capacity, since one server could be twice as powerful as another. Weighted methods are useful when the load balancer does not know the actual performance of each server.

Fastest Response Time: This method is normally only available on more advanced products. The request will be sent to the fastest responding server.
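As an illustration, the Round Robin and Least Number of Connections strategies above can be sketched in a few lines of Python (the three-server pool and its names are hypothetical, for illustration only):

```python
import itertools

# Hypothetical backend pool, for illustration only
servers = ["web1", "web2", "web3"]

# Round Robin: each server takes a turn to receive a request
rr = itertools.cycle(servers)

def round_robin():
    return next(rr)

# Least Number of Connections: send the next request to the server
# currently holding the fewest open connections
connections = {s: 0 for s in servers}

def least_connections():
    server = min(connections, key=connections.get)
    connections[server] += 1  # the new request opens a connection here
    return server
```

A real load balancer would also decrement the connection count when a client disconnects; this sketch only shows the selection step.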


What is Server Health Checking?

Load Balancers run server health checks against web servers to determine if they are live, healthy and providing service.

Server health monitoring is the key to delivering resilient applications, and depending on the solution chosen, some load balancers are able to use Layer 7 health checks which offer greater sophistication in their problem detection.

Below is a summary of the different methods of server health checks.

Ping: The simplest method of server health check, but not very reliable: the load balancer can report that the server is up whilst the web service itself is down.

TCP connect: A more sophisticated health check that verifies a specific service is up and accepting connections, for example a web service listening on port 80.

Simple HTTP GET: This method of server health check makes an HTTP GET request to the web server and typically checks the response status, such as a 200 OK.

Full HTTP GET: This server health check makes an HTTP GET and checks the actual content body for a correct response. This feature is only available on some of the more advanced load balancing solutions, but it is the superior method for web applications because it verifies that the actual application is available.

Customisable Server Health Checks: Some load balancing solutions can accommodate custom monitors for TCP/IP applications, giving administrators better control over their services.
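The TCP connect and HTTP GET checks described above could be sketched roughly as follows in Python; the hosts, ports and timeouts here are placeholders rather than the behaviour of any particular product:

```python
import socket
import urllib.request

def tcp_connect_check(host, port=80, timeout=2):
    """TCP connect check: can we open a connection to the service port?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_get_check(url, expected_body=None, timeout=2):
    """Simple HTTP GET check (expects 200 OK); if expected_body is given,
    this becomes a full HTTP GET check on the response content."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            if resp.status != 200:
                return False
            if expected_body is not None:
                return expected_body in resp.read().decode(errors="replace")
            return True
    except OSError:
        return False
```

A load balancer would run such checks on a schedule and remove any server that fails from the pool until it recovers.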


What is Persistence?

Persistence is a feature that is required by many web applications and websites. Once a user has interacted with a particular server, all subsequent requests are sent to the same server thus 'persisting' to that particular server. Session persistence ensures a continuity of service and seamless end user experience and is often a requirement of ecommerce applications.
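A minimal sketch of the idea, assuming a hypothetical session identifier (such as a cookie value) is used as the key; real products offer several persistence mechanisms, including cookie insertion and source-IP affinity:

```python
import hashlib

servers = ["web1", "web2", "web3"]   # hypothetical backend pool

session_table = {}  # session id -> the server that session is 'stuck' to

def pick_server(session_id):
    """Return the same backend for every request in a given session."""
    if session_id not in session_table:
        # First request of the session: hash the id across the pool
        digest = hashlib.sha256(session_id.encode()).hexdigest()
        session_table[session_id] = servers[int(digest, 16) % len(servers)]
    return session_table[session_id]
```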

What is SSL Acceleration and SSL Offload?

SSL acceleration or SSL offload is the ability of a load balancer to establish a secure tunnel with the client, thus in most cases removing the requirement for the web server to perform SSL itself.

SSL Offload is the termination/decryption of SSL requests on the Load Balancer rather than on the application server.

In order for the load balancer to perform this function it must be configured with an SSL certificate, either self-signed or signed by a certificate authority. By default, SSL traffic is initiated to the server on port 443; as such, the load balancer should be configured either to terminate it or simply to pass it straight through to the web server.

Some load balancers offer the facility to import and export certificates from common web servers such as MS IIS and Apache.
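A rough sketch of the termination side of this, using Python's standard `ssl` module; the certificate and key file names are hypothetical placeholders for whatever certificate the load balancer has been configured with:

```python
import socket
import ssl

# Hypothetical certificate paths: in practice, a self-signed pair
# or one issued by a certificate authority
CERT_FILE = "lb_cert.pem"
KEY_FILE = "lb_key.pem"

def make_tls_frontend(port=443):
    """Terminate SSL/TLS on the load balancer: clients speak HTTPS to
    this socket, while traffic to the backend servers can stay plain HTTP."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=CERT_FILE, keyfile=KEY_FILE)
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("", port))
    listener.listen()
    return ctx.wrap_socket(listener, server_side=True)
```

Connections accepted from this socket are already decrypted, so the forwarding logic behind it only ever sees plain HTTP.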


Why Offload SSL?

SSL (Secure Sockets Layer) processing can be very CPU-intensive, reducing the speed and capacity of the web server. Offloading SSL termination to a load balancer allows you to centrally manage your certificates and frees up your servers to focus on delivering the application rather than decrypting SSL.

In summary, the cost saving and management simplifications of having all the certificates stored in one place make SSL Offload a key feature of load balancing.


What is a Next Generation Load Balancer or Application Delivery Controller (ADC)?

Application Delivery Controller (ADC) is a term heavily used by technology analyst firm Gartner to define an advanced, or next generation, load balancer. Typically ADCs offer advanced features and functionality to increase capacity and improve application performance.
Application Delivery Controllers tend to offer a richer feature set including but not limited to advanced load balancing, content caching, content compression, connection management, connection pooling, SSL, advanced traffic routing, highly configurable server health monitoring and content manipulation.

These devices tend to operate at Layer 7 of the OSI model.


What is Web Acceleration?

Advanced Load Balancers deploy a variety of features and functionality to accelerate application content, reduce backend server load and optimise overall performance for a positive end user experience.

These application acceleration features include but are not limited to HTTP Compression, Connection Management / Pooling, Content Caching and SSL Offload.

HTTP Compression
Load Balancers are able to deploy HTTP Compression on web content during its journey from server to client. HTTP Compression can dramatically reduce the size of a typical web page by 70-90%.
Because decompression is handled by the client-side browser (such as Internet Explorer, Firefox or Chrome), no additional plug-in is required, meaning deployment is transparent to users. As well as enhancing delivery, compression reduces bandwidth costs and alleviates burdens on networking infrastructure.
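A quick way to see the effect, using Python's standard `gzip` module on a deliberately repetitive sample page (real pages with repetitive markup compress similarly well):

```python
import gzip

# A sample HTML payload; repeated markup is typical of real web pages
html = b"<html><body>" + b"<p>hello world</p>" * 500 + b"</body></html>"

compressed = gzip.compress(html)
saving = 1 - len(compressed) / len(html)
print(f"{len(html)} bytes -> {len(compressed)} bytes ({saving:.0%} smaller)")
```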

TCP Connection Management
Load Balancers are able to optimise TCP/IP performance by opening, maintaining and reusing multiple connections to web servers. This reduces unnecessary TCP connections and the effect sometimes referred to as TCP slow start.

Content Caching
Content Caching is an advanced load balancer feature whereby frequently requested static and dynamic HTTP objects are stored and served from the load balancer as opposed to the web server. This performance enhancing feature means that only unique requests are sent to the web server, thus reducing the number of object requests and connections needed.
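The "only unique requests reach the web server" behaviour can be sketched as a simple lookup table; this is a toy model that ignores expiry and cache-control headers:

```python
cache = {}  # HTTP path -> cached response body, held on the load balancer

def serve(path, origin_fetch):
    """Serve repeat requests from the cache; only unique requests
    reach the backend web server via origin_fetch."""
    if path not in cache:
        cache[path] = origin_fetch(path)   # first (unique) request hits the backend
    return cache[path]
```

With this in place, a thousand requests for the same object cost the web server exactly one fetch.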


Layer 4 vs Layer 7 Load Balancing

The terms Layer 4 and Layer 7 refer to the protocol layers at which a load balancer operates within the OSI networking model.

Layer 4 load balancers operate at the transport layer, whilst Layer 7 load balancers operate at the application layer, affording them greater visibility into the application traffic they are processing. This enables advanced functionality and optimisation features including intelligent traffic management, content caching, security features and compression.

Layer 4 load balancers are still commonly available, although their market share has reduced significantly as Layer 7 advanced load balancers and ADCs become more powerful and cost effective.

Some Layer 7 devices can handle throughput of over 3.2 Gbps. This, together with n=n scalability, a feature often found on Layer 7 devices, means it is very unlikely there are any websites in the world that cannot take advantage of this technology.