Tomcat Clustering Analysis

How Clustering Works

A Tomcat cluster has three main parts:

  1. Load balancer
  2. Tomcat servers (workers)
  3. Session replication

Scalability

Scalability and clustering are not the same thing; rather, clustering is one method of achieving scalability. Scalability refers to a server's ability to process many concurrent requests efficiently, with the goal that the time required to handle an ever-increasing number of simultaneous requests stays as close as possible to the time it took to process the first one.

Load Balancing

Load balancing is a group of technologies aimed at distributing request load across a group of servers. Load balancing is a key component of a clustering solution, as it provides several services required to achieve the other goals of clustering.

To enable scalability, a load balancing implementation attempts to route requests to the server with the least amount of current load, for faster processing. To enable high-availability, which we will define next, a load balancing implementation must keep track of the status of its various servers, so that requests are never dropped.

Many load balancing solutions also take advantage of the fact that a server is now fronting the actual request processing software to provide an additional layer of security, ignoring and dropping malicious traffic before it can even reach the application servers.

Finally, the load balancing implementation makes the whole clustering structure functional by encapsulating the cluster within a virtual container, with one point of access. This means that the client attempting to access the web application served by the cluster never needs to know whether or not a cluster is being used.

High Availability

High availability is a group of interrelated technologies and strategies with the aim of increasing the amount of time that the network is available to process requests. The most common of these techniques are failover, state replication, and load balancing.

Apache HTTPD and mod_jk/mod_proxy

The most popular server/software set-up for Tomcat clustering is to front a cluster of Tomcat servers with an Apache Web Server running either the mod_jk or mod_proxy connector module. These modules, which are often used simply to provide basic interoperability between Apache Web Server and Tomcat, each include built-in load balancing capabilities.

At one time, it was common practice to favor mod_jk over mod_proxy; this was because mod_jk was developed as part of the JK project, a Tomcat subproject aimed at improving connectivity between Tomcat and various web servers, and had support for AJP, an efficient protocol developed specifically for meta-data-rich communication between Apache Web Server and other types of servers.

The speed of AJP made this protocol preferable, and was a strong argument in mod_jk's favor. However, when mod_proxy was refactored in Apache Web Server 2.2, it was vastly improved and gained new sub-modules offering AJP support and load balancing features. Thus, the key differentiators between the two modules are now the maturity of their load balancing features and the ease with which they can be configured.

As far as ease of configuration is concerned, mod_proxy is the clear winner. The module was developed alongside the Apache Web Server, and its configuration is very straightforward, only requiring a set of changes within Apache Web Server’s main configuration file, httpd.conf.
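
For illustration, a minimal httpd.conf sketch of this approach might look like the following; the balancer name, hostnames, ports, and application path are placeholders, and the exact LoadModule lines depend on your httpd version and build.

    # httpd.conf -- minimal mod_proxy load balancing sketch (names and hosts are placeholders)
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
    LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
    # (on httpd 2.4, also load an lbmethod module such as mod_lbmethod_byrequests)

    # Define the pool of Tomcat workers, reached over AJP
    <Proxy balancer://tomcatcluster>
        BalancerMember ajp://tomcat1.example.com:8009 route=tomcat1
        BalancerMember ajp://tomcat2.example.com:8009 route=tomcat2
    </Proxy>

    # Forward the application to the balancer and keep sessions sticky
    ProxyPass /myapp balancer://tomcatcluster/myapp stickysession=JSESSIONID|jsessionid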

By comparison, mod_jk must be configured within httpd.conf and then pointed at an additional file called workers.properties, which defines all the available Tomcat servers as “workers”, as well as a number of “virtual workers”, processes that are responsible for the actual work of load balancing. This is often confusing, and can be a real source of frustration. On the other hand, mod_jk, being the more mature load balancing implementation, offers a much finer-grained level of control, as the sketch below shows.
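
As a rough sketch of what this looks like in practice, the fragments below show the httpd.conf directives and a workers.properties file for a two-node set-up; all worker names, hostnames, ports, and paths are placeholders.

    # httpd.conf -- load mod_jk and point it at the worker definitions
    LoadModule jk_module modules/mod_jk.so
    JkWorkersFile conf/workers.properties
    JkLogFile logs/mod_jk.log
    JkMount /myapp/* loadbalancer

    # workers.properties -- two real workers plus one virtual load-balancer worker
    worker.list=loadbalancer

    worker.tomcat1.type=ajp13
    worker.tomcat1.host=tomcat1.example.com
    worker.tomcat1.port=8009
    worker.tomcat1.lbfactor=1

    worker.tomcat2.type=ajp13
    worker.tomcat2.host=tomcat2.example.com
    worker.tomcat2.port=8009
    worker.tomcat2.lbfactor=2          # lbfactor weights the load sent to each worker

    worker.loadbalancer.type=lb
    worker.loadbalancer.balance_workers=tomcat1,tomcat2
    worker.loadbalancer.sticky_session=1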

In terms of sophistication, mod_jk wins hands down, which makes it our recommended choice if you want real control over your load balancing. Both mod_proxy and mod_jk include a web GUI, but mod_jk's is much richer, offering a full page of information about each node, as well as a GUI tool for changing load balancing properties on the fly, meaning that servers can be taken offline and brought back online one by one for updates without interruption of service.

The load balancing algorithms used by mod_jk are also more robust than mod_proxy's, distributing load based on the number of HTTP sessions per server and each server's “lbfactor” (shown in the workers.properties sketch above), a user-defined value used to incorporate the relative performance of different servers into the equation.

Session Persistence

The final piece of the clustering puzzle is session persistence – making sure that the information from an individual user’s session is always available to them, even if the server currently hosting their session goes down, so that application state is maintained. There are a number of ways that session persistence can be factored into a cluster.

First of all, factoring the need to run in a clustered environment into the initial spec of an application can influence design decisions to a certain extent. Simple state information that poses no security risk, such as the user's current tab, can be preserved on the client side via hidden fields, cookies, and URL rewriting. These methods can be used effectively for a variety of data types, but are unsuitable for complex or security-sensitive state.

Secondly, the majority of modern load balancers, including mod_proxy and mod_jk, support a feature called “session stickiness”, which means that the load balancer remembers which cluster worker is storing the session information for each client and proxies all subsequent requests from that client to the same worker. While this ensures that state is maintained as long as all servers are working properly, if a server goes down for any reason the load balancer will begin directing requests to the remaining active servers, but any state data stored on the failed server will be lost.
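
Sticky sessions rely on each worker being identifiable from the session ID. In Tomcat this is done with the jvmRoute attribute in conf/server.xml; below is a minimal sketch for the first node, assuming the route name matches the worker or route name configured in the load balancer.

    <!-- conf/server.xml on node "tomcat1"; the jvmRoute value is appended to the
         session ID so the load balancer can route the client back to this node -->
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">
        <Host name="localhost" appBase="webapps"/>
    </Engine>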

Thus, a method of replicating the server-side session data must be provided to ensure that the cluster will truly never lose a transaction. There are several methods of doing this, which can be combined to create the best performing solution.

The simplest method of replicating session data within a cluster is to copy the data to at least one other worker. This “buddy system” method, in combination with some kind of health check or heartbeat function, allows the load balancer to detect when a server goes offline, and begin passing requests to its appropriate buddy worker.
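
Tomcat ships a session manager that implements a similar replicate-to-one-buddy model, the BackupManager. As a minimal sketch, it is declared inside the <Cluster> element of conf/server.xml:

    <!-- Inside <Cluster> in conf/server.xml: each session is replicated to
         exactly one backup member instead of to the whole cluster -->
    <Manager className="org.apache.catalina.ha.session.BackupManager"/>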

Ideally, the client should perceive the service as uninterrupted. However, this method can introduce overhead under high loads – the load balancer must preserve increasing amounts of session-routing information, while the Tomcat workers take on database-like load in addition to their dynamic content processing load, which can create a bottleneck.

The load balancer bottleneck can be eliminated by using a multicast replication model, where each node of the cluster replicates its session data to every other node; in Tomcat, this all-to-all replication is handled by the DeltaManager component. For large environments, the network overhead this generates can mean that the overall cluster must be split into several smaller clusters. However, small cluster set-ups without significant load should not experience these problems.
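
For reference, an all-to-all replication set-up can be as simple as adding the cluster element to conf/server.xml; by default, SimpleTcpCluster uses multicast membership discovery and the DeltaManager. A minimal sketch:

    <!-- conf/server.xml, inside <Engine> or <Host>: defaults to multicast
         membership discovery and DeltaManager all-to-all session replication -->
    <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>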

Other methods of achieving session persistence are to store the session information in a shared file system or JDBC-compliant database, or to use a distributed caching system such as Terracotta. All of these methods carry an additional performance cost, as they require the extra step of writing session data to and retrieving it from an external store. However, as the overall goal of clustering is to improve availability, performance, and failover protection, this performance hit must be balanced against the other factors.
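
As an illustration of the database-backed option, Tomcat's PersistentManager can be paired with a JDBCStore in a context configuration. The driver, connection URL, and table/column names below are placeholders for whatever schema you create.

    <!-- META-INF/context.xml (or the Host's context file): persist sessions
         to a JDBC database; schema details below are placeholders -->
    <Manager className="org.apache.catalina.session.PersistentManager"
             maxIdleBackup="10">
        <Store className="org.apache.catalina.session.JDBCStore"
               driverName="com.mysql.jdbc.Driver"
               connectionURL="jdbc:mysql://dbhost:3306/sessions?user=app&amp;password=secret"
               sessionTable="tomcat_sessions"
               sessionIdCol="session_id"
               sessionDataCol="session_data"
               sessionValidCol="valid_session"
               sessionMaxInactiveCol="max_inactive"
               sessionLastAccessedCol="last_access"
               sessionAppCol="app_name"/>
    </Manager>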
