spring mvc root url map with tomcat

We typically use

 @RequestMapping("/")

to map the root URL. Sometimes, though, this mapping does not work with Tomcat.

One reason is that Tomcat was forwarding "/" to "/index.jsp" because I had the file index.jsp in my WebContent directory. When I removed that file, the request was no longer forwarded.

So the important thing here is: don't have any files sitting in WebContent that would be considered default pages (index.html, index.jsp, default.html, etc.).
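
For reference, here is a minimal sketch of the kind of controller this is about (the class name and the "home" view name are made up; only the "/" mapping matters):

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class HomeController {

    // Handles requests to the root URL once no welcome file (index.jsp, etc.) shadows it.
    @RequestMapping("/")
    public String home() {
        // "home" is a placeholder view name, resolved by whatever ViewResolver you have configured.
        return "home";
    }
}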

From here

get tomcat hot swap to work with intellij

Recompiling a Java web project usually takes a lot of time. The IDE typically recompiles the entire project, packages it into a war, redeploys it on your application server (e.g., Tomcat) and then lets the server reinitialize itself. That takes a while, and it only gets worse as your project grows.

Uh, Hot Swap?

Hot swapping is a more efficient way of doing this. It works by replacing only the classes that you've changed instead of recompiling the entire project. The changed classes are then replaced (or rather, hot swapped) in the running application server, and everything else keeps running normally, without a restart, since none of the other components had to be reloaded.
Now, hot swapping isn't exactly new. Java actually supports it out of the box via the HotSpot VM, but that solution isn't usable enough: its deal-breaking flaws prevent you from adding new methods or new classes. For the most part, that leaves you with modifying existing method bodies in existing classes, which in most cases just isn't enough.
DCEVM fixes that problem. It's essentially a Java VM modification that allows the redefinition of loaded classes at runtime. This lets you make the kinds of changes mentioned above that couldn't be done before.
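
To make the difference concrete, here is a hypothetical class edited during a debug session (names are made up). With the stock HotSpot hot swap, only changes inside the body of an existing method such as greet() are accepted; with DCEVM, adding a brand-new method (or a new class) is hot swapped too, without restarting the server.

public class GreetingService {

    // Default hot swap: only edits inside this existing method body would be reloaded.
    public String greet(String name) {
        return "Hello, " + name;
    }

    // With DCEVM: adding a completely new method like this one during a debug session
    // is also picked up, which the unmodified HotSpot VM would reject.
    public String greetFormally(String name) {
        return "Good day, " + name + ".";
    }
}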

Getting Hot Swap to Work

This guide demonstrates how to configure Hot Swapping for IntelliJ using DCEVM.

While in a fully-functional IntelliJ Java web project*

  1. Get the updated, forked version of DCEVM here: http://dcevm.github.io/
    • 64-bit is supported!
    • Works in Linux with matching build numbers
    • While officially unsupported, it currently works on Oracle Java JDK versions as well.
  2. Run the package, and choose Install DCEVM as altjvm
  3. In IntelliJ, under your project’s build configurations (Run -> Edit Configurations), make sure that your project uses exploded war artifacts, instead of the normal war packages.
    • The use of exploded war packages allows IntelliJ to update both classes AND resources whenever they are updated. That means this change will also help you in swiftly reloading JSP pages that have been updated.
  4. In the Server tab, add -XXaltjvm=dcevm to the VM options (seems to me this is optional.)
  5. Change On ‘update’ action and On frame deactivation to Update classes and resources.
    • These changes will ensure that IntelliJ will update whenever changes are made and when you’ve shifted the focus away from IntelliJ. If you don’t want to do this automatically, then set Do nothing for these options.
    • If you have the Live Edit plugin, you can tick the 'with JavaScript debugger' option, and your changes to JSP files will also be reflected as soon as you've finished typing. I wouldn't recommend this since it triggers a refresh every time, but it's convenient if you want a WYSIWYG experience.
The end result should look like this.
Once you’re done, simply use Debug mode when executing your web application.
Depending on your settings:
  • IntelliJ will make and compile any classes you've changed once you've switched focus away from the IDE. Your changes will be reflected as soon as you refresh the page.
  • Unlike the default HotSpot hot swap, you'll now be able to create new classes and add new methods to both new and existing classes. Give it a try.
  • Frameworks like Spring and Hibernate are likely not fully supported, due to the nature of how they work or how they are bootstrapped (which hot swapping might not redo properly). It's workable, but you have to be careful about when you modify or add code that relies on these frameworks.
    • If you REALLY need to, then JRebel has support for it. Unfortunately, JRebel costs money and DCEVM being free is the exact reason why I wrote this post.

 

FROM HERE

ssh to the ec2 instance (beanstalk)

Configure Security Group

  1. In the AWS console, open the EC2 tab.
  2. Select the relevant region and click on Security Group.
  3. You should have an elasticbeanstalk-default security group if you have launched an Elastic Beanstalk instance in that region.
  4. Edit the security group to add a rule for SSH access. The below will lock it down to only allow ingress from a specific IP address.
    Type | Protocol | Port (from) | Port (to) | Source
    SSH  | tcp      | 22          | 22        | 192.168.1.1/32
    

Configure the environment of your Elastic Beanstalk Application

  1. If you haven’t made a key pair yet, make one by clicking Key Pairs below Security Group in the ec2 tab.
  2. In the AWS console, open the Elastic Beanstalk tab.
  3. Select the relevant region.
  4. Environment Details | Edit Configuration | Instances
  5. Under “EC2 key pair:”, select the name of your keypair in the Existing Key Pair field.

Once the instance has relaunched, you need to get the host name from the AWS Console EC2 instances tab, or via the API. You should then be able to ssh onto the server.

$ ssh -i path/to/keypair.pem ec2-user@ec2-an-ip-address.compute-1.amazonaws.com

Note: For adding a keypair to the environment configuration, the instances’ termination protection must be off as Beanstalk would try to terminate the current instances and start new instances with the KeyPair.

.pem permissions are too open

you might get ‘WARNING: UNPROTECTED PRIVATE KEY FILE!’

what you need to do is run

chmod 0400 keyPairFile

Note: If something is not working, check the “Events” tab in the Beanstalk application / environments and find out what went wrong.

access the tomcat log

You might get access denied when trying to access the Tomcat log directory because the ec2-user is not in the tomcat group.

Instead of trying to access the logs as the root user, it may be simpler to change the permissions on the server to grant access to ec2-user. This can usually be done with chown and chmod, but the exact steps depend on how your server is set up. The following commands show the current ownership of the log directory and your group membership:

sudo ls -ld /var/log/tomcat7
id

Based on that output, you can change the group of the log directory to tomcat (instead of root) and then add ec2-user to the tomcat group:

sudo chown -R tomcat:tomcat /var/log/tomcat7
sudo usermod -G ec2-user,wheel,tomcat ec2-user

Then you must log out and log back in for the new group membership to apply.

Reference here, here

Servlet Container

1. What is a Web Server?

To understand what a servlet container is, we first need to know what a web server is.

web server

A web server uses the HTTP protocol to transfer data. In a simple scenario, a user types a URL (e.g. http://www.programcreek.com/static.html) into a browser (the client) and gets back a web page to read. So what the server does is send a web page to the client. The transfer follows the HTTP protocol, which specifies the format of the request and response messages.

2. What is a Servlet Container?

As we see above, the user/client can only request static web pages from the server. That is not good enough if the user wants a page generated based on their input. The basic idea of a servlet container is to use Java to dynamically generate the web page on the server side. A servlet container is essentially the part of a web server that interacts with the servlets.

web server & servlet container

A servlet container is, as the name says, the container for servlets.

3. What is a Servlet?

Servlet is an interface defined in the javax.servlet package. It declares three essential methods for the life cycle of a servlet – init(), service(), and destroy(). They are implemented by every servlet (whether provided by the SDK or self-defined) and are invoked at specific times by the server.

  1. The init() method is invoked during initialization stage of the servlet life cycle. It is passed an object implementing the javax.servlet.ServletConfig interface, which allows the servlet to access initialization parameters from the web application.
  2. The service() method is invoked upon each request after its initialization. Each request is serviced in its own separate thread. The web container calls the service() method of the servlet for every request. The service() method determines the kind of request being made and dispatches it to an appropriate method to handle the request.
  3. The destroy() method is invoked when the servlet object should be destroyed. It releases the resources being held.

From the life cycle of a servlet object, we can see that servlet classes are loaded into the container dynamically by the class loader. Each request runs in its own thread, and a single servlet object can serve multiple threads at the same time (so it is not inherently thread safe). When it is no longer being used, it is garbage collected by the JVM.

Like any Java program, a servlet runs within a JVM. To handle the complexity of HTTP requests, the servlet container steps in: it is responsible for the servlets' creation, execution and destruction.
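
As an illustration (a minimal, self-contained sketch, not tied to any particular application), a typical servlet extends HttpServlet, which implements the Servlet interface and routes service() calls to doGet()/doPost():

import java.io.IOException;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloServlet extends HttpServlet {

    @Override
    public void init(ServletConfig config) throws ServletException {
        super.init(config);
        // Called once by the container when the servlet is first loaded.
    }

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // HttpServlet's service() dispatches GET requests here; each request runs in its own thread.
        response.setContentType("text/html");
        response.getWriter().println("<h1>Hello from a servlet</h1>");
    }

    @Override
    public void destroy() {
        // Called once by the container before the servlet instance is discarded.
    }
}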

4. How Servlet container and web server process a request?

  1. The web server receives an HTTP request.
  2. The web server forwards the request to the servlet container.
  3. The servlet is dynamically retrieved and loaded into the address space of the container, if it is not there already.
  4. The container invokes the init() method of the servlet for initialization (invoked only once, when the servlet is first loaded).
  5. The container invokes the service() method of the servlet to process the HTTP request, i.e., read data in the request and formulate a response. The servlet remains in the container's address space and can process other HTTP requests.
  6. The web server returns the dynamically generated result to the client.

The six steps are marked on the following diagram:

servlet container - life cycle

5. The role of JVM

Using servlets allows the JVM to handle each request within a separate Java thread, and this is one of the key advantages of a servlet container. Each servlet is a Java class with special methods for responding to HTTP requests. The main function of the servlet container is to forward requests to the correct servlet for processing, and to return the dynamically generated results to the correct location after the JVM has processed them. In most cases a servlet container runs in a single JVM, but there are solutions for when the container needs multiple JVMs.

OTHER CONCEPTS

ServletContext

When the servlet container (like Apache Tomcat) starts up, it deploys and loads all web applications. When a web application gets loaded, the servlet container creates the ServletContext once and keeps it in the server's memory. The webapp's web.xml is parsed, and every Servlet, Filter and Listener found in web.xml is created once and kept in the server's memory as well. When the servlet container shuts down, it unloads all web applications, and the ServletContext and all Servlet, Filter and Listener instances are trashed.
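
As a small illustration of that lifecycle (a hypothetical listener, assuming it is registered in web.xml or annotated with @WebListener on Servlet 3.0+), a ServletContextListener is notified when the ServletContext is created and destroyed:

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class AppLifecycleListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Runs once, when the container creates the ServletContext for this webapp.
        sce.getServletContext().setAttribute("startupTime", System.currentTimeMillis());
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Runs once, when the container unloads the webapp and trashes the ServletContext.
    }
}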

HttpServletRequest and HttpServletResponse

The servlet container is attached to a web server which listens for HTTP requests on a certain port number, usually 80. When a client (a user with a web browser) sends an HTTP request, the servlet container creates new HttpServletRequest and HttpServletResponse objects and passes them through the methods of the already-created Filter and Servlet instances whose url-pattern matches the request URL, all in the same thread.

The request object provides access to all information of the HTTP request, such as the request headers and the request body. The response object provides facility to control and send the HTTP response the way you want, such as setting headers and the body (usually with HTML content from a JSP file). When the HTTP response is committed and finished, then both the request and response objects will be trashed.

HttpSession

When a client visits the webapp for the first time and/or the HttpSession is obtained for the first time via request.getSession(), the servlet container creates it, generates a long and unique ID (which you can get by session.getId()) and stores it in the server's memory. The servlet container also sets a Cookie in the HTTP response with JSESSIONID as the cookie name and the unique session ID as the cookie value.

As per the HTTP cookie specification (a contract any decent web browser and web server has to adhere to), the client (the web browser) is required to send this cookie back in subsequent requests as long as the cookie is valid. You can inspect this with an HTTP header checker tool like Firebug. The servlet container checks every incoming HTTP request for the presence of a cookie named JSESSIONID and uses its value (the session ID) to get the associated HttpSession from the server's memory.

The HttpSession lives until it has been idle for longer than the session timeout, a setting you can specify in web.xml, which defaults to 30 minutes. So when the client doesn't visit the webapp for more than 30 minutes, the servlet container trashes the session. Every subsequent request, even with the cookie still present, no longer has access to the same session; the servlet container creates a new one.

On the other hand, the session cookie on the client side has a default lifetime which is as long as the browser instance is running. So when the client closes the browser instance (all tabs/windows), then the session will be trashed at the client side. In a new browser instance the cookie associated with the session won’t be sent anymore. A new request.getSession() would return a brand new HttpSession and set a cookie with a brand new session ID.
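
A minimal sketch of typical session usage inside a servlet (the class and attribute names are made up):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class VisitCounterServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Creates the HttpSession (and sets the JSESSIONID cookie) on the first call;
        // returns the existing session on subsequent requests from the same browser.
        HttpSession session = request.getSession();

        Integer visits = (Integer) session.getAttribute("visits");
        visits = (visits == null) ? 1 : visits + 1;
        session.setAttribute("visits", visits);

        response.setContentType("text/plain");
        response.getWriter().println("Session " + session.getId() + ", visit #" + visits);
    }
}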

In a nutshell

  • The ServletContext lives as long as the webapp lives. It is shared among all requests in all sessions.
  • The HttpSession lives as long as the client is interacting with the webapp with the same browser instance and the session hasn't timed out at the server side yet. It is shared among all requests in the same session.
  • The HttpServletRequest and HttpServletResponse live from the moment the client sends the request until the complete response (the web page) has arrived. They are not shared elsewhere.
  • Any Servlet, Filter and Listener lives as long as the webapp lives. They are shared among all requests in all sessions.
  • Any attribute which you set in the ServletContext, HttpServletRequest and HttpSession will live as long as the object in question lives.
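
To tie those lifetimes together, here is a hypothetical doGet() that stores one attribute in each scope; each attribute lives exactly as long as the object it is set on:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ScopeDemoServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Lives as long as the webapp: visible to all requests in all sessions.
        getServletContext().setAttribute("appName", "scope-demo");

        // Lives as long as this client's session: visible to all requests in that session.
        request.getSession().setAttribute("user", "alice");

        // Lives only for this request/response cycle: not shared with anything else.
        request.setAttribute("startedAt", System.nanoTime());

        response.getWriter().println("attributes set");
    }
}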

Threadsafety

That said, your major concern is possibly thread safety. You should now have learnt that servlets and filters are shared among all requests. That's the nice thing about Java: it's multithreaded, and different threads (read: HTTP requests) can make use of the same instance. It would otherwise be too expensive to recreate them on every request.

But you should also realize that you should never assign any request or session scoped data to an instance variable of a servlet or filter. It would be shared among all other requests in other sessions. That's not thread safe! The example below illustrates this:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class MyServlet extends HttpServlet {

    private Object thisIsNOTThreadSafe;

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        Object thisIsThreadSafe;

        thisIsNOTThreadSafe = request.getParameter("foo"); // BAD!! Shared among all requests!
        thisIsThreadSafe = request.getParameter("foo"); // OK, this is thread safe.
    }
}


FROM HERE

Here is a good article on building a servlet container yourself.

A Chinese-language explanation article is also available.

tomcat max post size

Today I encountered an error when trying to post an XML payload of about 3 MB to the server from a GWT client. It turns out the cause was Tomcat.

 

Apache Tomcat, by default, sets the maximum size of an acceptable HTTP POST request to 2 MB.

You can reconfigure Tomcat to accept larger requests, either by increasing the allowable limit or by simply disabling the check.
The file you need to edit is <Tomcat-Dir>/conf/server.xml. In the <Connector> element, add a maxPostSize attribute and set a larger value (in bytes) to increase the limit. Setting it to a value less than or equal to 0 disables the size check.
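
For example (a sketch based on Tomcat's default HTTP connector; keep your own port and attributes and just add maxPostSize), raising the limit to 10 MB would look like:

    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443"
               maxPostSize="10485760" />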

See the Tomcat Configuration Reference for more information.

maxPostSize – “The maximum size in bytes of the POST which will be handled by the container FORM URL parameter parsing. The feature can be disabled by setting this attribute
to a value inferior or equal to 0. If not specified, this attribute is set to 2097152 (2 megabytes).”

Tomcat Clustering Setup using mod_proxy

There are pretty much two ways to set up basic clustering, using two different Apache modules. The architecture for both is the same: Apache sits in front of the Tomcat nodes and acts as a load balancer.

Architecture of Apache and Tomcat cluster, protocols and connectivity

Traffic is passed between Apache and Tomcat(s) using the binary AJP 1.3 protocol. The two modules are mod_jk and mod_proxy.

mod_jk stands for "Jakarta", the original project under which Tomcat was developed. It is the older way of setting this up, but it still has some advantages.

mod_proxy is a newer and more generic way of setting this up. The rest of this guide will focus on mod_proxy, since it ships "out of the box" with newer versions of Apache.

You should be able to follow this guide by downloading Apache and Tomcat default distributions and following the steps. No funny business required.

Clustering Background

You can cluster at the request or session level. Request level means that each request may go to a different node – this is the ideal since the traffic would be balanced across all nodes, and if a node goes down, the user has no idea. Unfortunately this requires session replication between all nodes, not just of HttpSession, but ANY session state. For the purposes of this article I’m going to describe Session level clustering, since it is simpler to set up, and works regardless of the dynamics of your application.
After all, we only have 5 minutes! :)

Session level clustering means that if your application requires a login or other forms of session state, and one or more of your Tomcat nodes goes down, then on their next request users will be asked to log in again, since they will hit a different node which does not have any stored session data for them.

This is still an improvement on a non-clustered environment where, if your node goes down, you have no application at all!

And we still get the benefits of load balancing across nodes, which allows us to scale our application out horizontally across many machines.

Anyhow without further ado, let’s get into the how-to.

Setting Up The Nodes

In most situations you would be deploying the nodes on physically separate machines, but in this example we will set them up on a single machine, but on different ports. This allows us to easily test this configuration.

Nothing much changes for the physically separate set up – just the Hostnames of the nodes as you would expect.

Oh and I’m working on Windows – but aside from the installation of Apache and Tomcat nothing is different between platforms since the configuration files are standard on all platforms.

  1. Download the Tomcat .ZIP distribution (the Core .zip package from the Apache Tomcat download page).
  2. We’ll use a folder to install all this stuff in. Let’s say it’s “C:\cluster” for the purposes of the article.
  3. Unzip the Tomcat distro twice, into two folders –
    C:\cluster\tomcat-node-1
    C:\cluster\tomcat-node-2
  4. Start up each of the nodes, using the bin/startup.bat / bin/startup.sh scripts. Ensure they start. If they don’t you may need to point Tomcat to the JDK installation on your machine.
  5. Open up the server.xml configuration on
    c:\cluster\tomcat-node-1\conf\server.xml
  6. There are two places we (potentially) need to configure in this file – see the sketch after this list.

    The first is the connector for the AJP protocol. The "port" attribute is the important part here. We will leave this one as is, but for our second (or subsequent) Tomcat nodes we will need to change it to a different value. The second is the "Engine" element. A "jvmRoute" attribute has to be added – this configures the name of this node in the cluster. The "jvmRoute" must be unique across all your nodes. For our purposes we will use "node1" and "node2" for our two-node cluster.
  7. This step is optional, but for production configs you may want to remove the HTTP connector for Tomcat – that's one less port to secure, and you don't need it for the cluster to operate. To do this, comment out the HTTP <Connector> element in server.xml.
  8. Now repeat this for C:\cluster\tomcat-node-2\conf\server.xml
    Change the jvmRoute to "node2" and the AJP connector port to "8019".
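
As a sketch, the two relevant pieces of tomcat-node-2's server.xml would end up looking roughly like this (other attributes left at their defaults; node 1 keeps port 8009 and jvmRoute="node1"):

    <!-- AJP connector: must use a unique port per node on the same machine -->
    <Connector port="8019" protocol="AJP/1.3" redirectPort="8443" />

    <!-- Engine: jvmRoute must be unique per node and match the "route" in the Apache config -->
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="node2">
        ...
    </Engine>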

We’re done with Tomcat. Start each node up, and ensure it still works.

Setting Up The Apache Cluster

Okay, this is the important part.

  1. Download and install Apache HTTP Server.

    Use the custom option to install it into C:\cluster\apache2.2

  2. Now open up c:\cluster\apache2.2\conf\httpd.conf in your favourite text editor.
  3. Firstly, we need to uncomment the following lines (delete the ‘#’) –
    mod_proxy lines in httpd.conf to be uncommented

    These enable the necessary mod_proxy modules in Apache – mod_proxy itself, plus mod_proxy_ajp (for the AJP protocol) and mod_proxy_balancer (for load balancing).
  4. Finally, go to the end of the file, and add the following:
    <Proxy balancer://testcluster stickysession=JSESSIONID>
    BalancerMember ajp://127.0.0.1:8009 min=10 max=100 route=node1 loadfactor=1
    BalancerMember ajp://127.0.0.1:8019 min=20 max=200 route=node2 loadfactor=1
    </Proxy>
    
    ProxyPass /examples balancer://testcluster/examples

    The above is the actual clustering configuration.

    The first section configures a load balancer across our two nodes. The loadfactor can be modified to send more traffic to one or the other node. i.e. how much load can this member handle compared to the others?

    This allows you to balance effectively if you have multiple servers which have different hardware profiles.

    Note also the "route" setting, which must match the "jvmRoute" names in the Tomcat server.xml for each node. This, in conjunction with the "stickysession" setting, is key for a Tomcat cluster, as it configures the session management. It tells mod_proxy to look for the node's route in the given session cookie to determine which node that session is using. This allows all requests from a given client to go to the node which is holding the session state for that client.

    The ProxyPass line maps a URL path on Apache to the load balanced cluster. You may want this to be "/",
    e.g. "ProxyPass / balancer://testcluster/"

    In our case we’re just configuring the Tomcat /examples application for our test.

  5. Save it, and restart your Apache server.

Test It Out

With your Apache server running you should be able to go to http://localhost/examples

You should get a 503 error page as per below –

This is because both Tomcat nodes are down.

Start up node1 (c:\cluster\tomcat-node-1\bin\startup) and reload http://localhost/examples

You should see the examples application from the default Tomcat installation –

Shut down node1, and then start up node2. Repeat the test. You should see the same page as above. We have transparently moved from node1 to node2 since node1 went down.

Start both nodes up and your cluster is now working.

You’re done!

Optional: Set Up Apache Balancer Manager

mod_proxy has an additional “balancer manager” component which provides a nice web interface to the load balanced cluster. It’s worthwhile setting this up if you want to remotely administer / monitor the cluster.

To do so is easy –

  1. Add the following to the bottom of your C:\cluster\apache2.2\conf\httpd.conf
    <Location /balancer-manager>
    SetHandler balancer-manager
    AuthType Basic
    AuthName "Balancer Manager"
    AuthUserFile "C:/cluster/apache2.2/conf/.htpasswd"
    Require valid-user
    </Location>

    This configures the balancer manager at http://localhost/balancer-manager

  2. We need to create a password file to secure it. At the command prompt you can use –
    c:\cluster\apache2.2\bin\htpasswd -c c:\cluster\apache2.2\conf\.htpasswd admin

    Then set a password when prompted. This password would be used by the balancer-manager URL to authenticate.

Restart your Apache web server, and go to http://localhost/balancer-manager

You should be prompted for a username/password as you set before, and see the balancer manager tool as below: