This article is a summary of a seminar I held on the topic. If it seems like it’s a continuation of an existing discussion, that’s because, to some extent, it is. If you haven’t been discussing replacing your app server, this article probably isn’t very interesting to you.
By putting the application server inside my application instead of the other way around, I was able to leap tall buildings in a single bound.
The embedded application server
This is how I build and deploy my sample application to a new test environment (or to production):
scp someapp-server/target/someapp-1.0.war appuser@appserver:/home/appuser/test-env1/
ssh appuser@appserver "cd /home/appuser/test-env1/ && nohup java -jar someapp-1.0.war > someapp.log 2>&1 &"
This requires no software to be installed on the appserver beforehand (with the exception of the JVM). It requires no prior configuration. Rolling back is a matter of replacing one jar-file with another. Clustering is a matter of deploying the same application several times.
In order to make this work in a real environment, there are many details you as a developer need to take care of. As a matter of fact, you will have to take responsibility for your operational environment. The good news is that creating a good operational environment is no more time-consuming than trying to cope with the care and feeding of a big-A Application Server.
In this scheme, every application comes with its own application server in the form of Jetty’s jar-files embedded in the deployed jar-file.
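As a rough sketch of what this looks like in code — not the sample application itself, and with class and property names that are my own assumptions — a main method that boots an embedded Jetty can be as small as this:

```java
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.webapp.WebAppContext;

public class WebServer {
    public static void main(String[] args) throws Exception {
        // The port comes from configuration, not from an app server console
        int port = Integer.parseInt(System.getProperty("server.port", "8080"));

        Server server = new Server(port);
        WebAppContext webapp = new WebAppContext();
        webapp.setContextPath("/");
        webapp.setWar("webapp"); // location of the (extracted) web application

        server.setHandler(webapp);
        server.start(); // we own the main method; Jetty is just a library
        server.join();  // block until the server is stopped
    }
}
```

The point is that Jetty is called like any other library: the application decides when the server starts, what it serves, and when it stops.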
Why would you want to do something like this?
- Independent application: If you’ve ever been told that you can’t use Java 1.5 because that would require an upgrade of the application server, you know this pain. And if we upgrade the application server, that could affect someone else adversely. So we need to start a huge undertaking to find out who could possibly be affected.
- Developer managed libraries: Similar problems can occur with libraries. Especially those that come with the application server. For example: Oracle OC4J helpfully places a preview version of JPA 1.0 first in your classpath. If you want to use Hibernate with JPA 1.0-FINAL, it will mostly work. Until you try to use an annotation that was changed after the preview version (@Discriminator, for example). The general rule is: If an API comes with your app server, you’re better served by staying away from it. A rather bizarre state of affairs.
- Deployment, configuration and upgrades: Each version of the application, including all its dependencies, is packaged into a single jar-file that can be deployed on several application servers, or several times on the same application server (with different ports). The configuration is read from a properties-file in the current working directory. On the minus side, there’s no fancy web UI where you can step through a wizard to deploy the application or change the configuration. On the plus side, there is no fancy web UI…. If you’ve used one such web UI, you know what I mean.
- Continuous deployment: As your maven-repository will contain stand alone applications, creating a continuous deployment scheme is very easy. In my previous environment, a cron job running wget periodically was all that was needed to connect the dots. Having each server environment PULL the latest version gives a bit more flexibility if you want many test environments. (However, if you’re doing automated PUSH deployment, it’s probably just as practical for you).
- Same code in test and production: The fact that you can start Jetty inside a plain old JUnit test means that it is ideal for taking your automated tests one step further. However, if you test with Jetty and deploy on a different Application Server, the difference will occasionally trip you. It’s not a big deal. You have to test in the server environment anyway. But why not eliminate the extra source of pain if you can?
- Licenses: Sure, you can afford to pay a few million $ for an application server. You probably don’t have any better use for that money, anyway, right? However, if you have to pay licenses for each test-server in addition, it will probably mean that you will test less. We don’t want that.
- Operations: In my experience, operations people don’t like to mess around with the internals of an Application Server. An executable jar file plus a script that can be run with [start|status|stop] may be a much better match.
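The “configuration read from a properties-file in the current working directory” point above needs very little machinery. A minimal sketch, with class and file names that are my own assumptions:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class ApplicationConfig {
    /** Load a properties file from the current directory into System properties. */
    public static void load(String filename) throws IOException {
        Properties properties = new Properties();
        try (FileInputStream in = new FileInputStream(filename)) {
            properties.load(in);
        }
        for (String name : properties.stringPropertyNames()) {
            // Command-line -D flags win over the file: only set if absent
            if (System.getProperty(name) == null) {
                System.setProperty(name, properties.getProperty(name));
            }
        }
    }
}
```

Each test environment then differs only in the contents of one small file sitting next to the jar.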
The missing bits
Taking control of the application server takes away a lot of complex technology. This simplifies and makes a lot of stuff cheaper. It also puts you back in control of the environment. However, it forces you to think about some things that might’ve been solved for you before:
- Monitoring: The first step of monitoring is simple: Just make sure you write to a log file that is being monitored by your operations department. The second step requires some work: Create a servlet (or a Jetty Handler) that a monitoring tool can ping to check that everything is okay. Taking control of this means that you can improve it: Check if your data sources can connect, if your file share is visible, if that service answers. Maybe add application-calibrated load reporting. Beyond that, Jetty has good JMX support, but I’ve never needed it myself.
- Load balancing: My setup supports no load balancing or failover out of the box. However, this is normally something that the web server or routers in front of the application server handle anyway. You might want to look into Jetty’s options for session affinity, if you need that.
- Security: Jetty supports JAAS, of course. Also: In all the environments I’ve been working with (CA SiteMinder, Sun OpenSSO, Oracle SSO), the SSO server sends the user name of the currently logged in user as an HTTP header. You can get far by just using that.
- Consistency: If you deploy more than one application as an embedded application server, the file structure used by an application (if any) should be standardized. As should the commands to start and stop the application. And the location of logs. Beyond that, reuse what you like, recreate what you don’t.
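The value of a monitoring endpoint (whether a servlet or a Jetty Handler) lies in the checks behind it. Leaving the Jetty wiring aside, one such check — probing a dependency over TCP — can be sketched with plain JDK classes (the class and method names are my own):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class DependencyCheck {
    /** Returns true if something accepts TCP connections at host:port. */
    public static boolean canConnect(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}
```

A status handler can run a handful of such checks (database, file share, downstream service) and answer 200 or 500 accordingly, which is all most monitoring tools need.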
Taking control of your destiny
Using an embedded application server means using the application server as a library instead of a framework. It means taking control of your “main” method. There’s a surprisingly small number of things you need to work out yourself. In exchange, you get the control to do many things that are impossible with a big-A Application Server.
Thanks to Dicksen, Eivind, Terje, Kristian and Kristian for a fun discussion on Jetty as a production app server
Great read, and I would love to see something like this at my customer, however as this is a very large corporation, they have corporate guidelines and operations teams and what not that needs to be followed and catered to. And this is a far bigger challenge to change as it involves people, processes and policies.
Any thoughts on how to get such a ball rolling?
Thank you for a good comment.
When I introduced an embedded application server in my previous job, I was faced with the same issues. However, when I started talking to the operations teams, I realized that in our location, they wanted things to look like Unix services. They didn't particularly care for the application server.
The operations teams had issues that we were able to solve by simplifying and customizing the deployment. By talking with them and addressing their concerns, we gained a good ally in replacing the app server.
Of course, this is likely to vary wildly from place to place.
But here's where I would start: Make sure to sit next to someone from the operations team at the next company party.
Interesting read. I'm new to web development, but you indicated that deployment simply required the JVM to be installed, but your first command is “mvn install”, which looks like a Maven command (I don't know Maven yet). Is Maven a prerequisite on the app server?
Maven is a prerequisite when *building* the application. At least with the example code I provided (you could make a similar maven-free build).
When the application is built (with the embedded application server), the JVM is the only prerequisite when you want to *run* it.
Hope this makes sense.
Oh, I see, I misunderstood. In your deployment instructions above, step 1 (mvn install) is actually building the application on your development box, not deploying it (I understand it makes good sense to ensure the application is built against the latest source code before deploying it). Step 2 does the deployment and step 3 starts it. Thanks, this has been helpful!
Thank you for your comments, Ben. I've updated the article to say “this is how I build and deploy” instead of just “this is how I deploy”.
Please let me know if you have any other questions or input regarding this topic.
The main pro of the “one jar application” is that you cut the dependency on your AppServer. Or, to be precise, you embed the appserver inside the jar and deploy them together. It is a good thing, as in many companies you are not allowed to use a new API/JDK because the prod version of the AppServer does not support it yet. And admins do not want to upgrade the AppServer as it may break other installed applications. It is a chicken-and-egg problem. By cutting the dependency on the AppServer, *you* decide which version to use, as your deployment is completely independent from other applications.
Another thing with the “one jar app” is unification. If you deploy all your applications the same way, it makes administration easier. We use the “one jar app” approach you described. Not only for web applications but also for RPC applications – we embed a small RPC server based on Apache Thrift inside the jar. So all our applications are just *.jar files, and we start them using “java -jar app.jar”.
Thank you for a good comment, Markus. Installing other server libraries inside of the executable Jar is a good idea.
When you need to deploy a new version of the app, how do you handle this? It seems like you would need to stop the original server and then start up a new one. The JVM can take some time to boot, not to mention allowing time for HotSpot to optimize the new code. So stopping the server and waiting for the new one to boot doesn't seem too graceful. Is there a way to reuse the previous JVM? (I'm new to the JVM.) I like this approach, but I am struggling with how to deal with deployments and restarts.
If you are trying to maximize availability of a service, then you need a load balancer. The standard scheme for web applications is a reverse proxy + multiple instances of a service (aka the Jetty webserver). As a reverse proxy, many projects use Apache or Nginx.
If you need to redeploy the web application or restart a server, the load balancer will take care of the traffic and send all requests to the other available webservers. This way you can redeploy webapp instances one by one.
BTW, Apache/Nginx can also take care of other routine tasks such as DDoS prevention, gzip response compression, SSL handling, caching immutable responses from the servlet engine, and so on.
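The setup described above can be sketched as an Nginx configuration fragment. The ports and upstream name are assumptions; the idea is that taking one instance out, redeploying it, and bringing it back lets you roll through the pool without downtime:

```nginx
# Two Jetty instances behind one Nginx front. To redeploy without downtime,
# stop one instance, replace its jar, restart it, then do the other;
# Nginx routes around whichever instance is down.
upstream someapp_backend {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}

server {
    listen 80;
    location / {
        proxy_pass http://someapp_backend;
        gzip on;  # response compression handled at the front
    }
}
```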
If you are interested more in such architecture I recommend you to read this book http://www.amazon.com/Scalable-Internet-Archite…
Thanks for the quick reply. I am using nginx reverse proxy currently (with SSL) and am taking the approach you have recommended. It seems problematic though for apps which only need one jetty instance as requests will be backed up for several seconds as the other process boots up. I suppose I can add additional ports and stop one when a new one fires up assuming I have the additional ram required to do that.
One jar, that is the way to do it. Here are some implementation pointers.
Installing and configuring application servers is quite a drag and often hard to automate with tools like Puppet or CFEngine.
I've switched to doing this for all my different services with good results. With maven you can do this using the winstone-maven plugin very easily.
Another option is using the jetty-runner that was created for Jetty 7. The camel-web-standalone component in the Apache Camel project contains an example for creating a combined jar for jetty-runner and all dependencies.
Hope this helps someone.
winstone-maven-plugin looks very nice. What I like about my approach is the fact that I regain control of the main-method. Perhaps a sign that I am a control freak. And I suspect you can do the same with winstone if you want to.
Good tip. Thanks.
I'm beginning to wonder if the whole idea of a “container” is really a good idea. Not just with respect to web servers but more generally. It's one of the reasons I like Camel – I can have an application with ESB functionality without needing a container. In fact, I'm working on a system using embedded Camel and Jetty all in the one app.
Even if you use a gazillion libraries, including hefty ones like Spring, Hibernate, Camel, etc., in terms of actual megabytes of space it doesn't really amount to much on a modern system. So why not keep each application stand-alone and avoid all those container-related dependency, unit testing, and classloading problems?
IMO we've been brainwashed over the years to equate production environments with containers in the java world.
Excellent and very inspiring post! I’ve taken your example into use and already love the concept of being able to ship the application server with my application. Within a few months it will in all likelihood be shipped into production, not just the test environment.
What I wonder a little about is that this scheme demands one application server per application. Considering that you have many applications, and thus many application servers, will this leave a huge memory footprint compared to using a single application server with many applications?
I don’t think that the appserver (Jetty) takes a lot of memory. I did not measure it, but I do not expect it to take more than several MB. (If you have some time, you might want to measure and publish the data.)
Most of the RAM is taken by the application data itself.
I have the same experience as Markus. To be on the safe side, I would calculate 100MB overhead per extra app server (relative to having a single one). If you have 10-15 apps, this shouldn’t be a problem. If you have 100, you may feel differently.
100MB sounds about right from what I could gather. Running your example one-jar took about 80MB, so Jetty itself probably demands even less. 10-15 apps on one server should be manageable for any server, so to speak.
A little off that topic: Do any of you have experience using this scheme on a Windows Server? And doing so, should one run the one-jar as a service or plain and simple inside a command-prompt?
I guess a service is the correct answer, utilizing the shutdown hook on Jetty. But how to wrap a “java -jar” into a Windows service is a field unexplored by me.
On Windows, I’ve only ever used the command line. Maybe someone else has experience using Java Service Wrapper, or a similar approach?
There are many “service wrapper” libraries. I would recommend starting with Apache Commons Daemon. It is used by Apache Tomcat, for example.
100MB is too much just for Jetty. Also, as far as I know, the JVM can share memory between several processes. See here: http://docs.sun.com/app/docs/doc/819-3659/6n5s6m57s?l=en&a=view
“If multiple applications or modules refer to the same libraries, classes in those libraries are automatically shared. This can reduce the memory footprint and allow sharing of static information.”
Anyway, it would be better to create a simple test case that shows how much memory overhead this solution has.
So I’m starting with Java (only having programmed for the web in Django), and I wanted to avoid the complexity of application servers and all that J2EE stuff for now.
So your post was a great find, the demo worked perfectly, but there are several things I don’t get there, such as:
– What does the WarExtractor do? Is it being used when I run java -jar embedded-..container-one.jar ?
– I kinda understood (and tested) ShutdownControl, but I don’t understand why you would want it. What’s the difference between this and a Ctrl-C in the screen session, or killing the process? BTW, when I shut down (via the cookie method) it said it could not stop a thread – did that happen to you? No idea which thread this is.
– In the function StatusHandler.java/checkStatus, you use your own HTTP service to check the status. Why would you do this instead of just calling the method?
– Are the files under embedded-container-web only for testing? Would you be running the main() inside WebServer.java or the one inside ServerControl.java for development?
– What is the purpose of the block which starts with “if (getBooleanProperty(“server.checkStatus”, false))” on ServerControl.java? To avoid running a second jetty in the same computer?
I know it might seem silly, but I don’t want to start my project using code that I don’t understand, of course…
Thanks in advance,
Your questions are very insightful, and I’m happy to see that you’re exploring the code. :-)
WarExtractor “reaches around” and tries to find what one-jar jar file was used to start it. It then unpacks the WAR-file from this Jar file and puts it in a temporary location on the file system in order to execute it. It’s a … hack. :-)
ShutdownControl and StatusHandler are both used to control an already running server. If I start the server in the background and then leave the console, I can use ShutdownControl to shut it down from another console. Similarly, I can use StatusHandler/checkStatus from another console to decide whether I need to start a new server instance or not.
ShutdownControl is also useful when I run WebServer under the debugger in Eclipse or IDEA. I can then just start the debugger again, and it will kill any already running session.
I use the WebServer in embedded-container-web for development testing (start it in an Eclipse/IDEA debugger session). ServerControl is the one that’s started when you execute “java -jar …” after deploying to a test or production environment.
The block with “server.checkStatus” is to avoid running a second jetty on the same computer (on the same port). If you run “java -Dserver.checkStatus -jar ….” this is the behavior you get.
Thanks for the questions. Please do ask more. :-)
Hey Johannes, and first of all, thanks for this post; I found this approach really nice and elegant! It’s an important improvement over my current solution which uses the maven assembly plugin.
However, there are a few hurdles and missing pieces, that I really want to find out about.
I am curious, do you deploy the application like this in production? I would think it would be smart to have some monitoring process that checks if the process is running and restarts it if it’s not. Do you use Commons Daemon for that? In that case, there are some “integration issues”, because they would have to share the pid-file. Let’s say Commons Daemon is monitoring the process and we do a new deploy as described in your post; the app will get a new process id and Commons Daemon will try to start a new one… trouble.
It would have been nice to have had a framework that would support /etc/init.d/start-stop-script, process monitoring and fast “hot”-deployment as you described in this post. ;)
I guess something like the Rails guys have; Capistrano.
This would be different from normal java containers like Tomcat, because the control will be on the OS-process-level and not within Java.
I wish this existed.
Another issue I find with this approach, versus deploying wars to a Tomcat, is that each Jetty instance needs its own HTTP and JMX port configured uniquely, which raises the need for an httpd front server and some more configuration. What are your thoughts on this?
Thanks for your comment and questions, Tor Arne.
The short answer is: Yes, I’ve used Jetty in production, and I recommend others to do it if possible. You’re exactly right about the deployment being an OS process and not within an application server.
Contrary to popular belief (or at least vendor claims), operations people that I talk to love this. I don’t know any operations people who feel an app server is within their comfort zone.
I’ve used a system with shell scripts that will start a Java main-class (or “java -jar”) and capture the PID to a file if run with the argument “start”. If started with the argument “stop”, the script will issue “kill `cat pidfile`”.
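A minimal sketch of such a script (the file names are assumptions):

```shell
#!/bin/sh
# [start|status|stop] wrapper around "java -jar", keeping the PID in a file
JAR=application.war
PIDFILE=application.pid

app_start() {
    nohup java -jar "$JAR" > application.log 2>&1 &
    echo $! > "$PIDFILE"
}

app_status() {
    if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
        echo running
    else
        echo stopped
    fi
}

app_stop() {
    [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
}

case "$1" in
    start)  app_start ;;
    status) app_status ;;
    stop)   app_stop ;;
esac
```

Symlinked from /etc/init.d, a script in this shape also gives operations the familiar service interface.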
I’ve also used a separate Servlet Context to control the server. Connecting to http://localhost:<port>/shutdown from localhost with a secret HTTP parameter will cause Jetty to stop. The main-class of my server will try to do this before starting its own listener. I haven’t used this scheme in production yet.
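The client side of that handshake — what the starting server does before binding its own listener — can be sketched with plain JDK classes. The /shutdown path, the token parameter, and the class name are my assumptions, not Jetty API:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class ShutdownClient {
    /**
     * Ask an already-running instance on this port to shut down.
     * Returns false if nothing is listening, i.e. there is nothing to stop.
     */
    public static boolean requestShutdown(int port, String secret) {
        try {
            URL url = new URL("http://localhost:" + port + "/shutdown?token=" + secret);
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setConnectTimeout(1000);
            connection.setReadTimeout(1000);
            return connection.getResponseCode() == 200;
        } catch (IOException e) {
            return false;
        }
    }
}
```

A connection refusal simply means no old instance is running, so the new instance can go ahead and bind the port.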
My plan is that “cd /opt/application/production; java -jar application.war restart” will restart the server. This will make for good init.d-support.
On my previous production deployment with Jetty, we used Apache HTTPD as a reverse proxy in front of the server (mod_proxy). I was pretty happy with this approach. If given free rein, I probably would prefer to use Nginx in the same way.
I read a property file (named application.property) from the CWD and add it to System.getProperties() at startup. This file contains necessary configuration like http port (and database connection properties).
Hope this helps. Keep up the good questions!
Thanks for your answers. This is very interesting.
When deploying Jetty like this, do you have another process on the same server that monitors it and makes sure that it runs? And that is capable of starting the process if it’s down, like Commons Daemon?
I’m not an operations expert, but I would like to understand what’s common practice. ;)
As for configurations, I have become an avid user of Constretto, where the configuration is embedded in the jar – living up to the idea that “the environment is part of the design”.
I’ve used NimBUS to monitor the Jetty process by periodically performing an HTTP request. When Jetty doesn’t respond, NimBUS raises an error in the operations room.
In this case, operations can use the init.d script to restart the server.
Our plan was to implement automatic restart as we learned more about the patterns of failure that would require a restart. Automatic restart is, of course, a pretty high-risk function. As Jetty was extremely stable in our environment, we never found the need to implement it.
Greetings Johannes! I’d like to take a look at the sample application that you reference and I’m being prompted to login to your SVN server. I realize that this post is three years old, but I’ve recently seen the light on this topic and I’m trying to find some good working examples to help me get a better feel for how to connect all of the dots. Thanks!
Thanks for letting me know! I’ve created a few versions of the same server at github. I’ve updated the article to point to the most recent one: https://github.com/jhannes/java-ee-turnkey/. Let me know if you’d like more info.
Great article Johannes. I’m planning on using Jetty in production as well (on AWS). I was wondering though what kind of clean up is required to the default jetty installation before deploying in production? i.e, the test web apps and default configs that come pre-installed with the server. I’ve just deleted all of default files in the webapps directory but I think there’s more to it than just that, correct?
Hi Steve. The way I generally do this is to package the parts of Jetty that I use together with my application. I don’t support the idea of installing an application server and then installing applications into it. The code linked in my article has a lot of details on how to do it (and there are other ways, too)
If you do install Jetty as a server and add your app, I don’t know which of the preinstalled parts you need to remove. As a general rule, Jetty seems to be quite well assembled out of the box.