What is a practical way to automatically configure, version control, and deploy Maven-based Java applications?

We maintain a fairly mid-size code base, combined into a single multi-module Maven project. Overall, the build produces about ten output artifacts for the various system components (web applications (.war), utilities (.jar), etc.).

Our deployment process is still based on simple bash scripts that build the requested artifacts via Maven, tag the SCM repository with information about the artifacts, the target environment, and the current build timestamp, and then upload the artifacts to the chosen application servers and issue remote commands to restart the running daemons. The built artifacts are configured via Maven profiles and resource filtering, so our builds are specific to the target environment.
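To illustrate, here is a minimal sketch of the kind of profile-plus-filtering setup described above (the profile id and property name are hypothetical):

    <!-- pom.xml excerpt: environment-specific values injected at build time -->
    <profiles>
      <profile>
        <id>staging</id>
        <properties>
          <db.url>jdbc:postgresql://staging-db:5432/app</db.url>
        </properties>
      </profile>
    </profiles>
    <build>
      <resources>
        <resource>
          <directory>src/main/resources</directory>
          <!-- replaces ${db.url} placeholders inside the resource files -->
          <filtering>true</filtering>
        </resource>
      </resources>
    </build>

A build for a given environment is then produced with, e.g., mvn package -Pstaging.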

This process has served us well, but for various reasons I would like to move to a more sophisticated approach. In particular, I would like to get rid of the bash scripts.

So what are the best practices for configuring, versioning, and deploying Maven-based Java applications?

Should our builds be environment-agnostic, with configuration done via configuration files on the target systems? If so, how can a developer make sure that new configuration settings are included in the deployed configuration files on the various application servers?

Should we use the Maven release plugin to tag the different builds?
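(For context, a typical release cut with that plugin is a two-step invocation; this assumes the maven-release-plugin and the SCM connection details are configured in the pom:)

    mvn release:prepare   # bumps the version, runs the tests, tags the SCM
    mvn release:perform   # checks out the tag, builds it, and deploys to the artifact repository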

Is it a good idea to set up a CI server such as Jenkins or TeamCity to build, and optionally deploy, our artifacts for us?

2 answers

I like to think of this as two problem spaces:

  • building artifacts (ideally environment-agnostic, since that means QA can take the hash of an artifact, run their tests against that artifact, and, when the time comes to deploy, verify the hash and know that the artifact has been QA'd. If your build produces different artifacts depending on whether it is for the QA environment, the staging environment, or the production environment, then you have to do extra work to ensure that the artifact going into production has been checked by QA and validated in staging)

  • delivering artifacts to an environment. If the artifacts require configuration for that environment, the delivery process should include applying that configuration, either by placing the appropriate configuration files into the target environment to be picked up by the artifacts, or by cracking the artifacts open, configuring them, and sealing them back up (but in a repeatable and deterministic fashion)

Maven is designed for the first problem space. The "Maven way" is to produce environment-agnostic artifacts and publish them to a binary artifact repository. If you look at the Maven lifecycle, you will see that the phases stop once the deploy phase has put the artifact into the Maven repository (the binary artifact repository). In short, Maven considers its job done at that point. In addition, there are lifecycle phases for unit tests (test) and integration tests (integration-test), both of which should be possible with an environment-agnostic artifact, but that is not the complete set of testing you need... rather, to complete the testing you will need to actually deploy the built artifacts into a real environment.
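(Concretely, publishing to the binary repository is wired up via distributionManagement in the pom; the URL below is a placeholder:)

    <distributionManagement>
      <repository>
        <id>releases</id>
        <url>https://repo.example.com/repository/releases</url>
      </repository>
    </distributionManagement>

after which mvn deploy runs the full lifecycle and finishes by uploading the artifact there, which is where Maven's responsibility ends.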

Many people try to hijack Maven to go beyond its goals (myself included). For example, there are the cargo-maven-plugin and the ship-maven-plugin, which deal with aspects beyond Maven's end game (i.e. after the artifact gets into the Maven repository). Of these, I personally feel that ship-maven-plugin (disclosure: I wrote it, though at a previous employer) is closest to the "after Maven" side of things, because by default it is designed to operate not on the -SNAPSHOT version of the project you have checked out on disk, but rather on a release version of the same project, which it pulls from the remote repository, e.g.

 mvn ship:ship -DshipVersion=2.5.1 

IMO, cargo is aimed at use around the integration-test phase of the lifecycle, but again, you can hijack it for other purposes.

If you are building shrink-wrapped software, i.e. something the user buys and installs on their own system, then the installer program itself is designed to configure the application for the end user's environment. It is fine for the Maven build to produce the installer, because the installer itself is (at least somewhat) environment-agnostic. Sure, it may be an installer for Microsoft Windows only, or for Linux only, but it doesn't care which user's machine it gets installed on.

These days, though, we tend to concentrate more on software as a service, so we deploy software onto servers that we ourselves manage. This makes the pull toward the "dark side of Maven" more seductive: build profiles are used to bake environment-specific configuration into the built artifacts (after all, we only deploy to three environments), and we move fast, so we don't want to take the time to let the application pick up its environment-specific configuration from outside the built artifact (sound familiar?). The reason I call this the dark side is that you are really fighting the way Maven wants to work... you are never sure whether the jar in your local repository was built with a different profile active, so you have to do a full clean build every time. And when the time comes to move from QA to staging, or from staging to production, you have to do a full build of the software... and all the unit and integration tests end up being run again (or you end up skipping them, and in turn skipping the sanity checks they provide on the artifacts being built), so you are making life harder and harder... all for the sake of putting a few profiles into the Maven pom.xml... whereas, if you had followed the Maven way, you would just take the artifact from the repository and move it through the different environments, unchanged, unmodified, and with MD5, SHA1 (and hopefully GPG) signatures to prove that it is the same artifact.
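(A quick sketch of that last point: the repository publishes a checksum next to each artifact, so promotion between environments can verify identity instead of rebuilding. Paths and names here are illustrative:)

    # fetch the released artifact and its published checksum
    curl -fO https://repo.example.com/repository/releases/com/example/myapp/2.5.1/myapp-2.5.1.war
    curl -fO https://repo.example.com/repository/releases/com/example/myapp/2.5.1/myapp-2.5.1.war.sha1
    # verify before promoting to the next environment
    [ "$(sha1sum myapp-2.5.1.war | cut -d' ' -f1)" = "$(cat myapp-2.5.1.war.sha1)" ] && echo OK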

So, you ask, how do we script delivery to production then...

Well, there are several ways to attack this problem. They all share a similar set of core principles, namely:

  • keep the recipe for delivering to an environment in a version control system

  • the recipe should ideally consist of two parts: an environment-agnostic part and an environment-specific part

You can use good old bash scripts, or you can use more "modern" tools such as Chef and Puppet, which are designed for this second problem space.
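(Even the bash-script route can honour that two-part split. A minimal sketch, with hypothetical file names and variables:)

    #!/usr/bin/env bash
    # deploy.sh -- the environment-agnostic part of the recipe, kept in SCM
    set -euo pipefail
    ENV_NAME="$1"       # e.g. "staging" or "production"
    VERSION="$2"        # the release to promote, e.g. "2.5.1"
    # the environment-specific part: a small, separately maintained file
    source "environments/${ENV_NAME}.conf"   # defines REPO_URL, APP_SERVER, ...
    # fetch the unchanged release artifact from the binary repository
    curl -fO "${REPO_URL}/com/example/myapp/${VERSION}/myapp-${VERSION}.war"
    # push it to the target server and restart the service
    scp "myapp-${VERSION}.war" "${APP_SERVER}:/opt/myapp/"
    ssh "${APP_SERVER}" "sudo systemctl restart myapp"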

Recommendations

Use the right tool for the right job.

If it were me, here is what I would do:

  • Cut releases with the Maven release plugin

  • The built artifacts should always be environment-agnostic.

  • The built artifacts should contain "sensible defaults" for all configuration options. In other words, they should either fail fast if a required configuration option with no sensible default is missing, or behave sensibly if an optional option is left unspecified. An example of a required configuration option might be the database connection details (assuming the app cannot run against an in-memory database); see the sketch after this list.

  • Pick a side in the Chef vs. Puppet war (it doesn't matter which side, and you can change sides whenever you want. If you have an ANT mindset, Chef may suit you better; if you like dependency-management magic, Puppet may suit you better)

  • Developers should own the Chef/Puppet scripts used for deployment, or at least the environment-agnostic part of those scripts.

  • Operations should own the production-environment-specific details used by the Chef/Puppet deployment scripts.

  • Keep all of these scripts in SCM.

  • Use Jenkins, or whatever CI server you prefer, to automate as many of the steps as possible. The Jenkins promoted builds plugin is your friend.

  • Your end game is that every commit, provided it passes all the required tests, *could* be pushed to production automatically (or perhaps with a manual gate of a person saying "go for it")... note that this does not mean you actually do this for every commit, only that you could.
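As promised above, here is a minimal Java sketch of the "sensible defaults / fail fast" idea (the class and property names are invented for illustration):

    import java.util.Optional;

    /** Illustrative config lookup: fail fast on required options, default the rest. */
    public final class AppConfig {

        /** Required option: no sensible default exists, so fail fast at startup. */
        public static String databaseUrl() {
            return Optional.ofNullable(System.getProperty("app.db.url"))
                    .orElseThrow(() -> new IllegalStateException(
                            "Missing required system property app.db.url"));
        }

        /** Optional option: fall back to a sensible default when unspecified. */
        public static int httpPort() {
            return Integer.parseInt(System.getProperty("app.http.port", "8080"));
        }
    }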


What I have used in the past, and it works well, is Apache Karaf + iPOJO, with Subversion as the version control system (I would use git today).

To get version control, you check in a versioned copy of Apache Karaf along with your configuration files. Any changes made during development, or on the production system (when something needs an urgent fix), are still tracked and can be checked in (including information about who made which change, and when).

What Apache Karaf supports is dynamic deployment of Maven libraries from your Maven repository, i.e. you have configuration files that specify the versions of the jars you want, and Karaf will load them as needed from your Maven repository and run them. iPOJO adds components to these modules, which you can configure via property values (again versioned).
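(For example, a Karaf feature descriptor, itself just a version-controlled configuration file, can pin the exact bundle versions to be resolved from the Maven repository. Group, artifact, and version names below are invented:)

    <features name="myapp-features">
      <feature name="myapp" version="1.2.0">
        <!-- bundles are fetched on demand from the Maven repository -->
        <bundle>mvn:com.example/myapp-core/1.2.0</bundle>
        <bundle>mvn:com.example/myapp-web/1.2.0</bundle>
      </feature>
    </features>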

This does assume that you control everything from development through to deployment, but it can work very well even with multiple remote sites.

