What is the best solution for handling multi-platform (dev / integ / valid / prod ...) development and the delivery process?

I'm not very experienced, but I have worked on some major Java EE projects (using Maven 2) with very clearly defined ways of handling installation / delivery across platforms.

1) One of them was to use snapshots for development and then make a Maven release of the components and the main web applications. The delivery thus consists of:

  • war / ear files
  • properties files
  • database (DBMS) files
  • some others

The teams then use these files to deploy the new versions of the applications on the different platforms. I believe this process is strict and always lets you easily keep track of the various configurations delivered to production, but it is not very flexible, the process is a bit heavy, and it sometimes led us to do some dirty things, such as overriding a class inside the war to fix a regression... This is an e-commerce website with 10 million unique visitors per month and 99.89% availability.

2) Another one I have seen is to check out the sources on each platform and then install the snapshot artifacts into the local repository. The application server then uses these snapshots from the .m2 folder. There is no real delivery process for a new version: we just update the sources of the components / webapps, run a few mvn clean install commands and restart the application server. I think it is more flexible, but I see some flaws, and this approach seems dangerous to me. This website has a front office; I don't know the figures, but it is much smaller than the first one. It also has a big back office available to most of the 130,000 employees.

I assume that, depending on the website, its public exposure and its availability requirements, we have to adapt the delivery strategy to the needs.

I am not asking which solution is better, but I wonder whether you have seen different approaches and which strategy you would use in such cases.

java git java-ee svn maven-2
5 answers

While I have not worked on websites, I have had to take part in the release management process of various large (Java) projects in a heterogeneous environment:

  • development on "PC", which in our case means Windows - unfortunately still Windows XP - (plus unit testing)
  • continuous integration and system testing on Linux (because those machines are cheaper to set up)
  • pre-production and production on Solaris (e.g. Sun Fire)

The usual method I saw was:

  • binary dependencies (each project uses the binaries built by the other projects, not their sources)
  • no recompilation for integration testing (the jars built on the PCs are used directly on the Linux farms)
  • full recompilation in pre-production (meaning the binaries stored in the Maven repository), at least to make sure everything is recompiled with the same JDK and the same compilation options.
  • no VCS (version control system, e.g. SVN, Perforce, Git, Mercurial, ...) on the production system: everything is deployed from pre-prod via rsync.

Thus, the various parameters that need to be considered for the release management process are as follows:

  • When you develop your project, are you directly dependent on the sources or binaries of other projects?
  • Where do you store your configuration settings?
    Do you parameterize them and, if so, when do you replace the variables with their final values (only at startup, or also at run time)?
  • Do you recompile everything in the final (pre-production) system?
  • How do you access / copy / deploy to your production system?
  • How do you stop / restart / patch your applications?

(and this list is not exhaustive; other issues will need to be addressed depending on the nature of the application's releases).
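To make the startup-versus-runtime question above a bit more concrete, here is a minimal Java sketch of my own (the property names and the "app.config" system property are assumptions, not something from the projects above): one value is frozen when the class is loaded, while the other is re-read on every call and can therefore be changed on a running system.

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class AppSettings {

        // Resolved once at startup (class load time): changing the file later
        // has no effect until the application is restarted.
        private static final String DB_URL = load().getProperty("db.url");

        // Resolved at run time: every call re-reads the file, so the value can
        // be changed on a live system (at the cost of an I/O hit per lookup).
        public static String featureFlag() {
            return load().getProperty("feature.flag", "off");
        }

        public static String dbUrl() {
            return DB_URL;
        }

        private static Properties load() {
            // "app.config" is a hypothetical system property pointing to the
            // environment-specific settings file kept outside the war/ear.
            String path = System.getProperty("app.config", "conf/app.properties");
            Properties props = new Properties();
            try (FileInputStream in = new FileInputStream(path)) {
                props.load(in);
            } catch (IOException e) {
                throw new IllegalStateException("Cannot read configuration: " + path, e);
            }
            return props;
        }
    }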


The answer to this varies greatly depending on the exact requirements and structure of the teams.

I have implemented processes for several very large sites with similar accessibility requirements, and there are some general principles that I believe have worked:

  • Externalize any configuration, so that the same built artifact can run in all your environments. Then build the artifacts only once for each release: rebuilding for different environments is time consuming and risky, e.g. it is no longer the same application that you tested.
  • Centralize the place where artifacts are built - e.g. all wars destined for production must be packaged on the CI server (using the Maven release plugin on Hudson works well for us).
  • All changes for a release should be traceable (version control, audit table, etc.) to ensure stability and allow quick rollback and diagnostics. This does not have to mean a heavy process - see the next point.
  • Automate everything: building, testing, releasing and rolling back. If the process is reliable, automated and fast, the same process can be used for everything from a quick fix to an emergency change. We use the same process for a quick 5-minute emergency fix and for a major release, precisely because it is automated and fast.

Some additional pointers:

See my answer on loading the property-placeholder location from another property for an easy way to load environment-specific properties with Spring.
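As a rough illustration of that approach (this is my own minimal sketch, not the code from the linked answer; the "env" system property and the file names are assumptions), a placeholder configurer can pick the properties file that matches the current environment:

    import org.springframework.beans.factory.config.PropertyPlaceholderConfigurer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.core.io.ClassPathResource;

    @Configuration
    public class EnvironmentConfig {

        // The environment name comes from a system property such as -Denv=integ;
        // app-dev.properties, app-integ.properties, ... are illustrative names.
        @Bean
        public static PropertyPlaceholderConfigurer placeholderConfigurer() {
            String env = System.getProperty("env", "dev");
            PropertyPlaceholderConfigurer configurer = new PropertyPlaceholderConfigurer();
            configurer.setLocation(new ClassPathResource("app-" + env + ".properties"));
            return configurer;
        }
    }

Any ${...} placeholders in the other bean definitions are then resolved against whichever file matches the environment the artifact is started in, so the artifact itself never changes.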

http://wiki.hudson-ci.org/display/HUDSON/M2+Release+Plugin If you use this plugin and make sure that only the CI server has the correct credentials to perform Maven releases, you can ensure that all releases are built consistently.

http://decodify.blogspot.com/2010/10/how-to-build-one-click-deployment-job.html An easy way to deploy your releases. For large sites, though, you will probably need something more sophisticated to avoid any downtime - for example, deploying to half the cluster at a time and flipping web traffic between the two halves - http://martinfowler.com/bliki/BlueGreenDeployment.html
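As a toy sketch of that blue/green flip (entirely hypothetical and not taken from the linked articles; real setups usually do this at the load balancer): two pools of hosts exist, and live traffic is switched atomically to the half that has just been upgraded and verified.

    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicReference;

    public class BlueGreenRouter {

        private final List<String> blue = Arrays.asList("app1.blue", "app2.blue");
        private final List<String> green = Arrays.asList("app1.green", "app2.green");

        // The pool currently receiving live traffic; the idle half can be
        // upgraded and smoke-tested without any downtime.
        private final AtomicReference<List<String>> active =
                new AtomicReference<List<String>>(blue);

        public List<String> liveHosts() {
            return active.get();
        }

        // Called once the idle half runs the new release and passes its checks.
        public void flip() {
            active.set(active.get() == blue ? green : blue);
        }
    }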

http://continuousdelivery.com/ A good site and book with very good patterns for releasing.

Hope this helps - good luck.


To add to my previous answer, what you are dealing with is basically a CM-RM problem:

  • CM (Change Management)
  • RM (Release Management)

In other words, after the first release (i.e. once the main initial development is finished), you have to keep making releases, and that is what CM-RM is meant to manage.

The RM implementation can be either 1) or 2) from your question, but I would add to that mechanism:

  • proper CM, in order to track any change request and evaluate its impact before committing to any development;
  • proper RM, in order to be able to run the "final" tests (system, performance, regression, deployment tests) and then plan, schedule, execute and monitor the release itself.

Without claiming this is the best solution, this is how my team currently organizes development and deployment.

  • Developers initially develop on their local machine; the OS is a free choice, but we strongly encourage using the same JVM that will be used in production.
  • We have a DEV server to which snapshots of the code are frequently deployed. This is simply an scp of the binary build produced from the IDE. We do plan to build directly on that server, though.
  • The DEV server is used so that stakeholders can continuously peek at the ongoing development. By its very nature it is unstable, and every user of this server knows that.
  • If the code is deemed good enough, it is branched and deployed to the BETA server. Again, this is an scp of a binary build from the IDE.
  • Testing and general QA occurs on this BETA server.
  • In the meantime, if any emergency changes are needed for the software currently in production, we have a third staging server, the UPDATE server.
  • The UPDATE server is initially used only to deploy very small patches. Here too we use scp to copy the binaries.
  • After testing on UPDATE, we copy the build from UPDATE to LIVE. Nothing ever goes directly to the live servers; everything always passes through the UPDATE server.
  • When all testing on BETA is complete, the tested build is copied from the BETA server to the UPDATE server and a final round of sanity checks is performed. Since this is the exact build that was tested on BETA, it is very unlikely that problems will surface at this stage, but we stick to the rule that everything deployed to the live servers must go through the UPDATE server, and that everything on the UPDATE server must be tested before it moves on.

This staged strategy allows us to work on three versions in parallel: version N, which is currently in production and patched via the UPDATE server; version N+1, the next major release, which sits on the BETA server; and version N+2, the major release after that, which is currently under development and runs on the DEV server.

Some of the choices we made:

  • A full application (EAR) usually depends on artifacts from other projects. We chose to depend on the binaries of those other projects instead of building everything from source. This simplifies the build and gives greater confidence that a tested application ships with the correct versions of all its dependencies. The downside is that a fix in such a dependency has to be propagated manually to all applications that depend on it.
  • The configuration for each stage is packaged inside the EAR. We currently use a naming convention, and a script copies the correct version of each configuration file to the right place. Parameterizing the path of each configuration file, e.g. using a single {stage} placeholder in a root configuration file, is currently under consideration. The reason we keep the configuration in the EAR is that the developers are the ones who introduce and depend on the configuration, so they should be responsible for maintaining it (adding new entries, removing unused ones, tuning existing ones, etc.).
  • We use a DevOps approach for the deployment team. It consists of one person who is purely a developer, two people who are both development and operations, and two people who are purely operations.

Embedding the configuration in the EAR may be controversial, since operations traditionally needs to have control over, for instance, the database data sources used in production (which server it points to, how many connections the connection pool may have, etc.). However, since we have people on the development team who are also in operations, they can easily review the configuration changes made by other developers while the code is still in development.
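To make the naming-convention / {stage} idea from the list above a bit more concrete, here is a minimal sketch of my own (the "stage" system property and the file naming are assumptions, not the team's actual script): at startup the application resolves the configuration file, packaged in the EAR, that matches the stage it is running on.

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    public class StageConfigLoader {

        // With -Dstage=BETA this loads e.g. datasource-BETA.properties from the EAR.
        public static Properties load(String baseName) throws IOException {
            String stage = System.getProperty("stage", "DEV");
            String resource = "/" + baseName + "-" + stage + ".properties"; // naming convention
            Properties props = new Properties();
            InputStream in = StageConfigLoader.class.getResourceAsStream(resource);
            if (in == null) {
                throw new IOException("Configuration not packaged in the EAR: " + resource);
            }
            try {
                props.load(in);
            } finally {
                in.close();
            }
            return props;
        }
    }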

In parallel with the staging, we have a continuous build server that performs scripted (ANT) builds after every check-in (at most once every 5 minutes) and runs the unit tests and some other integrity tests.

It's still hard to say whether this is a best-in-class approach, and we are constantly trying to improve our process.


I am a big proponent of a single deployable that contains everything (code, config, DB delta, ...) for all environments, built and released centrally on the CI server.

The basic idea is that code, config and DB delta are tightly coupled anyway. The code depends on certain properties set in the config and on certain objects (tables, views, ...) being present in the database. So why split them apart and spend your time tracking everything down to make sure it fits together, when you can simply ship it together in the first place?
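As a rough sketch of what shipping the DB delta with the release can look like (my own simplified example, not the author's tooling; the schema_version table and the delta file layout are assumptions), the deploy step applies, in order, every delta script that has not yet been recorded as applied:

    import java.io.File;
    import java.nio.file.Files;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.Arrays;

    public class DbDeltaApplier {

        // Applies db/delta-0001.sql, db/delta-0002.sql, ... exactly once each,
        // recording what has been run in a schema_version table.
        public static void apply(Connection con, File deltaDir) throws Exception {
            try (Statement s = con.createStatement()) {
                // Table creation syntax varies per database; shown here for brevity.
                s.execute("CREATE TABLE IF NOT EXISTS schema_version (script VARCHAR(255) PRIMARY KEY)");
            }
            File[] scripts = deltaDir.listFiles((dir, name) -> name.endsWith(".sql"));
            if (scripts == null) {
                return; // no delta directory shipped with this release
            }
            Arrays.sort(scripts); // lexical order == version order, by convention
            for (File script : scripts) {
                if (alreadyApplied(con, script.getName())) {
                    continue;
                }
                String sql = new String(Files.readAllBytes(script.toPath()), "UTF-8");
                try (Statement s = con.createStatement()) {
                    s.execute(sql);
                }
                try (PreparedStatement ps =
                         con.prepareStatement("INSERT INTO schema_version (script) VALUES (?)")) {
                    ps.setString(1, script.getName());
                    ps.executeUpdate();
                }
            }
        }

        private static boolean alreadyApplied(Connection con, String name) throws Exception {
            try (PreparedStatement ps =
                     con.prepareStatement("SELECT 1 FROM schema_version WHERE script = ?")) {
                ps.setString(1, name);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next();
                }
            }
        }
    }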

Another important aspect is to minimize differences between environments in order to reduce the likelihood of failure to an absolute minimum.

For more information, see Continuous Delivery at Parleys: http://parleys.com/#id=2443&st=5

