Automated deployment to multiple servers with Mercurial

I've recently been looking into workflows for Mercurial, since we've started using it for our web development. We need an automated way of propagating changes that are pushed to the test and live instances out to multiple endpoints. Here is an outline of the idea:

+-------+
|Dev    |
|       |
+-------+
    |
    | Push
    +-----------------+
                      |
                      V
+-------+  Push   +-------+
|Live   |<--------|Test   |
|server |         |server |
+-------+         +-------+
    |                 |
    |    +-------+    |    +-------+
    +--->|Live 1 |    +--->|Test 1 |
    |    |       |    |    |       |
    |    +-------+    |    +-------+
    |                 |
    |    +-------+    |    +-------+
    +--->|Live 2 |    +--->|Test 2 |
    |    |       |    |    |       |
    |    +-------+    |    +-------+
    |                 |
    |    +-------+    |    +-------+
    +--->|Live 3 |    +--->|Test 3 |
         |       |         |       |
         +-------+         +-------+

Basically, the idea is that all we as developers should have to do, once the code reaches a stable point, is issue a push command (which doesn't have to be just hg push) to the test server, and from there the changes spread out automatically. Then, once testing is complete, we push from test to live (or, if it's easier, straight from dev to live), and that again propagates to each of the individual instances.

It would also be nice to be able to add new test and live instances easily (this should be possible if, for example, the IP addresses were stored in a database that a script could read, etc.).
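
As an illustration of that idea, here is a minimal Python sketch assuming a small SQLite database with a hypothetical endpoints table holding a role column ("test" or "live") and a url column; adding a new instance then just means inserting a row:

    # endpoints.py -- hypothetical helper: look up deployment targets by role
    # ("test" or "live") from a small SQLite database, so that adding an
    # instance is just a matter of inserting a row.
    import sqlite3

    def load_endpoints(role, db_path="deploy.db"):
        """Return the list of endpoint URLs registered for the given role."""
        with sqlite3.connect(db_path) as conn:
            rows = conn.execute(
                "SELECT url FROM endpoints WHERE role = ?", (role,)
            ).fetchall()
        return [url for (url,) in rows]

    # Example: load_endpoints("test")
    #   -> ["ssh://deploy@test1//srv/www/site", "ssh://deploy@test2//srv/www/site"]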

What would be the best way to achieve this? I know about Mercurial hooks. Maybe a script that a hook launches? I've also looked at Fabric; would that be a good option?
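
To make the hook-plus-script idea concrete (this only illustrates the approach floated in the question; the answer below takes a different route via Jenkins): a sketch in which the test server keeps its own clone and a changegroup hook launches a hypothetical fanout.py that pushes the newly received changesets on to every downstream test instance, reusing the load_endpoints helper sketched above. All paths are assumptions.

    # In the test server's repository .hg/hgrc:
    #
    #   [hooks]
    #   changegroup = python /srv/hg/fanout.py
    #
    # fanout.py -- hypothetical external hook script: after every incoming
    # push, forward the repository to each downstream test instance over ssh.
    import subprocess
    import sys

    from endpoints import load_endpoints  # the hypothetical helper above

    def main():
        failures = 0
        for url in load_endpoints("test"):
            # Plain "hg push" to each instance; this assumes key-based ssh
            # access from the test server to every box.
            rc = subprocess.call(["hg", "push", url])
            # "hg push" exits with 1 when there is nothing to push, so only
            # treat exit codes greater than 1 as real errors.
            if rc > 1:
                failures += 1
        return 1 if failures else 0

    if __name__ == "__main__":
        sys.exit(main())

Note that changegroup fires after the changesets have already been accepted, so a failure here cannot reject the incoming push; a pretxnchangegroup hook would be needed for that.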

Also, what supporting software would each endpoint need? Would it be easier if a Mercurial repository existed on every server? Would SSH access be useful? Etc.

1 answer

I did something similar using Mercurial, Fabric and Jenkins:

  +-------+
  | Devs  |
  +-------+
      | hg push
      V
  +-------+
  |  hg   |                      "central" (by convention) hg repo
  +-------+
      |\
      | \----------------------+
      |                        |
      | Jenkins job            | Jenkins job
      | pull stable            | pulls test
      | branch & compile       | branch & compile
      |      +-------+         |
      +------|Jenkins|---------+
      |      +-------+         |
      V                        V
  +-------+                +-------+
  | "live"|                | "test"|    shared workspaces ("live", "test")
  +-------+                +-------+
      | Jenkins job            | Jenkins job     <-- jobs triggered
      | calls fabric           | calls fabric        manually in
      |    +-------+           |    +-------+        Jenkins UI
      |--> | live1 |           |--> | test1 |
  ssh |    +-------+       ssh |    +-------+
      |    +-------+           |    +-------+
      |--> | live2 |           |--> | test2 |
      |    +-------+           |    +-------+
      |      ...               |      ...
      |    +-------+           |    +-------+
      +--> | liveN |           +--> | testN |
           +-------+                +-------+
  • I don't have a repo on every web server; I use Fabric to deploy only what is needed.
  • I have a single fabfile.py (kept in the repo) that contains all the deployment logic (a sketch follows this list).
  • The set of servers (IP addresses) to deploy to is passed to Fabric as a command-line argument (this is part of the Jenkins job configuration).
  • I use Jenkins shared workspaces so I can separate the pull-and-compile jobs from the actual deployment jobs (which lets me redeploy the same build if necessary).
  • If you can get away with a single Jenkins job that pulls, compiles and deploys, you'll be happier. The shared workspace is a hack I had to use for my setup, and it has drawbacks.
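
As a rough illustration of the fabfile mentioned above (Fabric 1.x-style API assumed; the paths and the reload step are made-up placeholders, and the real deployment logic will be site-specific):

    # fabfile.py -- minimal deployment sketch (Fabric 1.x style API).
    from fabric.api import put, run, task

    @task
    def deploy(build_dir="build"):
        """Copy a compiled build to the remote docroot and trigger a reload."""
        # Upload the build output produced by the pull-and-compile Jenkins job.
        put(build_dir, "/srv/www/site")
        # Hypothetical reload step -- replace with whatever your stack needs.
        run("touch /srv/www/site/tmp/restart.txt")

The hosts are deliberately not listed in the fabfile; they are supplied on the Fabric command line by the Jenkins job (an example invocation follows the next list).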

To directly address some of your questions:

  • Devs working on the test branch can push at their leisure and collectively decide when to trigger the Jenkins job that updates the test environment.
  • When test looks good, merge it into the stable branch and run the Jenkins job that updates the live environment.
  • Adding a new web box is just a matter of adding another IP address to the command line used to invoke Fabric, i.e. in the Jenkins job configuration (see the example after this list).
  • All servers need SSH access from the Jenkins box.
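
For example, the build step of a hypothetical "deploy test" Jenkins job could be a single line invoking Fabric; the IP addresses and paths here are made up:

    fab -H 10.0.0.21,10.0.0.22,10.0.0.23 deploy:build_dir=$WORKSPACE/build

Adding a "Test 4" box is then just a matter of appending its address to the -H list.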
