Transitioning from Subversion to Mercurial: how do we adapt our workflow and staging/integration systems?

We are all moving from svn to hg, and since the development workflow itself is more or less figured out, the hardest part remains: the staging and integration setup.

I hope this question goes a bit beyond the generic "how do I migrate from xxx to Mercurial". Please forgive the long and probably poorly worded question :)

We are a web shop running many projects (mostly PHP and Zend), so we have one huge svn repo with roughly 100+ folders, each representing a project with its own tags, branches and trunk, of course. On our integration and testing server (where QA and clients review work in progress) everything is fairly automated: Apache is configured to pick up new projects automatically, creating a vhost for each project/trunk; mysql migration scripts also live right in trunk, and developers can run them through a simple web interface. In short, our workflow is this:

  • Write code, finish the work, commit
  • Run the update on the server via the web interface (this basically does an svn up on the server in the specific project, and also runs the db-migration script if needed)
  • QA tests the changes on the server

This approach is certainly suboptimal for large projects, where we have 2+ developers working on the same code. Branching in svn only caused more headaches, hence the switch to Mercurial. And here lies the question: how do we organize an effective staging/integration/testing setup for this kind of work (where there are many projects and, say, one developer may work on 3 different projects in a single day)?

We have decided to track the default branch on the server by default, and do all changes in separate named branches. In that case, how do we automate staging updates per branch? Before, for a single project we almost always worked in trunk, so we needed one database, one vhost, etc. Now we are potentially talking about N databases per project, N vhost configurations, and so on. And what about CI tasks (such as running phpDocumentor and/or unit tests)? Should those run only on default? On every branch?
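To make the "N databases, N vhosts" idea concrete, here is a minimal sketch of how per-branch resource names could be derived mechanically. The `project-branch.staging.example.com` scheme and the name-length limits are illustrative assumptions, not something from the question:

```python
import re

def branch_slug(branch):
    """Normalize a Mercurial branch name into something safe for
    hostnames and MySQL database names (lowercase, [a-z0-9-] only)."""
    slug = re.sub(r"[^a-z0-9]+", "-", branch.lower()).strip("-")
    return slug or "default"

def staging_names(project, branch):
    """Derive a per-branch vhost name and database name for one project.
    The domain and naming scheme here are purely illustrative."""
    slug = branch_slug(branch)
    vhost = f"{project}-{slug}.staging.example.com"
    # MySQL identifiers are limited to 64 characters; truncate defensively.
    db = f"{project}_{slug}".replace("-", "_")[:64]
    return vhost, db

print(staging_names("shop", "my experiment"))
# ('shop-my-experiment.staging.example.com', 'shop_my_experiment')
```

A vhost-generation script (or Apache's mod_vhost_alias) and the db-migration web interface could then be driven off these names, so creating a branch implicitly creates its sandbox.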

I wonder how other teams solve this problem; perhaps there are good practices we are not using or not noticing?

Additional notes:

It might be worth mentioning that we chose Kiln as our repo hosting service (mainly because we already use FogBugz).

2 answers

This is by no means the complete answer you will eventually settle on, but here are some tools that are likely to shape it:

  • repositories without working directories: if you `hg clone -U` or `hg update null`, you get a repository without a working directory (just the .hg folder). These are better on the server because they take up less space and no one is tempted to edit files there.
  • changegroup hooks

As for the latter: the changegroup hook fires whenever one or more changesets arrive via push or pull, and you can do interesting things with it, such as:

  • pull the changes into another repo depending on what arrived
  • update the working directory of the receiving repo
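For reference, a changegroup hook is wired up in the receiving repository's `.hg/hgrc`. A minimal sketch, where the script paths are hypothetical:

```ini
[hooks]
# Runs once per push/pull that adds changesets; Mercurial exports
# HG_NODE (the first new changeset) and HG_SOURCE to the command.
changegroup = /srv/hg/hooks/on-changegroup.sh

# In-process Python hooks are also supported, e.g.:
# changegroup.ci = python:/srv/hg/hooks/ci.py:on_changegroup
```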

For example, you could automate something like this using only the tools described above:

  • a developer pushes five changesets to central-repo/project1/main
  • the last changeset is on the "my-experiment" branch, so the csets are automatically pulled into the (created on demand, if necessary) repo central-repo/project1/my-experiment
  • central-repo/project1/my-experiment automatically runs `hg update tip`, which is certain to be on the my-experiment branch
  • central-repo/project1/my-experiment automatically runs the tests in its working directory and, if they pass, does a "make dist"-style deploy, which could also set up the database and vhost
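The steps above could be sketched as the decision logic of such a hook. This is only an illustration under assumed conventions (the `/srv/hg` layout, a "main" repo per project, `make test dist` targets); it builds the command list rather than executing anything:

```python
import os

def plan_for_branch(project, branch, base="/srv/hg"):
    """Return the hg/make commands a changegroup hook could run for the
    branch that just received changesets. The repo layout, the 'main'
    repo name, and the make targets are all assumptions."""
    main_repo = os.path.join(base, project, "main")
    if branch == "default":
        target = main_repo
        cmds = []
    else:
        target = os.path.join(base, project, branch)
        # clone -U creates a per-branch repo without a working directory;
        # a real hook would skip the clone when the repo already exists.
        cmds = [["hg", "clone", "-U", main_repo, target],
                ["hg", "-R", target, "pull", "-b", branch]]
    cmds.append(["hg", "-R", target, "update", branch])
    # Run the tests and, if they pass, build the deployable artifact.
    cmds.append(["make", "-C", target, "test", "dist"])
    return cmds
```

A real hook would read the branch from the incoming changesets (e.g. via `HG_NODE`) and run these commands with `subprocess`, but the branching logic is the interesting part.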

Chapter 10 of the Mercurial book covers hooks in depth. The big idea is not to make a person wait in this process: you want the developer to push to a repo containing possibly-good code, and have automated processes do the CI and deploy work, which, if it passes, becomes the likely-good repo.

In the largest Mercurial setup I have worked in (about 20 developers), we got to the point where our CI system (Hudson) pulled from the "possibly-good" repo for each project, then built and tested, handling each branch separately.

Bottom line: all the tools needed to set up what you want probably already exist, but gluing them together will be the custom, one-off part.


What you need to remember is that a DVCS (as opposed to a CVCS) introduces another dimension to version control:
You no longer have to rely solely on branching (and getting a staging workspace from the right branch).
You now also have a publication workflow (push/pull between repos).

The value of your staging environment is that it is now a repo (with the full project history), checked out at a specific branch:
Many developers can push many different branches to that staging repo: the reconciliation process can be done in isolation within it, in a "main" branch of your choosing.
Or they can pull the staging branch into their own repo and check things out before pushing back.

alt text http://hginit.com/i/02-repo.png
From Joel's Mercurial tutorial, HgInit

A developer does not have to commit just so others can see the work: the publication process in a DVCS lets him or her first pull the staging branch, reconcile any conflicts locally, and then push to the staging repo.


Source: https://habr.com/ru/post/1315851/

