Migrating from SVN to HG: Branching and Backup

My company uses svn right now, and we are very familiar with it. However, since we do a lot of parallel development, merging can be very painful. We have been playing with hg, and we really like the ability to create fast, efficient clones for each feature.

We have two main problems that we would like to solve before moving to hg:

  • Branching for former svn users. I am familiar with the "four ways to branch in Mercurial", as described in Steve Losh's article. We think we should have "physical" branches, because I think the development team will find that the easiest way to migrate from svn. Therefore, I think we should follow the "branching with clones" model, which means the server holds a separate clone for each branch. Although this means that each clone/branch must be created on the server and published separately, that is not a big deal for us, since we are used to svn branches checking out as separate copies. However, I am concerned that merging changes and following history between branches may become difficult in this model.
  • Backup. If the programmers on our team create local branch clones, how do they back up the local clone? We are used to making svn commits with messages like "Interim commit: db function doesn't work yet" on a feature branch. I see no easy way to do this in hg.

Advice gratefully received. Rory

+7
branch svn mercurial backup
4 answers

I am concerned, however, that merging changes and following history between branches may become difficult in this model.

Well, you have decided that you want to keep the branches in separate clones, and this does not come for free. But nothing stops you from setting up a repository-level configuration file that aliases all the clones to make pushing/pulling easier.
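As a sketch of what such aliasing could look like (the server name and repository layout below are hypothetical), each developer's clone can list the sibling branch clones in its .hg/hgrc under [paths]:

```ini
# .hg/hgrc inside a developer's local clone (hypothetical server/paths)
[paths]
default   = ssh://hg.example.com//repos/trunk
feature-x = ssh://hg.example.com//repos/feature-x
release-1 = ssh://hg.example.com//repos/release-1
```

With this in place, hg pull feature-x or hg push release-1 work by name instead of by full URL.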

If the programmers on our team create branch clones, how do they back up the local clone? We are used to making svn commits with messages like "Interim commit: db function doesn't work yet" on a feature branch. I don't see an easy way to do this in hg.

This is the number one reason to use a DVCS in the first place, because it supports exactly this use case. Commits are local until you push them. This means that each developer can create as many "interim" commits as they see fit. BUT this is not a "backup" in the original sense; it is more like a "save point" for the individual developer, as long as you do not clutter the history shared by all members of your team with them. Using Mercurial Queues, you can easily "collapse" all of these interim commits before pushing them, which results in a clean history in your central repository.

If real backup (in the sense of: what happens if this developer's machine catches fire?) is the problem, then the answer is simple: just give each developer a private repository on the server to which he can push regularly.

+2

I cannot speak to your svn branch migration problem. Your solution sounds fine, but concurrency is VERY VERY tough, and I haven't thought about your situation carefully enough to say.

But assuming you make a separate repository on the server for each "svn-branch", I believe you can easily solve your second problem. First, follow Peter Loron's advice of copying the files for backup. Then, whenever a developer is ready to push to "their" server-side branch repository, they can commit on an "hg branch" (a named branch) in the same repository. You still get the "Interim commit: the db function is not working yet" commits, but they are not on the trunk, breaking all the builds.

The key to making all this work, and the reason hg/git is so cool, is that when the feature is ACTUALLY DONE, you merge this "hg-branch" back into the trunk in the same repository, and the chances are much better than with SVN that the automatic merge will JUST WORK.

0

If you need to do a lot of concurrent development, you need to use either a distributed version control system, or ClearCase or Perforce. ClearCase and Perforce are not distributed version control systems, but they handle merging probably better than most other tools.

ClearCase merging is made for parallel development and works very well. In fact, most developers in ClearCase develop on their own branch and then merge their changes into an integration stream when whatever they are working on is complete. The UCM layer on top of ClearCase simply automates this behavior.

Perforce merging is more tuned to what they call divergent branching, but it looks like it supports concurrent development too.

Subversion is a great version control system. I use it a lot, and you cannot beat the price, but let's face it, merging in Subversion is still very, very rough around the edges. Using properties to track merges is very hacky. When you look at the logs, you see a lot of changes simply because Subversion changed the svn:mergeinfo property, even though the files themselves are mostly unaffected. In addition, you have to know when to use the --reintegrate flag.

Of course, distributed version control systems handle concurrent development with aplomb. This is how they were designed from the very beginning.

My only question is: why do you do so much work in parallel? Over the years, I have found that getting developers to work together on the same set of changes simply works best. When they are forced to work in the same set of code, developers are more careful with their development. They commit in smaller bites, are more careful about their changes, and communicate more with each other.

When I worked with developers in ClearCase, and each developer had his own branch, I used to have to go around and make sure developers regularly merged in their changes. It is much easier to program when no one but you is changing the code, so developers would simply do all their work on their own branch without taking any changes made by other developers. You would have 20 developers doing this and see no changes at all on the main branch. Then, right before delivery, the developers would mass-merge all their changes. Fun followed.

We would then spend the next week trying to clean everything up and get all the developers' changes to work together. QA was upset because they had almost no time for testing. It was not uncommon to ship a release untested. After all, we had dates to meet.

There are good reasons to have concurrent development, but I have repeatedly found that developers request it because it makes their job easier. They do not need to coordinate their changes, because now that is your job. After all, that is why they pay you the big bucks.

Well, not big bucks, but you get paid more than a lot of people. Maybe not the developers, but you make more than other people in your company, such as the janitors, unless they belong to the union. Well, you get stock options.

0

Suggestion: also consider git.

In any distributed version control system, such as hg or git, your local copy contains both the working files and the full local repository. That may be all the backup you need. If you need more, just copy the files (including the repository files in the .hg or .git directory) to backup storage.
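A minimal sketch of this copy-everything approach (the directory layout below is faked to keep the example self-contained; in practice you would archive a real clone):

```shell
# Archive the working directory, including the hidden .hg repository files.
mkdir -p project/.hg                          # stand-in for a real clone
echo "repository data" > project/.hg/store    # fake repo content
echo "work in progress" > project/file.txt
tar -czf project-backup.tar.gz project        # the archive IS the backup
tar -tzf project-backup.tar.gz | grep ".hg"   # confirm repo files are inside
```

Because the .hg directory holds the complete history, restoring the archive restores both the working files and every local commit.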

-1
