Handling Relationships Between Multiple Subversion Projects

In my company, we use one SVN repository to store our C++ code. The code base consists of a common part (infrastructure and applications) and client projects (designed as plugins).

The repository layout is as follows:

  • Infrastructure
  • App1
  • App2
  • App3
  • Project-for-client-1
    • App1 plugin
    • App2 plugin
    • Configuration
  • Project-for-client-2
    • App1 plugin
    • App2 plugin
    • Configuration

A typical release for a client project includes the project's own data plus each of the projects it uses (for example, Infrastructure).

The actual layout of each of these directories is:

  • Infrastructure
    • branches
    • tags
    • trunk
  • Project-for-client-2
    • branches
    • tags
    • trunk

And the same applies to other projects.

We have a few problems with the layout above:

  • It is difficult to set up a new development environment for a client project, since you need to check out all the projects involved (for example, Infrastructure, App1, App2, and project-for-client-1); see the sketch after this list.
  • It is difficult to tag a release of a client project, for the same reason as above.
  • If a client project needs to change some common code (for example, Infrastructure), we sometimes use a branch. It is difficult to keep track of which branches are in use in which projects.
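
To make the first point concrete, setting up a working copy for client 1 today means something like the following sequence of checkouts (the repository URL is a placeholder, not our real server):

    # One checkout per project, repeated by hand for every new environment
    svn checkout https://svn.example.com/repo/Infrastructure/trunk Infrastructure
    svn checkout https://svn.example.com/repo/App1/trunk App1
    svn checkout https://svn.example.com/repo/App2/trunk App2
    svn checkout https://svn.example.com/repo/Project-for-client-1/trunk Project-for-client-1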

Is there an SVN way to solve any of the above? I was thinking about using svn:externals in the client projects, but after reading this post I understand that it may be the wrong choice.

+4
5 answers

You can handle this with svn:externals. This is a property you set on a directory that points at a URL in an svn repository, and it lets you pull parts of another repository (or of the same one) into your working copy. One way to use it: in project-for-client-2, you add svn:externals entries pointing at the Infrastructure branch you need, the App1 branch you need, and so on. Then, when you check out project-for-client-2, you get all the right pieces.

The svn:externals definitions are versioned along with everything else, so as project-for-client-1 gets tagged, branched, and updated, the correct external branches will always be pulled in.
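
For example, a minimal sketch of how that could be set up from the command line, assuming hypothetical repository URLs and branch names rather than the poster's actual layout:

    # Check out the client project, then declare its dependencies as externals.
    svn checkout https://svn.example.com/repo/Project-for-client-2/trunk client2
    cd client2

    # Property value format: URL followed by the local directory to place it in.
    svn propset svn:externals \
    "https://svn.example.com/repo/Infrastructure/trunk Infrastructure
    https://svn.example.com/repo/App1/branches/client2-plugin App1" .

    svn commit -m "Pull Infrastructure and the App1 plugin branch in via svn:externals"

    # From now on, a plain checkout or update of client2 also fetches the externals.
    svn update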

+3

Yes, that sucks. We do the same thing, and I can't think of a better layout either.

So we have a set of scripts that automate everything related to Subversion. Each client project contains a file called project.list, which lists all the Subversion projects/paths needed to build that client. For instance:

    Infrastructure/trunk
    LibraryA/trunk
    LibraryB/branches/foo
    CustomerC/trunk

Each script then looks something like this:

    for PROJ in $(cat project.list); do
        # execute commands here (checkout, update, tag, ...)
    done

The commands might be checkout, update, or tag. It is a little more complicated than that in practice, but it means everything stays coordinated: checkout, update, and tag each become a single command.
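
As a rough sketch of what one of those scripts could look like (the base URL, directory naming, and error handling are assumptions for illustration, not the actual scripts described above):

    #!/bin/sh
    # Hypothetical "set up a working copy" script driven by project.list.
    BASE_URL="https://svn.example.com/repo"

    for PROJ in $(cat project.list); do
        # e.g. PROJ = "Infrastructure/trunk" checks out into ./Infrastructure
        DIR=$(echo "$PROJ" | cut -d/ -f1)
        svn checkout "$BASE_URL/$PROJ" "$DIR" || exit 1
    done

An update or tag script would follow the same loop, swapping svn checkout for svn update or svn copy.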

And, of course, we try to branch as little as possible, which is the most important suggestion I can make. If we do need to branch something, we try to work either off trunk or off previously tagged versions of as many of the dependencies as possible.

+2

My suggestion would be to change the directory layout from

  • Infrastructure
    • branches
    • tags
    • trunk
  • Project-for-client-1
    • branches
    • tags
    • trunk
  • Project-for-client-2
    • branches
    • tags
    • trunk

to

  • branches
    • Feature 1
      • Infrastructure
      • Project-for-client-1
      • Project-for-client-2
  • tags
  • trunk
    • Infrastructure
    • Project-for-client-1
    • Project-for-client-2

There are some problems with this layout: branches become large, but at least it's easier to tag specific points in your code.

To work on the code, you can simply check out trunk and work in it. Then you don't need scripts that check out all the different projects; each project just refers to the infrastructure as "../Infrastructure". Another problem with this layout is that you need to check out multiple copies if you want to work on several projects independently; otherwise, changing the infrastructure for one project may leave another project not compiling until it is updated.

This can also make releases more cumbersome, and it becomes harder to keep the code for different projects separate.
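
One consequence worth spelling out: with a single shared trunk, branching or tagging everything a feature needs becomes a single server-side copy. A sketch, with a hypothetical repository URL and branch name:

    # One server-side copy branches Infrastructure and all client projects together.
    svn copy https://svn.example.com/repo/trunk \
             https://svn.example.com/repo/branches/feature-1 \
             -m "Branch everything for feature 1"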

+2

First, I don't agree that externals are evil, although they aren't perfect.

You are currently doing several checkouts to build up a working copy. If you used externals, it would do exactly that, but automatically and consistently every time.

If you point your externals at tags (or specific revisions) in the target projects, you only need to tag the current project for a release, since that tag records exactly what the externals pointed to. You will also have a record in your project's history of when you changed the externals to use a new version of a particular library.

Externals are not a panacea, and, as the post shows, there can be problems. I'm sure there is something better than externals, but I haven't found it yet (even conceptually). Certainly the structure you use can provide a lot of information and control over your development process while using externals. The problems the post's author had were not fundamental corruption issues, though: a clean checkout sorted everything out, and they were quite rare (can you really not create a new branch of a library in your repo?).
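
To make the point about pinning externals concrete, a pinned svn:externals definition could be set roughly like this (the URLs, tag name, and revision number are invented for the sketch):

    # Pin each dependency to a tag or an explicit revision, so tagging this
    # project alone fully records the release.
    svn propset svn:externals \
    "https://svn.example.com/repo/Infrastructure/tags/2.1 Infrastructure
    -r 4500 https://svn.example.com/repo/App1/trunk App1" .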

One point to consider is whether to use externals recursively. I'm not sold on either yes or no, and usually take a pragmatic approach.

Consider using piston, as the article suggests. I haven't seen it in action, so I can't comment, but it may do the same job as externals in a better way.

+2

In my experience, it is more useful to have a separate repository for each individual project. Otherwise you get the problems you describe, and the revision numbers also change whenever other projects change, which can be confusing.

Only when individual projects genuinely belong together (such as software, hardware schematics, documentation, and so on) do we use a single repository, so that the revision number ensures the entire package is in a known state.

0
