How to manage versions of build tools and libraries?

What are the guidelines for incorporating your compiler, libraries, and other tools into your version control system?

I've had problems in the past where, even though we had all the source code, rebuilding an old version of the product was an exercise in trying to reproduce the exact configuration of Visual Studio, InstallShield and other tools (including the correct patch versions) used to build it. On my next project I'd like to avoid this by checking the build tools themselves into source control and building with them. That would also simplify setting up a new build machine: 1) install our source control tool, 2) point it at the right branch, and 3) build.
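Below is a minimal sketch of what that three-step bootstrap could look like; the generic "scm" command, the paths, and the build script name are placeholders I've invented for illustration, not real tools.

```python
# Minimal sketch of the three-step bootstrap described above. The generic "scm"
# command, the paths, and the build script name are placeholders, not real tools.
import subprocess

BRANCH = "release_3.2"            # branch/label that pins the source AND the tools
VIEW_ROOT = r"C:\views\build"     # where the checkout is materialised

def bootstrap_and_build():
    # 2) point the (already installed) source control client at the right branch
    subprocess.run(["scm", "checkout", "--branch", BRANCH, VIEW_ROOT], check=True)
    # 3) build, using only tools that came down with the checkout
    subprocess.run(["cmd", "/c", VIEW_ROOT + r"\tools\build.cmd"], check=True)

if __name__ == "__main__":
    bootstrap_and_build()
```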

The options I reviewed include:

  • Copy the installation CD ISOs into source control. While this gives us the backup we'd need if we ever have to go back to an old version, it isn't practical for day-to-day use (every build would have to start with an installation step, which could easily turn a 1-hour build into a 3-hour one).
  • Install the software into source control. ClearCase maps a branch to a drive letter, so we could install the software onto that drive. But this doesn't capture the non-file parts of installing a tool, such as registry settings.
  • Install all the software and set up the build process inside a virtual machine, store the VM in version control, and figure out how to make the VM run a build when it boots. While this would cleanly capture the state of the “build machine”, we take on the overhead of a VM, and it does nothing to make the same tools available to developers.

This seems like such a basic configuration management problem, but I haven't been able to find any resources on how to do it. What are your suggestions?

+6
version-control clearcase build-process build-automation
9 answers

I think a VM is your best solution. We have always used dedicated build machines to ensure consistency. Back in the old days of COM DLL hell, there were dependencies (COMCAT.DLL, anyone?) on installed non-development software such as Office. Your first two options don't solve anything to do with shared COM components. If you have no problems with shared components, they may work.

There's no reason developers couldn't take a copy of the same VM so they can debug in a clean environment. Your problem gets more complicated if your architecture has many physical tiers, such as a mail server, a database server, and so on.

+5

This is very specific to your environment, which is why you won't find a guide that handles every situation. All the shops I've worked in handled it differently. I can only give you my opinion on what has worked best for me:

  • Put everything you need to build the application on a fresh workstation under source control.
  • Keep large applications such as IDEs, SDKs, and database engines out of source control; store them in a shared directory as ISO files instead.
  • Keep a text document with the source code listing the ISO files needed to build the application (a small check along those lines is sketched below).
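A hedged sketch of that last point, assuming a hypothetical manifest file name, format, and ISO share path:

```python
# Sketch only: a manifest checked in with the source lists the ISO images a build
# machine needs, and this script reports any that are missing from the ISO store.
# The file name, format, and share path are assumptions.
from pathlib import Path

MANIFEST = Path("build-tools.txt")             # versioned alongside the source
ISO_STORE = Path(r"\\fileserver\build-isos")   # where the large installers live

def missing_isos():
    wanted = [line.strip() for line in MANIFEST.read_text().splitlines()
              if line.strip() and not line.startswith("#")]
    return [name for name in wanted if not (ISO_STORE / name).exists()]

if __name__ == "__main__":
    for iso in missing_isos():
        print("missing:", iso)
```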
+4

I would definitely look into the legal/licensing issues around this idea. Is it actually allowed under the various licenses in your toolchain?

If you don't like the idea of a VM image, have you considered ghosting (disk imaging) a fresh development machine that is able to produce a release build? Of course, keeping that ghost image working as the hardware changes may be more trouble than it's worth...

+2

Just a note on versioning libraries in your version control system:

  • it is a good solution, but it requires some packaging work (i.e. reducing the number of files in each library to a minimum);
  • it does not by itself solve the “configuration” aspect (i.e. “what specific set of library versions does my project 3.2 build against?”, illustrated below).
    Don't forget that this set will evolve with each new release of your project. UCM and its “composite baselines” can help provide that answer.

The packaging aspect (minimum number of files) is important because:

  • you do not want to access your libraries over the network (e.g. through a dynamic view), because compilation is much slower than with local copies of the library files;
  • which means you want the library on your local disk, i.e. a snapshot view, i.e. downloading those files... and this is where the packaging of your libraries pays off: the fewer files you have to download, the better off you are ;)
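To make the “configuration aspect” concrete, here is a small illustrative sketch; the project names, library names, and versions are invented, and in ClearCase UCM this role would be played by a composite baseline rather than a Python dictionary:

```python
# Illustration only: record which exact set of library versions each project release
# builds against, so "what does release 3.2 use?" always has an answer.
PROJECT_BASELINES = {
    "project-3.2": {"libfoo": "1.4.2", "libbar": "2.0.1", "libbaz": "0.9.7"},
    "project-3.3": {"libfoo": "1.5.0", "libbar": "2.0.1", "libbaz": "1.0.0"},
}

def libraries_for(release):
    # answers "what specific set of libraries does this release build against?"
    return PROJECT_BASELINES[release]

print(libraries_for("project-3.2"))
```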
+1

My organization has a read-only file system where every tool and library is stored by name and version. “Release links” (essentially symbolic links) point to the version your project uses. When a new version arrives, it is simply added to the file system and you switch your symlink over to it. There is a full audit history of the symlinks, and you can create new symlinks for different versions.

This approach works fine on Linux, but it doesn't work so well for Windows applications, which typically prefer to keep things local to the machine, such as configuration in the registry.
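A minimal sketch of the release-link idea, assuming hypothetical /tools and per-project paths (not this organization's actual layout):

```python
# Sketch of a "release link": one symlink per tool, repointed when a project moves
# to a new version. Paths and version numbers are made-up examples.
import os

TOOLS_ROOT = "/tools"                # read-only store: /tools/gcc/4.1, /tools/gcc/4.3, ...
LINK_DIR = "/projects/myproj/tools"  # per-project links: /projects/myproj/tools/gcc

def use_version(tool, version):
    target = os.path.join(TOOLS_ROOT, tool, version)
    link = os.path.join(LINK_DIR, tool)
    tmp = link + ".new"
    os.symlink(target, tmp)   # create the new link alongside the old one
    os.replace(tmp, link)     # atomically replace the old link with the new one

use_version("gcc", "4.3")
```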

0

Do you use a continuous integration (CI) or build tool such as NAnt to run your builds?

As a .NET example, you can specify the exact framework version to target for each build.

Whatever the popular CI tool is for the platform you're developing on, it probably has options that let you avoid keeping multiple IDEs in your version control system.

0

In many cases you can force your build to use the compilers and libraries that are checked into source control, rather than relying on global machine state that won't be reproducible in the future. For example, with the C# compiler you can use the /nostdlib switch and /reference each library manually, pointing at the versions checked into source control. And of course the compilers themselves get checked into source control too.
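A hedged sketch of that idea; the tree layout and file paths are assumptions, while /nostdlib, /noconfig and /reference are standard csc switches:

```python
# Invoke the checked-in copy of csc.exe with /nostdlib and /noconfig so no
# machine-global assemblies are picked up, and /reference only the library versions
# that live in source control. Paths below are invented for illustration.
import subprocess

TREE = r"C:\views\myproj"  # root of the checked-out source tree
CSC = rf"{TREE}\tools\csharp\csc.exe"
REFS = [rf"{TREE}\libs\mscorlib.dll", rf"{TREE}\libs\System.dll"]

cmd = [CSC, "/nostdlib", "/noconfig"]
cmd += [f"/reference:{r}" for r in REFS]
cmd += ["/out:app.exe", rf"{TREE}\src\Program.cs"]
subprocess.run(cmd, check=True)
```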

0

Following up on my own question, I came across this post, which refers to the answer to another question. Although that question is more of a discussion than an answer, it does mention the virtual machine idea.

0

Regarding “making the machine build at boot time”: we put together a custom build system very quickly with one system administrator and one developer. Build slaves poll a task manager for suitable build requests waiting in the queue. It works pretty nicely.

A request is “suitable” for a slave if its toolchain requirements match the toolchain versions installed on that slave, including the operating system, since the product is multi-platform and a build may include automated tests. Usually that means “the current state of the art”, but it doesn't have to be.

When a slave is ready to build, it simply starts polling the task manager, telling it what it has installed. It doesn't need to know in advance what it is going to build. It picks up a build request that tells it to check out specific tags from SVN and then run a script from one of those tags, which takes it from there. Developers don't need to know how many slaves exist, what they are called, or whether they are busy; they just add a request to the build queue. The build queue itself is a fairly simple web application. It's all very modular.
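A rough sketch of that polling loop follows; the task-manager URL, the JSON shapes, and the checkout/run steps are all assumptions about a home-grown system like the one described, not its actual implementation.

```python
# Sketch of a build slave polling a task manager for requests matching its toolchain.
# URL, payload format, and job fields are invented for illustration.
import json, subprocess, time, urllib.request

TASK_MANAGER = "http://buildqueue.example.com/next-job"
MY_TOOLCHAIN = {"os": "win2003", "vs": "2005sp1", "installshield": "11.5"}

while True:
    req = urllib.request.Request(TASK_MANAGER,
                                 data=json.dumps(MY_TOOLCHAIN).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        job = json.load(resp)   # assumed to return null when nothing suitable is queued
    if job:
        subprocess.run(["svn", "checkout", job["tag_url"], "work"], check=True)
        subprocess.run([job["build_script"]], cwd="work", check=True)
    else:
        time.sleep(60)          # nothing suitable queued; poll again later
```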

The slaves don't have to be virtual machines, but they usually are. The number of slaves (and the physical machines they run on) can be scaled to meet demand. Slaves can obviously be added to the system at any time, or destroyed if their toolchain breaks. That is really the main point of the scheme, rather than your problem of archiving the state of a toolchain, but I believe it applies.

Depending on how often you need an old toolchain, you might want the build queue to be able to start virtual machines on demand, because otherwise anyone who wants to recreate an old build also has to arrange for a suitable slave themselves. Not that that's necessarily hard: it may just be a matter of starting the right virtual machine on a machine of their choice.

0
