Regarding "determining the build method at boot": where I worked we used a custom build system, put together quite quickly by one sysadmin and one developer, in which build slaves poll the queue manager for build requests in the queue that suit them. This works pretty nicely.
A request is "suitable" for a slave if its toolchain requirements match the toolchain versions installed on that slave, including the operating system, since the product is multi-platform and a build may include automated tests. Usually that means "the current state of the art", but it doesn't have to.
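To make that concrete, the matching rule can be as simple as comparing dictionaries. A minimal sketch in Python; the field names (os, gcc, and so on) are invented for illustration, not taken from the actual system:

```python
def is_suitable(required: dict, installed: dict) -> bool:
    """A request suits a slave if every tool/version the request
    needs (OS included) is exactly what the slave has installed."""
    return all(installed.get(tool) == version
               for tool, version in required.items())

# Example: a request pinned to an exact toolchain snapshot.
request = {"os": "linux-x86_64", "gcc": "4.1.2", "svn": "1.4.6"}
slave = {"os": "linux-x86_64", "gcc": "4.1.2", "svn": "1.4.6", "make": "3.81"}
print(is_suitable(request, slave))  # True
```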
When a slave is ready to build, it simply polls the queue manager, telling it what it has installed; it doesn't need to know in advance what it's expected to build. It fetches a build request, which tells it to check out particular tags from SVN and then run a script from one of those tags, which takes things from there. Developers don't need to know how many slaves are available, what they're called, or whether they're busy: they just add a request to the build queue. The build queue itself is a fairly simple web app. All very modular.
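The slave side then reduces to a polling loop. Here's a rough sketch, assuming a hypothetical HTTP interface on the queue manager and an invented request format (svn_tags holding full tag URLs, build_script a path inside the checkout); the real system's protocol was whatever those two people designed:

```python
import json
import subprocess
import time
import urllib.request

QUEUE_URL = "http://buildqueue.example.com/api"      # hypothetical endpoint
INSTALLED = {"os": "linux-x86_64", "gcc": "4.1.2"}   # what this slave offers

def poll_for_request():
    """Ask the queue manager for a build request suitable for our
    toolchain; returns None if nothing in the queue matches."""
    payload = json.dumps({"toolchain": INSTALLED}).encode()
    req = urllib.request.Request(QUEUE_URL + "/next", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        body = resp.read()
    return json.loads(body) if body else None

def run_build(request):
    # The request names SVN tags to check out, then a script from one
    # of those tags takes it from there.
    for tag_url in request["svn_tags"]:
        subprocess.run(["svn", "checkout", tag_url], check=True)
    subprocess.run([request["build_script"]], check=True)

if __name__ == "__main__":
    while True:
        request = poll_for_request()
        if request:
            run_build(request)
        else:
            time.sleep(60)  # nothing suitable queued; try again later
```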
Slaves don't have to be virtual machines, but usually are. The number of slaves (and the physical machines they run on) can be scaled to meet demand, and slaves can obviously be added to the system at any time, or nuked if their toolchain gets broken. That is actually the main point of the scheme, rather than your problem of archiving toolchain state, but I think it applies.
Depending on how often you need an old toolchain, you might want the build queue to be able to start virtual machines as needed, since otherwise someone who wants to recreate an old build also has to arrange for a suitable slave to be running. Not that this is necessarily difficult: it might just be a matter of starting the right VM on a machine of their choice.
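For the on-demand case, all that's really needed is a mapping from archived toolchains to VM images that have them baked in. A sketch using VirtualBox's VBoxManage CLI as one example hypervisor; the mapping and the VM names are made up:

```python
import subprocess

# Invented mapping from archived toolchain snapshots to VM images.
TOOLCHAIN_VMS = {
    "gcc-3.4-linux": "build-slave-gcc34",
    "msvc-2003-win": "build-slave-msvc2003",
}

def ensure_slave_for(toolchain_id: str) -> None:
    """Start (headless) the VM whose image carries the old toolchain;
    once booted, its slave process polls the queue like any other."""
    vm_name = TOOLCHAIN_VMS[toolchain_id]
    subprocess.run(["VBoxManage", "startvm", vm_name, "--type", "headless"],
                   check=True)

ensure_slave_for("gcc-3.4-linux")
```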
Steve Jessop