Server hardware configuration

So, I saw this question, but I'm looking for more general guidance: how do you spec out a build server? In particular, what steps should I take to decide which processor, hard drive, RAM, and so on to use for a new build server? And what factors should I weigh when deciding whether to use virtualization?

I am looking for the general steps I need to take to arrive at a decision about what equipment to buy. Steps that lead to specific conclusions: think "I will need 4 GB of RAM" rather than "as much RAM as you can afford".

P.S. I'm intentionally leaving out details because I'm looking for a teach-a-man-to-fish answer, not one that applies only to my situation.

+4
4 answers

The answer comes down to what it takes to build your code, and that depends entirely on the codebase in question.

If it's a few thousand lines of code, just pull that old desktop out of the closet. If it's several billion lines of code, talk to your bank manager about a loan for a blade enclosure!

I think the best way to get started with a build server is not to buy new hardware for it: buy yourself a new developer machine, and then rebuild your old one to be your build server.

+5

I would start by collecting some performance metrics from builds on whatever system you currently use to build. Specifically, look at CPU and memory utilization, the amount of data read from and written to disk, and the amount of network traffic (if any). On Windows you can use perfmon to gather all of this data; on Linux, tools like vmstat, iostat, and top will do the job. Find out where the bottlenecks are: is the build CPU-bound? Disk-bound? Starved for RAM? The answers to these questions will guide your purchasing decision. If your build saturates the CPU but generates relatively little data, for example, spending money on a top-of-the-line SCSI disk array is a waste.
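On Linux, the sampling described above can be sketched like this (a minimal sketch assuming procps' vmstat is available; the `sleep` is a stand-in for your real build command):

```shell
# Sample system counters every 2 seconds in the background while the
# build runs, then read the log afterwards to find the bottleneck.
vmstat 2 > vmstat.log 2>&1 &   # CPU %, run queue, swap activity, free memory
SAMPLER_PID=$!

sleep 4                        # stand-in: replace with your build, e.g. make -j4

kill "$SAMPLER_PID"
# Reading vmstat.log: high us+sy with low wa suggests CPU-bound; high wa
# (I/O wait) suggests disk-bound; nonzero si/so means the box is swapping.
```

The same idea works with `iostat -x` for per-device utilization, or perfmon counter logs on Windows.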

You might also try running your build at different levels of parallelism while you collect these metrics. If you are using GNU make, run your build with -j 2, -j 4, and -j 8. This will help you understand whether the build is CPU-limited or disk-limited.
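The -j experiment can be sketched as below, using a stand-in Makefile with four fake 0.2-second compile jobs (point make at your real tree instead; assumes GNU make 3.82+ and GNU date). If wall time keeps falling as -j rises, the build is CPU-limited; if it plateaus early, you are likely disk-limited.

```shell
# Create a throwaway project with four independent fake targets.
BUILD_DIR=$(mktemp -d)
cat > "$BUILD_DIR/Makefile" <<'EOF'
.RECIPEPREFIX := >
objects := a.o b.o c.o d.o
all: $(objects)
$(objects):
> sleep 0.2 && touch $@
clean:
> rm -f $(objects)
EOF

# Time the same clean build at increasing parallelism levels.
for j in 1 2 4; do
    make -C "$BUILD_DIR" clean > /dev/null
    start=$(date +%s%N)
    make -C "$BUILD_DIR" -j"$j" all > /dev/null
    end=$(date +%s%N)
    echo "-j$j: $(( (end - start) / 1000000 )) ms"
done
```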

Also consider the possibility that the right build server for your needs is actually a cluster of cheap systems rather than one massive box. There are many distributed build systems (gmake/distcc, pvmgmake, ElectricAccelerator, etc.) that can help you put an array of cheap computers to better use than a single big system.
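As a hedged sketch of the cluster approach with distcc (the hostnames are placeholders, and each listed host must be running the distccd daemon):

```shell
# Fan C compile jobs out to the listed machines; -j is set well above
# the local core count so remote slots stay busy.
export DISTCC_HOSTS="localhost buildnode1 buildnode2 buildnode3"
if command -v distcc > /dev/null 2>&1; then
    make -j12 CC="distcc gcc"
else
    echo "distcc not installed; DISTCC_HOSTS configuration shown above"
fi
```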

+3

Our shop maintains 16 products that range from a few thousand lines of code to hundreds of thousands (perhaps a million-plus by now). We use 3 HP servers (about 5 years old), dual-core with 10 GB of RAM. The disks are 7200 RPM SCSI drives. Everything is compiled via msbuild on the command line with parallel compilation enabled.

With this setup, our biggest bottleneck is disk I/O. We completely delete the source tree and re-check it out for every build, and the delete and checkout steps are very slow. Compile and publish times are slow as well. The CPU and RAM are not remotely taxed.

I am in the process of upgrading these servers, so I'm going with workstation-class machines, moving from 3 servers to 4, and replacing the SCSI disks with the best/fastest SSDs I can afford. If you have a similar setup, disk I/O is the thing to watch.

+2

Things to consider: How many projects will be built at the same time? Is it acceptable for one project to wait while another finishes?

Are you going to do CI or scheduled builds?

How long do your builds usually take?

What build software do you use?

Most web projects are small enough (build times under 5 minutes) that buying a large server simply doesn't make sense.

As an example, we have about 20 developers actively working on 6 different projects. We use a single TFS build server running CI for all projects; builds are triggered on every check-in.

All of our builds complete in under 3 minutes.

The build server is a single-core machine with 4 GB of RAM. The main reasons we use it are developer productivity and producing staging builds for QA. When a build completes, the application is automatically deployed to the appropriate servers. The build server is also responsible for running the unit and web tests for these projects.

The type of build software you use matters a great deal. TFS can take advantage of every core to build the projects within a solution in parallel. If your build software can't do this, you might look into multiple build servers, depending on your needs.
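For msbuild-based setups like the one above, the parallel behavior comes down to two switches, sketched here (the solution filename is a placeholder): /m assigns one build node per core, and BuildInParallel lets independent projects in a solution build concurrently.

```shell
# Hedged sketch of the MSBuild switches for parallel builds.
MSBUILD_ARGS="/m /p:BuildInParallel=true /p:Configuration=Release"
if command -v msbuild > /dev/null 2>&1; then
    msbuild MySolution.sln $MSBUILD_ARGS
else
    echo "msbuild not on PATH; switches shown for reference: $MSBUILD_ARGS"
fi
```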

+1
