Should code coverage be run on EVERY build?

I am a big fan of Brownfield Application Development. Without a doubt a great book, and I would recommend it to all developers. I'm here because of what it says about code coverage. At my new shop, we use TeamCity for automated builds / continuous integration, and a build takes about 40 minutes to complete. The Brownfield book talks about frictionless development and how we want to alleviate the overall burden developers must bear. Here is what I read on page 130.

“Code coverage: two processes for the price of one? As you can see from the sample target in Listing 5.2, you get two output files: one with test results and one with code coverage results. This is because you are actually executing your tests during this task.

Technically, you do not need to run your tests in a separate task if you are running the code coverage task. For this reason, many teams will substitute the code coverage task for their testing task, essentially performing both actions in their CI process. The CI server will compile the code, test it, and generate code coverage statistics on every check-in.

Although there is nothing conceptually wrong with this approach, be aware of some of the downsides. First, there is overhead in generating code coverage statistics. When there are many tests, this overhead can be significant enough to cause friction in the form of a longer-running automated build script. Remember that the main build script should run as quickly as possible in order to encourage team members to run it often. If it takes too long, you will find developers looking for workarounds.

For these reasons, we recommend that you run the code coverage task separately from the default build script task. It should run at regular intervals, perhaps as a separate scheduled task in your build file that runs every two weeks or even monthly; we don't feel the metric carries enough benefit to warrant the additional overhead on every check-in.”
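
If I'm reading the pattern right, the “two for one” task runs the suite once under instrumentation and gets both outputs from that single run. Here is a rough sketch of the idea in Python terms, using unittest and coverage.py in place of the book's NAnt/NCover setup (the tests directory and file names are invented for illustration):

    # Sketch only: Python's unittest + coverage.py standing in for
    # NUnit + NCover. Requires "pip install coverage".
    import unittest
    import coverage

    def run_tests():
        # Fast path: just the tests, no instrumentation overhead.
        suite = unittest.defaultTestLoader.discover("tests")
        return unittest.TextTestRunner(verbosity=1).run(suite)

    def run_tests_with_coverage():
        # Slow path: the same tests run exactly once, under
        # instrumentation, yielding both a test result and a
        # coverage report.
        cov = coverage.Coverage()
        cov.start()
        result = run_tests()
        cov.stop()
        cov.save()
        cov.xml_report(outfile="coverage.xml")
        return result

    if __name__ == "__main__":
        run_tests_with_coverage()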

This is contrary to the practice at my current shop, where we run NCover on every build. I want to go to my lead and ask that we stop doing this, but the best I can do is tell him, “this is what the Brownfield book says.” I don't think that's good enough. So I'm relying on you guys to fill me in with your personal experience and advice on this topic. Thanks.

+4
4 answers

There are always two competing interests in continuous integration / automated build systems:

  • You want the build to run as quickly as possible.
  • You want the build to produce as much feedback as possible (e.g., run the most tests, give the most information about the stability and coverage of the build, etc.).

You will always need to make trade-offs and strike a balance between these competing interests. I usually try to keep build times under 10 minutes, and consider the build system to have failed if it takes more than about 20 minutes to give any meaningful feedback on the stability of a build. But it does not have to be a complete build that checks every case; there can be additional tests that run later, or in parallel on other machines, to test the system further.

If you are seeing a 40-minute build time, I recommend you do one of the following as soon as possible:

  • Distribute the build/testing across multiple machines, so tests can run in parallel and you get faster feedback
  • Find the things in your build that take a lot of time but don't bring much benefit, and perform those tasks only as part of a nightly build.

I would 100% recommend the first solution if at all possible. However, sometimes the hardware is not immediately available, and sacrifices have to be made.
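
To illustrate the first option: one simple way to distribute a suite is to give each build agent every Nth test. A minimal sketch, assuming the agent index and count arrive through environment variables (the variable names and the round-robin scheme are mine, not any CI server's API):

    import os
    import unittest

    def flatten(suite):
        # unittest discovery returns nested suites; yield the leaf tests.
        for item in suite:
            if isinstance(item, unittest.TestSuite):
                yield from flatten(item)
            else:
                yield item

    def partition(agent_index, agent_count):
        # Round-robin: agent k takes tests k, k + count, k + 2*count, ...
        all_tests = flatten(unittest.defaultTestLoader.discover("tests"))
        return unittest.TestSuite(
            t for i, t in enumerate(all_tests)
            if i % agent_count == agent_index
        )

    if __name__ == "__main__":
        index = int(os.environ.get("AGENT_INDEX", "0"))  # invented names
        count = int(os.environ.get("AGENT_COUNT", "1"))
        unittest.TextTestRunner().run(partition(index, count))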

Code coverage is a relatively stable metric, in that it is rare for your coverage numbers to get significantly worse within a single day. So if code coverage takes a long time to run, it is not critical that it happen on every build. But you should still try to get coverage numbers at least once a day. Nightly builds can be allowed to take a little longer, since there (presumably) won't be anyone waiting on them, and they still provide regular feedback on your project's status and ensure there aren't many unforeseen problems.

However, if you can get the hardware to do more distributed or parallel building/testing, you should definitely go that route: it ensures your developers find out as soon as possible if they have broken something or introduced a problem into the system. The cost of the hardware will quickly pay for itself in the productivity gained from fast build-system feedback.

In addition, if your build machine is not running continuously (i.e., there is a lot of time when it sits idle), I would recommend setting it up to do the following:

  • When a code change comes in, build and test, skipping some of the longer-running tasks, potentially including code coverage.
  • Once that build/test cycle completes (or in parallel), kick off a longer build that tests things more thoroughly, runs code coverage, etc.
  • Both of these builds should provide feedback on the health of the system.

This way you get quick feedback, but also get more thorough testing of every build whenever the build machine has the capacity for it.
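
Here is a minimal sketch of that two-tier setup as a wrapper script; the command lines are placeholders for whatever your compiler, test runner, and coverage tool actually are:

    import subprocess
    import sys

    def quick_build():
        # Tier 1, on every check-in: compile and unit-test, nothing more.
        subprocess.run(["msbuild", "MySolution.sln"], check=True)     # placeholder
        subprocess.run(["nunit-console", "MyTests.dll"], check=True)  # placeholder

    def thorough_build():
        # Tier 2, afterwards or in parallel: everything in the quick
        # build, plus the slower jobs such as coverage and the
        # integration suites.
        quick_build()
        subprocess.run(["ncover", "nunit-console", "MyTests.dll"],    # placeholder
                       check=True)
        subprocess.run(["nunit-console", "IntegrationTests.dll"],     # placeholder
                       check=True)

    if __name__ == "__main__":
        thorough_build() if "--thorough" in sys.argv else quick_build()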

+2

I wouldn't make any suggestions on how to fix this; you're putting the cart before the horse a little here. Your complaint is that the build takes too long, so that is the problem I would ask to have solved, without any bias about how to solve it. There are many other potential solutions to that problem (faster machines, different processes, etc.), and you would be wise not to rule them out.

Ultimately, it is a question of whether the output of the build system is valuable enough to your leadership to justify the time it takes. (And whether any action you take to cut the time spent still leaves the output acceptably accurate.)

+1

This decision is specific to each team and each environment. First determine your threshold for build duration, and then move the longer-running processes onto less frequent schedules (ideally still at least once or twice a day in CI) once that threshold is settled.

0

The objection seems to be that running all the tests and collecting code coverage is expensive, and you don't want (well, somebody doesn't want) to pay that price on every build.

I cannot imagine why on earth you (or anyone) would not want to always know what the coverage status is.

If the build machine has nothing else to do, it doesn't matter if it does this. If your build machine is too busy running builds, you may have overloaded it by asking it to serve too many masters, or you are doing too many builds (why so many changes? Hmm, maybe the tests aren't very good!).

If the problem is that the tests themselves genuinely take a long time, you can look for a way to optimize them. In particular, you should not need to re-run tests for the parts of the code that have not changed. Figuring out how to do this (and trusting it) can be a problem.

Some testing tools (such as ours) keep track of which tests cover which parts of the code and, given a code change, which tests need to be re-run. With some additional scripting, you can re-run just the affected tests first; this gives you what amounts to a full test result early and quickly, without running all the tests. Then, if there is a build problem, you find out as soon as possible.
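
The bookkeeping behind such a tool can be approximated with per-test coverage data. A toy sketch of just the selection step, using an invented JSON format that maps each test to the source files it executed on its last run:

    import json

    def tests_to_rerun(coverage_map_path, changed_files):
        # coverage_map: {"test_name": ["files the test executed"], ...}
        with open(coverage_map_path) as f:
            coverage_map = json.load(f)
        changed = set(changed_files)
        return sorted(
            test for test, files in coverage_map.items()
            if changed & set(files)
        )

    # e.g. after editing src/billing.py, re-run only the tests that
    # actually executed it:
    #   tests_to_rerun("per_test_coverage.json", ["src/billing.py"])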

[If you are paranoid and do not trust the incremental testing process, you can run those tests for early feedback and then run all the tests again, giving you the full results.]

0
