Exclude code from test coverage

If possible, I use TDD:

  • I mock my interfaces
  • I use IoC so that my mock objects can be injected
  • I make sure my tests run, coverage goes up, and I'm happy.

then ...

  • I create derived classes that actually do things, for example hit the database or write to the message queue, etc.

Code coverage drops at this point, and I'm sad.

But then I liberally sprinkle [CoverageExclude] over these concrete classes, and coverage goes back up.

But instead of feeling sad, I now feel dirty. I feel like I'm cheating somehow, even though it isn't possible to unit-test these concrete classes.

I'm interested to hear how your projects are organized, i.e. how you physically separate testable code from untestable code.

I think a good solution might be to split the untestable concrete types out into their own assembly, and then ban the use of [CoverageExclude] in the assemblies that contain tested code. That would also make it easy to write an NDepend rule that fails the build if the attribute shows up in a tested assembly.


Edit: The crux of this question is that you can test the code that USES your mocked interfaces, but you can't (or shouldn't!) unit-test the objects that are the real implementations of those interfaces. Here's an example:

    public void ApplyPatchAndReboot()
    {
        _patcher.ApplyPatch();
        _rebooter.Reboot();
    }

patcher and rebooter are injected via the constructor:

    public SystemUpdater(IApplyPatches patcher, IRebootTheSystem rebooter) ...

The unit test looks like this:

    public void should_reboot_the_system()
    {
        // ...
        var update = new SystemUpdater(mockedPatcher, mockedRebooter);
        update.ApplyPatchAndReboot();
    }
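For illustration, the mocked collaborators could be simple hand-rolled test doubles. This is a sketch only: the interface members and class names below are assumed from the snippets in this question, not taken from any real library.

```csharp
// Hypothetical hand-rolled test doubles; interface shapes are assumed
// from the snippets above.
public interface IApplyPatches { void ApplyPatch(); }
public interface IRebootTheSystem { void Reboot(); }

public class MockPatcher : IApplyPatches
{
    public bool PatchApplied;                    // records the call
    public void ApplyPatch() => PatchApplied = true;
}

public class MockRebooter : IRebootTheSystem
{
    public bool Rebooted;                        // records the call
    public void Reboot() => Rebooted = true;
}

public class SystemUpdater
{
    private readonly IApplyPatches _patcher;
    private readonly IRebootTheSystem _rebooter;

    public SystemUpdater(IApplyPatches patcher, IRebootTheSystem rebooter)
    {
        _patcher = patcher;
        _rebooter = rebooter;
    }

    public void ApplyPatchAndReboot()
    {
        _patcher.ApplyPatch();
        _rebooter.Reboot();
    }
}
```

The test then asserts that both doubles recorded a call after `ApplyPatchAndReboot()`, without any real patching or rebooting taking place.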

This works great, and my unit-test coverage is 100%. Now I write:

    public class ReallyRebootTheSystemForReal : IRebootTheSystem
    {
        // ... call some API to really (REALLY!) reboot
    }

My unit-test coverage drops, and there is no way to unit-test the new class. Of course, I'll add a functional test and run it whenever I have a spare 20 minutes (!).

So I guess my question boils down to this: it's nice to have roughly 100% unit-test coverage. On the other hand, it's also nice to be able to test roughly 100% of the system's behaviour. In the example above, the behaviour "applying a patch should reboot the machine" is something we can verify for sure. ReallyRebootTheSystemForReal, however, isn't pure behaviour: it has side effects, which means it can't be unit-tested. And since it can't be unit-tested, it drags down the test-coverage percentage. So:

  • Does it matter that classes like this drag down the unit-test coverage figures?
  • Should they be split out into their own assemblies, where people expect 0% unit-test coverage?
  • Are concrete types like this so small (in cyclomatic complexity) that unit-testing them (or otherwise) is superfluous anyway?

+4
3 answers

You are on the right track. Some concrete implementations you probably can test, for example data access components. Automated testing against a relational database is certainly possible, but such tests should be factored out into their own library (with a corresponding unit-test library).

Since you are already using Dependency Injection, wiring such a dependency back into your real application should be a piece of cake.

On the other hand, there will also be concrete dependencies that are inherently untestable (or rather, not unit-testable, as Fowler once quipped). Such implementations should be as thin as possible. It is often possible to design the API that such a dependency exposes so that all the logic happens on the consumer's side, and the complexity of the actual implementation is very low.

Implementing such concrete dependencies is an explicit design decision, and in making it you are also deciding that the library in question won't be unit-tested, and therefore that its code coverage shouldn't be measured.

Such an implementation is called a Humble Object. This (and many other patterns) are described in the excellent book xUnit Test Patterns.
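As a sketch of the Humble Object idea (the class and method names below are invented for illustration), the humble implementation contains a single straight-line statement with a cyclomatic complexity of 1, while every decision lives in the testable consumer:

```csharp
using System.Diagnostics;

public interface IRebootTheSystem { void Reboot(); }

// The Humble Object: one statement, cyclomatic complexity 1, Windows-only.
// Nothing here is worth unit-testing; an integration test covers it instead.
public class WindowsRebooter : IRebootTheSystem
{
    public void Reboot() => Process.Start("shutdown", "/r /t 0");
}

// All decision-making lives on the consumer side, where it is unit-testable.
public class UpdateCoordinator
{
    private readonly IRebootTheSystem _rebooter;
    public UpdateCoordinator(IRebootTheSystem rebooter) => _rebooter = rebooter;

    public bool FinishUpdate(bool patchSucceeded, bool rebootRequired)
    {
        if (!patchSucceeded || !rebootRequired)
            return false;               // the branching is here, not in the humble class
        _rebooter.Reboot();
        return true;
    }
}

// A recording fake makes the coordinator's logic unit-testable.
public class RecordingRebooter : IRebootTheSystem
{
    public int Calls;
    public void Reboot() => Calls++;
}
```

A unit test exercises UpdateCoordinator against RecordingRebooter; WindowsRebooter is left to a (rarely run) integration test, and its coverage simply isn't measured.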

As a rule of thumb, I accept leaving code untested if it has a cyclomatic complexity of 1, because in that case it is more or less purely declarative. Pragmatically, untested components are fine as long as they have low cyclomatic complexity. How low "low" is, you must decide for yourself.

In any case, [CoverageExclude] strikes me as a smell (I didn't even know it existed before I read your question).

+4

I don't understand how your concrete classes are untestable. That smells bad to me.

If you have a concrete class that writes to a message queue, you should be able to pass it a fake queue and fully test all of its methods. If your class talks to a database, you should be able to hand it a mock database to talk to.
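A minimal sketch of that idea (the interface and class names are invented for the example): the writer depends on a queue abstraction, so a test can substitute an in-memory fake.

```csharp
using System.Collections.Generic;

// Invented names for illustration; any queue abstraction works the same way.
public interface IMessageQueue
{
    void Send(string message);
}

// Concrete class under test: its logic (filtering, formatting) is fully
// exercisable against a fake queue.
public class AuditWriter
{
    private readonly IMessageQueue _queue;
    public AuditWriter(IMessageQueue queue) => _queue = queue;

    public void Record(string user, string action)
    {
        if (string.IsNullOrEmpty(user)) return;        // testable logic
        _queue.Send($"{user}:{action}");
    }
}

// In-memory fake used by the unit test.
public class FakeQueue : IMessageQueue
{
    public readonly List<string> Sent = new List<string>();
    public void Send(string message) => Sent.Add(message);
}
```

A unit test creates an AuditWriter over a FakeQueue, calls Record, and inspects Sent; only the thin production IMessageQueue implementation (e.g. one wrapping a real queueing API) is left to an integration test.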

There may be situations that genuinely lead to untestable code, I won't deny that, but they should be the exception, not the rule. Are all of your concrete classes like this? Something sounds wrong.

+1

To expand on womp's point: I suspect you consider more of your code "untestable" than actually is. Untestable in the strict sense of unit testing, without exercising any dependencies at the same time? Sure. But that should be easily achievable with slower, less frequently run integration-style tests.

You mentioned database access and writing messages to a queue. As womp says, you can feed them mock databases and fake queues during unit testing and test the actual concrete behaviour in integration tests. Personally, I see nothing wrong with testing concrete implementations directly as unit tests, at least when they are not remote. Sure, they run a little slower, but hey, at least they are covered by automated tests.

Would you put into production a system whose message-writing code had never actually been tested against the real physical/logical queue? I wouldn't.

+1
