If possible, I use TDD:
- I mock my interfaces
- I use IoC so that my mock objects can be injected
- I make sure my tests run, coverage rises, and I'm happy.
then ...
- I create concrete classes that implement those interfaces and actually do things, for example write to the database or post to a message queue, etc.
Code coverage drops here, and I'm sad.
But then I liberally sprinkle [CoverageExclude] on those concrete classes, and coverage rises again.
Instead of feeling sad, though, I now feel dirty. It feels like I'm cheating somehow, even though it's impossible to unit-test those concrete classes.
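The shape I'm describing looks roughly like this (a sketch with invented names; [CoverageExclude] stands in for whatever exclusion attribute your coverage tool recognises):

```csharp
// Illustrative sketch only; Order, IOrderStore, OrderProcessor and
// SqlOrderStore are invented names.
public class Order { }

public interface IOrderStore
{
    void Save(Order order);
}

// Fully unit-testable: depends only on the interface, which can be mocked.
public class OrderProcessor
{
    private readonly IOrderStore _store;

    public OrderProcessor(IOrderStore store) // injected via IoC
    {
        _store = store;
    }

    public void Process(Order order)
    {
        // ... logic worth unit-testing ...
        _store.Save(order);
    }
}

// Thin, side-effecting adapter: the part that gets excluded from coverage.
[CoverageExclude]
public class SqlOrderStore : IOrderStore
{
    public void Save(Order order)
    {
        // really INSERT into the database
    }
}
```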
I'm interested in how your projects are organized, i.e. how you physically separate testable code from untestable code.
I think a good solution might be to split the untestable concrete types out into their own assemblies, and then ban the use of [CoverageExclude] in the assemblies that contain testable code. That would also make it easy to write an NDepend rule that fails the build when the attribute is found in the testable assemblies.
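Such a rule might look something like this in CQL (syntax from memory and unverified against a current NDepend release; the assembly and attribute names are assumptions):

```
// Hypothetical rule: the exclusion attribute must never appear
// in an assembly that is supposed to be fully unit-testable.
WARN IF Count > 0 IN SELECT TYPES
FROM ASSEMBLIES "MyProject.Core"            // a testable assembly
WHERE HasAttribute "MyProject.CoverageExcludeAttribute"
```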
Edit: The essence of this question is that you can test whatever USES your mocked interfaces, but you cannot (or should not!) UNIT-test the objects that are the real implementations of those interfaces. Here is an example:
public void ApplyPatchAndReboot() { _patcher.ApplyPatch(); _rebooter.Reboot(); }
patcher and rebooter are injected via the constructor:
public SystemUpdater(IApplyPatches patcher, IRebootTheSystem rebooter)...
the unit test looks like this:
public void should_reboot_the_system() { ... var update = new SystemUpdater(mockedPatcher, mockedRebooter); update.ApplyPatchAndReboot(); }
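Fleshed out, the test could look like this, e.g. with Moq and NUnit (any mocking framework works; the fixture wiring is one possible way to do it, not necessarily how my real code reads):

```csharp
using Moq;
using NUnit.Framework;

[TestFixture]
public class SystemUpdaterTests
{
    [Test]
    public void should_reboot_the_system()
    {
        // Mocks stand in for the real patcher/rebooter implementations.
        var mockedPatcher = new Mock<IApplyPatches>();
        var mockedRebooter = new Mock<IRebootTheSystem>();

        var update = new SystemUpdater(mockedPatcher.Object, mockedRebooter.Object);
        update.ApplyPatchAndReboot();

        // Verify the behavior: both collaborators were called exactly once.
        mockedPatcher.Verify(p => p.ApplyPatch(), Times.Once);
        mockedRebooter.Verify(r => r.Reboot(), Times.Once);
    }
}
```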
This works great - my UNIT-TEST coverage is 100%. Now I write:
public class ReallyRebootTheSystemForReal : IRebootTheSystem { ... call some API to really (REALLY!) reboot }
My UNIT-TEST coverage drops, and there is no way to unit-test the new class. Of course, I will add a functional test and run it whenever I have 20 minutes to spare (!).
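One way to keep that functional test around without it polluting the fast unit suite is to tag it, e.g. with NUnit's Category and Explicit attributes (a sketch; the test body and class names are placeholders):

```csharp
using NUnit.Framework;

[TestFixture]
public class ReallyRebootTheSystemForRealTests
{
    // Excluded from normal runs; only executes when explicitly selected.
    [Test, Category("Functional"), Explicit("Actually reboots the machine!")]
    public void should_really_reboot()
    {
        new ReallyRebootTheSystemForReal().Reboot();
        // assert via whatever out-of-band signal the environment provides
    }
}
```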
So I guess my question boils down to this: on the one hand, it's nice to have close to 100% UNIT-TEST coverage; on the other hand, it's nice to have tested close to 100% of the system's behavior. In the example above, the patcher component's behavior should be to reboot the machine, and that we can verify for sure. ReallyRebootTheSystemForReal, however, has no behavior as such, only side effects, which means it cannot be unit-tested, and since it cannot be unit-tested it drags the coverage percentage down. So:
- Does it matter that these things drag down unit-test coverage?
- Should they be split out into their own assemblies, where people expect 0% UNIT-TEST coverage?
- Are concrete types like this so small (in cyclomatic complexity terms) that unit-testing them (or any testing at all) is superfluous?