How to Overcome Unit Test Regression Problems ...?

I was looking for a solution for software developers who spend too much time on unit test regression problems (about 30% of their time, in my case!), i.e. dealing with unit tests that are not run on a daily basis.

The following is one solution that analyzes which of the most recent code changes caused a unit test failure:

Unit Test Regression Analysis Tool

I wanted to know if anyone knows of similar tools so that I could compare them. Also, can someone recommend a different approach to solving this annoying problem?

Thanks in advance.

+4
3 answers

You have our sympathy. It sounds like you have fragile test syndrome. Ideally, a single change to the code should break only one test - and that failure should point to a real problem. I said "ideally." But this kind of behavior is common, and it is treatable.

I would recommend spending some time with the team doing root cause analysis on why all these tests break. Yes, there are fancy tools that keep track of which tests fail most often and which tests fail together; some continuous integration servers have this built in. But I suspect that if you just ask each other, you will already know - in my experience, the team always just knows.
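As a rough sketch of what such "which tests fail together" bookkeeping might look like, here is a small Python example; the failure-history format is invented for illustration, not taken from any particular CI server:

```python
from collections import Counter
from itertools import combinations

# Hypothetical sketch: given a history of CI runs, each listing the tests
# that failed in that run, count how often pairs of tests fail together.
failure_history = [
    {"test_checkout", "test_pricing"},
    {"test_checkout", "test_pricing", "test_inventory"},
    {"test_checkout", "test_pricing"},
]

pair_counts = Counter()
for failed in failure_history:
    for pair in combinations(sorted(failed), 2):
        pair_counts[pair] += 1

# Pairs that fail together most often are candidates for a shared root
# cause: common setup, a shared dependency, or duplicated assertions.
for pair, count in pair_counts.most_common(3):
    print(f"{pair[0]} and {pair[1]} failed together {count} times")
```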

Anywho, a few other things I've seen that trigger this:

  • Unit tests should not depend on much more than the class and method they test. Look at the dependencies that have crept in, and make sure you use dependency injection to keep tests isolated (see the sketch after this list).
  • Are these really unique tests? Or do they exercise the same thing over and over again? If they always fail together, why not delete all but one?
  • Many people prefer integration tests to unit tests, since they get more bang for the buck. But then a single change can break many tests. Perhaps you are actually writing integration tests?
  • Perhaps they all run through some shared setup code, causing them to fail in unison. Perhaps that shared dependency can be mocked out to isolate behavior.
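
A minimal sketch of the dependency-injection-plus-mocking idea from the list above, in Python; the `Checkout` and `PriceService` names are invented for illustration:

```python
import unittest
from unittest.mock import Mock

# Hypothetical example: Checkout receives its collaborator through the
# constructor instead of creating it internally, so the test can
# substitute a mock and break only when Checkout itself changes.

class Checkout:
    def __init__(self, price_service):
        # Dependency is injected, not constructed here.
        self.price_service = price_service

    def total(self, items):
        return sum(self.price_service.price_of(item) for item in items)

class CheckoutTest(unittest.TestCase):
    def test_total_sums_item_prices(self):
        # The real price service (database, network, shared fixtures)
        # is replaced by a mock, isolating this test from changes there.
        price_service = Mock()
        price_service.price_of.side_effect = lambda item: {"a": 2, "b": 3}[item]
        checkout = Checkout(price_service)
        self.assertEqual(checkout.total(["a", "b"]), 5)

if __name__ == "__main__":
    unittest.main()
```

Because the collaborator is injected, a change in the real price service can only break its own tests, not every test that happens to touch checkout.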
+3

Check in often, build often.

If you do not already, I suggest using a Continuous Integration tool and asking/requiring developers to run the automated tests before committing - at least a subset of the tests. If running all the tests takes too long, use a CI tool that runs a build (which includes running all the automated tests) for each commit, so you can easily see which commit broke the build.
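
As a rough sketch of the "run a subset before committing" idea, here is a hypothetical git pre-commit hook in Python; the `tests/test_<module>.py` naming convention is an assumption for this example, not a standard:

```python
#!/usr/bin/env python3
# Hypothetical git pre-commit hook: run only the tests that correspond
# to the files staged for commit. Assumes the convention that foo.py is
# covered by tests/test_foo.py; adjust to your project layout.
import os
import subprocess
import sys

def staged_python_files():
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith(".py")]

def matching_tests(files):
    tests = set()
    for f in files:
        name = os.path.splitext(os.path.basename(f))[0]
        candidate = os.path.join("tests", f"test_{name}.py")
        if os.path.exists(candidate):
            tests.add(candidate)
    return sorted(tests)

if __name__ == "__main__":
    tests = matching_tests(staged_python_files())
    if tests:
        # Fail the commit if any of the matching tests fail.
        result = subprocess.run([sys.executable, "-m", "pytest", *tests])
        sys.exit(result.returncode)
```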

If the automated tests are too fragile, maybe they test implementation details rather than functionality? Sometimes testing implementation details is a good idea, but it can be problematic.
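
A small, hypothetical illustration of the difference, assuming an invented `Cache` class: the first test pins down an implementation detail and breaks on harmless refactoring, while the second asserts only observable behavior:

```python
import unittest

class Cache:
    def __init__(self):
        self._store = {}  # internal detail: a plain dict today

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

class FragileTest(unittest.TestCase):
    def test_put_writes_to_internal_dict(self):
        cache = Cache()
        cache.put("k", 1)
        # Fragile: breaks if the internal storage changes,
        # even though the observable behavior is unchanged.
        self.assertEqual(cache._store, {"k": 1})

class RobustTest(unittest.TestCase):
    def test_get_returns_what_was_put(self):
        cache = Cache()
        cache.put("k", 1)
        # Robust: asserts only the public contract.
        self.assertEqual(cache.get("k"), 1)

if __name__ == "__main__":
    unittest.main()
```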

+2
  • As for running the subset of tests most likely to fail - since in my case the failures are usually caused by other team members' changes, I would need to ask others to run my tests, which can be "politically problematic" in some development environments ;). Any other suggestions would be appreciated. Thanks a lot - SpeeDev Sep 30 '10 at 23:18

If you need to "ask others" to run your tests, that suggests a serious problem with your test infrastructure. All tests (regardless of who wrote them) should run automatically. Responsibility for fixing a failing test should lie with the person who made the breaking change, not with the author of the test.

+2
