Regression Testing Optimization in C++

To avoid testing too much, I would like to give the Quality Assurance (QA) team tips on which features should be regression-tested after an iteration of development. Do you know of tools that could do this in a C++, Subversion and Visual Studio development environment?

Some background details:

  • The features are defined by the development team in terms of entry points, usually classes or class methods. Say the "Excel file import" feature is defined by the ImportExcelFile(...) method of the FileImporter class (see the sketch after this list).
  • During the development iteration, the development team changes some methods of some classes. Say one of these classes is indirectly used by the ImportExcelFile() method.
  • At the end of the iteration, all commits are analyzed by the tool and a report is created and delivered to the QA team. In our example, the QA team is informed that the "Excel file import" feature needs to be tested, and that features X, Y and Z are unchanged.
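For concreteness, here is a minimal sketch of the feature-to-entry-point mapping I have in mind. FileImporter and ImportExcelFile come from the example above; the map type and the other feature names are invented for illustration:

    #include <map>
    #include <string>

    // Entry point from the example: the "Excel file import" feature
    // is defined by this method.
    class FileImporter {
    public:
        bool ImportExcelFile(const std::string& path);
    };

    // Hypothetical feature -> entry-point table the tool would consume.
    // For each entry point, the tool would walk the static call graph
    // and flag the feature if any method reachable from the entry point
    // was changed during the iteration.
    const std::map<std::string, std::string> kFeatureEntryPoints = {
        {"excel file import", "FileImporter::ImportExcelFile"},
        {"feature X",         "SomeClass::SomeMethodX"},
        {"feature Y",         "SomeClass::SomeMethodY"},
    };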

Most likely, such a tool would use static code analysis and consume the Subversion APIs. But does it exist?
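(To illustrate the Subversion half, here is a minimal sketch of what I have in mind. It simply shells out to the svn client rather than using the Subversion C API, and the revision range is a placeholder; on Windows/MSVC the calls are _popen/_pclose.)

    #include <cstdio>
    #include <iostream>

    // List the files changed between two revisions by shelling out to
    // the svn command-line client; "svn diff --summarize" prints one
    // changed path per line, prefixed with a status letter.
    int main() {
        // Placeholder revision range; in practice it would span the iteration.
        FILE* pipe = popen("svn diff --summarize -r 100:HEAD", "r");
        if (!pipe) {
            std::cerr << "failed to run svn\n";
            return 1;
        }
        char line[4096];
        while (std::fgets(line, sizeof(line), pipe)) {
            // Each line looks like "M       src/FileImporter.cpp"; these
            // paths would seed the static impact analysis described above.
            std::cout << line;
        }
        pclose(pipe);
        return 0;
    }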

c++ svn testing regression-testing
2 answers

G'day

What you are describing is not regression testing. You are just testing new features.

Regression testing is where you run your full test suite specifically to see whether the code that supports your new feature has broken previously working code.

I would highly recommend reading Martin Fowler's excellent article, Continuous Integration, which covers some of the aspects you are talking about.

It may also show you a better way of working, in particular the CI practices that Martin covers in that article.

Edit: Especially since CI has sneaky little traps that are obvious in hindsight. Things like stopping testers from trying to test a build that does not yet contain the files that implement the new feature. (You check that there have not been any commits in the last five minutes.)

Another important point is the time lost when you have a broken build and nobody knows it is broken until someone checks out the code and then tries to build it in order to test it.

If it is broken, you now have:

  • a tester sitting idle, unable to run the scheduled tests,
  • a developer interrupting their current work to go back over the previous work and figure out what is breaking the build. Most likely it is several developers, because the problem is usually the interaction of two separate pieces, each of which worked on its own,
  • the time lost while the developer(s) get back into the mindset of that previous piece of work, and
  • the time lost while the developer(s) get back into the mindset of the new work they were doing before the investigation interrupted them.

The main idea of CI is to make several builds of the complete product during the day so that you catch a broken build as early as possible. You can even select a few tests to verify that the core functions of your product still work, again so that you are notified as soon as possible that there is a problem with the current state of your build.

Edit: As for your question, what about tagging the repository when you finish a test pass, e.g. TESTS_COMPLETE_2009_12_16? Then, when you are ready to work out the next set of tests, run an "svn diff" between that last completed-tests tag and the current state of trunk.
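For example (a sketch only; the ^/trunk and ^/tags paths are assumptions about your repository layout):

    # Mark the point at which the test pass completed.
    svn copy -m "tests complete" ^/trunk ^/tags/TESTS_COMPLETE_2009_12_16

    # Later: list everything that has changed since that baseline.
    svn diff --summarize ^/tags/TESTS_COMPLETE_2009_12_16 ^/trunk

The --summarize option prints just the changed paths rather than full diffs, which is what you want as input for deciding what to retest.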

HTH

BTW I will update this answer with some additional suggestions as I think of them.

cheers


Break the project into separate executables and build them with make.

Make will rebuild any executable if its dependencies change.

Chain tests by adding the output files of one test as dependencies of the next test: for example, the output of the file-save test as a dependency of the file-read test (sketched below).
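A minimal Makefile sketch of that chaining (the executable and file names are invented for illustration; recipe lines must start with a tab):

    # The save test's output is itself a build product: it is rebuilt
    # whenever the save_test executable changes.
    saved.dat: save_test
    	./save_test saved.dat

    # The read test consumes the save test's output, so anything that
    # invalidates saving also retriggers the read test.
    read_test.out: read_test saved.dat
    	./read_test saved.dat > $@

    check: read_test.out

Because each test writes its results to a file, make reruns only the tests whose inputs actually changed.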

Everything that gets rebuilt from this point on requires unit testing.

If any libraries use shared exhaustible resources (heap memory, disk space, global mutexes, etc.), add those as dependencies as well, because exhaustion caused by a leak in one library often shows up as a regression failure in another.

Everything that gets rebuilt after that point requires regression testing.

Unless you work in an environment that guarantees resources cannot be exhausted (for example, TinyC), you will end up regression-testing everything. Regression testing is not unit testing.
