You need a mapping between each test and the code that it exercises.
In principle this can be computed statically, but it is complicated, and I don't know of any tools that do it. Worse, even if you had such a static analysis to decide what code a test touches, running the analysis might take longer than just running the test itself, so this does not look like an attractive direction.
However, the mapping can be computed with a test coverage tool. For each individual test t_i, run it (we assume it passes) and collect its coverage data c_i. You end up with a set of pairs (t_i, c_i) meaning "test t_i has coverage c_i".
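As a concrete illustration, here is a minimal sketch of that collection step in Python, assuming pytest test IDs and the coverage.py API (both are real tools, but the orchestration and function names here are mine, not a ready-made product). A production setup would run each test in a fresh process, or use coverage.py's dynamic-contexts feature, to keep the per-test data clean:

```python
import coverage
import pytest

def coverage_for_test(test_id: str) -> set[str]:
    """Run one test under coverage and return the set of files it executed."""
    cov = coverage.Coverage()
    cov.start()
    pytest.main(["-q", test_id])  # we assume the test passes
    cov.stop()
    return set(cov.get_data().measured_files())

def build_coverage_map(test_ids: list[str]) -> dict[str, set[str]]:
    """Build the (t_i, c_i) pairs: test ID -> set of covered files."""
    return {t: coverage_for_test(t) for t in test_ids}
```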
When the code base changes, you consult the coverage sets. A simple check: for each pair (t_i, c_i), if c_i mentions a file F and F has changed, you need to re-run t_i. Checking this is easy in the abstract for almost any representation of the coverage data; given that most coverage tools don't document how they store that data, it is harder in practice than it sounds.
Actually, you can do better: if c_i mentions any program element F (a file, a class, a method) and that element has changed, you need to re-run t_i.
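The selection step itself is then just a set intersection. A hedged sketch (the coverage_map shape and all names are illustrative; the sets could equally hold method names for element-level granularity):

```python
def tests_to_rerun(coverage_map: dict[str, set[str]],
                   changed: set[str]) -> list[str]:
    """Return every test whose coverage set mentions a changed element."""
    return [t for t, c in coverage_map.items() if c & changed]

# Example: file-level coverage sets for two tests; one file has changed.
# (The changed set could come from `git diff --name-only`, for instance.)
coverage_map = {
    "test_parser": {"src/parser.py", "src/lexer.py"},
    "test_report": {"src/report.py"},
}
print(tests_to_rerun(coverage_map, {"src/lexer.py"}))  # -> ['test_parser']
```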
Our SD test coverage tools provide this capability for Java and C# at the method level. You have to do some scripting to connect the actual test, however you have packaged it, with the collected coverage vectors; in practice this tends to be fairly easy.
Ira Baxter