This sounds like statistical sampling. When you buy a product, there is a good chance that not every item coming off the assembly line was tested for quality.
Statistical sampling means checking a certain percentage of the output to gain confidence that all of it is free of problems. It minimizes the risk of problems slipping through, and it is absolutely essential when the testing process is destructive - if you run destructive tests on 100% of your production line, there isn't much left to ship :-)
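To make that concrete, here is a minimal sketch of plain random sampling in Python (the function name, the 5% rate, and the serial-number format are just my illustration, not anything from the question):

```python
import random

def sample_for_inspection(batch, rate=0.05, seed=None):
    """Pick a random fraction of a production batch for QA inspection."""
    rng = random.Random(seed)
    k = max(1, round(len(batch) * rate))  # always inspect at least one unit
    return rng.sample(batch, k)

# Inspect 5% of a batch of 1000 serial numbers.
batch = [f"unit-{i:04d}" for i in range(1000)]
to_inspect = sample_for_inspection(batch, rate=0.05, seed=42)
print(len(to_inspect), to_inspect[:3])
```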
To be honest, unless you check every execution path and every possible input value, you are already doing this in your testing. The effort required to test everything in anything but the most trivial system simply isn't worth it - the extra cost would make your product uncompetitive.
Note that statistical sampling doesn't have to mean testing every 100th unit. There are ways to target the sample to improve the likelihood of catching problems. For example, if historical data shows that most errors are introduced at a particular phase, target that phase. If one of your developers is more error-prone than the others, check their work more closely.
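Here is a sketch of that kind of targeted sampling, where inspections are allocated in proportion to each phase's past defect rate (the phase names and rates are hypothetical numbers of my own):

```python
import random

# Hypothetical per-phase defect rates from past QA data; phases with a
# worse history get a proportionally larger share of the inspections.
defect_rates = {"assembly": 0.02, "soldering": 0.08, "packaging": 0.01}

def targeted_sample(units_by_phase, rates, total=50, seed=None):
    """Allocate `total` inspections across phases, weighted by past defect rates."""
    rng = random.Random(seed)
    weight_sum = sum(rates[p] for p in units_by_phase)
    picks = []
    for phase, units in units_by_phase.items():
        k = max(1, round(total * rates[phase] / weight_sum))
        picks.extend(rng.sample(units, min(k, len(units))))
    return picks

# 200 units from each phase; soldering ends up sampled most heavily.
units = {p: [f"{p}-{i:03d}" for i in range(200)] for p in defect_rates}
print(len(targeted_sample(units, defect_rates, total=50, seed=1)))
```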
From a quick look at some of the research articles, statistical debugging does exactly this: it targets areas based on a past history of problems.
I know we already do this for our software. Since every bug fix must be accompanied by unit and system tests that reproduce the problem (and our TDD process says those tests must be written before you attempt the fix), those tests are automatically added to the regression test suite, so the areas that cause more problems are naturally more likely to be checked in the future.
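As a sketch of what one of those accumulated regression tests might look like (the module, function, and issue number are all hypothetical, and I'm assuming pytest-style tests):

```python
# test_parser_regressions.py - regression tests accumulate here over time.
# Each bug fix is first reproduced by a failing test, which then stays in
# the suite and re-checks that trouble spot on every future run.
from myapp.parser import parse_price  # hypothetical module under test

def test_issue_1234_negative_price_rejected():
    # Reproduces bug #1234: negative prices used to be silently accepted.
    assert parse_price("-3.50") is None

def test_issue_1234_valid_price_still_parses():
    # Guards against the fix breaking the happy path.
    assert parse_price("3.50") == 3.50
```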