Thinking about this question in a language-agnostic, framework-agnostic way, what you are asking for is a bit of a puzzle:
The test tool cannot know the execution time of any of the unit tests until they are actually run, because that time depends not only on the test tool and the tests themselves, but also on the application under test. A stop-gap solution here would be something like imposing a time limit per test. But if you do that, a new question arises: when a test exceeds the limit, should it be counted as passed, failed, or put into some other (third) category? ... hence the puzzle!
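Purely for illustration, here is one way such a time limit could be imposed. This is a minimal sketch that assumes a Python/pytest setup with the pytest-timeout plugin installed; neither assumption comes from the question itself, and the test names are hypothetical.

```python
# A minimal sketch, assuming pytest and the pytest-timeout plugin are installed
# (pip install pytest pytest-timeout). Test names are hypothetical examples.
import time

import pytest


@pytest.mark.timeout(2)  # abort and fail this test if it runs longer than 2 seconds
def test_lookup_is_quick():
    time.sleep(0.1)  # stand-in for the real work under test
    assert True


# Alternatively, a global limit can be applied to every test from the command line:
#   pytest --timeout=2
# Note that this plugin resolves the pass/fail/other ambiguity by simply reporting
# a timed-out test as failed, which may or may not be what you want.
```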
To avoid this, I would suggest that you adopt a different strategy, in which you, as the developer, decide which subsets of the entire test suite you want to run in different situations. For example (a concrete sketch follows this list):
- A smoke test suite: the tests that you want to run first, every time. If any of these fail, you do not want to bother running any of the other tests. Put only the truly core tests in this group.
- A minimal test suite: for your specific requirement, this is the set containing all the tests that are "fast" or "quick", and you decide which ones those are.
- A comprehensive test suite: the tests that do not belong to either of the other categories. For your specific requirement, these are the tests that are "slow" or "long".
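As a concrete sketch of this grouping, and again assuming pytest as the test tool (the same idea applies to JUnit categories, NUnit categories, and so on), markers can label each test with the suite it belongs to. All names here are illustrative, not from the original question.

```python
# A minimal sketch using pytest markers to tag tests by suite. Register the
# markers in pytest.ini (or pyproject.toml) to avoid warnings, e.g.:
#   [pytest]
#   markers =
#       smoke: true core tests that must always pass
#       minimal: fast/quick tests
#       comprehensive: slow/long tests
import pytest


@pytest.mark.smoke
def test_application_starts():
    # A true core test: if this fails, nothing else is worth running.
    assert 1 + 1 == 2


@pytest.mark.minimal
def test_fast_validation():
    # A "fast"/"quick" test you are happy to run on every change.
    assert "abc".upper() == "ABC"


@pytest.mark.comprehensive
def test_slow_end_to_end():
    # A "slow"/"long" test reserved for fuller runs.
    assert sum(range(1_000_000)) == 499999500000
```

Individual subsets can then be selected on the command line with `pytest -m smoke`, or combined, e.g. `pytest -m "smoke or minimal"`.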
When running your tests, you choose which of these subsets to execute, perhaps by wiring the selection into a script of some kind.
I use this approach to good effect in automated testing (integrated into a continuous integration system), with a script that, depending on its input parameters, decides either to run only the smoke tests plus the minimal tests, or to run the smoke tests, minimal tests, and comprehensive tests (i.e. everything).
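The actual CI script is not part of this answer; the following is only a hypothetical reconstruction of the idea, assuming pytest and the markers sketched above. The file name, profile names, and marker names are all made up for the example.

```python
#!/usr/bin/env python3
"""Hypothetical CI helper: choose which test subsets to run from an argument."""
import subprocess
import sys

# Map an input parameter to a pytest marker expression (names are illustrative).
PROFILES = {
    "quick": "smoke or minimal",                  # smoke tests + minimal tests
    "full": "smoke or minimal or comprehensive",  # everything
}


def main() -> int:
    profile = sys.argv[1] if len(sys.argv) > 1 else "quick"
    expression = PROFILES.get(profile)
    if expression is None:
        print(f"unknown profile: {profile}", file=sys.stderr)
        return 2
    # Delegate to pytest, selecting tests by marker expression.
    return subprocess.call(["pytest", "-m", expression])


if __name__ == "__main__":
    sys.exit(main())
```

A CI job could then call something like `run_tests.py quick` on every commit and `run_tests.py full` for a nightly build.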
HTH