Answer
Yes, your GUI test libraries should be tested.
For example, if your library provides a Check method to verify the contents of a grid against a two-dimensional array, you need to be sure it works as intended.
Otherwise, your more sophisticated test cases, which exercise business processes that are supposed to put certain data into that grid, cannot be relied on. If a bug in the Check method produces false negatives, you will find the problem quickly. However, if it produces false positives, you are in for some serious headaches down the line.
To test the CheckGrid method:
- Populate the grid with known values.
- Call CheckGrid with those same values. If this case passes, at least one aspect of CheckGrid works.
- Then call CheckGrid with values that do not match the grid. In this second case you expect CheckGrid to report a test failure.
- The details of how you express that expectation will depend on your xUnit framework (see the example below). But essentially, if CheckGrid does not report a test failure, the test itself should fail.
- Finally, you may want a few more test cases for special conditions, such as an empty grid, or a grid whose size does not match the size of the array.
You should be able to adapt the following DUnit example to most frameworks to verify that CheckGrid correctly detects errors:
    begin
      // Populate TheGrid with known values first
      try
        CheckGrid(<incorrect values>, TheGrid);
        LFlagTestFailure := False;   // CheckGrid missed the mismatch
      except
        on E: ETestFailure do
          LFlagTestFailure := True;  // CheckGrid correctly reported a failure
      end;
      Check(LFlagTestFailure, 'CheckGrid method did not detect errors in grid content');
    end;
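For the first case in the list above (the one that should pass), a minimal sketch along the same lines might look like this. TGridCheckerTests, PopulateGrid and the constant values are hypothetical placeholders; TheGrid and CheckGrid are assumed to be the grid under test and your library's check method:

    procedure TGridCheckerTests.TestCheckGridAcceptsMatchingContent;
    const
      // Hypothetical 2x2 set of known values; use whatever shape your grid needs
      KnownValues: array[0..1, 0..1] of string = (('A', 'B'), ('C', 'D'));
    begin
      PopulateGrid(TheGrid, KnownValues);  // put the known values into the grid
      CheckGrid(KnownValues, TheGrid);     // must pass, i.e. not raise ETestFailure
    end;

If CheckGrid raises ETestFailure here, DUnit records the test as failed, which is exactly what you want when the library produces a false negative.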
Let me reiterate: yes, your GUI test libraries should be tested; the trick is how to do so efficiently.
The TDD process recommends figuring out how you intend to test new functionality before you actually implement it. The reasoning is that, if you don't, you often find yourself scratching your head as to how you are going to verify that it works: it can be extremely difficult to retrofit test cases onto an existing implementation.
Side note
One thing you mentioned troubles me a little... you said that 70% (of your time) goes into maintaining (your tests).
That sounds a little off to me, because ideally your tests should be simple, and should themselves only change when your interfaces or rules change.
I may have misunderstood, but I got the impression that you are not the one writing the "production" code. If so, you need more control over the cycle of switching between test code and production code in order to reduce this problem.
Some suggestions:
- Watch out for non-deterministic values. For example, dates and artificially generated keys can play havoc with certain tests. You need a clear strategy for how you will deal with this (another answer in itself); see the clock sketch after this list for one common approach.
- You will need to work closely with the "production" developers to ensure that the aspects of the interface you are testing can be stabilised. That is, they need to know how your tests identify and interact with the GUI components, so that they don't arbitrarily break your tests with changes that "don't affect them" (see the component-lookup sketch after this list for one way to keep that contract narrow).
- On the previous point, it would help if your automated tests were run whenever they make changes.
- You should also be wary of too many tests that simply boil down to arbitrary permutations. For example, suppose every client is assigned one of the categories A, B, C or D; then four "new client" tests (one per category) give you three extra tests that don't really tell you much more than the first, and are that much more to maintain.
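On the non-deterministic values point above, one common approach is to route the application's date/time lookups through a replaceable hook, so that tests can pin "now" to a fixed value. This is only a minimal sketch; the unit and names (AppClock, AppNow, TNowFunc) are illustrative, not part of any particular library:

    unit AppClock;

    interface

    uses
      SysUtils;

    type
      TNowFunc = function: TDateTime;

    var
      AppNow: TNowFunc;  // production code calls AppNow instead of SysUtils.Now

    implementation

    function DefaultNow: TDateTime;
    begin
      Result := Now;  // real system clock by default
    end;

    initialization
      AppNow := DefaultNow;

    end.

A test's SetUp can then point AppNow at a function that returns, say, EncodeDate(2020, 1, 15), making date-dependent assertions deterministic; restore the default in TearDown so other tests are unaffected.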
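On the point about how your tests identify GUI components: looking controls up by their component Name (which the production developers agree to keep stable), rather than by position or caption, keeps that contract narrow. A rough sketch, in which FindEditOnForm and the names in the usage comment are purely illustrative:

    // assumes: uses SysUtils, Classes, Forms, StdCtrls;
    function FindEditOnForm(AForm: TForm; const AName: string): TEdit;
    var
      LComp: TComponent;
    begin
      LComp := AForm.FindComponent(AName);
      if LComp = nil then
        raise Exception.CreateFmt('Control "%s" not found on form %s',
          [AName, AForm.Name]);
      Result := LComp as TEdit;  // EInvalidCast if the control is not a TEdit
    end;

    // In a test:
    //   CheckEquals('Acme Ltd', FindEditOnForm(CustomerForm, 'edtCustomerName').Text);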
Craig Young