Testing the tests?

I mainly spend my time on automated tests of win32 and .NET applications, which take about 30% of my time to write and 70% to maintain. We have learned how to reduce maintenance time and have already moved to a reusable test library that covers most of the key components of our software. In addition, we still have some work to do to get our library into a state where we can use keyword-based testing.

I am considering unit testing our test library, but I wonder whether it is worth the time. I am a strong proponent of unit testing software, but I'm not sure how to handle test code.

Do you think automated GUI test libraries should be unit tested? Or is it just a waste of time?

+7
unit-testing automated-tests
13 answers

First of all, I have found it very useful to look at unit tests as "executable specifications" instead of tests. I write down what I want my code to do, and then I implement it. Most of the benefit I get from writing unit tests is that they drive the implementation process and focus my thinking. The fact that they are reusable to verify my code is almost a happy coincidence.

Testing the tests is just a way of moving the problem, not solving it. Who will test the tests that test the tests? The "trick" that TDD uses to make sure tests are actually useful is to make them fail first. That could be something you can use here: write a test, watch it fail, then fix the code.
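A minimal sketch of that red-then-green discipline in Python's unittest; the `slugify` function and its test are hypothetical, purely for illustration:

```python
import unittest

def slugify(title):
    # First cut of the production code, written only AFTER the test
    # below had been run and seen to fail (red), then made to pass (green).
    return title.lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_words(self):
        # This test was written first; running it before slugify existed
        # failed with a NameError, proving the test is able to fail at all.
        self.assertEqual(slugify("Hello World"), "hello-world")

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=0)
```

Seeing the test fail once is the only "test of the test" you get, so it is worth doing deliberately rather than skipping straight to green.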

+11

I do not think you should unit test your unit tests.

But if you have written your own testing library, with custom assertions, keyboard controllers, button testers, or whatever else, then yes: you should write unit tests to ensure that they all work as intended.

The NUnit library, for example, is itself unit tested.
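As a concrete case, a home-grown polling helper of the kind GUI automation libraries lean on can be unit tested directly. A Python sketch (the `wait_until` helper is hypothetical):

```python
import time

def wait_until(predicate, timeout=1.0, interval=0.01):
    """Poll predicate until it returns True or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Unit tests for the helper itself:
assert wait_until(lambda: True)                     # immediately-true predicate
assert not wait_until(lambda: False, timeout=0.05)  # gives up at the timeout

state = {"calls": 0}
def eventually_true():
    state["calls"] += 1
    return state["calls"] >= 3
assert wait_until(eventually_true)                  # true after a few polls
```

If a helper like this silently returned True on timeout, every GUI test built on it would be worthless, which is exactly why the library itself deserves tests.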

+9

In theory, it is software and therefore should be unit tested. If you are rolling your own unit testing library, especially, you will want to unit test it as you go.

However, the actual unit tests for your core software system should never grow complex enough to require unit testing themselves. If they are so complex that they require unit testing, you need some serious refactoring of your software and some attention to simplifying your unit tests.

+5

Perhaps you should take a look at Who tests the tests.

The short answer is that the code tests the tests, and the tests verify the code.

But how?

Atomic clock testing
Let me start with an analogy. Suppose you are traveling with an atomic clock. How do you know whether the clock is correctly calibrated?

One way is to ask your neighbor, who also carries an atomic clock (because everyone carries one around, of course), and compare the two. If they both report the same time, then you have a high degree of confidence that they are both correct.

If they are different, then you know that one or the other is wrong.

So, in this situation, if the only question you are asking is "Is my clock giving the correct time?", do you really need a third clock to check the second, and a fourth clock to check the third? Not at all. Stack overflow averted!

IMO, it is a trade-off between how much time you have and how much quality you would like to have.

  • If I were using a home-grown test harness, I would test it if time permitted.
  • If it is a third-party tool I am using, I would expect the supplier to have tested it.
+5

There is really no reason why you couldn't, or shouldn't, unit test your library. Some parts may be too difficult to unit test properly, but most of it can probably be tested without much trouble.

Actually, it is probably especially useful to unit test this type of code, since you expect it to be reliable and reusable.

+2

Tests test the code, and the code tests the tests. When you express the same intention in two different ways (once in tests and once in code), the probability of both being wrong is very low (unless the requirements themselves were wrong). This can be compared to the double-entry bookkeeping used by accountants. See http://butunclebob.com/ArticleS.UncleBob.TheSensitivityProblem

The same issue was recently discussed in the comments at http://blog.objectmentor.com/articles/2009/01/31/quality-doesnt-matter-that-much-jeff-and-joel


As for your question of whether GUI test libraries should be tested... If I understand correctly, you are creating your own test library and you want to know whether you should test it. Yes. To be able to rely on the library to report results correctly, you must have tests that ensure the library produces neither false positives nor false negatives. Whether those tests are unit tests, integration tests, or acceptance tests, there must be at least some tests.

Typically, writing unit tests after the code has been written is too late, because by then the code is usually more coupled. Unit tests force the code to be more decoupled, because otherwise small units (a class or a closely related group of classes) cannot be tested in isolation.
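A small Python sketch of that decoupling effect (all names hypothetical): because the test needs to substitute a fake, the class under test ends up taking its dependency as a parameter instead of hard-wiring it:

```python
class ReportSender:
    """Testable in isolation because the transport is injected, not hard-wired."""
    def __init__(self, transport):
        self.transport = transport

    def send_failure(self, test_name, message):
        self.transport.post("/failures", {"test": test_name, "message": message})

# In a unit test, a fake transport stands in for the real network:
class FakeTransport:
    def __init__(self):
        self.posts = []
    def post(self, path, payload):
        self.posts.append((path, payload))

fake = FakeTransport()
ReportSender(fake).send_failure("test_login", "button not found")
assert fake.posts == [("/failures",
                       {"test": "test_login", "message": "button not found"})]
```

Had `ReportSender` constructed its own network client internally, this test would have needed a live server; the injected seam is what the test-first workflow forces into existence.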

When the code is already written, usually you can add only integration tests and acceptance tests. They exercise the whole system, so you can make sure the features work correctly, but covering every corner case and execution path is much harder than with unit tests.

+2

Kent Beck's book, Test-Driven Development: By Example, contains an example of test-driving a unit test framework, so it is certainly possible to test your tests.

I have not worked with GUIs or .NET, but what is it that worries you about your unit tests?

Are you worried that they may describe the target code as incorrect when it is actually functioning correctly? I suppose that is a possibility, but you would probably notice if that happened.

Or are you worried that they might describe the target code as functioning correctly even when it is not? If that is your worry, then mutation testing may be what you need. Mutation testing modifies portions of the target code to see whether those changes cause the tests to fail. If they do not, then either the mutated code is never executed, or the results of that code are never checked by the tests.

If mutation testing software is not available for your platform, you can do the mutation manually, sabotaging the target code yourself and observing whether it causes unit test failures.
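A manual mutation session might look like this in Python (the `is_adult` function is hypothetical; the mutated copy stands in for editing the real code by hand):

```python
def is_adult(age):
    return age >= 18

def test_is_adult():
    assert is_adult(18)
    assert not is_adult(17)

test_is_adult()  # passes against the real code

# Manual mutation: sabotage the boundary condition and re-run the test.
def is_adult_mutated(age):
    return age > 18   # '>=' deliberately mutated to '>'

mutation_detected = False
try:
    assert is_adult_mutated(18)   # the original test's first assertion
except AssertionError:
    mutation_detected = True

# If the tests survive the sabotage, they are too weak to be trusted.
assert mutation_detected, "tests did not notice the mutation"
```

A mutant that survives tells you exactly where your test suite has a blind spot, which is the point of the exercise.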

If you are building a suite of unit-testing tools that is not tied to a specific application, then perhaps you should build a trivial application against which you can run your test software and verify that it behaves as expected.

One problem with mutation testing is that it does not guarantee that the tests cover all the potential scenarios a program may encounter. Instead, it only verifies the scenarios that the target code already handles.

+2

Usually we use these rules:

1) All production code has both unit tests (which closely correspond to the classes and functions of the production code) and separate functional tests (which exercise the user-visible features)

2) We do not write tests for third-party code, such as .NET controls or third-party libraries. The exception is when we know they contain a bug we are working around. A regression test for that bug (one which fails when the third-party bug disappears) will warn you when an update to your third-party libraries fixes the bug, meaning you can remove your workarounds.

3) Unit tests and functional tests are not themselves tested directly - EXCEPT that we use the TDD procedure of writing the test before the production code, and then running the test to see it fail. If you do not do this, you will be amazed at how easy it is to accidentally write tests that always pass. Ideally, you then implement your production code one step at a time and run the tests after each change: you see one assertion in your test fail, implement the code for it, and watch the test pass; then you see the next assertion fail; and so on. In this way your tests do get tested, but only while the production code is being written.

4) If we factor code out of our unit or functional tests - creating a test library that is used by many tests - then we do unit test all of that.

This has served us very well. We have stuck to these rules essentially 100% of the time, and we are very happy with the arrangement.

+2

Answer

Yes, your GUI test libraries should be tested.

For example, if your library provides a Check method that verifies the contents of a grid against a two-dimensional array, you want to be sure that it works as intended.

Otherwise, your more sophisticated test cases - the ones that test business processes in which the grid must receive certain data - cannot be relied upon. If a bug in the Check method caused false negatives, you would find the problem quickly. However, if it produced false positives, you would be in for major headaches down the line.

To test the CheckGrid method:

  • Fill a grid with known values
  • Call the CheckGrid method with those same values
  • If this case passes, at least one aspect of CheckGrid works.
  • For a second case, call CheckGrid with deliberately incorrect values; here you expect the CheckGrid method to report a test failure.
  • The details of how you specify that expectation will depend on your xUnit framework (see the example below). But basically, if CheckGrid does not report a test failure, then the test itself should fail.
  • Finally, you will probably want a few more test cases for special conditions, such as empty grids and a grid size that does not match the size of the array.

You should be able to adapt the following DUnit example to most frameworks to verify that CheckGrid correctly detects errors:

begin
  //Populate TheGrid
  try
    CheckGrid(<incorrect values>, TheGrid);
    LFlagTestFailure := False;
  except
    on E: ETestFailure do
      LFlagTestFailure := True;
  end;
  Check(LFlagTestFailure, 'CheckGrid method did not detect errors in grid content');
end;
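In a Python xUnit framework, the same pattern might look like this (the `check_grid` helper and `GridCheckError` are hypothetical stand-ins for the library's CheckGrid method and its failure exception):

```python
import unittest

class GridCheckError(AssertionError):
    pass

def check_grid(expected, grid):
    """Hypothetical stand-in for the test library's CheckGrid method."""
    if expected != grid:
        raise GridCheckError("grid contents do not match")

class TestCheckGrid(unittest.TestCase):
    def test_detects_incorrect_values(self):
        the_grid = [[1, 2], [3, 4]]
        incorrect = [[1, 2], [3, 5]]
        # This test FAILS unless check_grid reports a failure for bad data,
        # mirroring the LFlagTestFailure flag in the DUnit version.
        with self.assertRaises(GridCheckError):
            check_grid(incorrect, the_grid)

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=0)
```

`assertRaises` plays the role of the try/except block in the DUnit code: the expected failure is caught and turned into a pass, while silence from `check_grid` fails the test.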

Let me reiterate: your GUI test libraries should be tested. The trick is how to do it efficiently.

The TDD process recommends that you first figure out how you intend to test the new functionality before you actually implement it. The reason is that if you do not, you will often find yourself scratching your head about how you are going to check whether it works; it is extremely difficult to retrofit test cases onto existing implementations.

Side note

One thing you said troubles me a little... you said that 70% (of your time) goes to maintaining (your tests).

That sounds a little wrong to me, because ideally your tests should be simple and should only need to change when your interfaces or business rules change.

I may have misunderstood you, but I get the impression that your test code is not being treated as "production" code. If so, you need more control over the cycle of changes between test code and production code to reduce your problem.

Some suggestions:

  • Watch out for non-deterministic values. For example, dates and artificial keys can play havoc with certain tests. You need a clear strategy for how you will deal with these. (That is another answer in itself.)
  • You will need to work closely with the "product developers" to ensure that the aspects of the interfaces you test are stabilized. That is, they need to know how your tests identify and interact with the GUI components, so that they do not arbitrarily break your tests with changes that "do not affect them."
  • On the previous point, it would help if the automated tests were run whenever they make changes.
  • You should also be wary of writing too many tests that simply boil down to arbitrary permutations. For example, if each client has category A, B, C, or D, then four "new client" tests (one for each category) give you three additional tests that do not really tell you much more than the first, and all of them are "difficult" to maintain.
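On the non-deterministic-values point, one common strategy is to inject the clock into the code under test so that dates stop drifting. A Python sketch (class and method names hypothetical):

```python
import datetime

class OrderScreen:
    """Takes a clock function instead of calling datetime.now() directly."""
    def __init__(self, clock=datetime.datetime.now):
        self.clock = clock

    def default_delivery_date(self):
        # Business rule (assumed for illustration): deliver in 3 days.
        return (self.clock() + datetime.timedelta(days=3)).date()

# The test pins the clock, so the expected value never changes between runs:
fixed = lambda: datetime.datetime(2009, 6, 1, 12, 0, 0)
assert OrderScreen(clock=fixed).default_delivery_date() == datetime.date(2009, 6, 4)
```

Production code uses the default real clock; only the tests pass a frozen one, which removes one whole class of flaky failures from the maintenance load.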
+2

Personally, I do not unit test my automation libraries; instead, I run them against a modified version of the baseline to ensure that all the checkpoints work. The key point here is that my automation is mainly intended for regression testing, i.e. checking that the results of the current run match the expected results (which, as a rule, are the results of the last run). By running the tests against an appropriately modified set of expected results, all the tests should fail. If they do not, you have a bug in your test suite. This is a concept borrowed from mutation testing, and I find it works well for testing GUI automation suites.
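A sketch of that baseline-mutation idea in Python (all names hypothetical): run the checkpoints once against the real baseline, then against a deliberately perturbed one, where every checkpoint must fail:

```python
def run_checkpoints(actual, expected_baseline):
    """Return the names of checkpoints whose actual value differs from the baseline."""
    return [name for name, value in expected_baseline.items()
            if actual.get(name) != value]

# Results captured from the current run of the application under test:
actual_results = {"title": "Invoices", "row_count": 42}

# Normal run: the real baseline should produce no failures.
baseline = {"title": "Invoices", "row_count": 42}
assert run_checkpoints(actual_results, baseline) == []

# Verification run: with every expected value perturbed, every checkpoint
# must fail; any checkpoint that still passes is dead and needs fixing.
mutated_baseline = {"title": "XXX", "row_count": -1}
assert sorted(run_checkpoints(actual_results, mutated_baseline)) == ["row_count", "title"]
```

The mutated run tests the test suite itself: it proves each checkpoint is actually being compared, not silently skipped.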

+1

From your question, I understand that you are building a keyword-driven framework for test automation. In that case, it is always advisable to perform some sanity checks on the generic and GUI-specific functions. Since you are interested in unit testing each GUI testing function in your libraries, go for it. Testing is always good. It is not a waste of time; I would consider it "value added" to your framework.

You also mentioned not being sure how to handle the test code. If you mean the testing approach, then group the functions/modules that do similar work - for example, GUI control (presence) checks, GUI element input, GUI element output - group them by element type and follow the unit test approach for each group. That makes the testing easier to track. Cheers!
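A minimal Python sketch of grouping keyword functions behind a dispatch table (the keyword names and the dictionary representation of GUI elements are hypothetical); each group of similar actions can then get its own unit tests:

```python
# Group 1: presence checks for GUI controls.
def verify_exists(element):
    return element.get("visible", False)

# Group 2: input actions on GUI elements.
def set_text(element, value):
    element["text"] = value
    return True

# Keyword table mapping step names in test scripts to implementations.
KEYWORDS = {
    "VerifyExists": verify_exists,
    "SetText": set_text,
}

def run_step(keyword, *args):
    """Dispatch one keyword-driven test step to its implementation."""
    return KEYWORDS[keyword](*args)

# Unit tests, one small group at a time:
button = {"visible": True}
field = {"visible": True, "text": ""}
assert run_step("VerifyExists", button)
assert run_step("SetText", field, "hello") and field["text"] == "hello"
```

Because each group is a plain function behind the table, it can be tested in isolation before any real GUI is attached.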

+1

I would suggest that testing the tests is a good idea and something that should be done. Just make sure that whatever you build to test your application is not more complex than the application itself. As already mentioned, TDD is a good approach even when building automated functional tests (I personally would not do it that way, but it is a good approach anyway). Testing your test code is also a good approach. IMHO, if you are automating GUI testing, just carry on following all the manual tests (you should have steps, raw scripts, expected results, etc.) and make sure they pass. Then, for the other tests you create that are no longer performed manually, unit test them and follow the TDD approach (and, if you have time, you can unit test the others too). Lastly, the keyword-driven approach is, IMO, the best one you could follow, because it gives you the most flexibility.

0

You might want to look into a mutation testing framework (if you are working with Java, check out PIT Mutation Testing). Another way to evaluate the quality of your unit tests is to review the reports provided by tools such as SonarQube; the reports include various coverage metrics.

0
source share
