Writing quality tests

We know that code coverage is a poor metric for measuring the quality of test code. We also know that testing the language or framework itself is a waste of time.

On the other hand, what indicators can we use to judge the quality of tests? Are there any best practices or rules of thumb you've learned that help you identify and write better tests?

+14
testing
Oct 12 '08 at 19:02
7 answers
  • Make sure your tests are independent of each other. No test should depend on the execution or results of any other test (a short sketch follows this list).
  • Make sure each test clearly defines its entry criteria, test steps, and exit criteria.
  • Set up a requirements verification traceability matrix (RVTM). Each test should verify one or more requirements, and each requirement should be verified by at least one test.
  • Make sure your tests are identifiable. Establish a consistent naming or labeling convention and stick to it. Reference the test identifier when logging defects.
  • Treat your tests like your code. Have a test development process that mirrors your software development process. Tests should get peer reviews, be versioned, have change control procedures, and so on.
  • Classify and organize your tests. Make it easy to find and run a test, or a suite of tests, as needed.
  • Keep your tests as concise as possible. This makes them easier to run and automate. It's better to run many small tests than one big test.
  • When a test fails, make it easy to determine why it failed.
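For example, here is a minimal JUnit 5 sketch of the first few points (independent tests via a fresh fixture, descriptive names, small scope). ShoppingCart and Item are hypothetical classes invented for the illustration:

```java
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Sketch only: ShoppingCart and Item are made up to illustrate the points above.
class ShoppingCartTest {

    private ShoppingCart cart;

    @BeforeEach
    void setUp() {
        // Fresh fixture for every test, so no test depends on another's results.
        cart = new ShoppingCart();
    }

    @Test
    void totalIsZeroForEmptyCart() {
        assertEquals(0, cart.total());
    }

    @Test
    void totalSumsPricesOfAddedItems() {
        cart.add(new Item("book", 10));
        cart.add(new Item("pen", 2));
        assertEquals(12, cart.total());
    }
}
```

Each test name states the behavior it verifies, which also makes it easy to determine why a failing test failed.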
+15
Oct 13 '08 at 11:53

Make sure it's easy and quick to write tests. Then write a lot of them.

I find it very hard to predict in advance which tests will be the ones that ultimately fail, either now or a long way down the line. I usually take a scattershot approach, trying to hit the corner cases when I can think of them.

Also, don't be afraid to write big tests that exercise a bunch of things together. Sure, if such a test fails it may take longer to figure out what went wrong, but problems often only appear once you start gluing things together.
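As a rough illustration of that mix (JUnit 5 assumed; Parser, Formatter and Pipeline are hypothetical classes, not from the question):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class PipelineTest {

    // Small, focused tests: quick to write, quick to diagnose when they fail.
    @Test
    void parserHandlesEmptyInput() {
        assertTrue(new Parser().parse("").isEmpty());
    }

    @Test
    void formatterQuotesSpecialCharacters() {
        assertEquals("\"a,b\"", new Formatter().format("a,b"));
    }

    // A bigger test that glues the pieces together; harder to diagnose,
    // but it catches integration problems the small tests cannot see.
    @Test
    void pipelineParsesAndFormatsRoundTrip() {
        Pipeline pipeline = new Pipeline(new Parser(), new Formatter());
        assertEquals("\"a,b\"", pipeline.process("a,b"));
    }
}
```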

+5
Oct 12 '08 at 19:39

Write tests that exercise the basic functionality and common use cases of the software. Then write tests that check edge cases and expected exceptions.

In other words, write good unit tests from the client's point of view and forget about metrics for the test code. No metric will tell you whether your test code is good; only working software tells you when your test code is good.
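A minimal JUnit 5 sketch of that progression, with a hypothetical AccountService standing in for "the software as the client sees it":

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class AccountServiceTest {

    @Test
    void withdrawReducesBalance() {                 // basic functionality
        AccountService account = new AccountService(100);
        account.withdraw(30);
        assertEquals(70, account.balance());
    }

    @Test
    void withdrawingEntireBalanceLeavesZero() {     // edge case
        AccountService account = new AccountService(100);
        account.withdraw(100);
        assertEquals(0, account.balance());
    }

    @Test
    void overdrawThrowsExpectedException() {        // expected exception
        AccountService account = new AccountService(100);
        assertThrows(IllegalArgumentException.class, () -> account.withdraw(150));
    }
}
```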

+2
Oct 12 '08 at 19:08

I find use cases very helpful for getting good test coverage. If your functionality is described in terms of use cases, each one can easily be converted into several test cases covering the positive, negative, and exception paths. A use case also spells out the preconditions and any data setup, which is very handy when writing test cases.
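As a sketch of that mapping for a hypothetical "user logs in" use case (UserStore and UnknownUserException are invented for the example; JUnit 5 assumed): the precondition becomes the fixture, and the main, alternative and exception flows become test cases.

```java
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class LoginUseCaseTest {

    private UserStore users;

    @BeforeEach
    void givenARegisteredUser() {          // use-case precondition / data setup
        users = new UserStore();
        users.register("alice", "s3cret");
    }

    @Test
    void positive_validCredentialsLogIn() {
        assertTrue(users.login("alice", "s3cret"));
    }

    @Test
    void negative_wrongPasswordIsRejected() {
        assertFalse(users.login("alice", "wrong"));
    }

    @Test
    void exception_unknownUserThrows() {
        assertThrows(UnknownUserException.class, () -> users.login("bob", "s3cret"));
    }
}
```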

+2
Oct 14 '08 at 16:08

My rules of thumb are:

  • Cover even the simplest test cases in your test plan (don't risk leaving your most frequently used functionality untested)
  • Check the appropriate box next to each test case.
  • As Joel says, have a separate team that does the testing
+1
Oct 12 '08 at 19:14

I would disagree that code coverage is not a useful metric. If you don't have 100% code coverage, that at least points you to the areas that need more tests.

In general, though, once you have adequate coverage of the application, the next logical step is to write tests that either directly verify the requirements the code was written to satisfy, or stress the edge cases. Neither of those will fall out naturally from anything you can easily measure directly.
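One lightweight way to make that requirement link explicit, assuming JUnit 5 (the requirement ID "REQ-142" and the RateLimiter class are invented for the example), is to tag the test with the requirement it verifies and aim it at the documented boundary:

```java
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class RateLimiterRequirementTest {

    @Test
    @Tag("REQ-142")
    @DisplayName("REQ-142: reject the 101st request within one minute")
    void rejectsRequestsOverTheDocumentedLimit() {
        RateLimiter limiter = new RateLimiter(100);
        for (int i = 0; i < 100; i++) {
            assertTrue(limiter.allow("client-1"));
        }
        assertFalse(limiter.allow("client-1"));   // the edge just past the limit
    }
}
```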

+1
Oct 12 '08 at 19:21

There are two good ways to check the quality of your tests.

1. Code review

During a code review you can check the important points that @Patrick Cuff lists in his answer https://stackoverflow.com/a/464632/

A code review is a systematic examination (often called peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of the software and the developers' skills.

2. Mutation testing

The second approach is cheaper, since it is an automated task that measures the quality of your tests.

Mutation testing (or mutation analysis or program mutation) is used to develop new software tests and evaluate the quality of existing software tests.
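To make the idea concrete, here is a hand-written sketch of what a mutation testing tool does automatically: it mutates the production code (for example, changing > to >=) and checks whether your existing tests notice. The Discount class and its tests are hypothetical.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class Discount {
    // original: free shipping strictly above 50
    static boolean freeShipping(int total) { return total > 50; }

    // a typical generated mutant: ">" changed to ">="
    static boolean freeShippingMutant(int total) { return total >= 50; }
}

class DiscountTest {
    // This test does NOT kill the mutant: both versions return true for 60
    // and false for 40, so mutation testing flags the boundary as under-tested.
    @Test
    void freeShippingForLargeOrders() {
        assertTrue(Discount.freeShipping(60));
        assertFalse(Discount.freeShipping(40));
    }

    // Adding the boundary value 50 kills the mutant: the original returns
    // false, the mutant would return true, so the suite now distinguishes them.
    @Test
    void noFreeShippingAtExactlyFifty() {
        assertFalse(Discount.freeShipping(50));
    }
}
```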

Related Questions

  • How to ensure the quality of junit tests?
+1
Jan 11 '14 at 23:33