What is the point of basis path coverage?

The onjava article seems to imply that covering the basis paths is an adequate substitute for covering all paths, thanks to some linear-independence / cyclomatic-complexity magic.

Using an example similar to the one in the article:

public int returnInput(int x, boolean one, boolean two) {
    int y = x;
    if (one) { y = x - 1; }
    if (two) { x = y; }
    return x;
}

with the basis set {FF, TF, FT}, the bug is never exposed; only the untested TT path reveals it.
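For concreteness, here is a minimal sketch of that basis set as tests (the class and helper names are mine, not from the article); the three basis-path cases all pass, and only the left-out TT case fails:

public class BasisPathDemo {

    // The method from the question: it is meant to return x unchanged,
    // but when both flags are true it actually returns x - 1.
    static int returnInput(int x, boolean one, boolean two) {
        int y = x;
        if (one) { y = x - 1; }
        if (two) { x = y; }
        return x;
    }

    static void check(String path, boolean one, boolean two) {
        int result = returnInput(10, one, two);
        System.out.println(path + ": expected 10, got " + result
                + (result == 10 ? " (pass)" : " (FAIL)"));
    }

    public static void main(String[] args) {
        // The basis set from the question - all three pass:
        check("FF", false, false);
        check("TF", true, false);
        check("FT", false, true);
        // The one path the basis set leaves out is the only one that fails:
        check("TT", true, true);
    }
}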

So what good is basis path testing? It does not seem much better than plain branch coverage.

+2
3 answers

[Disclaimer: I had never heard of this technique before; it just looks interesting, so I did a few searches and this is what I think I have learned. Hopefully someone who actually knows what they are talking about will contribute too ...]

I think it is meant to be a better way of generating branch coverage tests, not a complete replacement for path coverage. There is a much longer paper here that restates the goals somewhat: http://www.westfallteam.com/sites/default/files/papers/Basis_Path_Testing_Paper.pdf

The onjava article states: "The purpose of basis path testing is to test all decision outcomes independently of one another. Testing the four basis paths achieves this purpose, making the other paths extraneous."

I think that "outsider" here means "unnecessary for the purpose of testing the base path", and not as you might assume, "a complete waste of time every time."

I believe the point of exercising the branches independently of one another is to break the accidental correlations between the paths that work and the paths you test, which crop up with frightening frequency when the same person writes both the code and an arbitrary set of branch coverage tests. There is no magic in linear independence; it is simply a systematic way of generating branch coverage that stops the tester from making the same assumptions the programmer made about which branch outcomes go together.

So you are right: basis path testing misses your bug, and in general can miss bugs on up to 2^(N-1) - N of the possible paths, where N is the cyclomatic complexity. It does not aim to cover those 2^(N-1) - N paths; it aims to make the N paths you do test a deliberate, systematic choice rather than whatever N paths the coder happened to pick :-)
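Plugging the question's example into that formula (my arithmetic, not from the answer): with two independent if statements the cyclomatic complexity is N = 3, so there are 2^(N-1) = 4 possible paths, a basis set contains N = 3 of them, and 2^(N-1) - N = 1 path (here, TT) goes untested.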

+5

Path coverage is no better than any other coverage metric - these metrics only show how much of the code has been exercised. The fact that you can reach 100% branch coverage with the set (TF, FT) as well as with (TT, FF) means that, if your exit criterion says to stop at 100% coverage, whether you catch the bug comes down to luck.

Coverage should not be the tester's goal - finding bugs should be. A test case is just a way of exposing a bug, and coverage is just a proxy showing how much of that bug-finding activity has been done. As with all other white-box techniques, getting the most coverage at the least cost requires actually understanding the code, so that you could point out a defect even without a test case; test cases are then mostly good for regression and for documenting the defect.

Since coverage only hints at how much has been done, only experience can really say how much is enough, and because that is hard to express in numbers we fall back on other measures, i.e. coverage statistics. Not sure whether this makes sense to you - judging by the dates, I am answering long after your question was posted ...

+1

My recollection of McCabe's work on this subject is that you generate the basis paths one at a time, flipping a single decision per step: you flip only the last decision first, then work backwards, until there are no new decisions left to flip.

Suppose we start with FF, the shortest path. Following the algorithm, we flip the last decision in the chain and get FT. We have now exercised the second if independently: if there were a bug in the second if, one of these two tests would have noticed what happens when the second if suddenly starts to execute. Otherwise either our tests are no good or our code is untestable, and both of those tell us the code needs rework.

Having covered FT, we back up one node along the path and flip the first decision from F to T. When building basis paths we flip only one decision at a time, so we are forced to leave the second if alone, yielding ... TT!

That leaves us with the basis paths {FF, FT, TT}, which address the very case you raised.
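As a sketch (the class name is mine, not from the answer), the basis set derived this way does catch the bug, because TT is one of the three tests:

public class McCabeBasisDemo {

    static int returnInput(int x, boolean one, boolean two) {
        int y = x;
        if (one) { y = x - 1; }
        if (two) { x = y; }
        return x;
    }

    public static void main(String[] args) {
        // Basis set built by flipping one decision at a time, starting from FF:
        System.out.println("FF: " + returnInput(10, false, false)); // 10 - ok
        System.out.println("FT: " + returnInput(10, false, true));  // 10 - ok
        System.out.println("TT: " + returnInput(10, true, true));   // 9  - bug caught
    }
}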

But wait, you say: what if there is a bug in the TF case? The answer: we should already have noticed it between two of the other three tests. Consider:

  • The second if has already had its chance to demonstrate its effect on the code, independently of anything else going on in the program, through the FF and FT tests.
  • The first if had its chance to demonstrate its independent effect going from FT to TT.

We could have started with the TT case (the longest path) instead. We would have arrived at a different set of basis paths, but they would still exercise each if statement independently.
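Working that out for the example (my derivation, following the same procedure as above): starting from TT and flipping the last decision gives TF; backing up one node and flipping the first decision, leaving the second alone, gives FF. The set {TT, TF, FF} differs from the one above, but it still varies each if independently, and because it includes TT it would also expose the bug in the question.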

Note that in your simple example there is no dependence between the conditions of the if statements; dependence between conditions cripples basis path generation.

In short: basis path testing, performed systematically, avoids the problem you think it has. Basis path testing does not tell you how to write testable code (TDD does that), nor does it tell you which assertions to make. That is your job as a human being.

Source: this is not my research area, but I read McCabe's paper on this very subject a few years ago: http://mccabe.com/pdf/mccabe-nist235r.pdf

0
