Where to add an assertion for each generated IntelliTest test

Here I illustrate the problem with an example; the original question below states it more abstractly, so there is no need to read it.

Update: an example question

Suppose we implemented this function for finding the minimum of an int[], but with a bug:

    // Deliberately buggy: min starts at an arbitrary 1000 and the loop skips data[0].
    public int MyMin(int[] data)
    {
        int min = 1000;
        for (int i = 1; i < data.Length; i++)
        {
            if (data[i] < min)
            {
                min = data[i];
            }
        }
        return min;
    }

Running IntelliTest on this function gives us the following set of generated tests (screenshot not reproduced here).

Note that for tests No. 4 and No. 6 the function does not calculate the minimum correctly because of the buggy implementation. However, these tests pass, which is undesirable.
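For reference, each generated test in the .g.cs file looks roughly like the sketch below, assuming the auto-generated PUT is named MyMin and simply forwards to the implementation; the test name, the PexGeneratedBy target, the input, and the asserted value are all illustrative. IntelliTest records the observed result as the expected value, which is why such a test passes even though the true minimum of the input is 3:

    [TestMethod]
    [PexGeneratedBy(typeof(MyMinTest))]
    public void MyMin04()
    {
        // For a single-element array the buggy loop never runs, so MyMin returns 1000.
        int result = this.MyMin(new int[] { 3 });

        // IntelliTest asserts the observed value, not the intended minimum of 3,
        // so the test passes despite the bug.
        Assert.AreEqual<int>(1000, result);
    }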

IntelliTest cannot magically determine the intended behavior of MyMin and make the tests fail on these inputs. Still, it would be very useful if we could manually specify the expected result for these tests.

Solution

@michał-komorowski's approach is possible, but for each test case I have to repeat its input in terms of PexAssumes. Is there a more elegant / cleaner way to specify the desired output for given test inputs?
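For concreteness, this is roughly what that repetition looks like for one test case; it is only a sketch, assuming MyMin is an instance method of a class called Class1 and that the intended minimum for this particular input is known (the method name and the concrete input are illustrative):

    [PexMethod]
    public void MyMinSingleElement([PexAssumeUnderTest]Class1 target, int[] data)
    {
        // Repeat the exact input of one generated test as assumptions...
        PexAssume.IsNotNull(data);
        PexAssume.IsTrue(data.Length == 1);
        PexAssume.IsTrue(data[0] == 3);

        int result = target.MyMin(data);

        // ...so that the intended result can be asserted explicitly.
        PexAssert.AreEqual(3, result);
    }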

Original question

IntelliTest generates a parameterized test that can be modified, and general / global assertions can be added there. It also generates a minimal set of inputs that maximizes code coverage. IntelliTest stores these inputs as separate unit tests, each of which calls the parameterized test with the crafted input.

I am looking for a way to add an assertion for each input.

Since each input is stored as a unit test function in a .g.cs file, the assertion could be added there. The problem is that these functions are not meant to be edited by the user, as they will be overwritten by IntelliTest in subsequent runs.

What is the recommended way to add assertions for each unit test?

+6
2 answers

Assertions should not be added to the generated test methods (the methods with the [TestMethod] attribute); they are only used to provide parameter values. The place for assertions is the methods with the [PexMethod] attribute.
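For instance, applied to the MyMin example above, a PUT with a general assertion might look roughly like the sketch below; the class name Class1, the assumptions, and the particular oracle are illustrative choices, and the method is assumed to sit in the usual [PexClass]-decorated test class with the Microsoft.Pex.Framework usings:

    [PexMethod]
    public int MyMin([PexAssumeUnderTest]Class1 target, int[] data)
    {
        PexAssume.IsNotNull(data);
        PexAssume.IsTrue(data.Length > 0);

        int result = target.MyMin(data);

        // A general, input-independent oracle: the result is no larger than
        // any element and actually occurs somewhere in the array.
        foreach (int x in data)
        {
            PexAssert.IsTrue(result <= x);
        }
        PexAssert.IsTrue(Array.IndexOf(data, result) >= 0);

        return result;
    }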

At first this may look like a limitation, but if we look at how IntelliTest works, it is not. It makes no sense to add an assertion to each individual input, because inputs can be deleted, updated, or created at any time, for example when:

  • the method under test is changed,
  • the PexAssume class is used,
  • the settings of the [PexMethod] attribute are changed.

However, you can do something else: add more than one "Pex" method for the tested method and use PexAssume. For example, suppose we have a BubbleSort method and we want to define different assertions depending on the length of the input array.

    [PexMethod]
    public void BubbleSort5(int[] a)   // distinct names so both PUTs can coexist in one class
    {
        PexAssume.IsTrue(a.Length == 5);
        int[] result = Program.BubbleSort(a);
        // Assertions specific to an array with 5 elements
    }

    [PexMethod]
    public void BubbleSort10(int[] a)
    {
        PexAssume.IsTrue(a.Length == 10);
        int[] result = Program.BubbleSort(a);
        // Assertions specific to an array with 10 elements
    }
+2

This answer builds on the previous one and addresses the more specific question that was asked in the update.

Pex generates tests for all code paths, but it knows nothing about the intent of your code. You still need to arrange / act / assert in the PUT (parameterized unit test) to tell Pex how you think your code should work. Alternatively, you can add an assumption before the arrange step, following an assume / arrange / act / assert pattern.

In your example, I would start with this PUT:

    [PexMethod(MaxRunsWithoutNewTests = 200)]
    [PexAllowedException(typeof(NullReferenceException))]
    public int MyMin([PexAssumeUnderTest]Class1 target, int[] data)
    {
        //assume
        PexAssume.IsTrue(data.Length == 1);
        //arrange
        data[0] = 0;
        //act
        int result = target.MyMin(data);
        //assert
        PexAssert.AreEqual(0, result);
        return result;
    }

The results show that only 3/8 blocks were covered and that test 2 failed: expected "0", but got "1000".

This tells me that I need to look at the code to find out why I got 1000.

I see that I start the for loop with 1 instead of 0. Therefore, I fix the code and run IntelliTest again.
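For completeness, the corrected code presumably looks like this; only the loop-start bug mentioned above is changed, and the initial value of 1000 from the original snippet is kept as it was:

    public int MyMin(int[] data)
    {
        int min = 1000;
        for (int i = 0; i < data.Length; i++)   // was: int i = 1
        {
            if (data[i] < min)
            {
                min = data[i];
            }
        }
        return min;
    }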

This time I get two passing tests, which is good. But only 6/8 blocks were covered. Did I miss something?

I create a new PUT that lets Pex generate the data; it looks like this:

    [PexMethod(MaxRunsWithoutNewTests = 200)]
    [PexAllowedException(typeof(NullReferenceException))]
    public int MyMin2([PexAssumeUnderTest]Class1 target, int[] data)
    {
        //assume
        //act
        int result = target.MyMin(data);
        //assert
        return result;
    }

Now I have 7 unit tests that exercise all code blocks, and all tests pass.

You will notice that each PUT produces its own set of tests.

0
