How do I write a unit test where each test case has a different input but does the same thing?

I need to create unit tests for a Python class. I have a database of inputs and the expected results that the UUT (unit under test) must produce for those inputs.

Here is the pseudo code of what I want to do:

    for i = 1 to NUM_TEST_CASES:
        load input for test case i
        execute UUT on the input and save the output of the run
        load expected result for test case i
        compare output of the run with the expected result

Can I achieve this with unittest, or is there a better test framework for this?

+6
python unit-testing
5 answers

The testing style you describe is a bit unusual for unit testing in general. Unit tests do not, typically, load test data or expected results from external files. Usually both are simply hard-coded in the unit test.

That does not mean your plan won't work. It's just not typical.

You have two options.

  • (What we do.) Write a little script that performs "load input for test case i" and "load expected result for test case i", and use it to generate the required unittest code. (We use Jinja2 templates to generate the Python code from the source files.)

    Then delete the source files. Yes, delete them. They will only confuse you.

    You still end up with proper unittest files in the "typical" form, with static test-case data and expected results.

  • Write your setUp method to perform "load input for test case i" and "load expected result for test case i". Write your test method to exercise the UUT.

It might look like this.

    class OurTest( unittest.TestCase ):
        def setUp( self ):
            self.load_data()
            self.load_results()
            self.uut = ... UUT ...
        def runTest( self ):
            ... exercise UUT with source data ...
            ... check results, using self.assertXXX methods ...

Want to run this many times? Here is one way to do something like that.

    class Test1( OurTest ):
        source_file = 'this'
        result_file = 'that'

    class Test2( OurTest ):
        source_file = 'foo'
        result_file = 'bar'

This will let the main unittest program locate and run your tests.
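To make that concrete, here is a minimal sketch of the pattern, assuming the inputs and expected results live in JSON files and that the UUT is a callable produced by a hypothetical make_uut() factory (those file names, the JSON format, and make_uut are illustration-only assumptions, not part of the answer itself):

    import json
    import unittest
    from myproject import make_uut   # hypothetical factory for the unit under test

    class OurTest(unittest.TestCase):
        source_file = None   # subclasses override these
        result_file = None

        def setUp(self):
            if self.source_file is None:
                self.skipTest("base class carries no test data")
            with open(self.source_file) as f:
                self.data = json.load(f)        # "load input for test case i"
            with open(self.result_file) as f:
                self.expected = json.load(f)    # "load expected result for test case i"
            self.uut = make_uut()

        def runTest(self):
            # Exercise the UUT (assumed callable) and compare against the expected result.
            self.assertEqual(self.expected, self.uut(self.data))

    class Test1(OurTest):
        source_file = 'case1_input.json'
        result_file = 'case1_expected.json'

    class Test2(OurTest):
        source_file = 'case2_input.json'
        result_file = 'case2_expected.json'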

+4

We do something like this to run our integration (regression) tests within the unittest framework (actually, an in-house extension of it that gives us huge advantages, such as running tests in parallel on a cluster of machines, and so on -- that added value is the main reason we are so keen on using the unittest framework).

Each test is represented by a file (the parameters to use for that test, followed by the expected results). Our integration_test script reads all such files from a directory, parses each of them, and then calls:

    def addtestmethod(testcase, uut, testname, parameters, expresults):
        def testmethod(self):
            results = uut(parameters)
            self.assertEqual(expresults, results)
        testmethod.__name__ = testname
        setattr(testcase, testname, testmethod)

We start with an otherwise empty test case class:

    class IntegrationTest(unittest.TestCase):
        pass

and then call addtestmethod(IntegrationTest, ...) in a loop in which we read all the relevant files and parse them to get the name, parameters, and expected results.
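A minimal sketch of that driver loop, using addtestmethod and IntegrationTest from above (the testcases/ directory, the JSON case-file format, and sum standing in for the real UUT are assumptions for illustration only):

    import glob
    import json

    # Assumed format: each *.case file holds one JSON object with
    # "name", "parameters", and "expected" keys; sum stands in for the real UUT.
    for path in glob.glob('testcases/*.case'):
        with open(path) as f:
            case = json.load(f)
        addtestmethod(IntegrationTest, sum, case["name"], case["parameters"], case["expected"])

After this loop runs, IntegrationTest carries one generated test method per case file, so any unittest runner can discover and execute them as usual.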

Finally, we call our own specialized test runner, which does the hard work (distributing tests across the available machines in the cluster, collecting results, and so on). We didn't want to reinvent that rich added-value wheel, so we make each test case look as much like a typical "hand-coded" one as necessary in order to "trick" the existing test machinery into working for us ;-).

Unless you have specific reasons (a good existing test runner or similar infrastructure) to want the unittest approach for your (integration?) tests, you may find that your life is easier with a different approach. Nevertheless, it is quite viable, and we are quite happy with its results (mostly, blazingly fast runs of large sets of integration / regression tests!).

+3

It seems to me that pytest has just what you need.

You can parametrize the tests so that the same test runs once for each of your inputs, and all it takes is a decorator (no loops, etc.).

Here is a simple example:

    import pytest

    @pytest.mark.parametrize("test_input,expected", [
        ("3+5", 8),
        ("2+4", 6),
        ("6*9", 42),
    ])
    def test_eval(test_input, expected):
        assert eval(test_input) == expected

Here parametrize takes two arguments -- the parameter names as a string, and the values of those parameters as an iterable.

test_eval will then be called once for each list item.
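Since your inputs and expected results live on disk, the parameter list can just as well be built from files instead of hard-coded literals. A sketch, assuming a hypothetical testcases/ directory of JSON files with "input" and "expected" keys (the layout and key names are assumptions, not pytest requirements):

    import json
    from pathlib import Path

    import pytest

    # Hypothetical layout: testcases/*.json, each holding {"input": ..., "expected": ...}.
    CASE_FILES = sorted(Path("testcases").glob("*.json"))

    def load_case(path):
        case = json.loads(path.read_text())
        return case["input"], case["expected"]

    @pytest.mark.parametrize(
        "test_input,expected",
        [load_case(p) for p in CASE_FILES],
        ids=[p.stem for p in CASE_FILES],   # name each test after its case file
    )
    def test_eval(test_input, expected):
        assert eval(test_input) == expected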

+1

Perhaps you can use doctest for this. Knowing your inputs and outputs (and being able to map the case number to the function name), you should be able to generate a text file like this:

    >>> from XXX import function_name1
    >>> function_name1(input1)
    output1
    >>> from XXX import function_name2
    >>> function_name2(input2)
    output2
    ...

Then just use doctest.testfile('cases.txt'). It may be worth a try.
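A small sketch of how such a file could be generated and run; the CASES table is hypothetical and the XXX module name is kept as the placeholder from the example above:

    import doctest

    # Hypothetical case table: (function name, input literal, expected output literal).
    CASES = [
        ("function_name1", "input1", "output1"),
        ("function_name2", "input2", "output2"),
    ]

    # Generate the doctest file in the format shown above...
    with open("cases.txt", "w") as f:
        for name, arg, expected in CASES:
            f.write(">>> from XXX import {}\n".format(name))
            f.write(">>> {}({})\n".format(name, arg))
            f.write("{}\n".format(expected))

    # ...and run every generated case; failures are reported like normal doctest failures.
    doctest.testfile("cases.txt", module_relative=False)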

0

You can also take a look at my answer to this question. Again, I'm trying to do regression testing rather than unit testing per se, but the unittest framework is good for both.

In my case, I had about a dozen input files covering a fair distribution of the various use cases, and about a dozen test functions that I wanted to call on each of them.

Instead of writing 72 different tests, most of which would have been identical except for the input parameters and the result data, I created a results dictionary (keyed by the input parameters, with a per-function dictionary of expected results as the value). Then I wrote one TestCase class to test each of the 6 functions and replicated it across the 12 test files, adding the TestCase to the test suite several times.
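A hedged sketch of that structure (the EXPECTED table, the input file names, and the func_a / func_b imports are hypothetical stand-ins; the real version covered 6 functions and 12 files):

    import unittest
    from myproject import func_a, func_b   # hypothetical functions under test

    # Hypothetical results dictionary: input file -> expected result per function.
    EXPECTED = {
        "input1.dat": {"func_a": 1, "func_b": 2},
        "input2.dat": {"func_a": 3, "func_b": 4},
    }

    class FunctionsTest(unittest.TestCase):
        def __init__(self, methodName, input_file):
            super().__init__(methodName)
            self.input_file = input_file
            self.expected = EXPECTED[input_file]

        def test_func_a(self):
            self.assertEqual(self.expected["func_a"], func_a(self.input_file))

        def test_func_b(self):
            self.assertEqual(self.expected["func_b"], func_b(self.input_file))

    def load_tests(loader, tests, pattern):
        # Add the same TestCase class to the suite once per input file.
        suite = unittest.TestSuite()
        for input_file in EXPECTED:
            for name in loader.getTestCaseNames(FunctionsTest):
                suite.addTest(FunctionsTest(name, input_file))
        return suite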

0
