Duplicate code in unittest test file

I have a test file that looks like this:

    import unittest

    class MyTestCase(unittest.TestCase):

        def test_input01(self):
            input = read_from_disk('input01')
            output = run(input)
            validated_output = read_from_disk('output01')
            self.assertEquals(output, validated_output)

        def test_input02(self):
            input = read_from_disk('input02')
            # ...

        # and so on, for 30 inputs, from input01 to input30

Now I understand that test code can be a little repetitive, since simplicity is more important than brevity. But this gets really error-prone: when I decided to change the signature of some of the functions used here, I had to make the same change in all 30 places.

I could reorganize this into a loop over known inputs, but I want each input to remain a separate test, so I thought I should use test_inputxx methods.

What am I doing wrong?

4 answers

Write a helper function to remove the repetition from test cases:

    class MyTestCase(unittest.TestCase):

        def run_input_output(self, suffix):
            input = read_from_disk('input' + suffix)
            output = run(input)
            validated_output = read_from_disk('output' + suffix)
            self.assertEquals(output, validated_output)

        def test_input01(self):
            self.run_input_output('01')

        def test_input02(self):
            self.run_input_output('02')

        def test_input03(self):
            self.run_input_output('03')

I like Ned Batchelder's solution. But for posterity, if you change the number of inputs often, you can do something like this:

    class MyTestCase(unittest.TestCase):
        pass

    # Attach one test method per input to the class itself, so the
    # unittest loader discovers test_input01 .. test_input30 by name.
    # The suffix=i default argument freezes the current loop value;
    # without it, every generated test would see the final value of i.
    for i in range(1, 31):
        def test(self, suffix=i):
            input = read_from_disk('input%02d' % suffix)
            output = run(input)
            validated_output = read_from_disk('output%02d' % suffix)
            self.assertEquals(output, validated_output)
        setattr(MyTestCase, 'test_input%02d' % i, test)
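Because the generated methods end up on the class itself, the standard loader picks up each test_inputNN as a separate test. A minimal way to run them (assuming the code above lives in a normal, importable test module):

    # Running with -v lists each generated test individually, e.g.
    #     python -m unittest -v my_test_module   (module name is hypothetical)
    if __name__ == '__main__':
        unittest.main()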

How about something like this, so that it tells you which input failed?

    class MyTestCase(unittest.TestCase):

        def test_all_inputs(self):
            for i in range(1, 31):
                input = read_from_disk('input%.2d' % i)
                output = run(input)
                validated_output = read_from_disk('output%.2d' % i)
                self.assertEquals(output, validated_output,
                                  'failed on test case %.2d' % i)

My favorite tool for this kind of test is parameterized test cases, which look like this:

    from nose_parameterized import parameterized

    class MyTestCase(unittest.TestCase):

        @parameterized.expand([(1,), (2,), (3,)])
        def test_read_from_disk(self, file_number):
            input = read_from_disk('input%02d' % file_number)
            expected = read_from_disk('output%02d' % file_number)
            actual = run(input)
            self.assertEquals(expected, actual)

You write the test case to take whatever parameters you need, wrap the test function in the @parameterized.expand decorator, and provide the sets of input parameters inside the expand() call. The test runner then runs an individual test for each set of parameters!
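For the question's 30 numbered input files, you would not type the parameter list out by hand; it can be built programmatically. A small sketch along those lines, reusing the read_from_disk and run helpers from the question:

    import unittest
    from nose_parameterized import parameterized

    class MyTestCase(unittest.TestCase):

        # Builds [('01',), ('02',), ..., ('30',)] so every input file
        # becomes its own named, individually reported test.
        @parameterized.expand([('%02d' % i,) for i in range(1, 31)])
        def test_input(self, suffix):
            input = read_from_disk('input' + suffix)
            expected = read_from_disk('output' + suffix)
            self.assertEqual(run(input), expected)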

In these examples there is only one parameter, so the expand() call has an unfortunate extra level of nesting, but the pattern becomes especially nice when your use case is a little more complicated and you use param objects to provide args and kwargs to your test function:

    from nose_parameterized import parameterized, param

    class MyTestCase(unittest.TestCase):

        @parameterized.expand([
            param(english='father', spanish='padre'),
            param(english='taco', spanish='taco'),
            ('earth', 'tierra'),  # A regular tuple still works too, but is less readable
            # ...
        ])
        def test_translate_to_spanish(self, english, spanish):
            self.assertEqual(translator(english), spanish)

The pattern lets you specify many sets of input parameters easily and clearly, while writing the testing logic only once.

I use nose for testing, so my example uses nose-parameterized, but there is also a unittest-compatible version.
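If memory serves, that unittest-compatible version is published on PyPI simply as parameterized (treat the package name as an assumption and check PyPI); the decorator is used exactly as above, only the import changes:

    # Assumed successor package: pip install parameterized
    from parameterized import parameterized, param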

