Uncritical failures

I use Python's built-in unittest module , and I want to write some tests that are not critical.

I mean, if my program passes such tests, that's great! However, if this does not work, this is not a problem, the program will still work.

For example, my program is designed to work with user type "A". If it does not work with "A", it will break. However, for convenience, most of them should also work with another type “B”, but this is not necessary. If it does not work with "B", then it is not broken (because it still works with "A", which is its main purpose). The inability to work with “B” is not critical, I will simply miss the “bonus feature” that I might have.

Another (hypothetical) example is an OCR record. The algorithm should recognize most of the images from the tests, but it is normal if some of them fail. (and no, I do not write OCR)

Is there a way to write non-critical tests in unittest (or another testing framework)?

+4
source share
10 answers

Python 2.7 (and 3.1) added support to skip some test methods or test cases, and also flagged some tests as expected failure .

http://docs.python.org/library/unittest.html#skipping-tests-and-expected-failures

Tests marked as expected failure will not be considered an error in TestResult.

+1
source

As a practical matter, I would probably use print statements to indicate failure in this case. A better solution is to use warnings:

http://docs.python.org/library/warnings.html

(.. "B" ).

http://docs.python.org/library/logging.html

Edit:

, Django, , , , , . , SHOULD (.. , , ), . , , , , . , - , , , , .

+7

, unittest, - . , , , , , . , ... , , , .

, , .

+4

: , .

, "" , , . , "" .

OCR , -, , : "assert success_rate > 8.5", , .

+4

unittest , :

unittest.main() , . , :

suite = unittest.TestLoader().loadTestsFromTestCase(TestSequenceFunctions)
unittest.TextTestRunner(verbosity=2).run(suite)

TestSuite . , . , , .

+3

. , , . , , .

(, , unittest) . : , , .

, , . , , " , ", ( ) - .

, , , , ( , , ).

ryber idea , wcoenen answer. TextTestRunner . : TextTestRunner , .

class _TerseTextTestResult(unittest._TextTestResult):
    def printErrorList(self, flavour, errors):
        for test, err in errors:
            #self.stream.writeln(self.separator1)
            self.stream.writeln("%s: %s" % (flavour,self.getDescription(test)))
            #self.stream.writeln(self.separator2)
            #self.stream.writeln("%s" % err)


class TerseTextTestRunner(unittest.TextTestRunner):
    def _makeResult(self):
        return _TerseTextTestResult(self.stream, self.descriptions, self.verbosity)


if __name__ == '__main__':
    sys.stderr.write("Running non-critical tests:\n")
    non_critical_suite = unittest.TestLoader().loadTestsFromTestCase(TestSomethingNonCritical)
    TerseTextTestRunner(verbosity=1).run(non_critical_suite)

    sys.stderr.write("\n")

    sys.stderr.write("Running CRITICAL tests:\n")
    suite = unittest.TestLoader().loadTestsFromTestCase(TestEverythingImportant)
    unittest.TextTestRunner(verbosity=1).run(suite)

- , - , . , , .

+2

, , , test_unit ( , , ), ( ).

, , .

- / - ( ).

+1

- "B" ( - , ?) "B" . , (, !), B. , git ( ), / - , .

, , -, . : " , " B "?" , , . , , , , . "B" "B" , , "B" .

, , logger . , , , , , . , ( , , , , .. ..). ", !" app, , , , .

0

You can write your test so that they calculate the speed of success. With OCR, you can use the 1000 image code and require 95% to be successful.

If your program should work with type A, then if this does not work, the test will fail. If this is not required to work with B, what is the value of such a test?

-1
source

All Articles