Python test framework with support for non-fatal assertions

I'm evaluating test frameworks for automated system tests, and I'm looking for a Python option. In py.test or nose I don't see anything like the EXPECT macros I know from the Google Test framework. I would like to make several assertions in one test without aborting the test on the first failure. Am I missing something in these frameworks, or does it just not work that way? Does anyone have suggestions for Python frameworks that can be used for automated system tests?
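For illustration, here is the behaviour being asked about, sketched in Python (add is a hypothetical function under test):

    def test_addition():
        # Google Test's EXPECT_EQ would record a failure and keep going;
        # a plain Python assert aborts the whole test at the first failure.
        assert add(2, 2) == 4   # if this fails, the test stops here...
        assert add(0, 1) == 1   # ...and this check never runs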

+4

5 answers

I needed something like this for the functional testing I do with nose. I eventually came up with this:

    import sys
    import traceback

    def raw_print(fmt, *args):
        # write without the trailing newline that print would add
        sys.stdout.write(fmt % args)

    class DeferredAsserter(object):
        def __init__(self):
            self.broken = False

        def assert_equal(self, expected, actual):
            outstr = '%s == %s...' % (expected, actual)
            raw_print(outstr)
            try:
                assert expected == actual
            except AssertionError:
                raw_print('FAILED\n\n')
                self.broken = True
            except Exception:
                raw_print('ERROR\n')
                traceback.print_exc()
                self.broken = True
            else:
                raw_print('PASSED\n\n')

        def invoke(self):
            assert not self.broken

In other words, it prints a line indicating whether each check passed or failed. At the end of the test you call the invoke method, which performs the one real assertion. It's definitely not pretty, but I haven't seen a Python testing framework that can handle this kind of testing. I also haven't gotten around to figuring out how to write a nose plugin to do this sort of thing. :-/
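For illustration, a minimal usage sketch (the test name and the values compared are hypothetical):

    def test_system_behaviour():
        da = DeferredAsserter()
        da.assert_equal(4, 2 + 2)        # prints PASSED, keeps going
        da.assert_equal('foo', 'bar')    # prints FAILED, keeps going
        da.invoke()                      # the single real assert at the end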

+2

You asked for suggestions, so I suggest Robot Framework.

+1

Oddly enough, it sounds like you're looking for something like claft (command and filter tester), or at least something like it but far more mature.

claft is (so far) just a toy that I wrote to help students with programming exercises. The idea is to provide exercises with simple configuration files that express the program's requirements in terms that are reasonably readable by humans (declarative rather than programmatic) while also being suitable for automated testing.

claft runs all the defined tests, passing arguments and input to each, and checks return codes, output ( stdout ), and error messages ( stderr ) against regular expression patterns. It collects all the failures into a list and prints the whole list at the end of each batch.
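A minimal Python 3 sketch of that pattern, not claft's actual code; the test-spec fields here are hypothetical. Each test runs a command, checks its return code, and matches stdout/stderr against regex patterns; failures are collected and reported at the end instead of aborting the batch:

    import re
    import subprocess

    def run_suite(tests):
        failures = []
        for t in tests:  # each t is a hypothetical dict describing one test
            proc = subprocess.run(t["cmd"], input=t.get("stdin", ""),
                                  capture_output=True, text=True)
            if proc.returncode != t["returncode"]:
                failures.append(f"{t['name']}: return code {proc.returncode}")
            if not re.search(t["stdout_pattern"], proc.stdout):
                failures.append(f"{t['name']}: stdout mismatch")
            if not re.search(t["stderr_pattern"], proc.stderr):
                failures.append(f"{t['name']}: stderr mismatch")
        for failure in failures:  # report the whole list at the end
            print(failure)
        return failures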

It does not yet drive arbitrary dialogs of interleaved I/O; so far it just feeds in the input and then reads all the output and errors. It also does not use timeouts and, in fact, does not even trap failed attempts to execute the target program. (I did say it was still just a toy, right?) I also have not yet implemented support for setup, teardown, and external check scripts (though I plan to).

Brian's suggestion of Robot Framework may be better for your needs, although a quick look suggests it is far more elaborate than I want for my purposes. (I need everything to stay simple enough that students new to programming can focus on their exercises rather than on building up a test harness.)

You're welcome to look at claft and use it, or derive your own solution from it (it's BSD licensed). Obviously, contributions would be welcome. It's on bitbucket ( http://www.bitbucket.org/ ), so you can use Mercurial to clone and fork your own repository ... and send a pull request if you ever want me to look at merging your changes back into my repo.

Otherwise, perhaps I've misunderstood your question.

+1

Why not something like this (shown with unittest , but it should work in any framework):

    class multiTests(MyTestCase):
        def testMulti(self, tests):
            tests( a == b )
            tests( frobnicate() )
            ...

assuming you implemented MyTestCase so that the test function is wrapped with something like:

    testlist = []
    x.testMulti(testlist.append)
    assert all(testlist)
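A self-contained sketch of one way to realize this idea; the helper name run_multi and the test body are hypothetical, and the callback is passed to a nested function rather than to the test method itself:

    import unittest

    class MyTestCase(unittest.TestCase):
        # Hypothetical helper: run a function that records booleans via a
        # callback, then make one real assertion over all the results.
        def run_multi(self, checks):
            results = []
            checks(results.append)   # recording a False never raises
            self.assertTrue(all(results),
                            "some checks failed: %s" % (results,))

    class MultiTests(MyTestCase):
        def test_multi(self):
            def checks(record):
                record(2 + 2 == 4)   # recorded as True
                record(1 + 1 == 3)   # recorded as False, keeps going
            self.run_multi(checks)

    if __name__ == "__main__":
        unittest.main()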
0

nose only stops at the first failure if you pass the -x option on the command line.

test.py:

    def test1():
        assert False

    def test2():
        assert False

without the -x option:

    C:\temp\py> C:\Python26\Scripts\nosetests.exe test.py
    FF
    ======================================================================
    FAIL: test.test1
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "C:\Python26\lib\site-packages\nose-0.11.1-py2.6.egg\nose\case.py", line 183, in runTest
        self.test(*self.arg)
      File "C:\temp\py\test.py", line 2, in test1
        assert False
    AssertionError

    ======================================================================
    FAIL: test.test2
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "C:\Python26\lib\site-packages\nose-0.11.1-py2.6.egg\nose\case.py", line 183, in runTest
        self.test(*self.arg)
      File "C:\temp\py\test.py", line 5, in test2
        assert False
    AssertionError

    ----------------------------------------------------------------------
    Ran 2 tests in 0.031s

    FAILED (failures=2)

with the -x option:

    C:\temp\py> C:\Python26\Scripts\nosetests.exe test.py -x
    F
    ======================================================================
    FAIL: test.test1
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "C:\Python26\lib\site-packages\nose-0.11.1-py2.6.egg\nose\case.py", line 183, in runTest
        self.test(*self.arg)
      File "C:\temp\py\test.py", line 2, in test1
        assert False
    AssertionError

    ----------------------------------------------------------------------
    Ran 1 test in 0.047s

    FAILED (failures=1)

Perhaps you should take a look at the nose documentation.

-1
