We have written parsers for different formats of scientific data in Perl. I recently added a test suite with a parser_*.t file for each format and subformat.
Of course, the parser APIs are exactly the same; only the data read from the sample files used for testing differs. To keep the test files simple, I wrote a sub that receives the parser object and a hash structure representing the expected data. It looks like this:
    my $parser = MyApp::Parser->new($file);
    test_nested_objects($parser, {
        property1          => "value",
        property2          => 123,
        subobject_accessor => {
            property3 => "foobar",
        },
    });
The sub test_nested_objects walks through the hash and runs tests for every property defined in it: for example, whether subobject_accessor can be called, whether it returns an object, and whether property3 can be called on that object.
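For context, here is a minimal sketch of roughly what test_nested_objects does internally (simplified, with illustrative checks rather than my exact code):

    use Test::More;

    sub test_nested_objects {
        my ($object, $expected) = @_;

        for my $accessor (sort keys %$expected) {
            my $want = $expected->{$accessor};

            # one test: the accessor must exist on the object
            can_ok($object, $accessor);
            my $got = $object->$accessor;

            if (ref $want eq 'HASH') {
                # one test plus recursion: the accessor must return
                # something we can descend into
                ok(ref $got, "$accessor returns an object");
                test_nested_objects($got, $want);
            }
            else {
                # one test: plain value comparison
                is($got, $want, "$accessor is '$want'");
            }
        }
    }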
I counted how many tests each *.t file performs and added tests => 123 (with the appropriate count) to all the *.t files. Now I have added some checks to the shared sub, and all the plans are wrong.
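Concretely, every *.t file currently starts with a hard-coded plan along these lines (the number differs per file, of course):

    use Test::More tests => 123;    # breaks whenever test_nested_objects gains or loses a check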
How can I make my plan aware of these subtests? I would like to achieve the following:

- the number of tests is declared before running them, so I can watch progress
- the total number is updated automatically, without manually changing the numbers whenever I edit the sub
- the individual tests from the sub are visible when running prove (hiding the tests inside the sub and returning only 0 or 1 is not acceptable, because I really need to know what exactly is wrong with the parsed data)
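To make the question concrete: below is a rough sketch of the direction I can imagine, wrapping each parser check in Test::More's subtest so that it counts as a single test in the outer plan. I do not know whether this can satisfy all three points above (in particular the visibility of the individual checks under prove), which is what I am asking about:

    use Test::More tests => 1;    # the whole nested comparison counts as one top-level test

    subtest 'MyApp::Parser against sample file' => sub {
        my $parser = MyApp::Parser->new($file);
        test_nested_objects($parser, {
            property1 => "value",
            property2 => 123,
        });
        done_testing();    # no fixed count needed inside the subtest
    };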
I hope this makes sense. Sorry for the long story, but I thought the question would probably not be understandable without some background.
Daniel Böhmer