If the DAL is not responsible for enforcing certain application rules in the data store, then there is no need to guarantee that the test data complies with those higher-level rules. The unit tests only need to verify that the DAL applies the rules that are its own responsibility - presumably things like staying within the database constraints, data types, and so on. The test data only has to meet the prerequisites of the DAL itself in order to constitute a valid test case. Higher-level rules are checked by the application-level unit tests, in which the DAL will be mocked or stubbed. Under these assumptions, the DAL tests are likely to use a fairly static dataset, or one generated by trivial code.
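A minimal sketch of that separation, using Python's standard unittest.mock. The OrderService/OrderDal names and the place_order/insert_order methods are purely illustrative assumptions, not anything from your codebase; the point is only that the higher-level rule is tested with the DAL stubbed out.

```python
import unittest
from unittest.mock import Mock


class OrderService:
    """Hypothetical application layer: enforces the higher-level business rule."""

    def __init__(self, dal):
        self.dal = dal

    def place_order(self, customer_id, amount):
        # The higher-level rule lives here, not in the DAL.
        if amount <= 0:
            raise ValueError("order amount must be positive")
        return self.dal.insert_order(customer_id, amount)


class OrderServiceTest(unittest.TestCase):
    def test_rejects_non_positive_amounts(self):
        dal = Mock()  # the DAL is mocked; no database involved
        service = OrderService(dal)
        with self.assertRaises(ValueError):
            service.place_order(customer_id=1, amount=0)
        dal.insert_order.assert_not_called()

    def test_delegates_valid_orders_to_dal(self):
        dal = Mock()
        dal.insert_order.return_value = 42  # stubbed row id
        service = OrderService(dal)
        self.assertEqual(service.place_order(1, 99.50), 42)
        dal.insert_order.assert_called_once_with(1, 99.50)


if __name__ == "__main__":
    unittest.main()
```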
It is possible that the "legacy" nature of the application makes it difficult, if not impossible, to unit test the application layer and the DAL separately. In effect, the two layers together would form a single (if complex) "unit". In that case, it is acceptable (or perhaps "valid" is the right word) to create test data using the application layer where appropriate. Such generated data effectively provides additional test cases for the combined "unit". DAL test failures caused by application-level regressions should then be investigated as candidate bugs in one layer, the other, or both. That said, any time spent separating the two layers into independently testable units is likely to pay dividends in the long run.
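For the combined-"unit" case, here is a sketch under the same illustrative assumptions, with the real DAL running against a throwaway in-memory SQLite database. The test data is produced by driving the application layer itself, so it satisfies both the application rules and the DAL's prerequisites.

```python
import sqlite3
import unittest


class OrderService:
    """Hypothetical application layer, as in the previous sketch."""

    def __init__(self, dal):
        self.dal = dal

    def place_order(self, customer_id, amount):
        if amount <= 0:
            raise ValueError("order amount must be positive")
        return self.dal.insert_order(customer_id, amount)


class OrderDal:
    """Hypothetical DAL: owns the schema and its database-level constraints."""

    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS orders ("
            "id INTEGER PRIMARY KEY, customer_id INTEGER NOT NULL, "
            "amount REAL NOT NULL CHECK (amount > 0))"
        )

    def insert_order(self, customer_id, amount):
        cur = self.conn.execute(
            "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
            (customer_id, amount),
        )
        return cur.lastrowid

    def total_for_customer(self, customer_id):
        row = self.conn.execute(
            "SELECT COALESCE(SUM(amount), 0) FROM orders WHERE customer_id = ?",
            (customer_id,),
        ).fetchone()
        return row[0]


class CombinedUnitTest(unittest.TestCase):
    def test_data_created_via_application_layer(self):
        conn = sqlite3.connect(":memory:")
        dal = OrderDal(conn)
        service = OrderService(dal)
        # Generate the test data through the application layer, so a failure
        # here is a candidate bug in either layer, or both.
        for amount in (10.0, 15.5):
            service.place_order(customer_id=7, amount=amount)
        self.assertAlmostEqual(dal.total_for_customer(7), 25.5)


if __name__ == "__main__":
    unittest.main()
```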