I think you should ask yourself what upfront testing should accomplish.
First you write a (set of) tests without an implementation — perhaps also some rainy-day scenarios.
All these tests must fail in order to be correct tests. So you want to achieve two things: 1) make sure that your implementation is correct; 2) check that your tests check correctly.
Now, if you are doing upfront TDD, you want to run all your tests: the implemented parts as well as the NYI (not yet implemented) parts. Your overall test run passes if: 1) all implemented tests succeed, and 2) all NYI tests fail.
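The question is about MSTest, but as a language-neutral sketch, Python's unittest has a built-in mechanism for exactly this contract: mark the NYI tests as expected failures, and the run goes red if an implemented test fails or an NYI test unexpectedly passes. The functions and tests below are made-up examples, not from the question.

```python
import unittest

# Hypothetical implementation status: add() exists, multiply() does not yet.
def add(a, b):
    return a + b

def multiply(a, b):
    raise NotImplementedError

class CalculatorTests(unittest.TestCase):
    def test_add(self):
        # Implemented: this test must pass for the run to be green.
        self.assertEqual(add(2, 3), 5)

    @unittest.expectedFailure
    def test_multiply(self):
        # NYI: the run stays green only while this test fails.
        # Once multiply() is implemented, this becomes an
        # "unexpected success" and the run is reported as failed.
        self.assertEqual(multiply(2, 3), 6)
```

Run with `python -m unittest`: an expected failure keeps the run green, while an unexpected success turns it red — which is precisely the "NYI tests must fail" rule.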
After all, it would be a unit-testing omission if your unit tests succeeded before there was an implementation, right?
What you want is something like mail from your continuous integration server: it runs the tests for all implemented and non-implemented code, and sends a notification if any implemented code fails or any non-implemented code succeeds. Both are undesirable results.
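That alert condition can be written down as a small decision function. This is a minimal sketch, assuming the CI step hands us two maps of test name to pass/fail outcome; the function name and result format are my own invention:

```python
def run_should_alert(implemented_results, nyi_results):
    """Decide whether CI should send a failure mail.

    implemented_results / nyi_results: dicts mapping test name -> bool (passed).
    Alert if any implemented test failed OR any NYI test passed.
    """
    implemented_broken = [name for name, passed in implemented_results.items()
                          if not passed]
    nyi_passing = [name for name, passed in nyi_results.items() if passed]
    return bool(implemented_broken or nyi_passing)
```

For example, `run_should_alert({"add": True}, {"multiply": False})` is the only quiet state; breaking `add` or implementing `multiply` without moving its test both trigger a mail.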
Simply marking the NYI tests with [Ignore] will not do the job. Neither will Assert statements that stop at the first failure without running the other tests in the test class.
Now how to do it? I think this requires a more advanced organization of your tests, and some additional mechanism to achieve these goals.
I think you need to separate your tests: one group that must run completely and pass, and another group that must run completely but is expected to fail.
The ideas are to split your tests over several assemblies, or to use some grouping of tests (ordered tests in MSTest might do the job).
However, getting the CI build to send mail when not all tests in the NYI group fail is not easy or straightforward.
Roland Roos Oct 07 '17 at 12:06