I am weighing two different approaches to structuring my acceptance tests. We have a Silverlight project that calls a service layer (I own both sides). Because of how Silverlight works, test code that references Silverlight assemblies has to live in a separate test project from the non-Silverlight tests.
1) Capture all of the acceptance criteria we have come up with in feature files. Tag scenarios to indicate the environment they run in (@server, @client, etc.). Include manual tests in the feature files as well, tagged @manual.
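For illustration, a minimal sketch of what approach 1 might look like in a single feature file; the feature, tag names, and scenario text here are hypothetical, not from the original question. One relevant detail: SpecFlow maps tags onto the unit-test framework's categories (NUnit categories, for example), so each environment's test runner can filter down to just its own tags.

```gherkin
Feature: Order submission
  Acceptance criteria gathered by the BAs, tagged by execution environment.

  @server
  Scenario: Service layer rejects an empty order
    Given an order with no line items
    When the order is submitted to the service
    Then the service returns a validation error

  @client
  Scenario: Silverlight client disables the submit button
    Given an order form with no line items
    Then the submit button is disabled

  @manual
  Scenario: Printed confirmation matches the on-screen order
    Given a submitted order
    When the confirmation page is printed
    Then the printout is verified against the screen by hand
```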
Pros: All of the tests written by the BAs are in one place for viewing and potential editing.
Cons: Some scenarios might be better covered by unit tests or integration tests, and NUnit might be a better tool for those than SpecFlow.
2) Write down acceptance criteria for everything, but then automate some with SpecFlow, some with unit tests, some with integration tests, etc. Only the SpecFlow-automated scenarios live in SpecFlow. Scenarios covered by unit tests, integration tests, or manual testing could still be written in feature files, but they would never execute any code; they would exist purely as documentation.
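Under approach 2, one way to keep documentation-only scenarios in the feature files without having them run is SpecFlow's built-in @ignore tag, which generates the scenario as an ignored test rather than a passing or failing one (the scenario text below is hypothetical):

```gherkin
@manual @ignore
Scenario: Support can look up an order by phone
  Given a customer calls in with an order number
  Then support can retrieve the order in the admin tool
```

This keeps the scenario visible to the BAs in the feature file while making it obvious in test results that it was deliberately skipped, not silently missing.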
Pros: Less friction and overhead for developers. Each kind of test gets automated with the best tool we have for it.
Cons: We would have to keep the scenarios that SpecFlow doesn't execute in sync with whatever code actually automates them.
Thoughts? Is there another approach I haven't considered? How do you handle this?