Yes. :)
In VS2008, when you create a test project, Visual Studio also creates a test metadata (.vsmdi) file. A solution can have only one metadata file; it acts as the manifest of all tests generated across all test projects in the solution. Opening the metadata file launches the GUI test list editor, where you can edit and run tests.
In the test list editor you can create test lists (for example, UnitTestList and IntegrationTestList) and assign individual tests to a specific list. By default, the editor also shows the "All Loaded Tests" and "Tests Not in a List" views to help with the assignment; use them to find tests and group them into lists. Remember that a test can belong to only one list.
There are two ways to invoke a test list:
- In Visual Studio, each list can be run individually from the test list editor.
- From the command line, MSTest can be invoked with a specific list.
The first option fits the developer's daily workflow; the second suits automated build processes.
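For example, a command-line run against a list might look like the following. The file name `MySolution.vsmdi`, the list name `UnitTestList`, and the results file name are hypothetical; substitute your own:

```shell
:: Run only the tests in the "UnitTestList" test list (names are hypothetical).
:: /testmetadata points MSTest at the solution's .vsmdi manifest,
:: /testlist selects which list to execute,
:: /resultsfile writes a .trx results log for later inspection.
MSTest.exe /testmetadata:MySolution.vsmdi ^
           /testlist:UnitTestList ^
           /resultsfile:UnitTestResults.trx
```

The same command, pointed at a different list, is what a build script would call.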
I set up something similar on the last project I worked on.
This feature is very valuable*.
Ideally, we would like to run every conceivable test whenever we modify the code base, because that gives us the fastest feedback on our changes as we make them.
In practice, however, running every test in the suite often adds minutes or hours to the build time (depending on the size of the code base and the build environment), which is prohibitively expensive both for the developer and for continuous integration (CI), where a quick turnaround is needed to provide timely feedback.
The ability to specify explicit test lists lets the developer, the CI build, and the final (release) build each selectively target slices of functionality without sacrificing quality control or dragging down overall throughput.
Here's the situation: I was working on a distributed application. We wrote our own Windows services to handle incoming requests and used Amazon Web Services for storage. We did not want to run our Amazon test suite on every build, because:
- Amazon was not always up
- We were not always connected
- Response times were measured in hundreds of milliseconds, which across a series of test requests could easily balloon the suite's execution time
We still wanted to keep these tests, because we needed that behavioral coverage. If, as a developer, I had doubts about our Amazon integration, I could run those tests from my dev environment on demand. And when it was time to promote the final build to QA, CruiseControl could run them as well, so that someone working in another functional area wouldn't accidentally break the Amazon integration.
We put these Amazon tests in an integration test list that was available to every developer and that ran on the build machine whenever CruiseControl promoted a build. We kept a separate unit test list, also available to every developer, that ran on every individual build. Since those tests were all in-memory (and well written :), and took roughly as long to run as the project took to compile, they didn't slow down individual builds and gave us excellent, timely feedback from CruiseControl.
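On the build machine, the CruiseControl.NET side of this can be wired up with an `<exec>` task that shells out to MSTest with the integration list. This is a sketch only; every path, project name, and list name below is hypothetical, and the timeout is just a placeholder:

```xml
<!-- Fragment of ccnet.config (paths and names are hypothetical). -->
<project name="MyApp-QA-Promotion">
  <tasks>
    <!-- Run the integration test list before promoting the build to QA. -->
    <exec>
      <executable>C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe</executable>
      <buildArgs>/testmetadata:C:\builds\MyApp\MySolution.vsmdi /testlist:IntegrationTestList /resultsfile:C:\builds\MyApp\IntegrationResults.trx</buildArgs>
      <buildTimeoutSeconds>1800</buildTimeoutSeconds>
    </exec>
  </tasks>
</project>
```

Because MSTest returns a non-zero exit code when tests fail, a failing integration list fails the CruiseControl build and blocks the promotion.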
* = valuable == important. "value" is the word of the day :)