I fully expect to be flamed for this, but I don't understand how a set of unit tests, in general, proves what the client actually cares about: whether the application meets the business requirements.
Here is an example: I just finished reworking a piece of code to fix a big mistake we made. It was a classic case of over-engineering, and the changes touched a dozen Windows Forms and about the same number of classes.
It took me a couple of days. The code is now much simpler, we got some features for free, and we shed a ton of code that did things we now know we never actually needed.
Each of those forms worked fine before. The public methods did exactly what they were supposed to do, and the underlying data access was sound.
So any unit test would have passed.
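To make that concrete, here is a hypothetical sketch (the names and the discount scenario are invented for illustration, not from the project described above). The method is implemented correctly and its unit test passes, yet the feature itself is the wrong one at the business level, say the client actually wanted discounts applied per order, not per line item:

```python
# Hypothetical example: a correct unit test for the wrong feature.
# Suppose the business needed a discount applied once per order,
# but we designed (and tested) per-line-item discounting instead.

def line_item_discount(price: float, qty: int) -> float:
    """Apply a 10% discount to each line item (the mistaken design)."""
    return price * qty * 0.9

def test_line_item_discount():
    # The test passes: the method does exactly what it was written to do.
    assert abs(line_item_discount(10.0, 3) - 27.0) < 1e-9

test_line_item_discount()
print("unit test passed, requirement still wrong")
```

The test green-lights the implementation against its own specification; it says nothing about whether that specification was the right one.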
Unfortunately, they also did things we didn't need, which we only understood in retrospect. It was as if we had built a prototype and realized, only after trying to use it, that it was the wrong design.
So now we have a leaner, more maintainable, more reliable application.
But the things that were wrong were wrong at a level unit tests could never detect, so I just don't see how running a suite of unit tests against a build does anything except give a false sense of security.
Maybe I'm missing something, but it seems to me that if what you ship fails at a different level than the one those tests operate on, the tests prove nothing.