Using unit tests as a "contract for functionality"

Unit tests are often shipped with software releases to verify installation — that is, you perform the installation, run the tests, and if they pass, the installation is good.

I am about to start a project that will involve delivering prototype software libraries to clients. Unit tests will be delivered as part of each release, and in addition to using the tests to verify the installation, I plan to use the unit tests that exercise the API as a "contract" for how the release should be used. If the client uses the release in the same way it is used in the unit tests, then excellent. If they use it in any other way, all bets are off.
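As a minimal sketch of the idea, here is what such a contract-style test might look like. The `Point` class and its `translated()` method are made-up stand-ins for the delivered library, not anything from the question; the point is that each test both verifies behaviour and documents a supported usage.

```python
# Sketch: unit tests as a usage contract for a delivered library.
# "Point" is a hypothetical example class standing in for the real API.
import unittest


class Point:
    """Stand-in for a class shipped in the library."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def translated(self, dx, dy):
        # Contract: translated() returns a NEW Point and never
        # mutates the receiver.
        return Point(self.x + dx, self.y + dy)


class PointContractTest(unittest.TestCase):
    """Each test shows one supported way of using the API."""

    def test_translated_returns_new_point(self):
        p = Point(1, 2)
        q = p.translated(3, 4)
        self.assertEqual((q.x, q.y), (4, 6))

    def test_translated_leaves_original_untouched(self):
        p = Point(1, 2)
        p.translated(3, 4)
        self.assertEqual((p.x, p.y), (1, 2))


if __name__ == "__main__":
    unittest.main(exit=False)
```

A client who sticks to the calling patterns shown in the tests stays inside the contract; anything else is unsupported.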

Has anyone tried this before? Any thoughts on whether this is a good / bad idea?

Edit: To emphasize the good point ChrisA and Dan raised in the answers below, "unit tests that exercise the API" are better called integration tests: their goal is to use the API and the software to demonstrate the software's functionality from the client's point of view.

+4
8 answers

It seems like a good idea to me. I (don't we all?) regularly use internal tests this way. When I run my unit tests to verify that I haven't broken anything, I am also implicitly verifying that my API contract hasn't changed. It seems natural to deploy unit tests in the way you're describing.

+11

Agile methodologies say: tests are specifications, so this is a very good idea.

+5

I fully expect to be flamed for this, but I don't see how a set of unit tests generally proves the thing the client actually cares about, namely whether the application meets their business requirements.

Here is an example: I just finished reworking a piece of code to fix a big mistake we made. It was a classic case of over-engineering, and the changes touched a dozen Windows Forms and about as many classes.

It took me a couple of days. The code is now much simpler, we got some features for free, and we shed a ton of code that did things we now know we never really needed.

Every one of those forms worked fine before. The public methods did exactly what they were supposed to do, and the underlying data access was fine.

So, any unit test would pass.

Unfortunately, what they did was not what the application actually needed — something we did not understand except in retrospect. It was as if we had built a prototype and only realized it was wrong after trying to use it.

So now we have a leaner, more robust, more reliable application.

But the things that were wrong were wrong at a level unit tests could never detect, so I just don't see how shipping a set of unit tests with an installation does anything but give a false sense of security.

Maybe I'm missing something, but it seems to me that unless what you ship fails at the same level those tests operate on, they don't prove anything.

+5

This is actually a pretty good idea, and as an API user I find it very pleasant.

The technique also works in the opposite direction: when you consume someone else's API, you can use unit tests to document how you expect that API to behave and to confirm that it really does behave as expected.
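A sketch of that reverse direction: characterization tests that pin down the observed behaviour of an API you depend on, so an upgrade that silently changes behaviour fails loudly. Here Python's standard `json` module stands in for the third-party library; the specific behaviours asserted are just illustrative observations.

```python
# Sketch: documenting and pinning the behaviour of an API you consume.
# Python's json module plays the role of the third-party dependency.
import json
import unittest


class JsonBehaviourTest(unittest.TestCase):
    """Characterization tests: record what the API actually does today,
    so a future version that changes behaviour breaks the build."""

    def test_non_string_keys_are_coerced_to_strings(self):
        # Observed: integer dict keys come back as strings after a
        # dumps/loads round trip.
        self.assertEqual(json.loads(json.dumps({1: "a"})), {"1": "a"})

    def test_nan_serializes_to_nonstandard_token(self):
        # Observed: by default NaN serializes to the non-standard
        # JavaScript-style token "NaN".
        self.assertIn("NaN", json.dumps(float("nan")))


if __name__ == "__main__":
    unittest.main(exit=False)
```

If a library upgrade makes one of these fail, you learn about the behaviour change at test time rather than in production.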

+1

If you are interested in shipping a set of specifications with your code, you might look at some of the behavior-driven development (BDD) tools (NBehave, JBehave, RSpec, etc.). These frameworks support describing your tests in given/when/then syntax and producing formatted, natural-language results. See NBehave for an example of a BDD tool for .NET. You can find a good description of BDD here.
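The given/when/then structure those frameworks formalize can be approximated even in a plain xUnit-style test, which is often enough for a delivered specification. The `Account` class below is a made-up example, not part of any of the frameworks mentioned.

```python
# Sketch: given/when/then structure in a plain unittest test case.
# "Account" is a hypothetical domain class used for illustration.
import unittest


class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


class WithdrawalBehaviour(unittest.TestCase):
    def test_withdrawal_reduces_balance(self):
        # Given an account holding 100
        account = Account(balance=100)
        # When the holder withdraws 30
        account.withdraw(30)
        # Then the balance is 70
        self.assertEqual(account.balance, 70)

    def test_overdraft_is_refused(self):
        # Given an account holding 10
        account = Account(balance=10)
        # When the holder tries to withdraw 30, then it is refused
        with self.assertRaises(ValueError):
            account.withdraw(30)


if __name__ == "__main__":
    unittest.main(exit=False)
```

The dedicated BDD frameworks add on top of this a reporting layer that turns the given/when/then steps into readable, natural-language output.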

Another option is to write tests with an acceptance-testing framework such as FIT or FitNesse (or, for Java, Concordion) and deliver those acceptance tests with the code. Both FIT/FitNesse and Concordion allow tests to be specified in plain HTML or even Word documents.

The advantage of either approach (BDD or an acceptance-testing framework) is that the results the user sees are more readable and understandable.

+1

If you release a code library , that sounds great.

If you release a regular software product that your users interact with only through a graphical user interface, your unit tests may not operate at the right level of abstraction and may not be the most useful tool for evaluating your product's behavior. A really good user manual (yes, such a thing is possible) might serve better here.

+1

Tests verify requirements.

Requirements define functionality.

=> Tests verify functionality.

The problem is that you can only verify functionality that unit tests are able to cover; functionality that needs integration or whole-system tests is left out.

Otherwise, this is the core TDD approach: specifying functionality through unit tests.

0

Meszaros calls this pattern "Tests as Documentation".

0
