How to keep automated tests fast?

Automated tests MUST be fast to reflect the status of the project in real time. The idea is this:

  • after any commit, the repository is built automatically (as fast as possible);
  • after the build, automated tests start automatically, and they MUST be fast too.

This is the best way to find out whether your changes broke anything.

At first the build seemed to be the hard part, but we managed to get it down to about 100 seconds to build 105 (!) projects (MSVS 2008, C#).

The tests turned out to be less simple (we use the NUnit framework). Unit tests are not a big problem; it is the integration tests that are killing us. And it's not that they run slower (any ideas on how to make them faster are much appreciated), but that setting up the environment is MUCH slower (currently ~1000 seconds)!

Our integration tests use web services (19 so far) that have to be redeployed to reflect the latest changes. This involves restarting the services and a lot of HDD activity.
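Redeploying many services one after another serializes a lot of waiting. As a rough illustration (the deploy_service function and service names below are hypothetical stand-ins, not the actual tooling described here), mostly I/O-bound redeployments can be overlapped:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for whatever redeploys one web service;
# in a real setup this would copy binaries and restart the service.
def deploy_service(name):
    return f"{name}: deployed"

services = [f"service-{i:02d}" for i in range(19)]  # 19 services, as in the question

# Redeploy several services concurrently; deployment is mostly disk/network
# bound, so threads overlap the waiting instead of serializing it.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(deploy_service, services))
```

Whether this helps depends on whether the services can come up independently; if they share a database schema or ports, the deploy order may need to stay partly serial.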

Can anyone share their experience of how the environment and workflow should/can be organized and optimized to speed up the automated testing phase? What are the low-level bottlenecks and workarounds?

P.S. Books and long articles are welcome, but real-world working solutions are much more appreciated.

+6
performance continuous-integration testing automated-tests
7 answers

There are a number of optimization strategies you can apply to increase test throughput, but you need to ask yourself what the purpose of this testing is and why it needs to be fast.

Some tests take time. That is a fact of life. Integration testing usually takes a long time because you have to set up an environment, and you will want that environment to be as close as possible to the final production environment.

You have two options:

  • Optimize the tests and/or their deployment.
  • Run them less often.

In my experience, it is better to have an integration environment that is correct, detects errors, and adequately represents the final production environment. I usually choose option 2 (1).

It is very tempting to say "we will test everything, all the time", but in reality you need a strategy.

(1) Except when there are many errors that only show up in integration, in which case forget everything I said :-)

+5

We use .NET and NUnit, which supports categories (an attribute you can put on a test). We mark long-running tests with a "Nightly" category so that they run only in nightly builds, not in the continuous builds that we want to stay fast.
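The answer above relies on NUnit's [Category] attribute. A minimal sketch of the same idea in Python (the decorator, test names, and runner here are illustrative stand-ins, not NUnit): tag the slow tests, then have the continuous build exclude that tag.

```python
# Sketch of category-based test filtering (the NUnit [Category("Nightly")]
# idea); the tests and mini-runner are illustrative, not a real framework.
def category(name):
    def mark(fn):
        fn.category = name
        return fn
    return mark

@category("Nightly")
def test_full_integration():
    return "slow result"

def test_fast_unit():
    return "fast result"

def run_tests(tests, exclude=()):
    # Continuous build: skip anything tagged with an excluded category.
    return [t() for t in tests if getattr(t, "category", None) not in exclude]

all_tests = [test_full_integration, test_fast_unit]
continuous = run_tests(all_tests, exclude={"Nightly"})  # fast per-commit build
nightly = run_tests(all_tests)                          # runs everything
```

With NUnit itself, the equivalent is decorating the slow fixtures with the category attribute and telling the console runner which categories to include or exclude for each build configuration.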

+3

I put together a presentation called Turbo-Charged Test Suites. The second half is aimed at Perl developers, but the first half may be useful to you. I don't know enough about your software to say whether it applies.

It mainly covers techniques for speeding up database use in test suites and for running tests in a single process to avoid constantly reloading libraries.

+1

I would suggest running a few high-level tests first, and if any of them fail, running lower-level tests for a higher-resolution view.

Think of phone tech support...

Does your computer work? If yes, done. If not, does your computer turn on? ...

For my unit testing, I have some quick "does my computer work?"-style tests; if they pass, I don't run the rest of the suite. If any of them fails, I run the corresponding set of lower-level tests, which gives me a higher-resolution view of that failure.

My aim is for the comprehensive set of top-level tests to run in under half a second.

This approach gives me both speed and detail.
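The drill-down approach above can be sketched in a few lines (the smoke check and detailed checks here are toy placeholders, not this answerer's actual tests):

```python
# Toy placeholders for a cheap top-level check and its expensive
# lower-level counterparts.
def smoke_check():
    return False  # pretend the high-level test failed this run

def detailed_checks():
    return ["subsystem A ok", "subsystem B FAILED"]

def run_suite():
    # Run the fast, coarse test first; only pay for the slow,
    # high-resolution tests when the coarse one fails.
    if smoke_check():
        return ["smoke ok"]
    return detailed_checks()

report = run_suite()
```

On a green run this costs only the smoke check; the expensive suite is the price of diagnosing a failure, not of every build.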

+1

"...the fact that the environment should be set up, which is MUCH slower (atm ~1000 seconds)!"

Well, at least you know where to focus ... Do you know where this time is spent?

Obviously, any solution will depend on the specifics here.

There are three solutions that I used in this situation:

  • Use more machines. Perhaps you can split your services across two machines? That could cut the setup time roughly in half.

  • Use faster machines. In one situation I know of, a team cut its integration test run from 18 hours to 1 hour by upgrading the hardware (multiple processors, fast RAID storage, more RAM, etc.). Sure, it cost them about $10,000, but it was worth it.

  • Use an in-memory database for the integration tests. Yes, you will also want to run the tests against the real database, but perhaps you can run them against an in-memory version first to get quick feedback.
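For the in-memory database suggestion, Python's built-in sqlite3 shows the pattern (purely as an illustration; the asker's stack is .NET, where an analogous swap might be SQLite or another in-memory engine): the schema lives in RAM, so setup and teardown cost almost nothing and there is no HDD activity.

```python
import sqlite3

# ":memory:" creates a throwaway database in RAM -- no disk I/O,
# and a fresh, fully set-up schema for every test run.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("INSERT INTO orders (total) VALUES (?)", (9.99,))

# Code under test queries exactly as it would against the real database.
(count,) = conn.execute("SELECT COUNT(*) FROM orders").fetchone()
conn.close()
```

The trade-off is fidelity: an in-memory engine may differ from the production database in SQL dialect and concurrency behavior, which is why the real database still gets a run before release.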

+1

The best solution in this situation is to keep ghost images of a fully set-up environment and restore an image instead of rebuilding the environment each time. That is likely a much better use of the time.

+1

Buildbot: http://buildbot.net/trac I cannot recommend it enough if you are doing continuous integration (and automated testing). With a quick setup, all our unit tests run on every commit, and the longer integration tests run on a schedule (three times a day the last time I checked, but this is easily changed).
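A hedged sketch of what that split can look like in a Buildbot master.cfg (the builder and scheduler names are placeholders, and the exact scheduler API varies between Buildbot versions, so treat this as a shape rather than copy-paste config): one scheduler fires on every commit, another runs the long suite on a timer.

```python
# Fragment of a hypothetical Buildbot master.cfg; builder names are
# made up, and the plugin API should be checked against your version.
from buildbot.plugins import schedulers

c = BuildmasterConfig = {}
c['schedulers'] = [
    # Unit tests: triggered by every commit.
    schedulers.SingleBranchScheduler(
        name="on-commit",
        builderNames=["fast-unit-tests"]),
    # Integration tests: run on a timer, e.g. three times a day.
    schedulers.Nightly(
        name="periodic-integration",
        builderNames=["slow-integration-tests"],
        hour=[2, 10, 18], minute=0),
]
```

The point is that "fast on every commit" and "thorough on a schedule" are two schedulers over the same builders, not two separate CI systems.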

0
