Best practice for comprehensive testing of entire systems

End-to-end testing means exercising an application from its external boundaries to verify its behavior. So far, I have only written tests for a single executable artifact. How should I test systems consisting of several artifacts deployed on different hosts?

I see two alternatives.

  • Tests set up the entire system and exercise it from its outermost boundaries.
  • Each artifact is tested individually, with the tests responsible for verifying compliance with the protocol between the artifacts.

Is there a clear-cut case for sticking to one of them, is one of them generally preferable, or are they interchangeable? If they are interchangeable, what are the advantages and disadvantages of each?


Although I think it depends on the context, I prefer the first alternative. Here are my random thoughts:

I like my tests to map closely onto real use cases (BDD style), with the caveat that I may be using the term "use case" loosely. These use cases may span multiple applications and subsystems.

Example: a back-office administrator can view a transaction that a user made through the public interface.

Here, the back-office admin interface and the public interface are different applications, but they participate in the same use case.
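To make this concrete, here is a minimal sketch of a BDD-style test that spans both applications. `PublicApp`, `AdminApp`, and the shared store are hypothetical stand-ins invented for illustration, not real components from the question:

```python
# Hypothetical sketch: one use-case test spanning two applications.
# The shared list stands in for whatever storage the real apps share.

class PublicApp:
    """Public-facing interface through which users make transactions."""
    def __init__(self, store):
        self.store = store

    def make_transaction(self, user, amount):
        tx = {"user": user, "amount": amount}
        self.store.append(tx)
        return tx

class AdminApp:
    """Back-office interface through which admins view transactions."""
    def __init__(self, store):
        self.store = store

    def transactions_for(self, user):
        return [tx for tx in self.store if tx["user"] == user]

def test_admin_sees_user_transaction():
    # Given: both applications are wired to the same store
    store = []
    public = PublicApp(store)
    admin = AdminApp(store)
    # When: a user makes a transaction through the public interface
    public.make_transaction("alice", 100)
    # Then: a back-office admin can see that transaction
    assert admin.transactions_for("alice") == [{"user": "alice", "amount": 100}]
```

The point is that the test names and exercises the use case, not either application's internals.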

Applying these thoughts to your problem: when you have subsystems deployed on different hosts, I would say it depends on how the system is used from the point of view of the user/actor. Do your use cases span multiple subsystems?

Also, it may be that the fact that the system is deployed across multiple hosts does not matter to the tests. You can replace interprocess communication with direct method calls and run the entire system in a single process during tests, reducing complexity. Complement this with tests that verify only the interprocess communication.
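A sketch of that substitution, assuming the subsystems talk through a small transport interface (all names here are illustrative; in production the transport would wrap HTTP or whatever wire protocol the real system uses):

```python
# Illustrative sketch: hide interprocess communication behind a small
# interface so tests can run both subsystems in one process.

class InventoryService:
    """Subsystem that would normally run on its own host."""
    def reserve(self, item: str) -> bool:
        return item == "widget"  # toy availability logic

class InProcessTransport:
    """Test transport: forwards calls directly, no network involved."""
    def __init__(self, service):
        self.service = service

    def reserve(self, item: str) -> bool:
        return self.service.reserve(item)

class OrderService:
    """Depends only on the transport interface, not the wire protocol."""
    def __init__(self, inventory):
        self.inventory = inventory

    def place_order(self, item: str) -> str:
        return "accepted" if self.inventory.reserve(item) else "rejected"

# In tests, both subsystems live in the same process:
orders = OrderService(InProcessTransport(InventoryService()))
assert orders.place_order("widget") == "accepted"
assert orders.place_order("gadget") == "rejected"
```

A separate, much smaller suite would then exercise only the real network transport against `InventoryService`, covering what the in-process tests deliberately skip.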

Edit:

I realize I forgot to explain why I prefer to test the entire system.

Your asset is features, that is, behavior; code is a liability. So you want to test the behavior, not the code (BDD style).

If you test each subsystem separately, you are testing code, not features. Why is that a problem? When you divided your system into subsystems, you did so for technical reasons. As you learn more, you will find that the chosen seam is suboptimal and will want to move some responsibility from one subsystem to another. Then you will have to modify test and production code at the same time, leaving you without a safety net. This is a typical sign of testing at too fine a granularity.

However, such coarse-grained tests are too blunt to cover everything, so you need additional, more detailed tests where necessary.


Testing each artifact end to end individually is valuable in any case. It ensures that each artifact is sound.

On top of that, you can test the composition of the artifacts. This will catch problems in the interactions between them. I do not know your situation, but one important thing is a test environment that mirrors production. Testing the composed system in such an environment is a very good idea. You can also test the system in the production environment; that may or may not be feasible. For example, if your system processes credit card payments, you may want to avoid making test payments in production.
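One cheap way to catch interaction problems before a full composed deployment is a contract test: verify that the producing and consuming artifacts agree on the protocol between them. This is a minimal hand-rolled sketch with hypothetical function names, not a reference to any specific contract-testing tool:

```python
# Hypothetical sketch: a contract test checking that two artifacts
# agree on a shared message format, without deploying either one.
import json

# Producer side (e.g. the payments artifact) serializes an event:
def serialize_payment(user: str, amount_cents: int) -> str:
    return json.dumps({"user": user, "amount_cents": amount_cents})

# Consumer side (e.g. the reporting artifact) parses that event:
def parse_payment(raw: str) -> tuple:
    data = json.loads(raw)
    return data["user"], data["amount_cents"]

def test_payment_contract():
    # A round trip through the shared protocol catches renamed or
    # missing fields long before an end-to-end environment would.
    raw = serialize_payment("alice", 1250)
    assert parse_payment(raw) == ("alice", 1250)
```

In a real system the two sides live in different codebases, so the contract test typically runs in both pipelines against a shared schema or recorded examples.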

In any case, testing each artifact individually is more important than testing the composition. Once you know that your artifacts are sound in isolation, diagnosing failures in the interaction tests becomes much easier. If you only have end-to-end tests of the whole system, it is much harder to locate the error when a test fails.


Source: https://habr.com/ru/post/1413866/

