Is end-to-end (e2e) testing enough on its own?

My question mainly concerns testing methodology. I work in an organization that practices TDD (Test-Driven Development). We use AngularJS, and therefore the usual test stack: Jasmine for unit tests and Protractor for e2e tests.

When developing a feature, our process starts with writing failing e2e tests and then implementing the feature using TDD. Tests are written only for public methods (whether of controllers, directives, or services). The product itself does not contain complex logic (with a few exceptions).

Recently we started discussing whether it makes sense to write unit tests for controllers at all, since the functionality they expose is 100% visible to the user and is, in any case, covered by the e2e tests. In other words, the unit tests and the e2e tests overlap. At first we all agreed, but then that decision opened Pandora's box. The same can be said of directives, so why test them? Then the question arose about services. Most of them (98%) simply make a backend call and return the response. So why not just mock $httpBackend and let the services be exercised through the controllers, which are themselves covered by the e2e tests?
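For context, here is a minimal sketch of the kind of thin service test being debated; the module name `app`, the `UserService`, and the `/api/users` endpoint are hypothetical, not my real code. It mocks `$httpBackend` with angular-mocks and only checks that the response is passed through, which is essentially the same behaviour an e2e test driving the page would exercise.

```javascript
// Hypothetical example: a "thin" service unit test using Jasmine + angular-mocks.
describe('UserService', function () {
  var UserService, $httpBackend;

  beforeEach(module('app'));  // 'app' is an assumed module name

  beforeEach(inject(function (_UserService_, _$httpBackend_) {
    UserService = _UserService_;
    $httpBackend = _$httpBackend_;
  }));

  afterEach(function () {
    // Fail the test if expected requests were not made or extra ones slipped through
    $httpBackend.verifyNoOutstandingExpectation();
    $httpBackend.verifyNoOutstandingRequest();
  });

  it('returns the users from the backend', function () {
    $httpBackend.expectGET('/api/users').respond(200, [{ id: 1 }]);

    var result;
    UserService.getUsers().then(function (users) { result = users; });
    $httpBackend.flush();  // resolve the mocked HTTP call

    expect(result.length).toBe(1);
  });
});
```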

You get the drift...

I do see value in having both unit tests and e2e tests, even though they largely overlap: mainly the instant feedback and the "executable documentation." What do you practice? Do you see other benefits, and is the juice worth the squeeze: is it worth writing duplicate tests for the simplest implementations just to gain those two benefits?

Tags: javascript, angularjs, unit-testing, tdd, testing
1 answer

This is a big topic and not something that can really have an authoritative answer, but I'll do my best to cover a few points.

First, you need to think about the purpose of the tests. According to the Agile Testing Quadrants, unit tests exist primarily to support the team. They are usually written alongside the product code (for example via TDD, typically by the developers themselves) and serve to increase the developers' confidence that their latest change hasn't broken anything. With that confidence, developers can work effectively and refactor with reckless abandon, which is the TDD dream. Unit tests do not answer the question "Is this right for our customers?", but that is not why they are there.

Functional tests (your e2e tests, if I understand your description correctly) still support the team by providing a fast turnaround of test results, but they actually start to answer the question "Can the user do X?". You verify what the user sees, and you begin testing your actual product in a way that makes sense to users.
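To illustrate the contrast with the unit test above, here is a hedged sketch of what the overlapping Protractor e2e spec might look like; the route, the repeater expression, and the page itself are assumptions, not your actual app. It drives the browser the way a user would and asserts only on what the user can see.

```javascript
// Hypothetical Protractor spec for the same user-list feature.
describe('user list page', function () {
  it('shows the users returned by the backend', function () {
    browser.get('/#/users');  // navigate like a real user (assumed route)

    // Assumed ng-repeat expression; assert on rendered rows, not on internals
    var rows = element.all(by.repeater('user in vm.users'));
    expect(rows.count()).toBeGreaterThan(0);
  });
});
```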

Quadrants 3 and 4 start to look at whether the product works well (that is, whether it is fit for purpose, not just functionally correct), but that is another topic.

With that understanding of testing in mind, part of the answer depends on the structure of your team. Do you have separate developer and tester teams? If so, it may make sense for your developers to write the unit tests (those are useful to them in any case) and for the test team to handle the other quadrants themselves (including writing e2e tests at their discretion). What if your test team and development team are one and the same? If you can get a similar turnaround time (test written → useful result) from your functional/e2e tests as you can from your unit tests, it may make sense to focus on those and reap the benefits of both approaches without the overlap.

The short answer I would give is simply to ask of each test: "What benefit do we get from this test?" If you find that overlapping tests give you the same answer, it may well make sense to remove the redundancy.

Some of the points above, and a few more, are discussed here, so I'll stop here for now.

