Avoiding duplicate logic with Mocks

I have the following problem and have not found a good answer. I use a mocking framework (JMock) to isolate the tests from the database code: I mock the data-access classes when testing the classes that depend on them, and separately test the database classes themselves using DBUnit.

The problem I am facing is that I notice a pattern where logic is conceptually duplicated in several places. For example, I need to detect that a value does not exist in the database, so I return null from the method in that case. So I have a data-access class that talks to the database and returns null accordingly. Then I have a business-logic class that gets null from a mock of the data-access class, and is tested to act accordingly when the value is null.
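To make the situation concrete, here is a minimal sketch of that arrangement; all names (SettingsDao, FeatureToggle) are invented for illustration and are not from my actual code:

```java
// Invented names for illustration; this mirrors the situation described above.
interface SettingsDao {
    String findValue(String key);   // returns null when the key is absent
}

class FeatureToggle {
    private final SettingsDao dao;
    FeatureToggle(SettingsDao dao) { this.dao = dao; }

    boolean isEnabled(String key) {
        String value = dao.findValue(key);
        // The knowledge that "absent" is represented by null is duplicated here.
        if (value == null) return false;
        return value.equals("on");
    }
}
```

The "a missing value is null" convention lives both in the real data-access class and in every test that stubs it.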

Now, what if in the future the behavior needs to change and returning null no longer fits, say, because the state has become more complicated, so I need to return an object that says the value does not exist, plus some additional fact from the database.

Now, if I change the behavior of the database class so that it no longer returns null in this case, the business-logic class's tests will still pass, and the error will only be detected in QA, unless someone remembers the connection or happens to inspect every caller of the method.

I feel as if I am missing something, and that there should be a better way to avoid this conceptual duplication, or at least to test for it, so that when the behavior changes, the fact that a dependent class was not updated shows up as a failure.

Any suggestions?

UPDATE:

Let me try to clarify my question. I am thinking about how, as the code evolves over time, to ensure that the integration does not break between the classes tested with mocks and the actual implementations those mocks stand in for.

For example, I just had a case where a method was originally written not to expect null values, so the real object was never tested with null. Later, the user of that class (tested using a mock) was enhanced to pass null as the parameter value under certain circumstances. The integration broke, because the real class had never been tested with null. Now, when you first build these classes, this doesn't really matter, because you test both ends as you build; but if the design needs to evolve two months later, when you have long forgotten the details, how would you test the interaction between these two sets of objects (one tested against a mock versus the actual implementation)?

The main problem, apparently, is duplication (which violates the DRY principle): expectations are really stored in two places, even though the relationship is only conceptual and there is no actual duplicated code.

[Edit after Aaron Digulla's second edit to his answer]:

That's right, this is exactly what I am doing (except that there is some further interaction with the database in the class that is tested through DBUnit and talks to the database during its tests, but it is the same idea). So now, say we need to change the behavior of the database so that the results are different. The test using the mock will continue to pass unless 1) someone remembers to update it, or 2) it breaks in integration. Thus the return values of the database's stored procedure (say) are essentially duplicated in the mock's test data. What bothers me about the duplication is that the logic is duplicated, and that is a subtle DRY violation. Maybe that is just how it is (there is a reason for integration tests, after all), but I felt I was missing something instead.

[Edit when starting the bounty]

The exchanges with Aaron get to the bottom of the question, but what I'm really looking for is some insight into how to avoid or manage this apparent duplication, so that a change in the behavior of the real class will show up in the unit tests that interact with the mock as something that broke. Obviously this does not happen automatically, but there may be a way to structure things so that it does.

[Edit upon awarding the bounty]

Thanks to everyone who spent time answering the question. The winner taught me something new about how to think about passing data between two layers, and also answered first.

+7
unit-testing mocking code-duplication
11 answers

Your database abstraction uses null to indicate "no results found". Setting aside the fact that passing null between objects is a bad idea, your tests should not use that null literal when they want to check what happens when nothing is found. Instead, use a constant or a test data builder, so that your tests depend only on what information is passed between objects, not on how that information is represented. Then, if you need to change how the database layer represents "no results found" (or any other information your tests depend on), you have only one place in the tests to change.
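As a hedged sketch of this idea (User, TestData, NO_USER and knownUser are invented names, not from the question): the representation of "not found" is encoded in exactly one place that every test goes through.

```java
import java.util.Optional;

// Invented names; a single builder class encodes how "no results found" is represented.
class User {
    final String name;
    User(String name) { this.name = name; }
}

class TestData {
    // If the representation of "not found" ever changes, only this constant changes.
    static final Optional<User> NO_USER = Optional.empty();

    // Builder for the "found" case, populated with known values.
    static Optional<User> knownUser() {
        return Optional.of(new User("alice"));
    }
}
```

Tests then stub their collaborators with `TestData.NO_USER` rather than a bare null literal.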

+2

You are basically asking for the impossible. You are asking your unit tests to predict and notify you when you change the behavior of external resources. Without writing a test for the new behavior, how could they know?

What you are describing is adding a completely new state that needs to be tested for — instead of a null result, there is now some kind of object coming out of the database. How can your test suite know what the intended behavior of the object under test should be for some new, arbitrary object? You need to write a new test.

The mock is not "wrong," as you commented. The mock does exactly what you set it up to do. The fact that the specification has changed has nothing to do with the mock. The only problem in this scenario is that the person who introduced the change forgot to update the unit tests. I'm actually not sure why you think there is any duplication involved.

The coder adding a new return result to the system is responsible for adding a unit test to handle that case. If that coder is also 100% sure there is no way a null result can be returned any more, they could also delete the old unit test. But why would you? The unit test correctly describes the behavior of the object under test when it gets a null result. What happens if you switch the backend of your system to a new database that returns null? What if the spec reverts to returning null? You might as well keep the test because, as far as your object is concerned, it could really get anything back from an external resource, and it should gracefully handle all possible cases.

The whole purpose of mocking is to separate your tests from real resources. It will not automatically save you from introducing errors into the system. If your unit test accurately describes the behavior when the object gets null, great! But that test should not know about any other states, and certainly should not somehow be informed that the external resource will no longer send nulls.
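To illustrate what such a null-case test pins down, here is a sketch using a hand-rolled stub rather than JMock, purely for brevity; AccountDao and Notifier are invented names:

```java
// Hand-rolled stub instead of JMock, for illustration only; names are invented.
interface AccountDao {
    String findEmail(String userId);   // may return null
}

class Notifier {
    private final AccountDao dao;
    Notifier(AccountDao dao) { this.dao = dao; }

    // The behavior a null-case test describes: a null from the DAO yields a fallback.
    String emailFor(String userId) {
        String email = dao.findEmail(userId);
        return email != null ? email : "unknown@example.com";
    }
}
```

The test stubs `findEmail` to return null and asserts the fallback; it says nothing about whether the real DAO still produces nulls, and it shouldn't.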

If you have a proper, loosely coupled design, your system could have any backend you can imagine. You should not write tests against one single external resource. It sounds like you might be happier if you added some integration tests that use your real database, thereby eliminating the mocking tier. That is always a great idea for build or sanity/smoke tests, but it usually hinders day-to-day development.

+4

You are not missing anything here. This is a weakness of unit testing with mock objects. It sounds like you are properly breaking your unit tests down to a reasonable size. That is a good thing; it is much more common to find people who test too much in a "unit" test.

Unfortunately, when you test this level of detail, your unit tests do not cover the interaction between collaborating objects. For this you need integration tests or functional tests. I do not know a better answer than that.

It is sometimes useful to use a real collaborator instead of a mock in your unit test. For example, if you are testing a data access object, using a real domain object in the unit test instead of a mock is often simple enough to set up and run. The converse is often not true: data access objects usually require a database connection, a connection to files or the network, and are complex and time-consuming to set up; using a real data access object when unit testing your domain object turns a unit test that takes microseconds into one that takes hundreds or thousands of milliseconds.

So, to summarize:

  • Write integration/functional tests to catch problems with collaborating objects.
  • It is not always necessary to mock collaborators — use your best judgment.
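A rough illustration of the asymmetry described above (Order is an invented name): a plain domain object like this is trivial to construct for real inside a DAO test, whereas the DAO itself would need a live database connection just to exist.

```java
// A plain domain object is cheap to construct for real, so a DAO test can use it
// directly instead of a mock. Invented example.
class Order {
    final int quantity;
    final int unitPriceCents;
    Order(int quantity, int unitPriceCents) {
        this.quantity = quantity;
        this.unitPriceCents = unitPriceCents;
    }
    int totalCents() { return quantity * unitPriceCents; }
}
```

Going the other direction — a domain-object unit test standing up a real DAO — would drag a database into a microsecond-scale test, which is exactly the cost the answer warns about.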
+4

Unit tests cannot tell you when a method suddenly has a smaller set of possible results. What can? Code coverage. It will tell you that some code is no longer executed, which in turn leads to dead-code detection at the application level.

[EDIT] Based on the comment: the mock does not have to do anything except allow you to instantiate the class under test and allow the collection of additional information. It should never affect the outcome of what you want to test.

[EDIT2] Mocking the database means you don't care whether the DB driver works. What you want to know is whether your code can correctly interpret the data returned by the database. It is also the only way to check whether your error handling works correctly, because you cannot tell a real DB driver "when you see this SQL, throw this error." That is only possible with a mock.
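A sketch of that kind of error injection, with a hand-rolled stub standing in for the mock; QueryRunner and ReportService are invented names, not a real JDBC API:

```java
import java.sql.SQLException;
import java.util.Collections;
import java.util.List;

// Invented stand-in for a DB connector; not a real JDBC interface.
interface QueryRunner {
    List<String> run(String sql) throws SQLException;
}

class ReportService {
    private final QueryRunner runner;
    ReportService(QueryRunner runner) { this.runner = runner; }

    // The error handling under test: fall back to an empty report on DB failure.
    List<String> names() {
        try {
            return runner.run("SELECT name FROM users");
        } catch (SQLException e) {
            return Collections.emptyList();
        }
    }
}
```

The stub can throw `SQLException` on demand — something no real driver will do reproducibly — so the catch branch actually gets exercised.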

I agree, it takes time to get used to. Here is what I do:

  • I have tests that check whether the SQL works. Each SQL statement is run once against a static test database, and I check that the returned data is what I expect.
  • All other tests run against a DB connector that returns predefined results. I like to obtain these results by running the code against a database and writing down the primary keys somewhere. Then I have a tool that takes those primary keys and dumps Java code for the mock to System.out. That way, I can create new test cases very quickly, and the test cases will reflect the "truth."

    Even better, I can recreate the old tests (when the database changes) by running the old IDs through my tool again.

+2

I would like to narrow the problem down.

Problem

Of course, most of your changes will be caught by the tests. But there is a subset of scenarios where your tests will not fail, even though they should:

When you write code, you use your methods several times. You get a 1:n ratio between method definition and use. Each class that uses the method will use it in accordance with its corresponding test. Thus, the mock is also used n times.

The result of your method once could never be null. After you change this, you will probably not forget to fix the corresponding test. So far so good.

You run your tests — everything passes.

But you have forgotten something... the mock never returns null. So the n tests for the n classes using the mock do not check for null.

Your QA will fail, even though your tests did not.

Obviously, you will have to modify your other tests. But memory fails. So you need a solution that works better than remembering all the affected tests.

Solution

To avoid such problems, you need to write better tests from the very beginning. If you skip the cases where the class under test should handle errors or null values, you simply have incomplete tests. It is like not testing all the features of your class.

It is hard to add this later — so start early and be thorough in your tests.

As other users mentioned, code coverage reveals some untested cases. But missing error-handling code and its missing test will not show up in code coverage. (100% code coverage does not mean you have not overlooked something.)

So write good tests: assume the outside world is malicious. This involves more than passing bad parameters (e.g. null values). Your mocks are also part of the outside world. Have them return null and throw exceptions — and watch whether your class handles them as expected.

If you later decide that null is a valid value, this test will fail (because the expected exception is no longer thrown). That way you get a list of the places that need attention.

Since each calling class handles errors or null differently, this is not duplicated code that could have been avoided. Different treatments require different tests.


Hint: keep your mocks simple and clean. Move the expected return values into the test method. (Your mock can simply pass them back.) Avoid decision logic in mocks.
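A possible shape for this hint, sketched with invented names (StockDao, StockStub, StockChecker): the expected value lives in the test method, and the stub merely hands it back, with no logic of its own.

```java
// Invented names; the stub holds no logic, it only echoes what the test gave it.
interface StockDao {
    Integer quantity(String sku);
}

class StockStub implements StockDao {
    private final Integer canned;               // supplied by the test method
    StockStub(Integer canned) { this.canned = canned; }
    public Integer quantity(String sku) { return canned; }
}

class StockChecker {
    private final StockDao dao;
    StockChecker(StockDao dao) { this.dao = dao; }

    boolean inStock(String sku) {
        Integer q = dao.quantity(sku);
        return q != null && q > 0;              // null is treated as "not available"
    }
}
```

Each test constructs the stub with the value it cares about — including null — so the malicious-outside-world cases are one constructor argument away.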

+1

Here is how I understand your question:

You use mock versions of your objects to test the business layer of your application with JMock. You also test your DAO layer (the interface between your application and your database) using DBUnit, passing in real copies of your entity objects populated with a known set of values. Since you use two different methods of preparing test objects, your code violates DRY, and you risk your tests getting out of sync with reality as the code changes.

Fowler says...

It's not exactly the same thing, but it certainly reminds me of Martin Fowler's Mocks Aren't Stubs. I see the JMock route as the mockist way and the "real objects" route as the classical testing way.

One way to be as DRY as possible when solving this problem is to decide whether to be a classicist or a mockist and stick with it. Perhaps you can compromise and use real copies of your bean objects in your tests.

Custom Makers to avoid duplication

What we did on one project was to create Makers for each of our business entities. The Maker contains static methods that create a copy of the entity object populated with known values. Then, whatever entity you need, you call the Maker for that entity and get a copy with known values to use in your testing. If the entity has child entities, its Maker calls the Makers for the children, building the object from the top down, so you get as much of the complete object graph as you need. You can use these Maker objects in all your tests: passing their output to the database when testing the DAO layer, and passing it to your service calls when testing your business services. Since the Makers are reusable, it is quite a DRY approach.
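A hedged sketch of what such Makers might look like — Customer, Address, and the Maker classes are all invented for illustration:

```java
// Invented entity and Maker names, for illustration only.
class Address {
    final String city;
    Address(String city) { this.city = city; }
}

class Customer {
    final String name;
    final Address address;
    Customer(String name, Address address) {
        this.name = name;
        this.address = address;
    }
}

class AddressMaker {
    static Address make() { return new Address("Springfield"); }
}

class CustomerMaker {
    // Builds the object graph top-down by delegating to the child Maker.
    static Customer make() { return new Customer("Jane Doe", AddressMaker.make()); }
}
```

Both the DBUnit-side tests and the JMock-side tests call `CustomerMaker.make()`, so the known values live in exactly one place.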

However, you still need to use JMock to mock the DAO layer when testing your service layer. If your service makes a call to the DAO, you must ensure that a mock is injected instead. But you can still use your Makers the same way — when setting up your expectations, just make sure your mock DAO returns the expected result produced by the Maker for the corresponding entity. That way, you still do not violate DRY.

Well-written tests tell you when code changes

My last tip for avoiding your problem of code changing over time is to always have a test that exercises null inputs. Suppose that when you first create your method, nulls are not acceptable. You should have a test that checks that an exception is thrown when null is passed. If at some later time null values become acceptable, your application code may change so that nulls are handled in a new way and the exception is no longer thrown. When that happens, your test will fail, and you will have a heads-up that things are out of sync.
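For example, such a guard test might look like the following sketch (Greeter is an invented name):

```java
// Invented example: the current contract rejects null.
class Greeter {
    static String greet(String name) {
        if (name == null) throw new IllegalArgumentException("name must not be null");
        return "Hello, " + name;
    }
}
```

A test asserts both branches: `greet("Ann")` returns a greeting, and `greet(null)` throws. If null later becomes legal and the throw is removed, the second assertion fails and flags the contract change.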

+1

You just need to think about whether the returned null value is an intended part of the external API or an implementation detail.

Unit tests should not care about implementation details.

If it is part of your intended external API, then since your change could potentially break clients, it should of course also break the unit test.

Does it make sense from an external point of view that this thing returns NULL, or is it a convenient coincidence that lets clients make direct assumptions about the meaning of that NULL? NULL should mean void/nothing/unavailable, with no other meaning attached.

If you plan to make this condition more granular later, then you should wrap the NULL check in something that returns an informative exception, an enumeration, or an explicitly named bool instead.
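One possible shape for such a wrapper, sketched with invented names (LookupResult, Status): the overloaded null is replaced by an explicit result object that can grow new states without breaking callers silently.

```java
// Invented names; one explicit result object instead of an overloaded null.
enum Status { FOUND, MISSING, ARCHIVED }

class LookupResult {
    final Status status;
    final String value;                     // non-null only when FOUND
    private LookupResult(Status status, String value) {
        this.status = status;
        this.value = value;
    }
    static LookupResult found(String value) { return new LookupResult(Status.FOUND, value); }
    static LookupResult missing()           { return new LookupResult(Status.MISSING, null); }
    static LookupResult archived()          { return new LookupResult(Status.ARCHIVED, null); }
}
```

Adding a state later (here ARCHIVED) extends the enum, and callers that switch on `status` can be made to fail to compile until they handle it — unlike a bare null, which changes meaning invisibly.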

One of the difficulties of unit testing is that even the first unit tests written should reflect the full API of the final product. You need to visualize the full API and then program against THAT.

In addition, you need to maintain the same discipline in unit-test code as in production code, avoiding smells like duplication and feature envy.

+1

In your specific scenario, you change the return type of the method, which will be caught at compile time. If it weren't, it would show up in code coverage (as Aaron mentioned). Even then, you should have automated functional tests that run shortly after check-in. I make my smoke tests automated, so in my case they would catch it :).

Thinking about it further, two important factors are still at play in the original scenario. You want to give your unit tests the same attention as the rest of the code, which means it's wise to want to keep them DRY too. If you did TDD, it would have surfaced this problem in your design in the first place. If you are not doing TDD, the other risk factor is YAGNI: you do not want to handle every (un)likely scenario in your code. So, for me it would be: if my tests tell me I'm missing something, I double-check that the test is in order and carry on with the change. I'm not sure what to do when the scenarios outgrow my tests, as that is a trap.

0

If I understand the question correctly, you have a business object that uses a model. There is a test of the interaction between the BO and the model (test A), and there is another test that checks the interaction between the model and the database (test B). Test B changes so that the model returns an object, but this change does not affect test A, because in test A the model is mocked.

The only way I can see to make test A fail when test B changes is to not mock the model in test A and combine the two into one test, which is not very good, because you would be testing too much at once (and you are using different frameworks).

If you know about this dependency when writing the tests, I think an acceptable solution would be to leave a comment in each test that describes the dependency and explains how, if one changes, you need to change the other. You will have to touch test B when you refactor anyway, since that test will fail as soon as you make the change, and the comment points you at the other one.

0

Your question is rather confusing, and the amount of text doesn't exactly help.

But the meaning I could draw from a quick read does not make sense to me, because you want changes that are not part of the contract to affect the mock.

Mocking is a tool that lets you focus on testing a specific part of the system. The mocked part will always behave in a defined way, and the test can focus on the specific logic it is supposed to exercise. That way you are not affected by unrelated logic, latency problems, unexpected data, etc.

You will probably have a separate set of tests that test the mocked functionality in a different context.

The point is that there should not be any coupling between the mocked interface and its actual implementation. That just doesn't make sense, since you are mocking the contract and supplying your own implementation of it.

-1

I think your problem violates the Liskov substitution principle:

Subtypes must be substitutable for their base types

Ideally, you have a class that depends on an abstraction. The abstraction says: "In order to work, I need an implementation of this method that takes this parameter, returns this result, and if I call it incorrectly, it throws this exception." All of this is defined by the interface you depend on, either through compile-time constraints or through documentation.

Technically you can depend on the abstraction, but in the scenario you are describing you are not depending on the abstraction; in fact, you are depending on the implementation. You are saying: "if this method changes its behavior, its users will break, and my tests will never know." At the unit-test level, you're right. But at the contract level, changing behavior in this way is wrong, because by changing the method you are clearly breaking the contract between your method and its callers.

Why are you changing the method? Clearly, the callers of the method now need different behavior. So the first thing to change is not the method itself but the abstraction — the contract your clients depend on. The clients change first and start working against the new contract: "OK, my needs have changed; I no longer want this method to return this in this particular scenario — implementers of this interface should return this instead." So you change your interface, change the users of the interface as necessary (which includes updating their tests), and the last thing you change is the actual implementation you hand to your clients. That way you will not run into the error you are talking about.

So, given:

 interface IWorker { void work(); }

 class Worker implements IWorker {
     public void work() { /* real implementation */ }
 }

 class NeedsWork {
     private final IWorker worker;
     NeedsWork(IWorker worker) { this.worker = worker; }
     void doSth() { worker.work(); }
 }

 class AppBuilder {
     NeedsWork getNeedsWork() { return new NeedsWork(new Worker()); }
 }
  • Modify IWorker to reflect the new needs of NeedsWork.
  • Modify doSth so that it works with the new abstraction that meets its new needs.
  • Test NeedsWork and make sure it works with the new behavior.
  • Change all implementations of IWorker that you provide (Worker in this scenario — which is what you are trying to do first now).
  • Test Worker against the new expectations.

It seems scary, but in real life it is trivial for small changes and painful for huge ones — as, in essence, it should be.

-2
