Is there middle ground? (Unit testing and integration testing)

Consider an implementation of the repository pattern (or something similar). I will simplify the example as much as possible:

```csharp
interface IRepository<T>
{
    void Add(T entity);
}

public class Repository<T> : IRepository<T>
{
    public void Add(T entity)
    {
        // Some logic to add the entity to the repository here.
    }
}
```

In this particular implementation, the IRepository interface defines the repository as having one method that adds an object to the repository, making the repository dependent on the generic type T. (The repository must also be implicitly dependent on some type TDataAccessLayer, since abstraction is the whole point of the repository pattern; however, that dependency is not present yet.) At the moment, as I understand it, I have two options: unit testing and integration testing.

Since integration testing involves a larger number of moving parts, I would prefer to unit test first, to at least verify the basic functionality. However, without exposing some kind of "entities" property (of generic type T), I see no way to assert that any logic is actually executed in the Add() method of the Repository implementation.

Is there perhaps a middle ground somewhere between unit testing and integration testing that allows (through reflection or some other means) verifying that certain execution points have been reached in the unit under test?

The only solution I have come up with for this particular problem is to abstract the data access layer further away from the repository, so that the Add() method accepts not only the entity argument but also a data access argument. This seems to defeat the purpose of the repository pattern, though, since the consumer of the Repository would now have to be aware of the data access layer.
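To make that trade-off concrete, here is a minimal sketch of what the further abstraction might look like. The IDataAccess interface, the ListDataAccess fake, and the widened Add() signature are my own illustrative names, not part of the original code:

```csharp
using System.Collections.Generic;

// Hypothetical further abstraction: the data access dependency
// is passed into Add() itself.
public interface IDataAccess<T>
{
    void Insert(T entity);
}

public interface IRepository<T>
{
    // The DAL now leaks into the repository's public contract.
    void Add(T entity, IDataAccess<T> dataAccess);
}

public class Repository<T> : IRepository<T>
{
    public void Add(T entity, IDataAccess<T> dataAccess)
    {
        dataAccess.Insert(entity);
    }
}

// Minimal fake used only to show the coupling at the call site.
public class ListDataAccess<T> : IDataAccess<T>
{
    public readonly List<T> Items = new List<T>();
    public void Insert(T entity) => Items.Add(entity);
}
```

Every consumer now has to construct an IDataAccess<T> just to call Add(), which is exactly the knowledge the repository pattern is supposed to hide. Constructor injection keeps the dependency out of the method signature while remaining testable.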

Regarding the request for examples:

(1) As for unit testing, I am not sure something like a Repository can really be unit tested, given my understanding of current testing methods. Since the repository is an abstraction (a wrapper) around some data access layer, it seems that an integration test would be the only means of verification. (Of course, the repository interface cannot be bound to any specific DAL, but any concrete repository must be bound to a specific DAL implementation, so you must be able to verify that the Add() method actually does some work.)

(2) As for integration testing, the test, as I understand it, would verify that the Add() method does its work by actually calling Add() (which should add an entry to the repository) and then checking whether the data was really added to the repository (or to a database, in a concrete scenario). It might look something like this:

```csharp
[TestMethod]
public void Add()
{
    Repository<Int32> repository = new Repository<Int32>();
    Int32 testData = 10;
    repository.Add(testData);

    // Intended to illustrate the point succinctly. Perhaps the repository Get() method would not
    // be called (and a DBCommand unrelated to the repository issued instead). However, assuming the
    // Get() method to have been previously verified, this could work.
    Assert.IsTrue(testData == repository.Get(testData));
}
```

So in this case, assuming the repository is a wrapper around some database logic layer, the database actually gets hit twice during the test (once for the insert and once for the retrieval).

What I could see being useful is a way of verifying that a certain execution path is taken at runtime. For example: if a non-null reference is passed, verify that execution path A is taken; if a null reference is passed, verify that path B is taken. Additionally, one could check that a specific LINQ query is executed. That way, the database is never hit during the test (allowing prototyping and development of the implementation without an actual DAL in place).
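A sketch of that idea without reflection: inject a test spy that records the calls it receives, so the test can assert which path was taken without the database ever being hit. This assumes a constructor-injected dependency; IDataAccess and SpyDataAccess are my own illustrative names:

```csharp
using System;
using System.Collections.Generic;

public interface IDataAccess<T>
{
    void Insert(T entity);
}

public class Repository<T> where T : class
{
    private readonly IDataAccess<T> _dataAccess;

    public Repository(IDataAccess<T> dataAccess)
    {
        _dataAccess = dataAccess;
    }

    public void Add(T entity)
    {
        if (entity == null)
            throw new ArgumentNullException(nameof(entity)); // execution path B
        _dataAccess.Insert(entity);                          // execution path A
    }
}

// Test spy: records every Insert call so a test can assert on the path taken.
public class SpyDataAccess<T> : IDataAccess<T> where T : class
{
    public readonly List<T> Inserted = new List<T>();
    public void Insert(T entity) => Inserted.Add(entity);
}
```

Asserting on SpyDataAccess.Inserted verifies path A indirectly through the observable call, and asserting that ArgumentNullException was thrown verifies path B, all without an actual DAL in place.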

2 answers

It sounds like you are describing testing implementation details, rather than testing that the implementation fulfills the contract of the interface. It does not matter whether "specific execution points" are reached in the unit under test; all that matters is whether the particular implementation honors the interface contract. It is perfectly acceptable for tests to create a T object for testing purposes; that is what mocks are for.
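One way to read this answer in code: write the test against the IRepository<T> contract only, so the very same check runs against an in-memory fake in a unit test and against the real implementation in an integration test. A minimal sketch, assuming the interface also exposes a Contains query (my addition for illustration, not part of the question's interface):

```csharp
using System.Collections.Generic;

public interface IRepository<T>
{
    void Add(T entity);
    bool Contains(T entity); // hypothetical query method for the contract test
}

// An in-memory implementation used purely for testing against the contract.
public class InMemoryRepository<T> : IRepository<T>
{
    private readonly List<T> _items = new List<T>();
    public void Add(T entity) => _items.Add(entity);
    public bool Contains(T entity) => _items.Contains(entity);
}

// The contract: after Add(x), Contains(x) must be true,
// regardless of which implementation is under test.
public static class RepositoryContract
{
    public static bool Holds<T>(IRepository<T> repository, T sample)
    {
        repository.Add(sample);
        return repository.Contains(sample);
    }
}
```

The same Holds check can later be run against the real, database-backed implementation in an integration test; only the implementation under test changes, not the test body.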


If you want to perform integration testing, you need to use a real database. But if you want a quick check, you can try an in-memory database. The question is what you can and cannot verify. As far as your data access code is concerned, the database is an external system (to stay in unit-testing terms), which you would have to mock. But since you actually want to know whether your data ends up in the database, you need to test against the real database.

If you use a database abstraction such as an ORM mapper, however, you can mock the ORM mapper and check whether at least the mapping works. The ORM mapper itself could then use an in-memory database in your tests, to check that it behaves as expected.
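As a concrete example of this approach, Entity Framework Core ships an in-memory provider that can stand in for the real database during tests. A sketch assuming the Microsoft.EntityFrameworkCore.InMemory package and a hypothetical Person model (neither is from the original answer):

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class PersonContext : DbContext
{
    public PersonContext(DbContextOptions<PersonContext> options) : base(options) { }
    public DbSet<Person> People { get; set; }
}

public static class InMemoryExample
{
    public static int InsertAndCount()
    {
        // Swap the real provider for the in-memory one; the mapping code is unchanged.
        var options = new DbContextOptionsBuilder<PersonContext>()
            .UseInMemoryDatabase("repository-tests")
            .Options;

        using (var context = new PersonContext(options))
        {
            context.People.Add(new Person { Name = "Test" });
            context.SaveChanges();
            return context.People.Count();
        }
    }
}
```

This verifies the mapping and the repository logic, but not provider-specific behavior (transactions, constraints, raw SQL), which is why the real database is still needed for true integration tests.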

If you do not use an ORM mapper, and you build an additional layer of database abstraction just for the sake of having an abstraction, then you are writing code whose only effect is to introduce bugs that your real unit tests then have to uncover. That will not make you more productive.

