So, I am becoming more and more absorbed in test-driven development, and the more code I write with TDD in mind, the more I find myself questioning how far my testing should go. I would like to set a personal policy on how many unit tests I should write for my own projects, and I was wondering if I could get some advice on how you all feel about this and what you do in practice.
Here is an example of a situation I am currently facing ...
I have three classes ...
using System.Collections.Generic;
using System.Linq;

public class User
{
    public string Username { get; set; }
    public List<Favorite> Favorites { get; set; }
}

public class Favorite
{
    public string Username { get; set; }
    public int Rank { get; set; }
}

public class UserManager
{
    public List<Favorite> GetTop5(User user)
    {
        var qry = from fav in user.Favorites.OrderBy(f => f.Rank)
                  select fav;

        return qry.Take(5).ToList();
    }
}
I have a data access layer for the User class, and I already have a GetUser test set up for it. As you can see, in my business logic I have a UserManager.GetTop5() method that returns the top 5 favorites for the user I just pulled from the database. This method is very simple and does not currently touch any external resources or dependencies.
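If I did write a test for it, I imagine it would look something like this (just a rough sketch using xUnit, not something I have actually added to the project yet):

using System.Linq;
using Xunit;

public class UserManagerTests
{
    [Fact]
    public void GetTop5_ReturnsFiveLowestRankedFavorites()
    {
        // Arrange: a user with ten favorites whose ranks are deliberately out of order
        var user = new User
        {
            Username = "alice",
            Favorites = Enumerable.Range(1, 10)
                                  .Reverse()
                                  .Select(i => new Favorite { Username = "alice", Rank = i })
                                  .ToList()
        };
        var manager = new UserManager();

        // Act
        var top5 = manager.GetTop5(user);

        // Assert: exactly five results, ordered by ascending rank
        Assert.Equal(5, top5.Count);
        Assert.Equal(new[] { 1, 2, 3, 4, 5 }, top5.Select(f => f.Rank));
    }
}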
So my question is: would you go ahead and write a test like that for this GetTop5 method, even though there is very little in it that could fail?
Would you write the test anyway in case the method changes in the future? Or do you think a test is overkill here?