When should I choose NHibernate over other ORMs?

I confess I don't completely grok *Hibernate.

Since micro ORMs like Dapper can satisfy most data-access needs, what scenarios call for a big gun like NHibernate? What are some examples of situations where NHibernate shines? To be clear, I don't count the "ability to swap out your database without changing code" as much of an advantage. In eight years of programming I have never had to do it, and it strikes me as designing up front for a need that never materializes.

I'm open to any thoughtful answer, but here are a few examples of the questions I have:

  • When does the query API justify the extra mapping work compared to something like Dapper?
  • How can lazy loading be used so that it saves developer effort and just works?
  • When is it worth the time to figure out how to leverage batch operations?
  • In what scenarios is its caching system better than, say, page output caching? Is that only true in uncommon environments?
  • How would a mere mortal like me come to understand how NHibernate behaves in a distributed environment? Consider mixing second-level caching, batching, and sessions, and how all of that works behind load-balanced web servers.
2 answers

Wow, big question. Not sure a mere mortal can answer it. But I think you dismiss the ability to "swap out your database" too quickly. There are many software packages, both commercial and open source, that offer support for multiple DBMSs as backing storage. Maintaining SQL for deployment on two or more database platforms can be an absolute nightmare, so having your SQL generated for you in a predictable way (at least compared to writing it by hand) is a huge advantage. And just to play devil's advocate: on some database platforms I have watched transaction throughput grow to the point where the cost of the chosen database became prohibitive. Most ORMs will help you with that anyway, although having a rich query API goes a long way when your database needs are quite complex.

The short answer, in my opinion, is that once your application reaches a certain level of database complexity, the cost of meeting your requirements without NHibernate exceeds the cost of its learning curve. I can't offer complete answers, but I will try to address your list items.

  • When you're doing more than just CRUD. A good example is needing complex queries against multiple database platforms. In that kind of application you would otherwise end up maintaining two separate query codebases (and they really do diverge if you go the stored-procedure route), and keeping all your query code in .NET can matter, for example so you can unit test those queries alongside the rest of your code.
  • Aside from issues seen in medium-trust environments, I'm not aware of lazy loading failing to "just work" these days. The only catch with lazy loading, in my eyes, is that you need to know about it in order to avoid the problems that can arise when fetching large amounts of data, chiefly the SELECT N+1 problem.
  • You don't need to figure out how to perform batch operations; you just set a configuration value and forget about it. That is a pretty big optimization NHibernate performs for you with minimal effort, and your code can stay much cleaner when it deals only with the operations and transaction control.
  • Caching fetched data can be useful when you render your pages differently per user or perform non-trivial processing in your domain layer. Even in basic scenarios, with page output caching you end up caching the edit page, the detail page, and so on separately, while caching the data closer to the source means you cache each object only once. Caching closer to the source also gives you better protection against serving stale data. A data-level cache can also be shared among several applications, either through services or by pointing NHibernate at an out-of-process store such as memcached or Redis. That can be extremely valuable in some environments.
  • I'm not sure you need to understand how it works (much of the time I use open source libraries precisely to shield myself from that kind of implementation detail). But the short answer is that none of it behaves differently in a distributed scenario except caching, and only second-level caching at that. As long as you use a distributed cache provider (or point all of your servers at the same out-of-process cache), you should be fine on that front too.
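
To make the out-of-process caching point concrete, a second-level cache is switched on through configuration properties. The property names below are real NHibernate settings; the Redis provider class name is an assumption standing in for whichever NHibernate.Caches package you actually use, so treat this as a sketch, not a drop-in config.

```csharp
using NHibernate.Cfg;

var cfg = new Configuration();
cfg.SetProperty(Environment.UseSecondLevelCache, "true"); // "cache.use_second_level_cache"
cfg.SetProperty(Environment.UseQueryCache, "true");       // optionally cache query results too
// Assumed provider name: substitute the provider class from whatever
// NHibernate.Caches package matches your out-of-process store.
cfg.SetProperty(Environment.CacheProvider,
    "NHibernate.Caches.Redis.RedisCacheProvider, NHibernate.Caches.Redis");
```

With every web server pointed at the same store, the "distributed" concern largely reduces to choosing and configuring that provider.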

I've only been talking about NHibernate, but I believe the story is the same for Hibernate. Larger, more complex applications can reap many benefits from it, but there is a lot of additional complexity you have to take on to get those benefits. It is still probably less complicated than rolling your own solution to every problem *Hibernate solves for you.

You also had a lot of caching questions. I suggest reading up on how the first- and second-level caches work. I won't explain it here, because it sounds like you're after a deeper understanding than I can fit into this already long answer :)


NHibernate is big and powerful, but you don't need to know everything about it to succeed. To answer your questions:

  • None of the .NET micro ORMs that I know of have LINQ support; instead they rely on mixing SQL strings into your code. Building queries with LINQ gives you type safety, compile-time checking, and excellent refactoring support. Try refactoring code with thousands of queries in it when every query is a SQL string... yikes! And by refactoring I mean something as simple as adding new columns or new tables, which happens all the time in a corporate environment. Refactoring strings is possible, and it's what people who rely on stored procedures still have to do, but I certainly wouldn't want to do it if I had type safety at my disposal.
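
As a sketch of that type-safety point, here is the kind of compile-time-checked query meant here. The Product entity and its properties are hypothetical stand-ins for your own mapped classes.

```csharp
using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq; // supplies the Query<T>() extension method

public class Product
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual decimal Price { get; set; }
}

public static class ProductQueries
{
    public static IList<Product> Expensive(ISession session) =>
        session.Query<Product>()            // IQueryable<Product> over the mapped entity
               .Where(p => p.Price > 100m)  // checked against the entity at compile time
               .OrderBy(p => p.Name)
               .ToList();
}
```

Rename Price in a refactor and every query that uses it either updates along with it or fails to compile, which is exactly what a SQL string cannot give you.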

  • With lazy loading, the main thing to keep in mind is to avoid creating a SELECT N+1 scenario. Any time you have code that runs a foreach loop over a domain object's children, make sure the query that populated the object(s) used the .Fetch() method, which simply generates a JOIN in the SQL and populates the child objects up front. Otherwise, each time you iterate over an object and touch its child collection, the ORM has to issue another SELECT to lazily fetch the data. In short, in NHibernate lingo, eager fetching is your friend.
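
A minimal sketch of that eager-fetch advice, assuming hypothetical Order/OrderLine entities where Order exposes a Lines collection:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

public class OrderLine
{
    public virtual string ProductName { get; set; }
}

public class Order
{
    public virtual int Id { get; set; }
    public virtual ISet<OrderLine> Lines { get; set; }
}

public static class OrderReport
{
    public static void Print(ISession session)
    {
        // Without Fetch, touching order.Lines inside the loop would issue one
        // extra SELECT per order (the SELECT N+1 problem). Fetch() folds the
        // child collection into the original query as a JOIN instead.
        var orders = session.Query<Order>()
                            .Fetch(o => o.Lines)
                            .ToList();

        foreach (var order in orders)
            foreach (var line in order.Lines)   // already populated, no extra query
                Console.WriteLine(line.ProductName);
    }
}
```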

  • Batching is as easy as NHibernate pie. In your NHibernate configuration, set the batch size and you're done. After that, if you need to, you can adjust the batch size at runtime for specific sessions.
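
Concretely, that one-time setup might look like the sketch below; the property name is NHibernate's real "adonet.batch_size" setting, while the sizes chosen are arbitrary examples.

```csharp
using NHibernate;
using NHibernate.Cfg;

var cfg = new Configuration();
// Enable ADO.NET batching once, globally ("adonet.batch_size").
cfg.SetProperty(NHibernate.Cfg.Environment.BatchSize, "25");
var sessionFactory = cfg.BuildSessionFactory();

// ...and, when a particular workload warrants it, tune it per session:
using (ISession session = sessionFactory.OpenSession())
{
    session.SetBatchSize(100); // e.g. a bulk import; 100 is an arbitrary choice
    // Inserts/updates flushed through this session are now sent in batches.
}
```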

  • I have never used second-level caching. I work in a large corporate environment and our applications are plenty fast without it. NHibernate's first-level cache, which needs no configuration, is best thought of as change tracking. Basically, NHibernate keeps an internal dictionary of which objects it has already fetched from the database and which objects are waiting to be saved or updated. The first-level cache is something I never have to think about, but in my opinion it's good to know at a basic level how it works.
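
That change-tracking behavior can be seen in a small sketch: within one session, asking for the same row twice yields the same tracked instance and only one database hit (the Product entity and the id 42 are hypothetical).

```csharp
using (var session = sessionFactory.OpenSession())
{
    var first  = session.Get<Product>(42);  // hits the database
    var second = session.Get<Product>(42);  // served from the first-level cache
    // Within a single session both variables point at the same tracked object,
    // i.e. ReferenceEquals(first, second) is true. A new session starts with
    // an empty first-level cache of its own.
}
```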

  • I currently work in a corporate environment, and we have all kinds of applications using NHibernate; some are pretty simple and others use every powerful feature NHibernate offers. What I have generally seen in my experience is that not every team member needs to be an NHibernate expert. Typically one to three developers will be very knowledgeable, and everyone else doesn't worry about it and just creates their entities and mappings and gets on with their programming. Once the infrastructure is in place and your organization has settled on the patterns it wants to use, everything is usually a breeze.

Additional thoughts:

One place NHibernate really shines is its ability to map whatever kind of crazy database design you throw at it. Now, I'm not saying it will be easy to map some crazy database design assembled in the early '90s, where you need to join a stored procedure and another table together, but it is possible. I have done some crazy mappings in my time. A few I even thought were impossible, because the database simply wasn't designed to do what we wanted, but each time, with persistence, I still managed to pull it off; NHibernate has incredible flexibility for mapping both good and bad database designs.
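
To give a flavor of that flexibility, here is a mapping-by-code sketch that bends a hypothetical legacy table layout onto a clean object model. All table and column names are invented for illustration.

```csharp
using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

public class Address
{
    public virtual string Street { get; set; }
    public virtual string City { get; set; }
}

public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual Address Address { get; set; }
}

public class CustomerMap : ClassMapping<Customer>
{
    public CustomerMap()
    {
        Table("CUST_MASTER_93");                             // legacy table name
        Id(x => x.Id, m => m.Column("CUST_NO"));
        Property(x => x.Name, m => m.Column("CUST_NM_TXT"));
        // Fold two flat legacy columns into one value object:
        Component(x => x.Address, c =>
        {
            c.Property(a => a.Street, m => m.Column("ADDR_LN_1"));
            c.Property(a => a.City,   m => m.Column("ADDR_CITY"));
        });
    }
}
```

The object model never has to know that its Address was scattered across flat columns in a table named after the year it was built.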

With micro ORMs you usually end up with a ton of SQL strings embedded in your code. How is that considered clean and efficient? It seems like the same thing people used to do when they embedded stored procedure calls in their code. I do use a micro ORM in some of my projects where it makes sense, but usually only when I'm hitting a single table for some simple data with no complicated WHERE clauses.
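
For contrast, that simple single-table case is where a micro ORM is genuinely pleasant. A Dapper sketch, where the connection string, table, and Product type are hypothetical:

```csharp
using Dapper;
using Microsoft.Data.SqlClient;

using var conn = new SqlConnection(connectionString);
// The SQL is a plain string: quick and readable for a simple lookup,
// but invisible to the compiler and to refactoring tools.
var product = conn.QuerySingleOrDefault<Product>(
    "SELECT Id, Name, Price FROM Product WHERE Id = @id",
    new { id = 42 });
```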

To be fair, I'm one of those people who has spent a lot of time learning the ins and outs of NHibernate, not because my work required it, but because I wanted to. I work with plenty of people who use NHibernate daily and don't fully get it. But then again, they don't need to. You just need to know a few basic things and you're good to go.

Hope this helps.

