Wow, big question. Not sure a mere mortal can answer it, but I think you dismiss the ability to "swap out your database" too quickly. There are many software packages, both commercial and open source, that offer the ability to work with a variety of DBMSs as the backing store. Managing SQL for deployment on two or more database platforms can be an absolute nightmare, so having something generate your SQL in a predictable way (at least compared to writing it by hand) is a huge advantage. Just playing devil's advocate: on some database platforms the gains in transaction throughput alone can make a change of database prohibitively expensive. Most ORMs will help you with this anyway, although having a rich query API goes a long way when your database needs are fairly complex.
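To make the "swap out your database" point concrete, here is a minimal sketch of how the target platform is mostly a configuration concern in NHibernate. The connection strings are placeholders and the choice of dialects is just an example:

```csharp
using NHibernate.Cfg;
using Environment = NHibernate.Cfg.Environment;

// The same mappings can target a different database platform by
// swapping the dialect and driver; the generated SQL follows suit.
var cfg = new Configuration()
    .SetProperty(Environment.Dialect, "NHibernate.Dialect.MsSql2008Dialect")
    .SetProperty(Environment.ConnectionDriver, "NHibernate.Driver.SqlClientDriver")
    .SetProperty(Environment.ConnectionString, "Server=...;Database=AppDb;..."); // placeholder

// For PostgreSQL, only these properties change, e.g.:
//   Environment.Dialect           -> "NHibernate.Dialect.PostgreSQLDialect"
//   Environment.ConnectionDriver  -> "NHibernate.Driver.NpgsqlDriver"

var sessionFactory = cfg.BuildSessionFactory();
```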
The short answer, in my opinion, is that once your application reaches a certain level of complexity, the cost of satisfying your data-access requirements by hand will be no lower than the cost of the NHibernate learning curve. I can't offer complete answers, but I will try to address your list items.
- When you do more than just CRUD. A good example is needing complex queries across multiple database platforms. In that kind of application you can end up maintaining what are almost two separate codebases (and they become fully detached if you go the stored procedure route), so it can be important to keep all of your code in .NET, not least so you can unit test these queries alongside the rest of your code (see the query sketch after this list).
- Aside from the issues seen in medium-trust environments, I'm not aware of lazy loading being broken at the moment. The only problem with lazy loading, in my eyes, is that you need to understand it to avoid the problems that can arise when retrieving large amounts of data, essentially the N+1 selects problem (illustrated after this list).
- You do not need to work out how to perform batch operations yourself; you just set a configuration value and forget about it (see the sketch after this list). This is a fairly big optimization that NHibernate does for you with minimal effort, and your code can be much cleaner when it deals only with the operations themselves and transaction control.
- Caching returned data can be useful when you display the same data on different pages to different users, or do some non-trivial processing at your domain level. Even in basic scenarios with page output caching, you end up with an edit page, a detail page, and so on each sitting in the cache, whereas caching your data closer to the source means you only need to cache the object once. Caching closer to the source also gives you more protection against serving stale data. A data-level cache can also be shared among several applications, either through services or by pointing NHibernate at an out-of-process store such as memcached or Redis (see the cache configuration sketch after this list). This can be extremely valuable in some environments.
- I'm not sure you need to understand how this works (many times I use open source libraries precisely to shield myself from the implementation details of this kind of thing). But the short answer is that none of it behaves differently in a distributed scenario except caching, and only second-level caching at that. As long as you use a distributed cache provider (or point all of your servers at the same out-of-process cache), you should be fine on this front as well.
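On the first item, here is a minimal sketch of a query kept in .NET so it can be unit tested with the rest of the code. The `Order` entity, its properties, and the `LargeOrdersIn` helper are all hypothetical, invented for illustration:

```csharp
using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

// Hypothetical entity; properties are virtual so NHibernate can proxy it.
public class Order
{
    public virtual int Id { get; set; }
    public virtual decimal Total { get; set; }
    public virtual string Region { get; set; }
}

public static class OrderQueries
{
    // Because the query lives in .NET code, it can be unit tested
    // (e.g. against an in-memory SQLite session) alongside the rest
    // of the application, and NHibernate renders it in whatever SQL
    // dialect the session factory is configured for.
    public static IList<Order> LargeOrdersIn(ISession session, string region)
    {
        return session.Query<Order>()
            .Where(o => o.Region == region && o.Total > 1000m)
            .OrderByDescending(o => o.Total)
            .ToList();
    }
}
```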
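On the N+1 selects problem, a sketch of what it looks like and the usual eager-fetch fix, assuming the hypothetical `Order` entity above has been extended with a lazily loaded `Lines` collection (also invented for illustration):

```csharp
using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

// Hypothetical child entity; assume Order gains:
//   public virtual IList<OrderLine> Lines { get; set; }
public class OrderLine
{
    public virtual int Id { get; set; }
}

public static class NPlusOneDemo
{
    public static void Run(ISession session)
    {
        // N+1: one SELECT for the orders, then one more SELECT per
        // order the first time each lazy Lines collection is touched.
        var orders = session.Query<Order>().ToList();
        foreach (var order in orders)
        {
            var count = order.Lines.Count; // triggers a SELECT per order
        }

        // Asking NHibernate to fetch the collection eagerly collapses
        // this into a single joined query.
        var fetched = session.Query<Order>()
            .FetchMany(o => o.Lines)
            .ToList();
    }
}
```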
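On batching, the configuration value in question is NHibernate's ADO.NET batch size. A minimal sketch; the value 50 is just an example:

```csharp
using NHibernate.Cfg;
using Environment = NHibernate.Cfg.Environment;

// Environment.BatchSize is the "adonet.batch_size" setting: with it
// set, inserts/updates/deletes are sent to the server in groups of
// up to 50 statements instead of one round trip per statement.
var cfg = new Configuration()
    .SetProperty(Environment.BatchSize, "50");
```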
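And on the cache items, a sketch of enabling the second-level cache. The SysCache provider shown here ships in the `NHibernate.Caches.SysCache` package; the memcached and Redis providers mentioned above come from separate third-party `NHibernate.Caches.*` packages, so treat those names as assumptions to verify against whichever package you pick:

```csharp
using NHibernate.Cfg;
using Environment = NHibernate.Cfg.Environment;

var cfg = new Configuration()
    .SetProperty(Environment.UseSecondLevelCache, "true")
    .SetProperty(Environment.UseQueryCache, "true")
    // For a distributed/out-of-process cache, point this at a
    // memcached or Redis provider instead, so every web server
    // reads from the same store. Entities and collections still
    // need a cache usage declared in their mappings.
    .SetProperty(Environment.CacheProvider,
        "NHibernate.Caches.SysCache.SysCacheProvider, NHibernate.Caches.SysCache");
```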
I am only talking about NHibernate, but I think the story is much the same for Hibernate. For larger, more complex applications it can bring many benefits, but there is a lot of additional complexity you have to take on to get them. It is still probably less complicated than rolling your own solution to all the problems *Hibernate solves for you.
You also had a lot of caching questions. I suggest reading up on how the first and second level caches work; I won't explain it here, because it sounds like you are after a deeper understanding than I can fit into this already long answer :)
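As a small taste of the difference, a sketch of the first-level cache in action, reusing the hypothetical `Order` entity from the earlier sketches:

```csharp
using NHibernate;

public static class FirstLevelCacheDemo
{
    public static void Run(ISessionFactory factory)
    {
        using (var session = factory.OpenSession())
        {
            // First-level cache: the session acts as an identity map,
            // so the second Get for the same id returns the same
            // instance with no second database round trip.
            var a = session.Get<Order>(42);
            var b = session.Get<Order>(42);
            // ReferenceEquals(a, b) is true within this session.
        }
        // The second-level cache (when enabled) lives at the session
        // factory level and survives across sessions.
    }
}
```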
AlexCuse