(N) Is "session per application" hibernation considered evil for a particular use case?

Well, everyone knows that an application-global session with (N)Hibernate is not recommended. BUT I have a very specific, apparently non-standard use case for which it seems like the perfect solution.

To summarize, my (server) application keeps essentially all of its persistent data permanently in memory and never queries the database during normal operation. The only reason for having a database at all is so that the data survives process restarts. I only want to query the database at application startup to load everything into memory. Realistically, the database is only 5-10 MB.

Now the problem is that if I follow the advice that sessions should be short-lived, I have to merge() all my data back for every business transaction, or somehow track all changes manually, instead of letting NHibernate track changes automatically. That makes persistence hard to get right without incurring significant costs.
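To make the trade-off concrete, here is a minimal Java/Hibernate sketch of the two styles (the NHibernate API is analogous; Order, its status field, and the factory wiring are hypothetical stand-ins):

```java
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class PersistenceStyles {

    // Style 1: short-lived sessions. Detached objects must be merged back
    // by hand for every business transaction.
    static void sessionPerTransaction(SessionFactory factory, Order detachedOrder) {
        try (Session session = factory.openSession()) {
            Transaction tx = session.beginTransaction();
            session.merge(detachedOrder); // easy to forget an object or association
            tx.commit();
        }
    }

    // Style 2: one long-lived session. Attached objects are dirty-checked
    // automatically, so commit flushes whatever changed.
    static void longLivedSession(Session globalSession, Order attachedOrder) {
        Transaction tx = globalSession.beginTransaction();
        attachedOrder.setStatus("SHIPPED"); // no explicit save/merge call needed
        tx.commit();                        // dirty checking flushes this change
    }
}
```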

So my question is, are there any reasons why I should not use the global session for this particular use case?

General arguments against global sessions that I know of:

  • The first-level cache will fill up with the entire database over time => I don't mind, because I actually want all the data in memory anyway!

  • Problems with stale data and concurrency => My application is designed so that all code that can read or modify persistent data runs single-threaded (an intentional design choice), and this is the only application that writes to the database. So this should not be a problem.

  • A session is broken once it throws an exception (for example, a database timeout) => This is the only real problem I see, but it can be solved by discarding the session, creating a new one and reloading all the data (see the sketch after this list). Expensive, but exceptions should be very rare, and would be caused either by a serious bug or by serious infrastructure problems that need fixing as soon as possible anyway.
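A minimal recovery sketch (Java/Hibernate for illustration; reloadEverything stands in for the application's hypothetical startup query):

```java
import org.hibernate.HibernateException;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class GlobalSessionHolder {
    private final SessionFactory factory;
    private Session session;

    GlobalSessionHolder(SessionFactory factory) {
        this.factory = factory;
        this.session = factory.openSession();
    }

    // After any session-level exception: throw the broken session away,
    // open a fresh one and rebuild the in-memory model. Expensive, but
    // this path should be hit very rarely.
    void recover() {
        try {
            session.close();
        } catch (HibernateException ignored) {
            // the session is already broken; there is nothing useful to do
        }
        session = factory.openSession();
        reloadEverything(session); // hypothetical: re-query all data into memory
    }

    private void reloadEverything(Session s) {
        // application-specific startup queries go here
    }
}
```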

So I believe there is no reason not to use a global session for my specific use case. Or is there something important that I'm missing?

Update 1: This is a server application.

Update 2: This does not mean long-lived global transactions. Transactions will still be short-lived: one long-lived session, many short-lived transactions.
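A sketch of what that looks like in code (Java/Hibernate for illustration; the NHibernate equivalent has the same shape):

```java
import org.hibernate.Session;
import org.hibernate.Transaction;

public class BusinessTransactions {

    // The session (and its first-level cache) lives for the whole process;
    // each business operation still gets its own short database transaction.
    static void run(Session globalSession, Runnable businessLogic) {
        Transaction tx = globalSession.beginTransaction();
        try {
            businessLogic.run(); // mutate attached, in-memory entities
            tx.commit();         // short-lived transaction ends here
        } catch (RuntimeException e) {
            tx.rollback();       // after a failure the session itself is suspect
            throw e;
        }
    }
}
```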

+8
java orm hibernate nhibernate transactions
3 answers

If you funnel all the transactions coming from multiple threads through one dedicated single-thread executor, then you really can use one session per application.
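For example, a sketch of such a dedicated single-thread executor (names are illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// The session is confined to the single worker thread, so the fact that it
// is not thread-safe never matters; callers from any thread queue up here.
public class SessionWorker {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public <T> Future<T> submit(Callable<T> sessionWork) {
        return worker.submit(sessionWork);
    }
}
```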

Exceptions can be caused by lock timeouts, server failures or constraint violations, and discarding the backing session throws away all the first-level cache entries, which is bad for your use case. You would then have to re-fetch everything from the database, and since you are using a single background thread, all the other client threads would be blocked in the meantime, which is far from ideal.

I would advise the second-level cache instead. You can configure the 2LC provider to keep everything in memory instead of overflowing to disk. You can load all the data into the second-level cache at application startup and use the NONSTRICT_READ_WRITE cache concurrency strategy to speed up writes (concurrency issues are not a problem for you).

You need to make sure that you use 2LC caching for collections too.
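For illustration, entity and collection caching could be declared like this (Hibernate annotations; Customer, Order and the customer back-reference are hypothetical):

```java
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.OneToMany;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE) // entity data in 2LC
public class Customer {

    @Id
    private Long id;

    // The collection needs its own cache entry; without it, loading
    // this list would still go to the database.
    @OneToMany(mappedBy = "customer")
    @Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
    private List<Order> orders;
}
```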

The easiest approach is then plain session-per-request, since the session is cheap to open anyway, and it will still serve the data from the in-memory 2LC.

You need to run some performance tests to see whether reusing a session actually beats creating a new one for each individual transaction. You may find that this is not your bottleneck anyway, and you should not do any optimization without real evidence.

Another reason for discarding the session is that most database exceptions are unrecoverable anyway. If the server goes down, or the current request causes a constraint violation, retrying on the same session will not fix anything.

+5

One potential drawback I see is that dirty checking can take a long time; you would need to use bytecode enhancement (instrumentation) mode to mitigate this.

In addition, serializing all access to the server can hurt performance much more than recreating objects from the second-level cache (object creation on modern JVMs is very fast). This holds even for single-user applications (the user might start a long-running operation on one screen and want to do something else on another, or the server might kick off a scheduled operation, blocking user access to the server until the work is done).

Third, it can be difficult to reverse this architecture later if you eventually do need to handle concurrent requests.

And you will not avoid hitting the database when executing queries anyway.

Finally, an extended session is a Hibernate feature that is not used as commonly as the classic session-per-request pattern. Although Hibernate is a good piece of software, it is also complex and therefore has many bugs. I would expect more bugs and weaker community support/documentation around the less-used features (as in any other framework).

So my suggestion is to use the second-level cache and handle concurrency issues with optimistic/pessimistic locking, depending on the use case. You can enable caching for all entities by default using the shared cache mode DISABLE_SELECTIVE or ALL, as described in the docs.
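A minimal bootstrap sketch, assuming a JPA persistence unit named "my-unit" and an already-chosen cache provider:

```java
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class CacheBootstrap {
    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        // ALL caches every entity; DISABLE_SELECTIVE caches everything
        // except entities explicitly marked @Cacheable(false).
        props.put("javax.persistence.sharedCache.mode", "ALL");
        props.put("hibernate.cache.use_second_level_cache", "true");
        // A concrete provider (e.g. Ehcache) must also be configured via
        // hibernate.cache.region.factory_class -- provider-specific, assumed here.
        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("my-unit", props);
    }
}
```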

+4

The reasons for not using a global session can be summarized as follows:

* First-level cache: you have to understand that the first-level cache is not just an in-memory cache. Its consequence is that whenever an object is saved, deleted or queried, (N)Hibernate must ensure that, before that event, the database is in a state consistent with your memory. This is flushing. However, (N)Hibernate has one distinctive feature called transparent persistence: unlike some other ORM frameworks (such as Entity Framework), you do not tell it what is dirty; NHibernate figures that out for you. The way this works is somewhat expensive: it compares the previous state of every tracked object with its current state and tries to determine what has changed. So if your first-level cache is full of entities, performance degrades. There are two ways around this problem (both sketched after this list):

1) using session.Flush(); session.Clear();

2) using a stateless session.

In the first case, all your pending changes are written to the database, and after that you can safely clear the session. However, the data is still held in memory, even after the session is cleared, until the transaction completes, because the transaction could still be voted down and rolled back at the end. (I can provide more information about this if required.)

In the second case, with a stateless session, (N)Hibernate behaves like a micro-ORM: there is no change tracking at all. It is lightweight, but it puts more responsibility on you.

* Session-related exceptions: another important reason not to use an application-wide session is that whenever an exception related to the session or the database is raised (for example, a unique constraint violation), your session is doomed. You cannot work with it anymore, because it is in a state inconsistent with the database. So your global session has to be recreated, and that brings more complications.

* Thread safety: neither the session nor any ADO.NET construct is thread-safe. If you use a global session object from multiple threads, you have to provide thread safety yourself. That can be very difficult.
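Here is a sketch of the two first-level-cache workarounds from the first bullet (Java/Hibernate for illustration; NHibernate's ISession.Flush()/Clear() and IStatelessSession are the direct counterparts):

```java
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.StatelessSession;
import org.hibernate.Transaction;

public class FirstLevelCacheWorkarounds {

    // 1) Push pending changes to the database, then empty the first-level
    //    cache so dirty checking stops scanning an ever-growing entity set.
    static void flushAndClear(Session session) {
        session.flush();
        session.clear();
    }

    // 2) A stateless session: no first-level cache, no dirty checking.
    //    Every change must be issued explicitly, micro-ORM style.
    static void statelessInsert(SessionFactory factory, Object entity) {
        StatelessSession ss = factory.openStatelessSession();
        try {
            Transaction tx = ss.beginTransaction();
            ss.insert(entity); // nothing is tracked for you
            tx.commit();
        } finally {
            ss.close();
        }
    }
}
```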

+2
