Nested synchronized blocks on interned Strings

The title alone sounds like a lot of trouble ahead. Here is my specific case:

This is a travel ticket sales system. Each route has a limited number of tickets, so two people must not both be able to buy the last ticket for a route (the standard scenario). However, there is also a “return ticket” option. So, I use the unique route identifier (provided by the database) to do the following:

    synchronized (bothRoutesUniqueString.intern()) {
        synchronized (routeId.intern()) {
            if (returnRouteId != null) {
                synchronized (returnRouteId.intern()) {
                    return doPurchase(selectedRoute, selectedReturnRoute);
                }
            }
            return doPurchase(selectedRoute, selectedReturnRoute);
        }
    }

The two inner synchronized blocks are meant to ensure that threads block there only if a ticket for that particular route is being purchased by two people at the same time, and not while tickets for two different routes are being purchased. The second inner synchronization is there because, of course, someone may be trying to buy the return route as their own outbound route at the same time.

The outermost synchronized block handles the scenario where two people buy the same combination of tickets in opposite directions. For example, one orders London-Manchester and the other orders Manchester-London. Without the outer synchronized block this can deadlock: one thread holds the London-Manchester lock while waiting for Manchester-London, and the other holds Manchester-London while waiting for London-Manchester.

(The doPurchase() method returns a Ticket or throws an exception if there are no more tickets)

Now, I fully understand that this is an awkward solution, but if it works as expected, it gives me:

  • 10 lines of code to handle the whole complex scenario (and with the appropriate comments it will not be that difficult to understand)
  • no unnecessary locking: threads block only when they have to
  • database agnosticism

I also know that such scenarios are usually handled with either pessimistic or optimistic database locks, and since I use Hibernate, that would not be difficult to implement either.

I think horizontal scaling can be achieved with the above code by using VM clustering. According to the Terracotta documentation, it lets you turn a single-node multi-threaded application into a multi-node one:

Terracotta tracks String.intern() calls and guarantees referential equality for these explicitly interned Strings. Since all references to an interned String object point to a canonical value, reference equality checks will work as expected even for distributed applications.

So now for the questions themselves:

  • Do you see any flaws in the above code (other than its awkwardness)?
  • Is there an applicable class from the java.util.concurrent API to help in this scenario?
  • Why is database locking preferable?

Update: Since most of the answers are concerned with OutOfMemoryError, I put together a test for intern() and memory was not being eaten up. Perhaps the string table gets cleaned up, but that does not matter in my case, since I only need the objects to be identical during the race condition, and recently interned strings should not be evicted at that point:

    System.out.println(Runtime.getRuntime().freeMemory());
    for (int i = 0; i < 10000000; i++) {
        String.valueOf(i).intern();
    }
    System.out.println(Runtime.getRuntime().freeMemory());

P.S. Environment: JRE 1.6

+4
6 answers

Interning is a moderately expensive operation, but there is little chance that it chews up more CPU time than the possible alternatives, and I certainly can't see it taking longer in wall-clock time than querying the database would. The only scenario where I could imagine a database-based implementation winning is if you have so many threads doing this in parallel that you would rather let the database do some of the work, so that your CPU spends its time waiting instead of grinding.

For the limited scope of your intended application, your solution looks brilliant to me.

+2

Why is database locking preferable?

Your in-JVM solution will only work as long as you have a single front end to the database, which means you can only scale vertically (by giving that one front end more processing power). As soon as you move to a web farm or similar setup (horizontal scaling), your solution no longer works.

+8

I would say the main drawback is using interned strings as your synchronization objects.

I believe the intern pool is limited in size, so at some point unique strings will start being evicted from it, and you will no longer be locking on the same objects if your program runs for a long time.

On the other hand, if the intern pool is unbounded in some implementations, it is possible that you will run out of memory.

I would not rely on the internal logic of intern() and would instead create my own object to map the route IDs to their locks (see the sketch below).

Also, there is nothing wrong with nested locks as such. Just make sure you never try to acquire them in the reverse order anywhere else in your code.
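
A minimal sketch of that idea, assuming route IDs are plain Strings; the LockRegistry name and getLock() method are invented for illustration:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    // Hypothetical registry that hands out exactly one canonical lock object
    // per route ID, instead of relying on String.intern() and the JVM string table.
    public final class LockRegistry {

        private final ConcurrentMap<String, Object> locks =
                new ConcurrentHashMap<String, Object>();

        public Object getLock(String routeId) {
            Object lock = locks.get(routeId);
            if (lock == null) {
                Object fresh = new Object();
                // putIfAbsent guarantees every caller ends up with the same instance
                Object existing = locks.putIfAbsent(routeId, fresh);
                lock = (existing != null) ? existing : fresh;
            }
            return lock;
        }
    }

Usage would then be synchronized (registry.getLock(routeId)) { ... }. Note that this map grows with every route ever seen unless you evict entries, which is the same trade-off the intern pool otherwise makes for you implicitly.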

+2

I can offer two tips.

1) A standard way to avoid deadlocks is to order the locks so that they are always acquired in a known order. That avoids the awkwardness of using three locks when only two are required (see the sketch after these tips).

2) Are you asking this question explicitly and only in the context of Terracotta? If so, you do not need to intern your strings. It may not be obvious, but when converting a synchronized(String) block into a Terracotta lock, Terracotta locks on the VALUE of the string, not on its identity. Obviously, if you rely on this behaviour you should comment it, since the interning you have done is required in every case except in the presence of Terracotta, so any plain Java programmer looking at your code would otherwise be confused :)
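
A minimal sketch of tip 1, assuming a Route class with a getId() accessor and a doPurchase() adapted from the question; everything else is invented for illustration. Ordering the two IDs lexicographically means both the London-Manchester buyer and the Manchester-London buyer acquire the locks in the same sequence, so no third lock is needed:

    // Sketch: acquire the per-route locks in a fixed (lexicographic) order.
    Ticket purchase(Route outbound, Route inbound) {
        String a = outbound.getId().intern();
        if (inbound == null) {
            synchronized (a) {
                return doPurchase(outbound, null);
            }
        }
        String b = inbound.getId().intern();
        String first  = a.compareTo(b) <= 0 ? a : b;   // always locked first
        String second = a.compareTo(b) <= 0 ? b : a;   // always locked second
        synchronized (first) {
            synchronized (second) {
                return doPurchase(outbound, inbound);
            }
        }
    }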

+2

A better option would be a singleton RouteProvider that hands out a given route only once, blocking or returning null if the route is already in use. I am thinking of something along the lines of a GenericObjectPool that holds exactly one instance per route.

The Route would hold an Origin and a Destination and have corresponding equals() and hashCode() implementations.

You would take() and release() the route so that the RouteProvider can hand it out again. Remember to account for exceptions and call release() in a finally clause :)

In any case, I would never go for an implementation based on interned strings.
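
A rough sketch of such a RouteProvider, assuming a Route value class with proper equals() and hashCode(); the method names follow the answer, the rest is invented for illustration:

    import java.util.HashSet;
    import java.util.Set;

    // Hypothetical provider: each route can be "taken" by at most one buyer at a time.
    public final class RouteProvider {

        private final Set<Route> inUse = new HashSet<Route>();

        /** Returns the route if it was free, or null if someone else already holds it. */
        public synchronized Route take(Route route) {
            return inUse.add(route) ? route : null;
        }

        /** Makes the route available again; call this from a finally clause. */
        public synchronized void release(Route route) {
            inUse.remove(route);
        }
    }

A caller would take() both routes up front (again in a fixed order, or retrying if one is unavailable), perform the purchase, and release() them in a finally block.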

+1

Another thought on your implementation is that you are using a pessimistic locking scheme.

Whether an optimistic or a pessimistic scheme is better depends on the overall likelihood of collisions.

But given the problem domain, I suspect that a collision within that time window is very unlikely, so a pessimistic locking scheme will be slow compared to an optimistic one.

An optimistic locking scheme is what is typically used by default in a database scenario, for example with Hibernate.

What this means is that you simply attempt the purchase without worrying about locks, and only when the purchase transaction tries to commit do you verify that no one else has made that purchase in the meantime.

This type of behaviour is easy to express in Hibernate, and it will scale much better than your suggestion. In addition, because you can do this easily and in a standard way with Hibernate, your code will be easier to debug, maintain and scale; complex non-standard solutions to problems like this tend to be more error-prone, harder to maintain and usually much harder to scale.

Read this page (especially section 11.3) for more information on Hibernate's concurrency and locking support.
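
As a rough illustration, assuming a Route entity mapped with JPA annotations and an invented ticketsLeft column: adding a version field gives you Hibernate's standard optimistic locking, where a conflicting concurrent update fails at commit (typically as a StaleObjectStateException) instead of blocking up front.

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Version;

    @Entity
    public class Route {

        @Id
        private Long id;

        // Invented column: how many tickets remain for this route
        private int ticketsLeft;

        // Hibernate includes this column in the UPDATE's WHERE clause; if another
        // transaction changed the row first, zero rows match, the commit fails,
        // and the losing purchase can be retried or rejected.
        @Version
        private int version;

        // getters and setters omitted for brevity
    }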

+1
