Is using a MySQL column a smart way to implement global locking?

I am building an application that requires significant image processing. It runs on a distributed cluster with an arbitrary number of render servers that receive image-rendering requests from a RabbitMQ exchange.

Since a render request for an image may arrive while that image is already being rendered, and because I do not want two render servers duplicating the work, I added a boolean column called is_rendering to the MySQL image table.

When a render server receives a render request, it performs the following steps:

  • SELECT ... FOR UPDATE on the image row.
  • If is_rendering == true, abort the render request.
  • Set is_rendering == true and commit the transaction.
  • Render the image and save the thumbnail to the public store.
  • Set is_rendering == false and return.

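The claim-then-render flow above can be sketched as follows. This is a minimal illustration only: sqlite3 stands in for MySQL (a real MySQL version would use `SELECT ... FOR UPDATE` inside the transaction), and the table and column names are assumptions, not the real schema.

```python
import sqlite3

# In-memory stand-in for the MySQL image table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE images (id INTEGER PRIMARY KEY, "
    "is_rendering INTEGER NOT NULL DEFAULT 0)"
)
conn.execute("INSERT INTO images (id) VALUES (1)")
conn.commit()

def try_claim(conn, image_id):
    """Atomically claim the image; return False if another server holds it."""
    with conn:  # BEGIN ... COMMIT; MySQL would SELECT ... FOR UPDATE here
        row = conn.execute(
            "SELECT is_rendering FROM images WHERE id = ?", (image_id,)
        ).fetchone()
        if row is None or row[0]:
            return False  # already being rendered elsewhere: abort
        conn.execute(
            "UPDATE images SET is_rendering = 1 WHERE id = ?", (image_id,)
        )
    return True

def release(conn, image_id):
    """Clear the flag once rendering is done."""
    with conn:
        conn.execute(
            "UPDATE images SET is_rendering = 0 WHERE id = ?", (image_id,)
        )

if try_claim(conn, 1):
    # ... render the image and save the thumbnail here ...
    release(conn, 1)
```

The second claim on an already-claimed row returns False, which is exactly the "abort the render request" branch.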
This definitely works, but I'm worried that these frequent database updates are a little wasteful. I'm also concerned about the edge case where a render server fails mid-render and leaves is_rendering == true, preventing that image from ever being rendered again. The solution I'm considering is to change the is_rendering column from a TINYINT(1) to a DATETIME, storing the time the lock was taken as the "true" value and NULL as "false". A periodic health check could then select all rows whose is_rendering timestamp is older than some threshold and release those stale locks.
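
The DATETIME variant can be sketched the same way. Again sqlite3 stands in for MySQL and all names (column, timeout) are assumptions; the key idea is that a single atomic UPDATE both takes the lock and treats a sufficiently old timestamp as expired.

```python
import sqlite3
import time

# Hypothetical schema: NULL rendering_started_at means "unlocked";
# a timestamp both marks the lock and lets a sweeper expire stale ones.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE images (id INTEGER PRIMARY KEY, rendering_started_at REAL)"
)
conn.execute("INSERT INTO images (id) VALUES (1)")
conn.commit()

LOCK_TIMEOUT = 600  # seconds; locks older than this are considered stale

def try_claim(conn, image_id, now=None):
    """Claim the image if it is unlocked OR its lock has gone stale."""
    now = time.time() if now is None else now
    with conn:
        cur = conn.execute(
            "UPDATE images SET rendering_started_at = ? "
            "WHERE id = ? AND (rendering_started_at IS NULL "
            "OR rendering_started_at < ?)",
            (now, image_id, now - LOCK_TIMEOUT),
        )
    # rowcount == 1 means our UPDATE won the claim
    return cur.rowcount == 1

def release(conn, image_id):
    with conn:
        conn.execute(
            "UPDATE images SET rendering_started_at = NULL WHERE id = ?",
            (image_id,),
        )
```

A separate health-check process becomes optional with this shape: a claim attempt made after LOCK_TIMEOUT has elapsed simply steals the stale lock.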

Is this a reasonable approach to the problem, or are there other, more elegant approaches I should consider?

2 answers

I went ahead and changed the implementation to use a DATETIME column.

I was mostly curious whether this was a poor use of MySQL in general. From what I've researched, I could instead use something like Apache ZooKeeper ( http://zookeeper.apache.org/doc/r3.1.2/recipes.html ) or something like Google's internal Chubby system. Since this is only the first iteration of the service, I'll stick with MySQL.

From the answers here and what I've read elsewhere, using MySQL to implement a global lock is not a terrible idea, and changing the column to a DATETIME, while a bit clunky, is a reasonable way to implement an expiration policy that handles the odd edge case of a machine dying in the middle of a job.

Holding the lock at the transaction level (keeping the SELECT ... FOR UPDATE transaction open for the whole render) would be another approach, but it doesn't make sense when many threads share a small connection pool on one server: it ties up connections unnecessarily, even though it has the built-in benefit that the lock is released automatically when the client connection is lost.


As I understand your problem, your first approach can also work if you follow these rules: 1) your table uses the InnoDB engine, and 2) you wrap the update in a transaction, because the change will be rolled back if anything breaks during the update.

If you can't follow those points, then your second (DATETIME) approach is the better one.

