Postgres advisory lock inside function allows concurrent execution

I ran into a problem with a function that needs serialized access under certain circumstances. This seemed like a good case for advisory locks. However, under fairly heavy load I found that access is not serialized, and I see concurrent executions of this function.

The purpose of this function is to provide "inventory control" for an event: it should limit concurrent ticket purchases so that the event is not oversold. These are the only advisory locks used in the application/database.

I find that sometimes an event ends up with more tickets than eventTicketMax, which should not be possible with the advisory locks in place. When testing at low volume (or with artificial delays, such as pg_sleep after acquiring the lock), everything works as expected.

CREATE OR REPLACE FUNCTION createTicket(userId int, eventId int, eventTicketMax int)
RETURNS integer AS $$
DECLARE
    insertedId int;
    numTickets int;
BEGIN
    -- first get the event lock
    PERFORM pg_advisory_lock(eventId);

    -- make sure we aren't over ticket max
    numTickets := (SELECT count(*) FROM api_ticket
                   WHERE event_id = eventId AND status <> 'x');

    IF numTickets >= eventTicketMax THEN
        -- raise an exception if this puts us over the max
        -- and bail
        PERFORM pg_advisory_unlock(eventId);
        RAISE EXCEPTION 'Maximum entries number for this event has been reached.';
    END IF;

    -- create the ticket
    INSERT INTO api_ticket (user_id, event_id, created_ts)
    VALUES (userId, eventId, now())
    RETURNING id INTO insertedId;

    -- update the ticket count
    UPDATE api_event SET ticket_count = numTickets + 1 WHERE id = eventId;

    -- release the event lock
    PERFORM pg_advisory_unlock(eventId);

    RETURN insertedId;
END;
$$ LANGUAGE plpgsql;

Here is my environment setup:

  • Django 1.8.1 (django.db.backends.postgresql_psycopg2 with CONN_MAX_AGE 300)
  • PGBouncer 1.7.2 (session mode)
  • Postgres 9.3.10 on Amazon RDS

Variations I already tried:

  • Setting CONN_MAX_AGE to 0
  • Removing PgBouncer and connecting directly to the database

In my testing, I noticed that when an event was oversold, the tickets were purchased from different web servers, so I don't think a shared session is to blame, but I can't say for sure.

1 answer

As soon as PERFORM pg_advisory_unlock(eventId) executes, another session can acquire the lock. But session #1's INSERT has not been committed yet, so it is not visible to session #2's COUNT(*), resulting in over-reservations.
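To make the race concrete, here is one possible interleaving (the session numbering and the count of 9 against a max of 10 are illustrative, not from the original post):

```sql
-- Session 1                              -- Session 2
-- BEGIN (implicit, via the app)
-- pg_advisory_lock(42);
-- SELECT count(*) ... ;   -- returns 9, max is 10
-- INSERT INTO api_ticket ...;  -- not yet committed
-- pg_advisory_unlock(42);
--                                        -- pg_advisory_lock(42);  -- succeeds now
--                                        -- SELECT count(*) ...;   -- still 9: session 1's
--                                        --                        -- INSERT is uncommitted
--                                        -- INSERT INTO api_ticket ...;
-- COMMIT;                                -- COMMIT;
-- Result: 11 tickets for a max of 10.
```

The unlock happens inside the function, but the surrounding transaction commits later, so there is a window where the lock is free while the new row is still invisible.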

If you keep this locking strategy, you should use transaction-level advisory locks ( pg_advisory_xact_lock ) instead of session-level ones. These locks are released automatically at COMMIT (or ROLLBACK), so the lock cannot be dropped before the INSERT becomes visible to other sessions.
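A minimal sketch of the same function rewritten with a transaction-level lock (same table and column names as in the question; the explicit unlock calls disappear because the lock is held until the transaction ends):

```sql
CREATE OR REPLACE FUNCTION createTicket(userId int, eventId int, eventTicketMax int)
RETURNS integer AS $$
DECLARE
    insertedId int;
    numTickets int;
BEGIN
    -- held automatically until COMMIT/ROLLBACK; no manual unlock needed
    PERFORM pg_advisory_xact_lock(eventId);

    SELECT count(*) INTO numTickets
    FROM api_ticket
    WHERE event_id = eventId AND status <> 'x';

    IF numTickets >= eventTicketMax THEN
        -- the exception rolls back the transaction, which releases the lock
        RAISE EXCEPTION 'Maximum entries number for this event has been reached.';
    END IF;

    INSERT INTO api_ticket (user_id, event_id, created_ts)
    VALUES (userId, eventId, now())
    RETURNING id INTO insertedId;

    UPDATE api_event SET ticket_count = numTickets + 1 WHERE id = eventId;

    RETURN insertedId;
END;
$$ LANGUAGE plpgsql;
```

A side benefit: the error path gets simpler, since a raised exception rolls back the transaction and releases the lock in one step.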

Also note that the transaction isolation level matters. Postgres defaults to READ COMMITTED , and a quick look suggests Django does not change that. That is what you need here: your COUNT(*) must reflect the latest committed state, not a snapshot taken when your transaction began (as it would under REPEATABLE READ ).
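You can confirm what your connections actually use with a quick check (run through the same pool/connection path the app uses, since PgBouncer or Django settings could in principle differ from the server default):

```sql
-- Server-wide default for new transactions
SHOW default_transaction_isolation;

-- Isolation level of the current transaction
SELECT current_setting('transaction_isolation');
```

If either reports something other than read committed, that would be worth fixing before anything else.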

