Suppress "duplicate key value that violates unique constraints"

I am developing a Rails 3 application that uses Postgres as its database. I have a table shown below:

Table "public.test" Column | Type | Modifiers ---------------+---------+----------- id | integer | not null some_other_id | integer | Indexes: "test_pkey" PRIMARY KEY, btree (id) "some_other_id_key" UNIQUE CONSTRAINT, btree (some_other_id) 

This has two columns:

  • id, the primary key (automatically generated by Rails)
  • some_other_id, which holds identifiers generated by another system. These identifiers must be unique, so I added a unique constraint to the table.

Now, if I try to insert a row with a duplicate some_other_id, it fails (good), and I get the following output in the Postgres logs:

 ERROR: duplicate key value violates unique constraint "some_other_id_key" 

The problem is that it is fully expected for my application to try to add the same identifier twice, so my logs are spammed with this error message, which causes various problems: the log files take up a lot of disk space, genuine diagnostics get lost in the noise, Postgres has to throw away diagnostics to keep the log files within their size limit, etc.

Does anyone know how I can:

  • Suppress the logging, either for all occurrences of this error, or perhaps by indicating something on the transaction that attempts the INSERT.
  • Use some other Postgres feature to detect the duplicate key rather than attempting the INSERT. I have heard of rules and triggers, but I can't get either to work (though I am no Postgres expert).

Note that any solution must work with Rails, which performs its inserts as follows:

 INSERT INTO test (some_other_id) VALUES (123) RETURNING id; 
3 answers

To avoid the duplicate key error in the first place:

 INSERT INTO test (some_other_id)
 SELECT 123
 WHERE  NOT EXISTS (SELECT 1 FROM test WHERE some_other_id = 123)
 RETURNING id;

I assume id is a serial column that gets its value automatically.

This is subject to a tiny race condition (in the time window between the SELECT and the INSERT). But the worst that can happen is that you get a duplicate key error after all, which is unlikely to ever happen and should not be a problem in your case.
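Side note: on PostgreSQL 9.5 or later (newer than what was current for this question), the race condition disappears entirely, because the server resolves the conflict atomically. A minimal sketch:

 INSERT INTO test (some_other_id)
 VALUES (123)
 ON CONFLICT (some_other_id) DO NOTHING
 RETURNING id;

RETURNING id then yields the new id, or no row at all when the INSERT was skipped.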

You can always resort to raw SQL if your framework restricts you from using the proper syntax.

Or you can create a UDF (user-defined function) for the purpose:

 CREATE FUNCTION f_my_insert(int)
   RETURNS int
   LANGUAGE sql AS
 $func$
 INSERT INTO test (some_other_id)
 SELECT $1
 WHERE  NOT EXISTS (SELECT 1 FROM test WHERE some_other_id = $1)
 RETURNING id;
 $func$;

Call:

 SELECT f_my_insert(123); 
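Note that this SQL variant returns the freshly generated id on success, and NULL when the row already exists, since the skipped INSERT produces no row. A quick sketch, assuming 123 is not in the table yet:

 SELECT f_my_insert(123);  -- inserts the row, returns the new id
 SELECT f_my_insert(123);  -- duplicate: INSERT is skipped, returns NULL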

Or, falling back to the existing id if the row is already there:

 CREATE FUNCTION f_my_insert(int)
   RETURNS int
   LANGUAGE plpgsql AS
 $func$
 DECLARE
    _id int;
 BEGIN
    -- return the existing id if present; otherwise insert and return the new id
    SELECT id INTO _id FROM test WHERE some_other_id = $1;

    IF NOT FOUND THEN
       INSERT INTO test (some_other_id) VALUES ($1)
       RETURNING id INTO _id;
    END IF;

    RETURN _id;
 END
 $func$;
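With this variant, both calls return an id. The same sketch as above, again assuming 123 starts out absent:

 SELECT f_my_insert(123);  -- inserts the row, returns the new id
 SELECT f_my_insert(123);  -- row already exists, returns the existing id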

Again, this leaves a minimal window for a race condition. You can eliminate it at the cost of performance:

  • Is SELECT or INSERT a function prone to race conditions?

You can turn off error logging for a session (or even globally), but this requires superuser privileges:

By running:

 SET log_min_messages = fatal;

only fatal errors are logged until the session (= connection) ends, or until a new SET statement resets the value.
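A session sketch (run as a superuser; the error is still raised to the client, it just never reaches the server log):

 SET log_min_messages = fatal;                   -- requires superuser
 INSERT INTO test (some_other_id) VALUES (123);  -- duplicate: ERROR raised, but not logged
 RESET log_min_messages;                         -- restore the configured default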

But since only a superuser is allowed to change this setting, it is probably not a good solution, because your application's database user would need that privilege, which is a serious security problem.

If you just want to suppress these errors while working in psql, you can do:

 SET client_min_messages TO fatal;

which will last until the end of the session.
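Note that this only changes what psql displays; the INSERT still fails, and the server-side log entry is still written. Also, newer PostgreSQL versions treat client_min_messages settings above ERROR as ERROR, so errors are always shown again. A quick sketch on a version where it applies:

 SET client_min_messages TO fatal;
 INSERT INTO test (some_other_id) VALUES (123);  -- duplicate: no ERROR printed by psql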
