I am developing a Rails 3 application that uses Postgres as its database. I have a table shown below:
```
     Table "public.test"
    Column     |  Type   | Modifiers
---------------+---------+-----------
 id            | integer | not null
 some_other_id | integer |
Indexes:
    "test_pkey" PRIMARY KEY, btree (id)
    "some_other_id_key" UNIQUE CONSTRAINT, btree (some_other_id)
```
This has two columns:
- id, the primary key (automatically generated by Rails)
- some_other_id, which holds keys generated by another system. These identifiers must be unique, so I added a unique constraint to the table.
Now, if I try to insert a row with a duplicate some_other_id, the insert fails (as it should), and the following appears in the Postgres logs:
```
ERROR: duplicate key value violates unique constraint "some_other_id_key"
```
The problem is that my application is fully expected to try to add the same identifier twice, so my logs get spammed with this error message. That causes various problems: the log files take up a lot of disk space, genuine diagnostics get lost in the noise, Postgres has to throw away older diagnostics to keep the log files within their size limit, etc.
Does anyone know how I can either:

- Suppress the log message, either by suppressing all logging of this particular error or by indicating something on the transaction that attempts the INSERT, or
- Use some other Postgres feature to detect the duplicate key instead of attempting the INSERT. I have heard of rules and triggers, but I couldn't get either to work (though I am no Postgres expert).
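Since rules came up, here is a minimal sketch of the rule-based approach (the rule name `test_ignore_dup` is illustrative, not from the question): a conditional `DO INSTEAD NOTHING` rule turns a duplicate INSERT into a silent no-op instead of an error. Be aware, though, that Postgres rejects `INSERT ... RETURNING` on a table that has only a conditional `DO INSTEAD` rule, so this may not combine cleanly with the statement Rails generates.

```sql
-- Sketch: silently ignore inserts whose some_other_id already exists.
-- The rule name is an assumption, not taken from the question.
CREATE RULE test_ignore_dup AS
    ON INSERT TO test
    WHERE EXISTS (SELECT 1 FROM test WHERE some_other_id = NEW.some_other_id)
    DO INSTEAD NOTHING;
```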
Note that any solution should work with Rails, which makes it insert as follows:
```sql
INSERT INTO test (some_other_id) VALUES (123) RETURNING id;
```
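One possible alternative that stays compatible with `RETURNING id` is to guard the insert itself with `INSERT ... SELECT ... WHERE NOT EXISTS`. This is only a sketch: it assumes the application can override the SQL Rails generates, and a concurrent transaction can still race past the check and hit the constraint.

```sql
-- Insert only when the identifier is not already present; on a
-- duplicate, the statement simply returns zero rows instead of
-- raising an error (and logging it).
INSERT INTO test (some_other_id)
SELECT 123
WHERE NOT EXISTS (SELECT 1 FROM test WHERE some_other_id = 123)
RETURNING id;
```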
sql ruby-on-rails duplicates postgresql sql-insert
Alex hockey