PostgreSQL UPDATE vs INSERT performance

I work with a million-row database, using Python to parse documents and populate a table with terms. INSERT statements work fine, but UPDATE statements become extremely time consuming as the table grows.

It would be great if someone could explain this behaviour and suggest a faster way to perform the updates.

Thanks Arnav

1 answer

It sounds like an indexing problem. Whenever I hear about performance that gets worse as a table grows, I suspect that every statement is scanning the whole table.

Check whether the table has a primary key and suitable indexes. Look at the WHERE clause on your UPDATE and make sure there is an index on those columns, so each row can be located as quickly as possible.
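
For example, if the UPDATE locates rows by a term column, something along these lines would let PostgreSQL jump straight to the matching rows instead of scanning the whole table. The table and column names here (terms, term, frequency) are only placeholders for whatever your schema actually uses:

    -- hypothetical schema: a "terms" table whose rows are updated by "term"
    CREATE INDEX idx_terms_term ON terms (term);

    -- the UPDATE can now use the index to find the row to change
    UPDATE terms
    SET    frequency = frequency + 1
    WHERE  term = 'postgresql';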

UPDATE: Write a SELECT query using the same WHERE clause that you use in the UPDATE and ask the database engine for its execution plan with EXPLAIN. If you see a sequential scan, you will know what to do.
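
As a rough illustration (again with placeholder names), comparing the plan of an equivalent SELECT shows whether the index is actually being used:

    -- ask PostgreSQL for the plan of a SELECT with the same WHERE clause
    EXPLAIN SELECT * FROM terms WHERE term = 'postgresql';

    -- "Seq Scan on terms"                         -> the whole table is read each time
    -- "Index Scan using idx_terms_term on terms"  -> the index is being used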
