The fastest way to insert data is with the COPY command, but that requires a flat file as input, and I assume creating a flat file is not an option here.
Do not commit too often, and especially do not run this with autocommit enabled. "Tens of thousands" of rows sounds like a single commit at the end would be just right.
If you can convince your ORM to use Postgres' multi-row insert, that will also speed things up.
This is an example of a multi-row insert:
insert into my_table (col1, col2)
values
(row_1_col_value1, row_1_col_value_2),
(row_2_col_value1, row_2_col_value_2),
(row_3_col_value1, row_3_col_value_2)
If you cannot generate the above syntax and you are using Java, make sure you use batched statements instead of executing each insert individually (other database access layers may allow something similar).
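If the ORM cannot be convinced, the multi-row statement can also be generated by hand. A minimal Python sketch (the table and column names are made up; real code should use parameter binding, e.g. psycopg2's `execute_values`, rather than inlining literals like this):

```python
def multi_row_insert_sql(table, columns, rows):
    """Build one multi-row INSERT statement with inlined literals.

    Illustration only: a real application should bind parameters
    (e.g. psycopg2.extras.execute_values) to avoid SQL injection.
    """
    def literal(value):
        # Quote strings and escape embedded single quotes; pass numbers through.
        if isinstance(value, str):
            return "'" + value.replace("'", "''") + "'"
        return str(value)

    col_list = ", ".join(columns)
    value_rows = ",\n".join(
        "(" + ", ".join(literal(v) for v in row) + ")" for row in rows
    )
    return f"insert into {table} ({col_list})\nvalues\n{value_rows}"


sql = multi_row_insert_sql("my_table", ["col1", "col2"], [(1, "a"), (2, "b")])
print(sql)
```

This sends one round trip to the server for the whole batch instead of one per row, which is where most of the saving comes from.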
Edit:
jmz's post inspired me to add something:
You may also see an improvement if you increase wal_buffers to a larger value (e.g. 8 MB) and checkpoint_segments (e.g. 16).
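For reference, those settings go in postgresql.conf (a sketch using the values suggested above; note that checkpoint_segments applies to older PostgreSQL versions, later releases replaced it with max_wal_size):

```
# postgresql.conf fragment - reduce WAL pressure during bulk loads
wal_buffers = 8MB
checkpoint_segments = 16
```

A server restart or reload is needed for these to take effect, depending on the setting.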
a_horse_with_no_name