We have a product using a PostgreSQL database server that is deployed to several hundred clients. Some of them have accumulated tens of gigabytes of data over the years. In the next version we will therefore introduce automatic cleanup procedures that gradually archive and DELETE old records during nightly batch jobs.
If I understand correctly, autovacuum will run, analyzing and reorganizing the tuples, so performance will be as if there were fewer records.
Actual disk space will not be released, though, if I understand correctly, since that only happens with VACUUM FULL, which autovacuum never runs.
So, I was thinking of an automatic process that would do this.
I found the bloat query that Nagios' check_postgres uses at http://wiki.postgresql.org/wiki/Show_database_bloat .
Does this look right? Do I understand correctly that if tbloat > 2, I should run VACUUM FULL on that table? And if ibloat is too high, a REINDEX?
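For example, if the wiki's bloat query were saved as a view named bloatview (as I use below), the candidates could be listed like this (a sketch; the 2x table and 1.5x index thresholds are just my guesses, not recommendations, and it obviously needs a live server):

```shell
# Hypothetical one-liners against a saved "bloatview"; thresholds are illustrative.
psql mydatabase -Atc "select tablename from bloatview where tbloat > 2"
psql mydatabase -Atc "select tablename, iname from bloatview where ibloat > 1.5"
```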
Any comments on the following, to be run as a daily batch job?
vacuumdb -z mydatabase    # vacuum with analyze
select tablename from bloatview order by tbloat desc limit 1
vacuumdb -f -t tablename mydatabase
select tablename, iname from bloatview order by ibloat desc limit 1
reindexdb -t tablename -i iname mydatabase
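To make the wiring explicit, here is a dry-run sketch of that sequence as a shell function (the name nightly_maintenance and the sample table/index names are mine; it only echoes the commands, and in the real job the worst table and index would be fetched first via psql -Atc from the two bloatview selects):

```shell
#!/bin/sh
# Dry-run sketch: prints the maintenance commands instead of executing them.
# In the real job, worst_table/worst_index would come from queries like:
#   psql -Atc "select tablename from bloatview order by tbloat desc limit 1" mydatabase
nightly_maintenance() {
    db=$1; worst_table=$2; worst_index=$3
    echo "vacuumdb -z $db"                               # vacuum + analyze the whole database
    echo "vacuumdb -f -t $worst_table $db"               # VACUUM FULL the most bloated table
    echo "reindexdb -t $worst_table -i $worst_index $db" # rebuild the most bloated index
}

# Preview the commands for hypothetical names:
nightly_maintenance mydatabase some_table some_index
```

To actually run it, the echoes would be dropped and errors from vacuumdb/reindexdb checked, since VACUUM FULL takes an exclusive lock on the table.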
Of course, I still need to wrap this in a nice Perl script run from crontab (we use Ubuntu 12), unless PostgreSQL has some kind of built-in scheduler I could use for this.
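For the crontab route, something like this is what I have in mind (a sketch; the script path, file name, and 03:00 start time are made up):

```shell
# /etc/cron.d/pg-maintenance -- hypothetical entry: run the wrapper script
# as the postgres user at 03:00 every night.
0 3 * * * postgres /usr/local/bin/nightly_maintenance.pl mydatabase
```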
Or is this complete overkill? Is there a simpler procedure?
greyfairer