One way is to use pg_dump to create a plain SQL dump, which you can then compress with gzip or a similar tool. This is by far the easiest option: the resulting dump can be fed back to psql to reload the database, and since it is plain text, you can view or even edit the data before restoring, if necessary.
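As a minimal sketch, the dump-and-compress approach might look like this (the database name mydb is a placeholder):

```shell
# Dump the database "mydb" (placeholder name) as plain SQL and compress it
pg_dump mydb | gzip > mydb.sql.gz

# Later, restore by streaming the decompressed SQL back into psql
gunzip -c mydb.sql.gz | psql mydb
```

Because the dump is just SQL text, you can inspect it with `gunzip -c mydb.sql.gz | less` before restoring.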
The next method is to temporarily shut down your database (or, if your file system supports atomic snapshots, that could in theory work without a shutdown) and back up your PostgreSQL data directory.
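A sketch of the offline file-system backup, assuming a systemd-managed server and a data directory under /var/lib/postgresql (both the service name and the PGDATA location vary by installation):

```shell
# Stop the server so the data directory is consistent on disk
sudo systemctl stop postgresql

# Archive the entire data directory (PGDATA path is an assumption)
sudo tar -czf pgdata-backup.tar.gz -C /var/lib/postgresql data

# Start the server again
sudo systemctl start postgresql
```

The important point is that the copy must be taken while the server is stopped (or from an atomic snapshot); copying the files from under a running server produces an inconsistent backup.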
The PostgreSQL documentation also explains how to do continuous archiving and point-in-time recovery, which is by far the most difficult method to set up, but also the most powerful. The idea is that you take a base backup periodically (every day, every few days, or every week) by calling special SQL functions ( pg_start_backup and pg_stop_backup ) and copying your data directory at the file-system level; the database stays up and fully operational while this happens. From then on, the database generates a Write-Ahead Log (WAL) of every change, which it can ship automatically to wherever you want. To restore, you take a base backup, load it into another database instance, and then simply replay the WAL files. Because you can stop replaying at any point, this also lets you recover to a specific point in time without applying all the logs.
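A hedged sketch of the setup described above. The archive destination and data-directory paths are placeholders, and this requires a running server; note that pg_start_backup/pg_stop_backup were renamed pg_backup_start/pg_backup_stop in PostgreSQL 15:

```shell
# postgresql.conf fragment enabling WAL archiving
# (the destination /mnt/archive is a placeholder; %p/%f are expanded by the server)
#   wal_level = replica
#   archive_mode = on
#   archive_command = 'cp %p /mnt/archive/%f'

# Take a base backup while the server keeps running:
psql -c "SELECT pg_start_backup('weekly');"   # pg_backup_start() in PostgreSQL 15+
tar -czf base-backup.tar.gz -C /var/lib/postgresql data
psql -c "SELECT pg_stop_backup();"            # pg_backup_stop() in PostgreSQL 15+
```

PostgreSQL also ships the pg_basebackup utility, which automates the base-backup step; the manual approach above is what the low-level SQL functions are for.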
Adam Batkin