The Firebird backup/restore cycle is frustrating; is there a way to avoid it?

I use Firebird, and lately my database has been growing rapidly. The application performs a lot of deletes, as well as updates and inserts, and the database file keeps getting bigger very fast. Even after deleting a large number of records the file size does not decrease, and, worse, I get the feeling that queries have actually slowed down a bit. To work around this I set up a daily backup/restore cycle, but it takes so long to complete that it makes Firebird genuinely unpleasant to use.

  • Any ideas for workarounds or solutions to this would be welcome.

  • I am also considering switching to InterBase, because a friend told me he does not have this problem there. Is that true?

+7
3 answers

We have many huge Firebird databases in production and have never had a problem with database growth. Yes, every time a record is deleted or updated, its old version is kept in the file. But sooner or later the garbage collector will remove it. Once the two processes balance each other out, the database file grows only by the size of the new data and indexes.

As a general precaution against runaway database growth, keep your transactions as short as possible. In our applications we use a single READ ONLY transaction for all reads; it stays open for the lifetime of the application. For each batch of insert/update/delete statements we use short, separate transactions.

Slow database operations can also be caused by outdated index statistics. Here is an example of recalculating the statistics for all indices: http://www.firebirdfaq.org/faq167/
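As a rough sketch of what that FAQ article does: Firebird's `SET STATISTICS INDEX` statement recomputes the selectivity of one index, so you run it once per index. The helper below just builds an isql script from a list of index names; in a real session the names would come from a query against `RDB$INDICES`, and the function name and sample index names here are illustrative, not from the original answer.

```python
# Build an isql script that recalculates statistics for each index.
# SET STATISTICS INDEX is standard Firebird DSQL; everything else here
# (function name, sample index names) is a hypothetical sketch.

def recompute_stats_script(index_names):
    """Return an isql script recomputing selectivity for every given index."""
    lines = [f"SET STATISTICS INDEX {name};" for name in index_names]
    lines.append("COMMIT;")
    return "\n".join(lines)

print(recompute_stats_script(["IDX_ORDERS_DATE", "IDX_CUSTOMER_NAME"]))
```

You would feed the resulting script to isql during a quiet period, since recomputing statistics on large indexes does some I/O.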

+9

Check whether you have stuck transactions in your applications. If a transaction is started but never committed or rolled back, the database has to keep a record version for every transaction started after the oldest active one.

You can check the database statistics (with gstat or an external tool) for the oldest transaction and the next transaction. If the gap between these two numbers keeps growing, you have a transaction problem.
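A minimal sketch of that check: `gstat -h` prints a header that includes the "Oldest active" and "Next transaction" counters, and the interesting number is the difference between them. The parser below assumes that header layout; the sample text is illustrative, so run `gstat -h yourdb.fdb` to get real values.

```python
import re

def transaction_gap(gstat_header: str) -> int:
    """Return next_transaction - oldest_active from a `gstat -h` header."""
    oldest = int(re.search(r"Oldest active\s+(\d+)", gstat_header).group(1))
    nxt = int(re.search(r"Next transaction\s+(\d+)", gstat_header).group(1))
    return nxt - oldest

# Illustrative header fragment; a gap this large that keeps growing
# over time is the sign of a stuck transaction.
sample = """
    Oldest transaction      259
    Oldest active           260
    Oldest snapshot         260
    Next transaction        100263
"""
print(transaction_gap(sample))
```

Logging this gap once a day (or graphing it with a monitoring tool) makes the "difference keeps growing" condition from the answer easy to spot.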

There are also tools for monitoring this situation; one that I have used is Sinatica Monitor for Firebird.

Edit: also note that the database file is never shrunk automatically. Parts of it are marked as unused (after a sweep operation) and will be reused. http://www.firebirdfaq.org/faq41/

+7

The space occupied by deleted records is reused once Firebird's garbage collection has processed them. If GC does not happen (transaction problems?), the database will keep growing until GC can do its job.

There is also a known issue: when you do a massive delete in a table (say, millions of records), the next SELECT against that table triggers garbage collection, and performance drops until the GC finishes. The only way around this is to perform massive deletes at a time when the server is not in use, and to run a sweep afterwards, making sure first that there are no stuck transactions.
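The sweep mentioned above is run with Firebird's `gfix -sweep` command-line tool. The small helper below just assembles that command for a scheduled off-hours job; the database path and credentials are placeholders, and wrapping it in Python (rather than a cron one-liner) is my choice, not the original answer's.

```python
import subprocess

def sweep_command(database: str, user: str = "SYSDBA", password: str = "masterkey"):
    """Return the argv list for a manual sweep, suitable for subprocess.run.

    gfix -sweep forces garbage collection of old record versions.
    Database path and credentials here are placeholders.
    """
    return ["gfix", "-sweep", "-user", user, "-password", password, database]

print(sweep_command("/data/mydb.fdb"))

# In the actual off-hours job you would execute it, e.g.:
#   subprocess.run(sweep_command("/data/mydb.fdb"), check=True)
```

Scheduling this right after the mass delete, while the server is idle, keeps the GC cost out of the first daytime SELECT.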

Also keep in mind that if you use “regular” tables to store temporary data (that is, rows that are inserted and deleted over and over), you can end up with a corrupted database in some cases. I strongly recommend switching to the global temporary table feature instead.

+6
