I have an “out of the box” answer for you, but I’m not sure how much of it you can implement in your situation.
If you do not control the dumping process: since this is a large recovery file (a dump?) created in an exceptional case, why not scan the file for zero bytes at low priority immediately after it is written, and mark it somehow for faster identification later? (Or you could zip it and parse/scan the zip file later.)
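As a minimal sketch of that idea (assuming a POSIX-like system and a hypothetical sidecar `.marker` file as the "mark"):

```python
import os
import sys

CHUNK_SIZE = 1024 * 1024  # scan in 1 MiB chunks to keep memory use low

def is_all_zero(path: str) -> bool:
    """Return True if every byte in the file is 0."""
    zero_chunk = bytes(CHUNK_SIZE)
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                return True
            # compare against a zero block of the same length
            if chunk != zero_chunk[:len(chunk)]:
                return False

def mark_dump(path: str) -> None:
    """Scan the dump at low priority and record the verdict in a sidecar file."""
    try:
        os.nice(19)  # drop our CPU priority so the scan stays in the background
    except (AttributeError, OSError):
        pass  # os.nice is POSIX-only; skip elsewhere
    verdict = "EMPTY" if is_all_zero(path) else "HAS_DATA"
    with open(path + ".marker", "w") as m:  # hypothetical naming convention
        m.write(verdict + "\n")

if __name__ == "__main__":
    mark_dump(sys.argv[1])
```

Later, whatever loads the dump can check the tiny marker file first instead of re-reading the whole dump.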
Or, if you do control the dumping process (a slow process anyway, which you should own in any case): why not indicate at the end of the dump file (or seek back and write at its beginning) whether the file is filled with zeros or actually holds reliable data? Since you wrote it, you know what is in it, and you don't have to pay for the I/O twice.
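A sketch of that variant, assuming the writer reserves a small header at the start of the file (the 8-byte layout and `DMP1` magic here are purely illustrative):

```python
import struct

# Hypothetical header: 4-byte magic + 4-byte flag (1 = valid data, 0 = zero-filled).
MAGIC = b"DMP1"
HEADER = struct.Struct("<4sI")

def finish_dump(path: str, has_valid_data: bool) -> None:
    """After the dump is written, seek back and record the verdict in the header."""
    with open(path, "r+b") as f:
        f.seek(0)
        f.write(HEADER.pack(MAGIC, 1 if has_valid_data else 0))

def dump_is_usable(path: str) -> bool:
    """Read only the header; no full scan of the dump is needed."""
    with open(path, "rb") as f:
        magic, flag = HEADER.unpack(f.read(HEADER.size))
    return magic == MAGIC and flag == 1
```

The same trick works as a trailer appended at the end of the file if you cannot reserve space at the beginning.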
The goal is to make the read much faster by shifting that work to a different point in time, because when a dump occurs the operator is unlikely to want to wait for it to load.