Thanks for your comments and answers. debugfs seems like an interesting approach for the initial requirements, but it is overkill for the simple, easy solution I was looking for; if I understand correctly, the kernel must be built with debugfs support, and the target directory must live on a mounted debugfs filesystem. Unfortunately, this will not work for my use case: I need a solution that works on existing stock kernels and ordinary directories.
As that seems almost impossible, I managed to negotiate and relax the requirements down to reporting the number of files recently removed from the directory, recursively if possible.
Here is the solution I ended up with:
- A simple find piped to wc to count the initial number of files in the target directory (recursively). The result can easily be saved in a shell or script variable without requiring write access to the file system.
DEL_SCAN_ORIG_AMOUNT=$(find /some/directory -type f | wc -l)
- Then we can run the same command later to get the updated number of files.
DEL_SCAN_NEW_AMOUNT=$(find /some/directory -type f | wc -l)
- Then we can save the difference between the two in another variable and update the original amount.
DEL_SCAN_DEL_AMOUNT=$(($DEL_SCAN_ORIG_AMOUNT - $DEL_SCAN_NEW_AMOUNT)); DEL_SCAN_ORIG_AMOUNT=$DEL_SCAN_NEW_AMOUNT
- Then we can print a simple message if the number of files has decreased.
if [ "$DEL_SCAN_DEL_AMOUNT" -gt 0 ]; then echo "$DEL_SCAN_DEL_AMOUNT deleted files"; fi
- Return to step 2.
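The steps above can be put together into one small POSIX shell script. This is only a sketch: the directory path is the example one from the steps, and the loop/sleep around a single polling pass is my addition.

```shell
#!/bin/sh
# Count regular files under a directory, recursively.
count_files() {
    find "$1" -type f | wc -l
}

DIR=/some/directory   # example path from the steps above

# Step 1: initial count, kept in a shell variable (no write access needed).
DEL_SCAN_ORIG_AMOUNT=$(count_files "$DIR")

# Steps 2-4 as one polling pass; repeat it (step 5), e.g. in a loop
# with a sleep between passes.
DEL_SCAN_NEW_AMOUNT=$(count_files "$DIR")
DEL_SCAN_DEL_AMOUNT=$((DEL_SCAN_ORIG_AMOUNT - DEL_SCAN_NEW_AMOUNT))
DEL_SCAN_ORIG_AMOUNT=$DEL_SCAN_NEW_AMOUNT
if [ "$DEL_SCAN_DEL_AMOUNT" -gt 0 ]; then
    echo "$DEL_SCAN_DEL_AMOUNT deleted files"
fi
```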
Unfortunately, this solution will not report anything if equal numbers of files were created and deleted during the interval, but that is not a big problem for my use case.
To get around this, I would have to store the actual list of files instead of just the count, but I could not manage that using only shell variables. If anyone knows how to do this, it would really help me, as it would meet the initial requirements!
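For what it's worth, one possible workaround is to store the sorted file list in temporary files (somewhere writable such as /tmp, not in the watched tree) and diff successive snapshots with comm(1). This is only a sketch, not tested against the original requirements, and it does need write access outside the monitored directory:

```shell
#!/bin/sh
# Sketch: track the actual file list between passes so that a file
# deleted and replaced by a new one is still detected by name.
DIR=/some/directory   # example path from the steps above
OLD_LIST=$(mktemp)
NEW_LIST=$(mktemp)

# Initial snapshot (sorted, as comm requires sorted input).
find "$DIR" -type f | sort > "$OLD_LIST"

# Later, one polling pass:
find "$DIR" -type f | sort > "$NEW_LIST"
# comm -23 prints lines present only in the first file,
# i.e. files that existed before but are now gone.
comm -23 "$OLD_LIST" "$NEW_LIST"
# The new snapshot becomes the baseline for the next pass.
mv "$NEW_LIST" "$OLD_LIST"
```

Unlike the count-based version, this reports deletions by path, so simultaneous creations cannot mask them.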
I would also welcome any comments on either of the two approaches.