I was wondering if there is a best practice for checking whether an upload to your FTP server completed successfully.
The system I'm working with has a download directory that contains subdirectories for each user into which the files are downloaded.
Files in these directories are temporary; they are deleted after they are accessed.
The system walks each of these subdirectories and the new files in them, and for each file it checks whether the file has been modified within the last 10 seconds. If it has not changed in that window, the system assumes the file was transferred successfully.
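That check amounts to comparing a file's mtime against a threshold. A minimal sketch of the idea (the function name and 10-second threshold are illustrative, not the system's actual code):

```python
import os
import time

STABLE_SECONDS = 10  # threshold used by the existing system


def looks_complete(path, stable_seconds=STABLE_SECONDS):
    """Heuristic: treat a file as fully transferred if its mtime has
    not changed for `stable_seconds`. This is exactly the guess that
    goes wrong for interrupted or resumable transfers: a stalled
    upload looks identical to a finished one."""
    try:
        mtime = os.path.getmtime(path)
    except OSError:
        return False  # file vanished (deleted after access)
    return (time.time() - mtime) >= stable_seconds
```

The sketch also shows why the heuristic cannot distinguish "done" from "client disconnected and may resume later".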
I do not like how the system currently handles this: if the transfer was incomplete, it will try to process the file and fail, instead of waiting and letting the user resume the transfer until it finishes. That may be acceptable for small files that transfer quickly, but for large files I would like resuming to work.
I also don't like the directory and file loops: even when nothing is happening, the polling keeps CPU usage high. So I implemented pyinotify to trigger an action when a file is written. I have not actually read its source code; I can only assume it is more efficient than the current implementation (which does more than I described).
However, I still need to check if the file was downloaded successfully.
I know I can parse xferlog to get all complete transfers. For instance:
awk '($12 ~ /^i$/ && $NF ~ /^c$/){print $9}' /var/log/proftpd/xferlog
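The same filter can be expressed in the application's language. A sketch assuming the standard whitespace-separated xferlog layout (filename is field 9, direction is field 12, `i` meaning incoming, and completion status is the last field, `c` meaning complete); note that filenames containing spaces break this naive split, just as they break the awk one-liner:

```python
def completed_uploads(xferlog_path):
    """Yield paths of complete incoming transfers from an
    xferlog-format file, mirroring the awk filter above."""
    with open(xferlog_path) as log:
        for line in log:
            fields = line.split()
            if len(fields) < 18:
                continue  # malformed or truncated line
            if fields[11] == "i" and fields[-1] == "c":
                yield fields[8]
```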
This would make pyinotify unnecessary, since I can get the paths of both complete and incomplete transfers just by tailing the log.
So my solution would be to check xferlog in my run loop and only process complete files.
Is there a best practice, or simply a better way, to do this?
What would be the disadvantages of this method?
I run my application on a Debian server, and proftpd is installed on the same machine. Also, I do not control the clients sending the files.