Cronjob to delete files older than 99 days

I need to set up a cron job to delete files older than 99 days in a specific directory, but I can't assume the file names were created by trusted Linux users. I should expect spaces, special characters, and so on.

Here is what I think might work:

find /path/to/files -mtime +99 -exec rm {} \;

But I suspect this will not work if there are special characters, or if find encounters a read-only file (my cron job cannot run with superuser privileges). I need it to keep going when it hits such files.

+4
4 answers

When using -exec rm {} \; you should not have problems with spaces, tabs, newlines, or other special characters, because find invokes rm directly and passes it each file name one at a time, without a shell in between.

Directories will not be deleted by this command because you are not passing the -r option, and you probably do not want that anyway; it could be a little dangerous. You can also add the -f option to force removal when you do not have write permission on the file. Run the cron script as root and you should be fine.
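
Putting that together, here is a minimal sketch of a crontab entry; the schedule, the -type f test, and the -- option terminator are my additions, and the path is a placeholder:

  # run every day at 02:30: delete regular files under /path/to/files
  # that have not been modified in the last 100 days
  30 2 * * * find /path/to/files -mtime +99 -type f -exec rm -f -- {} \;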

The only thing I would worry about is hitting a file that you do not want to delete but that simply has not been modified in the last 100 days. For example, a password file that is needed during the boot sequence. It may well not have been touched in 100 days, but the next time that boot sequence runs, you do not want it to fail because the password file is gone.

More mundane examples are files that are used but rarely modified. Maybe someone's résumé that is not being updated because they have steady work, and so on.

So be careful with your assumptions: just because a file has not been modified in 100 days does not mean it is not in use. A better criterion (though still imperfect) is whether the file has been accessed in the last 100 days. Perhaps something like this as the final command:

  find /path/to/files -atime +99 -type f -exec rm -f {} \;

One more thing ...

Some versions of find have a -delete option, which can be used instead of -exec rm:

  find /path/to/files -atime +99 -delete 

Be aware that without -type f this will also try to delete matching directories, not just files (it will report an error for any directory that is not empty).
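
If you only want files removed and directories left alone, a sketch combining -delete with -type f (same placeholder path as above):

  find /path/to/files -atime +99 -type f -delete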

Another small recommendation: for the first week, write the files that find matches to a log file instead of deleting them, and review that log. This way you make sure you are not about to delete something important. Once you are satisfied that nothing in the log is something you want to keep, switch find back to performing the deletion for you.
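
A sketch of that dry run; the log file location is an assumption, pick whatever suits you:

  # log the matches instead of deleting them
  find /path/to/files -atime +99 -type f >> /path/to/old-files.log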

+11

If you run rm with the -f option, the file will be deleted regardless of whether you have write permission on the file itself (what matters is write permission on the containing directory). So either you can delete every file in the directory or none of them. Add -r as well if you also want to delete subdirectories.

And I have to say this: be very careful! You are playing with fire ;) I suggest you debug with something less destructive than rm first.

You can verify this by creating a bunch of files, for example:

 touch {a,b,c,d,e,f} 

and then setting the permissions on each of them however you like.
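
As a sketch, assuming a scratch directory such as /tmp/rmtest:

  mkdir /tmp/rmtest && cd /tmp/rmtest
  touch {a,b,c,d,e,f}
  chmod 400 c d                        # make a couple of files read-only
  find . -type f -exec rm -f -- {} \;  # -f removes them without prompting
  ls                                   # the directory should now be empty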

+1

You should use -execdir instead of -exec . Even better, read the full "Security Considerations" chapter in the findutils manual.
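
For example (placeholder path again); -execdir runs rm from the directory containing each match, which protects against an attacker racing you by swapping a directory in the path for a symlink:

  find /path/to/files -mtime +99 -type f -execdir rm -f -- {} \;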

+1

Please always use rm [opts] -- [files] ; this saves you from trouble with files named things like -rf , which would otherwise be parsed as options. The -- marks the end of the options, so everything after it is treated as a file name.
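
A quick demonstration in a scratch directory:

  touch -- -rf      # create a file literally named "-rf"
  rm -rf            # "-rf" is parsed as options; nothing is removed
  rm -- -rf         # removes the file named "-rf"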

+1
