Run a cron job only after another cron job has finished

I'm currently trying to do something like this:

  • cron job backup_daily
  • cron job backup_weekly
  • cron job backup_monthly

It may happen that the daily and weekly backups run on the same day. At least one of them will then fail, because the file to be copied is locked by the other backup process. One simple solution would be to schedule the jobs at different times, but since we can't tell how long each job will take, that approach is ugly.

So I was thinking of a proxy script: instead of the cron jobs above, I would do something like

  • cron job check_if_anybackup_is_running_and_run_backup_daily_else_wait_till_finished
  • cron job check_if_anybackup_is_running_and_run_backup_weekly_else_wait_till_finished
  • cron job check_if_anybackup_is_running_and_run_backup_monthly_else_wait_till_finished

Then the only thing I would need is for them to start with some offset so they don't block each other. It would also be wise to choose the "wait time" so that two waiting jobs don't re-check at the same moment and block each other again (with three processes we could, say, add +1 to the weekly job's interval and +2 to the monthly one, or use odd and even intervals, so the re-checks stay apart).
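For example, a crontab like this, with start times staggered by ten minutes (the script name and paths are just placeholders for the wrapper jobs above):

  # m  h dom mon dow  command
  0  2  *   *   *     /usr/local/bin/run_backup_when_free.sh daily
  10 2  *   *   0     /usr/local/bin/run_backup_when_free.sh weekly
  20 2  1   *   *     /usr/local/bin/run_backup_when_free.sh monthly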

However, I'm not sure how to implement this in a Linux shell script, and I'm not sure what the "right" approach is. Should I use a lock file that is created when the process starts, and check for it? And what should happen when it is locked? Is it a "good" method to just sleep and check the lock file again after some time X? I'm also not sure what happens when I use sleep in a Linux process. I mean: is there a "counter" running and using processor power, or is there some kind of interrupt that the kernel sends to the waiting process after time X (i.e., "event based")? Are there any good methods you can think of? Some shell script snippets would be perfect, as I have never done this before.
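To make it concrete, here is a minimal sketch of what I mean by "sleep and re-check" (the lock file path is made up):

  LOCK_FILE=/var/run/backup.lock
  # As long as some other backup holds the lock, wait and look again.
  while [ -f "$LOCK_FILE" ]; do
      sleep 60    # wait a minute before the next check
  done
  # ...the actual backup would start here...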

1 answer

I would combine all three scripts into one that takes a parameter, e.g. do_backup.sh daily.
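A minimal sketch of such a wrapper (the rsync commands are just placeholders for your real backup steps):

  #!/bin/sh
  # do_backup.sh - one script for all three schedules.
  case "$1" in
      daily)   rsync -a /data/ /backup/daily/   ;;  # placeholder
      weekly)  rsync -a /data/ /backup/weekly/  ;;  # placeholder
      monthly) rsync -a /data/ /backup/monthly/ ;;  # placeholder
      *)
          echo "usage: $0 daily|weekly|monthly" >&2
          exit 1
          ;;
  esac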

Your lock file idea is the right approach. I would go further and write the process's PID into that lock file. Then, if your script sees that the lock file exists, it doesn't just bail out; it checks whether the process that created the file is still running. That way, even if a backup crashes without deleting its lock file, the whole system still works safely.

Here is a snippet I use in my scripts to ensure that only one copy runs at a time:

  #!/bin/sh
  # Lock file named after the script itself, e.g. do_backup.sh.pid.
  PID_FILE=$0.pid

  if [ -f "$PID_FILE" ]; then
      pid=`cat "$PID_FILE"`
      # Is the process that wrote the file still alive?
      if ps -p "$pid" > /dev/null 2>&1; then
          echo "Already running..."
          exit 1
      fi
      # Stale lock file from a crashed run; remove it and continue.
      rm -f "$PID_FILE"
  fi

  # Record our own PID for later invocations to check against.
  echo $$ > "$PID_FILE"

Then, in your backup script, you just include this file:

 source pid.sh 
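
For example, a hypothetical do_backup.sh using it (the echo stands in for the real backup commands):

  #!/bin/sh
  # Exits inside pid.sh if another backup instance is still running;
  # otherwise pid.sh records our PID and we continue.
  . ./pid.sh    # portable form of "source pid.sh"

  # Optional: remove the lock file on exit. Stale files are handled
  # by the ps check in pid.sh anyway.
  trap 'rm -f "$PID_FILE"' EXIT

  echo "running $1 backup"    # placeholder for the real backup work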