I ran into a similar problem: my scheduler process ran as a uWSGI mule, and a separate application needed to add new jobs to it.
Looking at the BaseScheduler.add_job() method:
```python
with self._jobstores_lock:
    if not self.running:
        self._pending_jobs.append((job, jobstore, replace_existing))
        self._logger.info('Adding job tentatively -- it will be properly scheduled '
                          'when the scheduler starts')
    else:
        self._real_add_job(job, jobstore, replace_existing, True)
```
you can see the problem: the job is only written to the jobstore when the scheduler is already running; otherwise it just sits in _pending_jobs until start() is called.
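For illustration, here is a minimal sketch of the symptom (the SQLite URL and the job are placeholders of mine): a "submit-only" process that never calls start() leaves the shared jobstore untouched.

```python
from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore

# Hypothetical submit-only process: it configures the same persistent
# jobstore as the scheduler process, but never calls start().
scheduler = BlockingScheduler(
    jobstores={'default': SQLAlchemyJobStore(url='sqlite:///jobs.sqlite')})

scheduler.add_job(print, 'interval', seconds=30, args=['tick'])
# Logs "Adding job tentatively ..." -- nothing reaches jobs.sqlite, because
# _real_add_job() only runs once scheduler.start() is called.
```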
Fortunately, the solution is quite simple: we define our own "add-only" scheduler whose add_job() skips the running check and writes the job straight to the jobstore:
```python
import six

from apscheduler.job import Job
from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.util import undefined


class JobAddScheduler(BlockingScheduler):
    def add_job(self, func, trigger=None, args=None, kwargs=None, id=None, name=None,
                misfire_grace_time=undefined, coalesce=undefined, max_instances=undefined,
                next_run_time=undefined, jobstore='default', executor='default',
                replace_existing=False, **trigger_args):
        job_kwargs = {
            'trigger': self._create_trigger(trigger, trigger_args),
            'executor': executor,
            'func': func,
            'args': tuple(args) if args is not None else (),
            'kwargs': dict(kwargs) if kwargs is not None else {},
            'id': id,
            'name': name,
            'misfire_grace_time': misfire_grace_time,
            'coalesce': coalesce,
            'max_instances': max_instances,
            'next_run_time': next_run_time
        }
        job_kwargs = dict((key, value) for key, value in six.iteritems(job_kwargs)
                          if value is not undefined)
        job = Job(self, **job_kwargs)

        # Unlike BaseScheduler.add_job(), skip the "if not self.running" branch
        # and add the job to the jobstore immediately.
        with self._jobstores_lock:
            self._real_add_job(job, jobstore, replace_existing, True)

        return job
```
Then we can add cron jobs instantly:
```python
jobscheduler = JobAddScheduler()
jobscheduler.add_job(...)
```
Do not forget to configure the scheduler! In my case I used a SQLAlchemy jobstore backed by MySQL:
```python
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore

jobstores = dict(default=SQLAlchemyJobStore(url='mysql+pymysql://USER:PASSWORD@SERVER/DATABASE'))
jobscheduler.configure(jobstores=jobstores)
```
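Putting the pieces together, the submit side might look like the sketch below. The module myapp.tasks and the function my_task are hypothetical placeholders; the connection string follows the pattern above.

```python
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore

from myapp.tasks import my_task  # hypothetical job function

jobscheduler = JobAddScheduler()
jobscheduler.configure(jobstores=dict(
    default=SQLAlchemyJobStore(url='mysql+pymysql://USER:PASSWORD@SERVER/DATABASE')))

# The job is written to the MySQL jobstore immediately, even though this
# process never calls jobscheduler.start().
jobscheduler.add_job(my_task, 'cron', hour=3, minute=0,
                     id='nightly-task', replace_existing=True)
```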
I am not sure about other setups, but after adding a new job I had to call wakeup() on the scheduler running in the separate scheduler process so that it would pick up the "fresh" job from the jobstore. I achieved this with the uWSGI signal framework.
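A minimal sketch of how that wake-up could be wired with uWSGI signals is shown below. The signal number 99, the assumption that the scheduler runs in mule 1, and the use of a BackgroundScheduler on the mule side are my choices, not part of the original setup.

```python
# --- in the uWSGI mule that runs the scheduler ---
import uwsgi
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler(jobstores=jobstores)  # same jobstore config as above
scheduler.start()

def wakeup_handler(signum):
    # Recompute the next wakeup time, picking up jobs just added to the jobstore.
    scheduler.wakeup()

uwsgi.register_signal(99, "mule1", wakeup_handler)  # 99 is an arbitrary signal number

# --- in the web application, right after jobscheduler.add_job(...) ---
import uwsgi
uwsgi.signal(99)  # wakes the scheduler in the mule
```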