The JobRepository (the database implementation) only reflects the last known state. So if a job was running when the JVM crashed, its execution record will never be updated in the database.
Getting the database back in sync with the JVM is a manual process; as far as I know, there is no out-of-the-box solution for it. The easiest approach is to run a script at startup that checks the batch tables for any RUNNING jobs and force-fails them:
    update batch_job_execution
    set    status = 'FAILED',
           exit_code = 'FAILED',
           exit_message = 'FORCED UPDATE'
    where  status = 'RUNNING';
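A crashed run can leave stale step-level rows behind as well. A companion sketch, assuming the standard Spring Batch schema (BATCH_STEP_EXECUTION has the same STATUS / EXIT_CODE / EXIT_MESSAGE columns, and in-flight steps are typically left in a STARTED state):

    -- sketch: force-fail step executions left in flight by the crash
    update batch_step_execution
    set    status = 'FAILED',
           exit_code = 'FAILED',
           exit_message = 'FORCED UPDATE'
    where  status = 'STARTED';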
One thing you might want to consider is whether the JobRepository tables are shared with another JVM. In that case, you may want the script to also check whether a RUNNING execution has already exceeded the maximum duration of any previous run of the same job (a subselect with max(END_TIME - CREATE_TIME) for the same JOB_NAME), so you don't force-fail a job that is legitimately still running elsewhere.
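A sketch of that guard, assuming a PostgreSQL-flavored database (the interval arithmetic and the update ... from join syntax vary by vendor); JOB_NAME lives on BATCH_JOB_INSTANCE, so the update joins through it:

    -- sketch: only fail RUNNING executions that have already run longer
    -- than the slowest completed run of the same job (PostgreSQL syntax)
    update batch_job_execution e
    set    status = 'FAILED',
           exit_code = 'FAILED',
           exit_message = 'FORCED UPDATE'
    from   batch_job_instance i
    where  e.job_instance_id = i.job_instance_id
    and    e.status = 'RUNNING'
    and    now() - e.create_time >
           (select max(h.end_time - h.create_time)
            from   batch_job_execution h
            join   batch_job_instance hi
              on   h.job_instance_id = hi.job_instance_id
            where  hi.job_name = i.job_name
            and    h.end_time is not null);

Note that if a job has never completed, the subselect returns NULL, the comparison is false, and the row is left alone, which errs on the side of not failing it.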