Linux: how to queue some jobs in the background?

Here is the functionality I am looking for (and have not found):

I have x processes that I want to run sequentially. Some may take a long time.

I want these processes to run in the background of my shell.

I know about nohup, but it doesn't seem to do quite what I want: if job1 is a long-running job, and I Ctrl-C out of the blank line I get after running nohup job1 && job2 && job3 &, then job2 and job3 will not run, and job1 may or may not run depending on how far nohup got.

Is there a way to get the functionality I want? I am connected to a Linux server. For bonus points, I would like the queued tasks to keep running even if I close my connection.

Thank you for your help.

EDIT: a small addition to the question: if I have a shell script with three exec statements

 exec BIGTHING
 exec littlething
 exec smallthing

will they definitely run sequentially? And is there a way to wrap them all into one exec line to get equivalent functionality?

i.e. exec BIGTHING && littlething && smallthing or somesuch

+8
linux queue
7 answers

Use screen.

  • ssh to server
  • run screen
  • run your programs: job1; job2; job3 (separated by semicolons, they will run sequentially)
  • detach from screen: Ctrl-A, then D
  • exit server

(later)

  1. ssh to server
  2. run screen -r
  3. and you are back in your shell, with the job queue still running ...
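
A sketch of the whole round trip (the host and job names are placeholders):

 $ ssh me@server           # log in
 $ screen                  # start a screen session
 $ job1; job2; job3        # the queue runs sequentially inside screen
 # press Ctrl-A then D to detach; the jobs keep running
 $ exit                    # close the ssh connection

 # later:
 $ ssh me@server
 $ screen -r               # reattach and check on the queue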
+11

Alternatively, if you want a little more control over your queue, e.g. the ability to list and modify its entries, see Task Spooler. It is available in the Ubuntu 12.10 Universe repository as the task-spooler package.
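
A minimal sketch of how that looks, assuming the binary is installed as tsp (the Debian/Ubuntu package renames ts to tsp to avoid a name clash with moreutils) and ./job1, ./job2 stand in for your commands:

 $ tsp ./job1              # enqueued; starts immediately
 $ tsp ./job2              # queued behind job1
 $ tsp                     # with no arguments, lists the queue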

+6

I agree that screen is a very handy tool. In addition, there is the command-line tool enqueue, which lets you queue up further tasks as needed, i.e. you can append another job to the queue even while the first one is already running.

Here is a sample from the enqueue README:

 $ enqueue add mv /disk1/file1 /disk2/file1
 $ enqueue add mv /disk1/file2 /disk2/file2
 $ enqueue add beep
 $ enqueue list
+2

a bit more info:

If you separate your tasks with &&, it acts like the logical AND operator in a programming language, e.g. in an if statement: if task 1 returns with an error, then task 2 will not run, and so on. This is called short-circuiting. If you separate the tasks with ; instead, they will run regardless of each other's return codes.
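
A two-line illustration:

 $ false && echo ran       # prints nothing: && short-circuits on failure
 $ false ; echo ran        # prints "ran": ; ignores the return code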

If you end the line with &, it backgrounds the whole thing. If you forget, you can press Ctrl-Z to suspend the command and get a prompt back, and then the bg command will send it to the background.

If you close the ssh session, it will probably terminate the commands, because they are attached to your shell (at least in most bash setups). If you want them to keep running after you log out, use the disown command.
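
A sketch of that rescue sequence in bash, with ./job1 standing in for your real command:

 $ ./job1                  # oops, started in the foreground
 ^Z                        # Ctrl-Z suspends it and gives the prompt back
 $ bg                      # resume it in the background
 $ disown                  # drop it from the job table so logout will not kill it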

+1

Put & after your command.

For example, to run an executable named foo:

 ./foo & 

and it will continue to work even if you close your ssh connection.

0

The basic mechanism for putting a job in the background from the shell is

 job1 & 

So you can write a simple shell script that does this sequentially for several of them, although at that point I would consider reaching for a scripting language such as Perl or Python and writing a small script that forks off the processes. If your script forks as its first step and does all of its work in the fork, it returns control to the shell immediately and keeps running even if you log out.
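
A plain-shell sketch of that idea (runjobs.sh and the job names are placeholders); pairing the wrapper with nohup, which the question already mentions, keeps it alive after logout:

 $ cat runjobs.sh
 #!/bin/sh
 ./job1                    # each job starts only after the previous one exits
 ./job2
 ./job3
 $ nohup ./runjobs.sh &    # one background process works through the queue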

0

To detach the job from the shell (jobs that are merely backgrounded with & are still subject to the shell's signals), use

 setsid foo
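
For instance, a minimal sketch (foo and foo.log are placeholders); the redirections just make sure the detached job no longer touches the terminal:

 $ setsid ./foo >foo.log 2>&1 </dev/null &   # new session: logout will not HUP it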

0

Source: https://habr.com/ru/post/650675/

