My server-deployment script launches a long-running process over SSH, for example:
ssh host 'install.sh'
Since my internet connection at home is not the best, I sometimes get disconnected while install.sh is running. (This is easy to simulate by closing the terminal window.) I would really like install.sh to keep running in these cases, so that I do not end up with interrupted apt-get processes and similar half-finished state.
The reason install.sh gets killed seems to be that its stdout and stderr are closed when the SSH session drops, so writing to them fails. (This is not a SIGHUP problem, by the way - using nohup makes no difference.) If I put touch ~/1 && echo this fails && touch ~/2 into install.sh, only ~/1 gets created.
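To make that concrete, here is a minimal sketch of the kind of install.sh that shows the behaviour (the sleep is my addition, only there to leave time to close the terminal window):

#!/bin/sh
# hypothetical install.sh, only to reproduce the failure described above
sleep 60               # close the terminal window while this is sleeping
touch ~/1              # still runs: it does not write to stdout
echo this fails        # stdout now leads nowhere, so this write fails
touch ~/2              # never reached; afterwards only ~/1 exists

Running this with ssh host './install.sh' and closing the terminal window during the sleep leaves only ~/1 behind.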
So running ssh host 'install.sh &> install.out' solves that problem, but then I lose any live progress and error output.
So my question is: what is a simple / idiomatic way to start a process over SSH so that it does not get killed if SSH dies, but so that I still see its output while it runs?
Solutions I have tried so far:
When running things manually I use screen for cases like this, but I don't think it will help here, because I need to run install.sh automatically from a shell script, and screen seems to be meant for interactive use (it complains "Must be connected to a terminal.").
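For illustration, the kind of invocation I have in mind (the exact command line here is just a guess):

ssh host 'screen ./install.sh'
# screen exits immediately with: Must be connected to a terminal.
# screen -d -m ./install.sh would start it detached instead, but then
# I would not see any live output on my side either.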
Using install.sh 2>&1 | tee install.out did not help either (it was silly of me to think it would).
Redirecting stdout / stderr to install.out on the host and then running tail -f on it. The following snippet actually works:
touch install.out &&
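The snippet appears to be cut off after the first line. One plausible shape for the complete version, with everything running on the remote host (the redirections, the /dev/null stdin, the background & and the trailing tail are my reconstruction, not necessarily the original):

# touch so tail has a file to open; install.sh is detached from the
# connection's stdio so it survives a drop; tail streams the log while
# the connection lasts (and has to be interrupted by hand afterwards).
ssh host 'touch install.out
          ./install.sh > install.out 2>&1 < /dev/null &
          tail -f install.out'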
But surely there is a less awkward way to do the same thing?