Shell error code capture

I currently have a script that does something like

./a | ./b | ./c 

I want to change it so that if any of a, b or c exits with an error code, I print an error message and stop, rather than feeding the failed output forward.

What would be the easiest / cleanest way to do this?

+77
shell pipe error-handling
Oct 11 '09 at 15:17
4 answers

If you really want the second command not to run until the first is known to have succeeded, then you will probably need to use temporary files. A simple version:

    tmp=${TMPDIR:-/tmp}/mine.$$
    if ./a > $tmp.1
    then
        if ./b <$tmp.1 >$tmp.2
        then
            if ./c <$tmp.2
            then : OK
            else echo "./c failed" 1>&2
            fi
        else echo "./b failed" 1>&2
        fi
    else echo "./a failed" 1>&2
    fi
    rm -f $tmp.[12]

The redirection '1>&2' can also be abbreviated '>&2'; however, an old version of the MKS shell mishandled the error redirection without the preceding '1', so I have used this unambiguous notation for reliability for years.

This leaks temporary files if you interrupt something. Bomb-proof (more or less) shell programming uses:

    tmp=${TMPDIR:-/tmp}/mine.$$
    trap 'rm -f $tmp.[12]; exit 1' 0 1 2 3 13 15
    ...if statement as before...
    rm -f $tmp.[12]
    trap 0 1 2 3 13 15

The first trap line says to run the commands 'rm -f $tmp.[12]; exit 1' when any of the signals 1 SIGHUP, 2 SIGINT, 3 SIGQUIT, 13 SIGPIPE or 15 SIGTERM occurs, or when 0 fires (that is, when the shell exits for any reason). If you are writing a shell script, the final trap only needs to remove the trap on 0, the shell-exit trap (you can leave the other signal traps in place, since the process is about to terminate anyway).
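As a sanity check, here is a minimal sketch of the cleanup pattern (the temp-file prefix and the echo are placeholders for real pipeline work), showing that the trap on 0 removes the files even on a normal exit:

```shell
#!/bin/sh
# Sketch: the trap on 0 (shell exit) cleans up even when the script
# ends normally, so no explicit rm is needed on the success path.
tmp=${TMPDIR:-/tmp}/demo.$$          # hypothetical prefix for this demo
trap 'rm -f $tmp.[12]; exit 1' 1 2 3 13 15
trap 'rm -f $tmp.[12]' 0
echo hello > $tmp.1                  # stands in for real pipeline work
# ... on exit, $tmp.1 and $tmp.2 are removed by the 0 trap
```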

In the original pipeline, 'c' can be reading data from 'b' before 'a' has finished, which is usually desirable (it gives multiple cores work to do, for example). If 'b' is a 'sort' phase, though, this does not apply: 'b' has to see all of its input before it can generate any of its output.

If you want to detect which commands fail, you can use:

    (./a || echo "./a exited with $?" 1>&2) |
    (./b || echo "./b exited with $?" 1>&2) |
    (./c || echo "./c exited with $?" 1>&2)

This is simple and symmetric, and it is trivial to extend to a 4-part or N-part pipeline.
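A quick way to see the construct in action is to substitute trivial commands for ./a, ./b and ./c (the stand-ins below are illustrative, not from the question); with the middle stage forced to fail, the only thing on standard error is the message from the second component:

```shell
#!/bin/sh
# Stand-ins for ./a, ./b, ./c: the middle stage fails with status 1.
(true  || echo "1st exited with $?" 1>&2) |
(false || echo "2nd exited with $?" 1>&2) |
(cat   || echo "3rd exited with $?" 1>&2)
# stderr: 2nd exited with 1
```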

Simple experiments with 'set -e' did not help.
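The problem is easy to reproduce: without pipefail, a pipeline's status is that of its last command, so set -e never sees a mid-pipe failure (false and true here are just stand-ins for failing and succeeding stages):

```shell
#!/bin/sh
# Under plain set -e, a failure inside a pipe is invisible:
# the pipeline's status is that of the last command (true, i.e. 0).
set -e
false | true
echo "still running"   # reached, despite the failed first stage
```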

+16
Oct 11 '09 at 15:40

In bash, you can use set -e and set -o pipefail at the beginning of your script. A subsequent command ./a | ./b | ./c will then fail when any of the three scripts fails, and the return code of the pipeline will be that of the last (rightmost) script that failed.

Note that pipefail is not available in standard sh.
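A minimal illustration, using true and false as stand-ins for the scripts, with the middle stage failing:

```shell
#!/bin/bash
# With pipefail, the pipeline's status reflects the failing stage
# instead of the status of the final command.
set -o pipefail
true | false | true
echo "pipeline status: $?"   # prints: pipeline status: 1
```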

+129
Feb 10 '11 at 16:13

You can also check the ${PIPESTATUS[@]} array after the pipeline completes. For example, if you run:

 ./a | ./b | ./c 

then ${PIPESTATUS[@]} will be an array of the exit codes from each command in the pipe, so if the middle command failed, echo "${PIPESTATUS[@]}" will print something like:

 0 1 0 

and running something like this after the pipeline:

 test ${PIPESTATUS[0]} -eq 0 -a ${PIPESTATUS[1]} -eq 0 -a ${PIPESTATUS[2]} -eq 0 

allows you to verify that all the commands in the pipe succeeded.
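For example, with stand-in commands where the middle stage exits with status 2 (the stand-ins are illustrative, not from the question; note that PIPESTATUS must be read immediately, before any other command overwrites it):

```shell
#!/bin/bash
# PIPESTATUS holds one exit code per pipeline stage, left to right.
true | sh -c 'exit 2' | true
echo "${PIPESTATUS[@]}"   # prints: 0 2 0
```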

+34
Feb 06 '12 at 22:50

Unfortunately, Jonathan's answer requires temporary files, and Michel's and Imron's answers require bash (even though the question is tagged shell). As others have already pointed out, it is not possible to abort the pipe before the later processes have started: all of the processes are started at once, and so will all run before any error can be reported. But the title of the question also asked about error codes; these can be collected and examined after the pipe completes, to find out whether any of the processes involved failed.

Here is a solution that catches all the errors in the pipe, not just errors from the last component. So it is like bash's pipefail, just more powerful in the sense that you get all of the error codes.

    res=$( { (./a 2>&1 || echo "1st failed with $?" >&2) |
             (./b 2>&1 || echo "2nd failed with $?" >&2) |
             (./c 2>&1 || echo "3rd failed with $?" >&2); } 2>&1 > /dev/null )
    if [ -n "$res" ]; then
        echo pipe failed
    fi

To detect that something failed, an echo command prints a message on standard error whenever any of the commands fails. The combined standard error output is then captured in $res and examined afterwards. This is also why the standard error of each process is redirected to its standard output: so that it is carried along the pipe. You can also send that output to /dev/null, or keep it as one more indicator that something went wrong. The final redirect to /dev/null can be replaced with a file if you need to save the output of the last command somewhere.

To play with this construct and convince myself that it really does what is needed, I replaced ./a, ./b and ./c with subshells that execute echo, cat and exit. You can use this to check that the construct really does forward all of the output from one process to the next, and that the error codes are recorded correctly.

    res=$( { (sh -c "echo 1st out; exit 0" 2>&1 || echo "1st failed with $?" >&2) |
             (sh -c "cat; echo 2nd out; exit 0" 2>&1 || echo "2nd failed with $?" >&2) |
             (sh -c "echo start; cat; echo end; exit 0" 2>&1 || echo "3rd failed with $?" >&2); } 2>&1 > /dev/null )
    if [ -n "$res" ]; then
        echo pipe failed
    fi
+6
Jun 18 '16 at 18:12


