After some experimentation, I believe the problem is that bash expects all processes in the pipeline to terminate, in one way or another, before it returns.
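A quick illustration of that waiting behavior (my own example, separate from the qqq tests below; timings approximate): true exits at once, yet bash does not return until sleep does, because it waits for every member of the pipeline:

    $ time (sleep 5 | true)

    real    0m5.006s
    user    0m0.002s
    sys     0m0.004s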
With a simple file qqq of approximately 360 lines of C source (several programs concatenated a few times), and searching with grep -q return, I observe:
tail -n 300 qqq | grep -q return exits almost immediately.
tail -n 300 -f qqq | grep -q return does not exit.
tail -n 300 -f qqq | strace -o grep.strace grep -q return does not exit until interrupted. The grep.strace file ends with:
    read(0, "#else\n#define _XOPEN_SOURCE 500\n"..., 32768) = 10152
    close(1)                                = 0
    exit_group(0)                           = ?
This leads me to believe that grep exited before the interrupt killed tail; if grep had been waiting for something, there would be an indication that it received a signal.
A simple program that mimics what the shell does, but without the waiting, shows that the commands do terminate:
    #define _XOPEN_SOURCE 600
    #include <stdlib.h>
    #include <unistd.h>
    #include <stdarg.h>
    #include <errno.h>
    #include <string.h>
    #include <stdio.h>

    static void err_error(const char *fmt, ...)
    {
        int errnum = errno;
        va_list args;
        va_start(args, fmt);
        vfprintf(stderr, fmt, args);
        va_end(args);
        if (errnum != 0)
            fprintf(stderr, "%d: %s\n", errnum, strerror(errnum));
        exit(1);
    }

    int main(void)
    {
        int p[2];
        if (pipe(p) != 0)
            err_error("Failed to create pipe\n");
        pid_t pid;
        if ((pid = fork()) < 0)
            err_error("Failed to fork\n");
        else if (pid == 0)
        {
            /* Child: run tail -f -n 300 qqq with stdout going to the pipe */
            char *tail[] = { "tail", "-f", "-n", "300", "qqq", 0 };
            dup2(p[1], 1);
            close(p[0]);
            close(p[1]);
            execvp(tail[0], tail);
            err_error("Failed to exec tail command\n");
        }
        else
        {
            /* Parent: become grep -q return reading from the pipe;
               note that nothing ever waits for the tail process */
            char *grep[] = { "grep", "-q", "return", 0 };
            dup2(p[0], 0);
            close(p[0]);
            close(p[1]);
            execvp(grep[0], grep);
            err_error("Failed to exec grep command\n");
        }
        err_error("This can't happen!\n");
        return -1;
    }
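Compiling and running it (the names pipesim.c and pipesim are my own choice; the expected status assumes qqq contains "return", as in the tests above):

    $ gcc -o pipesim pipesim.c
    $ ./pipesim; echo "status: $?"
    status: 0

The parent process has become grep, so the status reported is grep's; the orphaned tail keeps running until its next write to the now-broken pipe kills it.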
With a file of fixed size, tail -f is never going to exit, so the shell (bash) hangs around waiting for it.
tail -n 300 -f qqq | grep -q return hangs, but when I used another terminal to append another 300 lines to the qqq file, the command exited. I interpret this as follows: grep had already exited, so when tail wrote the new data to the pipe, it received SIGPIPE and died, and bash then recognized that all the processes in the pipeline were dead.
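If that interpretation is right, bash's PIPESTATUS array should confirm it after the pipeline returns (my own check, not part of the original experiment; 141 is 128 plus SIGPIPE's signal number 13):

    $ tail -n 300 -f qqq | grep -q return    # append lines from another terminal
    $ echo "${PIPESTATUS[@]}"
    141 0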
I observed the same behavior with both ksh and bash, which suggests that this is not a bug but expected behavior. Tested on Linux (RHEL 5) on an x86_64 machine.
Jonathan Leffler