pipe() with fork() and recursion: handling file descriptors

I am confused about an existing question that was asked yesterday:
Recursive pipeline in Unix again.

I am reposting the problematic code:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <stdlib.h>

void pipeline(char *ar[], int pos, int in_fd);
void error_exit(const char *);

static int child = 0;   /* whether it is a child process relative to main() */

int main(int argc, char *argv[])
{
    if (argc < 2) {
        printf("Usage: %s option (option) ...\n", argv[0]);
        exit(1);
    }
    pipeline(argv, 1, STDIN_FILENO);
    return 0;
}

void error_exit(const char *kom)
{
    perror(kom);
    (child ? _exit : exit)(EXIT_FAILURE);
}

void pipeline(char *ar[], int pos, int in_fd)
{
    if (ar[pos + 1] == NULL) {                    /* last command */
        if (in_fd != STDIN_FILENO) {
            if (dup2(in_fd, STDIN_FILENO) != -1)
                close(in_fd);                     /* successfully redirected */
            else
                error_exit("dup2");
        }
        execlp(ar[pos], ar[pos], NULL);
        error_exit("execlp last");
    } else {
        int fd[2];
        pid_t childpid;

        if ((pipe(fd) == -1) || ((childpid = fork()) == -1)) {
            error_exit("Failed to setup pipeline");
        }

        if (childpid == 0) {                      /* child executes current command */
            child = 1;
            close(fd[0]);
            if (dup2(in_fd, STDIN_FILENO) == -1)  /* read from in_fd */
                perror("Failed to redirect stdin");
            if (dup2(fd[1], STDOUT_FILENO) == -1) /* write to fd[1] */
                perror("Failed to redirect stdout");
            else if ((close(fd[1]) == -1) || (close(in_fd) == -1))
                perror("Failed to close extra pipe descriptors");
            else {
                execlp(ar[pos], ar[pos], NULL);
                error_exit("Failed to execlp");
            }
        }
        close(fd[1]);                             /* parent executes the rest of commands */
        close(in_fd);
        pipeline(ar, pos + 1, fd[0]);
    }
}

It produces an error:

For example, ./prog ls uniq sort head gives:
sort: stat failed: -: Bad file descriptor

The proposed solution was to "not close the file descriptors fd[1] and in_fd in the child process, since they are already closed in the parent process."

My confusion (sorry, I am new to Linux):
According to my book, "Starting Linux Programming," when we call fork(), the file descriptors are also duplicated. Therefore the parent and child must have different file descriptors. This seems to contradict that answer.

My attempt:
I tried running the code myself and saw that the problem only occurs if the in_fd descriptor is closed in both processes (parent and child). It does not depend on fd[1].
It is also interesting that ./prog ls sort head works, but when I try ./prog ls sort head uniq, I get a read error on head.

My thoughts: in_fd is the only file descriptor passed into this function. It looks as if, even after fork(), there is only one descriptor left, shared by both the parent and the child, but I cannot figure out how that can happen.
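(A hypothetical debug print like the following, placed right after the pipe(fd) call inside pipeline(), would show which numbers the descriptors actually get at each recursion level; it is not part of the program above.)

/* Hypothetical debug aid: stderr is used so the output does not end up in the pipes.
   pos, in_fd and fd[] are the variables already present at that point in pipeline(). */
fprintf(stderr, "pid=%ld pos=%d in_fd=%d pipe=[%d,%d]\n",
        (long)getpid(), pos, in_fd, fd[0], fd[1]);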

Tags: c, linux, unix
1 answer

when we call fork(), the file descriptors are also duplicated. Therefore, the parent and child must have different file descriptors

A file descriptor is just a plain integer. When it is copied by fork(), it keeps the same value, and that value refers to the same open file.

So you can open a file in the parent and access it from the child. The only problem that can arise is when both the parent and the child access the file at the same time: they share the same file offset, so it is not guaranteed at which position each of them will read or write. To avoid this, it is often recommended to close the descriptor in the child and reopen the file there.
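As a small illustration (a self-contained sketch, unrelated to the program in the question; /etc/passwd is just an arbitrary readable file), a descriptor copied by fork() shares its read/write position with the parent:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>

int main(void)
{
    char buf[4];
    int fd = open("/etc/passwd", O_RDONLY);   /* any readable file will do */
    if (fd == -1) { perror("open"); exit(EXIT_FAILURE); }

    if (fork() == 0) {                        /* child: read the first 4 bytes */
        read(fd, buf, sizeof buf);
        _exit(EXIT_SUCCESS);
    }
    wait(NULL);                               /* let the child finish first */

    /* The parent's copy of fd has moved too: both descriptors refer to the
       same open file description, so the offset is shared. */
    printf("parent offset after child read: %ld\n",
           (long)lseek(fd, 0, SEEK_CUR));     /* typically prints 4, not 0 */
    close(fd);
    return 0;
}

If parent and child both read or write through their copies without coordination, they step on each other's position, which is why closing and reopening in the child is sometimes recommended when that matters.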

As with your attempt, I ran the same code on this problem and found that it always starts failing once there is a 4th command. Keep in mind what dup2() really does: dup2(oldfd, newfd) closes newfd and turns it into a copy of oldfd, and it does nothing at all when the two are already the same descriptor. In this program in_fd and fd[1] are duplicated onto the child's stdin and stdout, and once the parent has closed its own descriptor 0, a later pipe() call reuses that number, so in_fd can already be STDIN_FILENO. In that case the dup2() is a no-op, and the extra close(in_fd) closes exactly the descriptor the command needs to read from. There is no need to close it again there, and doing so is what breaks the pipeline.

Closing an already closed descriptor causes an error.

And since you do not know whether the parent or the child will run first after fork(), code that closes a descriptor number in one place and then closes or reuses the same number somewhere else is exactly the kind of thing that produces this unpredictable-looking behaviour.
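For completeness, here is a sketch of how the child branch could guard its closes so that it never closes a descriptor that already is its stdin or stdout. This is my sketch based on the code in the question (same variables: childpid, fd, in_fd, ar, pos, error_exit), with the error handling simplified, not a tested drop-in replacement:

/* Sketch of a safer child branch: a descriptor is only redirected and
   closed when it is not already the standard descriptor we want. */
if (childpid == 0) {
    child = 1;
    close(fd[0]);                            /* read end is not needed here */

    if (in_fd != STDIN_FILENO) {             /* only redirect if necessary */
        if (dup2(in_fd, STDIN_FILENO) == -1)
            error_exit("Failed to redirect stdin");
        close(in_fd);                        /* safe: in_fd is not stdin */
    }
    if (fd[1] != STDOUT_FILENO) {
        if (dup2(fd[1], STDOUT_FILENO) == -1)
            error_exit("Failed to redirect stdout");
        close(fd[1]);                        /* safe: fd[1] is not stdout */
    }
    execlp(ar[pos], ar[pos], NULL);
    error_exit("Failed to execlp");
}

The point of the guard is that when pipe() happens to hand the child descriptor 0 as in_fd, nothing is redirected or closed, so the command keeps its input.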
