Using EOF to signal end of data with unnamed pipes

I have a test program that uses unnamed pipes created with pipe() to communicate between parent and child processes created with fork(), on a Linux system.

Normally, when the sending process closes the write end of the pipe, the receiving process's read() returns 0, indicating EOF.

However, it seems that if I stuff a fairly large amount of data into the pipe (maybe 100 kbytes) before the receiver starts reading, the receiver blocks after reading all the data in the pipe, even though the sender has closed its end.

I verified with lsof that the sending process had closed its end of the pipe, and it seems pretty clear that the receiver is blocked.

Which leads to the question: is closing one end of the pipe a reliable way to tell the reader that there is no more data?

If it is, and there is no condition under which read() can block on an empty, closed pipe, then something is wrong with my code. If not, it means I need to find an alternative way to signal the end of the data stream.

Resolution

I was fairly sure the initial assumption was correct - that closing the write end causes EOF at the reader - so this question was mostly a shot in the dark; I thought perhaps there was some subtle behavior I was observing. Almost every pipe example you ever see is a toy that sends a few bytes and exits. Things often work differently once the operations are no longer atomic.

In any case, I tried simplifying my code to isolate the problem, and I managed to find it. In pseudocode, I was doing something like this:

    create pipe1
    if ( !fork() ) {
        close pipe1 write fd
        do some stuff reading pipe1 until EOF
    }
    create pipe2
    if ( !fork() ) {
        close pipe2 write fd
        do some stuff reading pipe2 until EOF
    }
    close pipe1 read fd
    close pipe2 read fd
    write data to pipe1
    get completion response from child 1
    close pipe1 write fd
    write data to pipe2
    get completion response from child 2
    close pipe2 write fd
    wait for children to exit

The read on pipe1 in child process 1 hung, but only when the amount of data in the pipe became significant. This happened even though I had closed the write end of the pipe that child1 was reading from.

A look at the code above reveals the problem. When I forked the second child process, it inherited its own copies of the pipe1 file descriptors, which were still open at that point. Even though only one process ever wrote to the pipe, the write end being open in the second process kept the pipe from ever reaching the EOF state.

The problem did not show up with small data sets because child2 finished its work and exited quickly. With large data sets, however, child2 did not finish quickly, and I ended up with a deadlock.

1 answer

read() should return EOF once all writers have closed the write end of the pipe.

Since you create the pipe and then fork, both processes will have the write fd open. Perhaps you forgot to close the write end of the pipe in the reading process.

Caveat: it has been a long time since I did Unix programming, so this may be inaccurate.

Here is some code from http://www.cs.uml.edu/~fredm/courses/91.308/files/pipes.html . Note the "close unused end" comments below.

    #include <stdio.h>
    #include <string.h>   /* strlen */
    #include <unistd.h>   /* pipe, fork, read, write, close */

    /* The index of the "read" end of the pipe */
    #define READ  0
    /* The index of the "write" end of the pipe */
    #define WRITE 1

    char *phrase = "Stuff this in your pipe and smoke it";

    int main ()
    {
        int fd[2], bytesRead;
        char message [100];               /* Parent process message buffer */

        pipe ( fd );                      /* Create an unnamed pipe */

        if ( fork ( ) == 0 ) {            /* Child Writer */
            close (fd[READ]);             /* Close unused end */
            write (fd[WRITE], phrase, strlen ( phrase ) + 1); /* include NUL */
            close (fd[WRITE]);            /* Close used end */
            printf ("Child: Wrote '%s' to pipe!\n", phrase);
        } else {                          /* Parent Reader */
            close (fd[WRITE]);            /* Close unused end */
            bytesRead = read ( fd[READ], message, 100);
            printf ( "Parent: Read %d bytes from pipe: %s\n", bytesRead, message);
            close ( fd[READ]);            /* Close used end */
        }
        return 0;
    }
