The problem is that the pipe is full. The subprocess blocks, waiting for the pipe to drain, but then your process (the Python interpreter) exits, breaking its end of the pipe (hence the error message).
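You can watch this happen with a sketch like the following (assumes a Unix system with the yes command available; the ~64 KiB buffer size is typical for Linux, not guaranteed):

    import subprocess, time

    # 'yes' writes endlessly; with stdout=PIPE and nobody reading,
    # it fills the OS pipe buffer (~64 KiB on Linux) and then blocks
    # inside write().
    p = subprocess.Popen(["yes"], stdout=subprocess.PIPE)
    time.sleep(1)
    print(p.poll())   # None: the child is alive but stuck, not finished
    p.kill()          # clean up the blocked child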
p.wait() will not help:
Warning: This will deadlock when using stdout=PIPE and/or stderr=PIPE and the child process generates enough output to a pipe such that it blocks waiting for the OS pipe buffer to accept more data. Use communicate() to avoid that.
http://docs.python.org/library/subprocess.html#subprocess.Popen.wait
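Concretely, the pattern below looks natural but can hang; a sketch of the failure mode, not something to copy:

    import subprocess

    p = subprocess.Popen(["zgrep", "thingiwant", "largefile"],
                         stdout=subprocess.PIPE)
    p.wait()    # deadlock: zgrep blocks once the pipe buffer is full,
                # waiting for us to read, while we wait for it to exit
    output = p.stdout.read()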
p.communicate() will not help:
Note: The data read is buffered in memory, so do not use this method if the data size is large or unlimited.
http://docs.python.org/library/subprocess.html#subprocess.Popen.communicate
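For modest amounts of output, communicate() is still the simplest safe call; it avoids the deadlock by draining the pipe for you, at the cost of holding everything in memory, which is exactly the problem here. A sketch:

    import subprocess

    p = subprocess.Popen(["zgrep", "thingiwant", "largefile"],
                         stdout=subprocess.PIPE)
    output, _ = p.communicate()   # no deadlock, but all of the output
                                  # is held in memory at once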
p.stdout.read(num_bytes) will not help:
Warning: Use communicate() rather than .stdin.write, .stdout.read or .stderr.read to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process.
http://docs.python.org/library/subprocess.html#subprocess.Popen.stdout
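The deadlock that warning describes needs at least two pipes open. For example (hypothetical, with stderr also captured):

    import subprocess

    p = subprocess.Popen(["zgrep", "thingiwant", "largefile"],
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    data = p.stdout.read()   # can hang: if the child writes enough to
                             # stderr while we block here, the stderr
                             # pipe fills, the child stops, and stdout
                             # never reaches EOF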
The moral of the story is that, for all its convenience, subprocess.PIPE dooms you to near-certain failure if you try to move large amounts of data through it (it seems to me that you should be able to call p.stdout.read(num_bytes) in a while p.returncode is None: loop, but the warning above suggests that this could deadlock).
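For what it's worth, if stdout is the only pipe you open (no stdin=PIPE, no stderr=PIPE), there is no second pipe to fill up, and an incremental read loop along these lines should be safe (a sketch; process_chunk is a hypothetical handler):

    import subprocess

    p = subprocess.Popen(["zgrep", "thingiwant", "largefile"],
                         stdout=subprocess.PIPE)
    for chunk in iter(lambda: p.stdout.read(65536), b""):
        process_chunk(chunk)   # hypothetical per-chunk handler
    p.wait()                   # safe now that the pipe has been drained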
The docs suggest replacing the shell pipeline with the following:
    from subprocess import Popen, PIPE

    p1 = Popen(["zgrep", "thingiwant", "largefile"], stdout=PIPE)
    p2 = Popen(["processreceivingdata"], stdin=p1.stdout, stdout=PIPE)
    p1.stdout.close()  # allow p1 to receive SIGPIPE if p2 exits early
    output = p2.communicate()[0]
Note that p2 takes its standard input directly from p1. This should avoid deadlocks, but given the conflicting warnings above, who knows.
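One detail worth copying from the docs' version of this example: the p1.stdout.close() call in the parent (included in the snippet above) lets p1 receive SIGPIPE and exit if p2 dies before consuming everything, rather than blocking forever on a pipe nobody will ever drain.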
In any case, if that last bit doesn't work for you (it should, though), you can try creating a temporary file, writing all the data from the first call to it, and then using the temporary file as input to the next process.
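A sketch of that fallback, assuming the same hypothetical commands as above:

    import subprocess, tempfile

    with tempfile.TemporaryFile() as tmp:
        # Spool the first command's entire output to disk.
        subprocess.check_call(["zgrep", "thingiwant", "largefile"],
                              stdout=tmp)
        tmp.seek(0)  # rewind so the next process reads from the start
        p2 = subprocess.Popen(["processreceivingdata"], stdin=tmp,
                              stdout=subprocess.PIPE)
        output = p2.communicate()[0]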