So, I studied the fclose man page, and I concluded that if fclose is interrupted by some kind of signal, there is, according to the man page, no way to recover ...? Am I missing something?
With the unbuffered POSIX functions (open, close, write, etc.), there is usually a way to recover from a signal interruption (EINTR) by restarting the call. The documentation for the buffered calls, in contrast, says that after an unsuccessful fclose attempt another attempt has undefined behavior ... there is no hint of HOW to recover. Am I just “out of luck” if a signal interrupts fclose? Data may be lost, and I cannot even be sure whether the file descriptor was actually closed. I know the buffer is freed, but what about the file descriptor? Think of large-scale applications that keep a lot of fds open at the same time and run into problems if fds are not properly released → I would expect there to be a CLEAN solution to this problem.
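For the unbuffered calls, the restart pattern mentioned above is straightforward. A minimal sketch (the wrapper name `write_all` is my own, not from any standard):

```c
#include <errno.h>
#include <unistd.h>
#include <fcntl.h>
#include <assert.h>

/* Retry write() until all bytes are written, restarting on EINTR.
   This is the usual recovery pattern for unbuffered POSIX calls;
   no equivalent exists for fclose(). */
ssize_t write_all(int fd, const void *buf, size_t count)
{
    const char *p = buf;
    size_t left = count;
    while (left > 0) {
        ssize_t n = write(fd, p, left);
        if (n < 0) {
            if (errno == EINTR)
                continue;      /* interrupted by a signal: just restart */
            return -1;         /* genuine error */
        }
        p += n;                /* partial write: advance and continue */
        left -= (size_t)n;
    }
    return (ssize_t)count;
}
```

The same loop shape works for read(); note that close() itself is the one unbuffered call where restarting on EINTR is *not* safe on Linux, since the descriptor may already be gone.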
So let me assume that I am writing a library, I am not allowed to use sigaction and SA_RESTART, and a lot of signals are being delivered: how can I recover if fclose is interrupted? Would it be acceptable to call close() in a loop (instead of fclose) after fclose failed with EINTR? The fclose documentation simply does not specify the state of the file descriptor; UNDEFINED is not very useful here ... if the fd is already closed and I close it again, there may be strange, hard-to-debug side effects, so naturally I would rather not do the wrong thing by guessing ... then again, there is no unlimited supply of file descriptors, and a resource leak is a bug (at least for me).
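One workaround that is often suggested for this exact dilemma: drain the stdio buffer with a retryable fflush() first, so that the single, unretryable fclose() has essentially nothing left to do. This is only a sketch under assumptions, not a guaranteed fix: retrying fflush() after EINTR works on Linux/glibc, but ISO C does not promise it, and a failed fclose() can still, in principle, leave the fd state unspecified.

```c
#include <errno.h>
#include <stdio.h>
#include <assert.h>

/* careful_fclose (hypothetical helper, not a standard function):
   flush with retry, then call fclose() exactly once. */
int careful_fclose(FILE *fp)
{
    /* Unlike fclose(), a failed fflush() leaves the stream usable,
       so we may clear the error indicator and restart it. */
    while (fflush(fp) == EOF) {
        if (errno != EINTR)
            break;             /* real write error: fall through */
        clearerr(fp);          /* EINTR: clear the flag and retry */
    }
    /* fclose() must be called exactly once regardless of the flush
       result; afterwards fp is invalid and must never be touched. */
    return fclose(fp);
}
```

With the buffer already empty, the window in which fclose() can be interrupted mid-write is tiny, so data loss becomes unlikely even though the corner case is not formally closed.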
Of course, I could check one specific implementation of fclose, but I find it hard to believe that whoever designed stdio did not think about this problem. Is this just bad documentation, or a flaw in the design of this feature?
This corner case really bothers me :(
c linux fclose stdio