Primarily: small fwrite() calls are slower because each fwrite must validate its parameters, take the stream lock (the equivalent of flockfile()), possibly flush, copy the data into the buffer, and report success. This overhead adds up; not as much as with tiny calls to write(2), but it is still noticeable.
Evidence:
#include <stdio.h>
#include <stdlib.h>

static void w(const void *buf, size_t nbytes)
{
    size_t n;
    if (!nbytes)
        return;
    n = fwrite(buf, 1, nbytes, stdout);
    if (n >= nbytes)
        return;
    if (!n) {
        perror("stdout");
        exit(111);
    }
    /* Short write: retry with the remainder (cast, since arithmetic
       on void* is not standard C). */
    w((const char *)buf + n, nbytes - n);
}

/* Usage: time $0 <$bigfile >/dev/null */
int main(int argc, char *argv[])
{
    char buf[32 * 1024];
    size_t sz;

    sz = (argc > 1) ? (size_t)atoi(argv[1]) : 0;
    if (sz > sizeof(buf))
        return 111;
    if (sz == 0)
        sz = sizeof(buf);
    for (;;) {
        size_t r = fread(buf, 1, sz, stdin);
        if (r < 1)
            break;
        w(buf, r);
    }
    return 0;
}
Having said that, you can do what many commenters have suggested, i.e. add your own buffering in front of fwrite: the code is trivial, but you should measure whether it actually benefits you.
If you do not want to roll your own, you can use, for example, the buffer interface in skalibs, but you will probably spend longer reading its documentation than writing your own (imho).
— loreb