In some environments, compilation will be fastest if each source file includes only the headers it actually needs. In other environments, compilation is optimized when all source files can use the same primary collection of headers (some files may have additional headers beyond the common subset). Ideally, headers should be written so that including them multiple times has no effect. It may be worthwhile to surround #include statements with a check of the included file's include guard, although this creates a dependency on the format of that guard. Also, depending on the file-caching behavior of the system, an unnecessary #include whose contents end up entirely #ifdef'ed away may not take much time.
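As a minimal sketch of the guard-checking idea (the header name bar.h and guard name BAR_H are just assumptions for the example):

/* Skip opening bar.h entirely if its include guard (assumed to be
   named BAR_H) is already defined. Note this couples the caller to
   the exact spelling of that guard macro. */
#ifndef BAR_H
#include "bar.h"
#endif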
Another thing to keep in mind is that if a function takes a pointer to a structure, you can write the prototype as
void foo(struct BAR_s *bar);
without any definition of BAR_s being in scope. This is a very convenient approach for avoiding unnecessary inclusions.
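For illustration (the header, function, and struct tag here are hypothetical), a header can declare such a function using only an incomplete type:

/* foo.h -- illustrative sketch: the header that defines struct BAR_s
   is never included here */
struct BAR_s;                   /* incomplete (forward) declaration       */
void foo(struct BAR_s *bar);    /* a pointer to an incomplete type is OK  */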
PS - in many of my projects there is a file which every module is expected to #include, containing things like typedefs for the integer sizes and a few general-purpose structures and unions [for example,
typedef union {
    unsigned long  l;        /* the value as a single long               */
    unsigned short lw[2];    /* the same storage viewed as two shorts    */
    unsigned char  lb[4];    /* the same storage viewed as four bytes    */
} U_QUAD;
(Yes, I know I would be in trouble if I moved to a big-endian architecture, but since my compiler does not allow anonymous structures within unions, using named identifiers for the bytes would require accessing them as theUnion.b.b1, etc., which seems rather annoying.)
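To illustrate the alternative being avoided (the inner member names here are hypothetical), a version using named inner structures forces an extra level of naming on every access:

typedef union {
    unsigned long l;
    struct { unsigned short w0, w1; } w;          /* named inner structs, since     */
    struct { unsigned char b0, b1, b2, b3; } b;   /* anonymous ones are not allowed */
} U_QUAD_NAMED;

/* Every byte access now carries the extra member name, e.g.:
       U_QUAD_NAMED q;
       q.l = 0x12345678UL;
       unsigned char low = q.b.b0;   */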
supercat, 2018-10-15 15:55