In C++, why does it take so much longer to compile a large number of small files than a few large files?

I recently split the very large files in my C++ project into many small files (basically one file per class). This doubled the compilation time and also grew the resulting executable from 1.6 MB to 2.4 MB. Why did this change so much?

Is this a direct result of the same headers being included in a large number of files rather than just a few?

Compiler Options:

g++ -Wall -Wextra -g -ggdb -std=c++0x

The executable sizes I am referring to are what is left after running strip -s on the executable.

Sizes:

Before the split, with debug symbols: 16 MB

After the split, with debug symbols: 26 MB

Before the split, without debug symbols: 1.5 MB

After the split, without debug symbols: 2.4 MB

Additional question:

I already use precompiled headers: I put the headers in pch.hpp and then pass the -include pch.hpp option in my g++ flags. Is this the best way to do it with gcc? It seems to have very little effect on compilation time. The only headers not precompiled at the moment are the project's own, since they may change while the project is under heavy development.
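For reference, this is the two-step workflow I mean (file names are placeholders). g++ writes a pch.hpp.gch file when the header is compiled directly, and picks it up automatically whenever the header is included with matching options:

    # 1. Precompile the header once (produces pch.hpp.gch):
    g++ -Wall -Wextra -g -ggdb -std=c++0x pch.hpp
    # 2. Compile each translation unit with the header force-included;
    #    g++ uses pch.hpp.gch automatically:
    g++ -Wall -Wextra -g -ggdb -std=c++0x -include pch.hpp -c foo.cpp

Note that the .gch is only used when the compile options match the ones it was built with, so the two flag sets have to stay identical.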

+7
3 answers

There are several reasons why this could happen; here is a brain dump:

  • slow disk access (probably not the reason for such a large increase)
  • multiple translation units including the same headers means those headers are pasted into, and preprocessed again for, every one of them (most likely reason)
  • static variables or functions defined in headers are duplicated into each translation unit that includes them; see the sketch after this list
  • symbols for templates are generated in every translation unit that instantiates them
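To illustrate the static-duplication point, a minimal sketch (header name and contents are made up): every .cpp that includes this header gets its own private copy of the array and the function body in its object file, because static gives them internal linkage and the linker cannot merge the copies:

    // util.hpp - hypothetical header included by many .cpp files
    #ifndef UTIL_HPP
    #define UTIL_HPP

    // Internal linkage: each including translation unit gets its own copy.
    static const int table[4] = { 1, 2, 4, 8 };

    static int lookup(int i) { return table[i]; }

    #endif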

Here are some things that can help you keep the many files but reduce compilation time:

  • precompiled headers
  • bulk (unity) builds: exclude the .cpp files from the build itself, but #include them in a single implementation file that is compiled instead; see the sketch below
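A minimal sketch of a bulk (unity) build, with made-up file names:

    // bulk.cpp - the only file in this group handed to the compiler;
    // foo.cpp, bar.cpp and baz.cpp are removed from the build itself.
    #include "foo.cpp"
    #include "bar.cpp"
    #include "baz.cpp"

The shared headers are then parsed once per bulk file instead of once per .cpp, at the cost of a bigger recompile whenever any one of the included files changes.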
+11

This is usually because you compile many of the same system headers in every translation unit. There is also some minor overhead associated with linking all the object files.
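If you want to see how much that costs, a rough way to gauge it (file name is a placeholder) is to count the lines the preprocessor actually hands to the compiler, and to ask gcc for a per-pass timing breakdown:

    # Lines of preprocessed output for one translation unit:
    g++ -std=c++0x -E foo.cpp | wc -l
    # Where the compile time goes, pass by pass:
    g++ -std=c++0x -ftime-report -c foo.cpp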

+1

Use a build system (e.g. CMake or GNU Make) to do incremental builds instead of recompiling the entire shebang whenever you make changes.
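A minimal GNU Make sketch of what that buys you (file names are placeholders): only the .cpp files newer than their .o files get recompiled, the rest are merely relinked.

    CXX      = g++
    CXXFLAGS = -Wall -Wextra -g -ggdb -std=c++0x
    OBJECTS  = foo.o bar.o baz.o

    app: $(OBJECTS)
    	$(CXX) $(OBJECTS) -o app

    %.o: %.cpp
    	$(CXX) $(CXXFLAGS) -c $< -o $@

A real makefile would also track header dependencies (e.g. generated with -MMD) so that editing a header rebuilds the right objects.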

The Pimpl idiom can help reduce the number of "secondary" headers that a class's own header must include on account of its private members. I don't think the idiom will shorten a full rebuild, but it should cut incremental build times when a class's private members change.

I like to use Pimpl for classes that are part of the visible interface of a library or package. I don't bother with Pimpl for "inner" classes or classes that act as value types.
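A minimal sketch of the idiom, with made-up names:

    // widget.hpp - what clients include; no private dependencies leak out.
    #ifndef WIDGET_HPP
    #define WIDGET_HPP

    #include <memory>

    class Widget {
    public:
        Widget();
        ~Widget();               // defined in the .cpp, where Impl is complete
        void draw();
    private:
        struct Impl;             // forward declaration only
        std::unique_ptr<Impl> pimpl;
    };

    #endif

    // widget.cpp - the heavy "secondary" headers move here.
    #include "widget.hpp"
    #include <vector>

    struct Widget::Impl {
        std::vector<int> data;
    };

    Widget::Widget() : pimpl(new Impl) {}
    Widget::~Widget() {}         // Impl is complete here
    void Widget::draw() { /* work with pimpl->data */ }

Changing Widget::Impl now only recompiles widget.cpp; clients of widget.hpp are untouched.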

0
