What is the bottleneck in C++ compilation performance?

When I do a clean build of my project, which includes 10+ open-source libraries, it takes about 40 minutes (on ordinary hardware).

Question: where is my bottleneck, really? Hard-drive seek time or raw CPU clock speed? And I suspect that multiple cores won't help much, is that right?

- Edit 1 -
My hardware = i3 overclocked to 4.0 GHz, 8 GB 1600 MHz DDR3, and a 2 TB Western Digital drive.

- Edit 2 -

My code = 10%, libs = 90%. I know I don't need to rebuild everything every time, but I would like to learn how to improve compilation performance, so that when I buy a new development PC I can make a better-informed choice.

- Edit 3 -
Compiler = Visual Studio (unfortunately)

+5
5 answers

Since VS 2010, Visual Studio can optionally use multiple cores when compiling a single project. It can also build several projects in parallel. However, the parallel speedup does not seem significant in my experience; Xcode, for example, does parallel builds much better.
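
If you do want to exploit the cores that are there, here is a rough sketch of the two usual knobs in an MSVC setup (the source file and solution names are made up):

    REM /MP compiles the .cpp files of one project on multiple cores
    cl /MP /c /EHsc main.cpp parser.cpp renderer.cpp

    REM /m (maxcpucount) builds the projects of a solution in parallel
    msbuild MySolution.sln /m /p:Configuration=Release

In the IDE, the /MP switch corresponds to the "Multi-processor Compilation" option in the project's C/C++ settings.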

Hopefully you are not rebuilding the open-source libraries every time, right? You can build them once, keep the .lib files in version control, and reuse them for future builds.
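
For example, once a library has been built, your own project only needs its headers plus the prebuilt .lib at link time (the library and file names below are just examples):

    // your code only includes the library's headers...
    #include <zlib.h>

    // ...and links against the prebuilt binary, either via the project's
    // linker settings or an MSVC pragma like this:
    #pragma comment(lib, "zlib.lib")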

Have you tried precompiled headers for your own code? They can give a significant speedup.
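
A minimal precompiled-header setup under MSVC might look like the sketch below (the stdafx.h name is just the usual convention; which headers you put in it depends on your project):

    // stdafx.h - heavy, rarely changing headers go here
    #pragma once
    #include <vector>
    #include <string>
    #include <map>
    #include <boost/asio.hpp>   // big third-party headers benefit the most

    // stdafx.cpp - compiled with /Yc"stdafx.h" to create the .pch file
    #include "stdafx.h"

    // every other .cpp - compiled with /Yu"stdafx.h" and starting with:
    #include "stdafx.h"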

+2

One option: distcc, which distributes compilation across several machines (around 20 in our case) :)

Beyond that, look at how much each translation unit pulls in through #include...
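
distcc is a gcc/clang-side tool rather than a Visual Studio one, but for reference, a typical invocation looks roughly like this (host names are placeholders):

    # machines that may accept compile jobs
    export DISTCC_HOSTS="localhost buildbox1 buildbox2"

    # run the compiles through distcc, with enough jobs to keep the hosts busy
    make -j12 CXX="distcc g++"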

+4

With a 40-minute build (and 40 minutes is a lot), #include dependencies are the first thing I would look at; most of the time usually goes into re-parsing the same headers over and over.

Cleaning them up can pay off dramatically. On one project, a build that took about 30 minutes dropped to roughly 3 once we removed redundant #includes and #includes of #include'd headers. It takes patience, but...
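
A typical include-hygiene fix looks like this (a sketch with made-up class names): keep a forward declaration in the header and move the heavy #include into the one .cpp file that really needs it.

    // widget.h - previously #include "renderer.h" dragged in a large header tree
    #pragma once

    class Renderer;               // forward declaration is enough for a reference

    class Widget {
    public:
        void draw(Renderer& r);
    };

    // widget.cpp - only this translation unit pays for renderer.h
    #include "widget.h"
    #include "renderer.h"

    void Widget::draw(Renderer& r) { /* ... */ }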

+4

When you compile from scratch, yes, it will take a long time. After the first run, use the 40-year-old make technology that VS provides through its project management, so that only what has actually changed gets recompiled.
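
The underlying idea, shown as a toy make sketch with made-up file names (Visual Studio's project system does the equivalent dependency tracking for you):

    # each object file is rebuilt only when one of its sources is newer than it
    app: main.o parser.o
    	g++ -o app main.o parser.o

    main.o: main.cpp common.h
    	g++ -c main.cpp

    parser.o: parser.cpp common.h
    	g++ -c parser.cpp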

However, the C++ translation model plus the extensive use of templates can be a significant practical problem.
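
One concrete mitigation for template-heavy code, assuming a C++11 compiler (class and file names here are hypothetical): explicit instantiation declarations (extern template) stop every translation unit from re-instantiating the same specializations.

    // matrix.h - a heavily used class template
    #pragma once
    template <typename T>
    class Matrix {
        // ... lots of member functions ...
    };

    // tell other translation units NOT to instantiate Matrix<double> themselves
    extern template class Matrix<double>;

    // matrix.cpp - the single place where Matrix<double> is actually compiled
    #include "matrix.h"
    template class Matrix<double>;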

+1