"It will save me a tremendous amount of time."
No, it will not. Hyper-threading helps when the two threads sharing a core use different execution resources inside the CPU. For example, one thread does heavy floating-point work and the other does not: while the first is busy in the floating-point units, the rest of the core is free for the other thread.
Compiler threads, for obvious reasons, all want the same internal CPU resources. All you achieve is twice as many threads fighting over the cache and the same execution units, and the extra cache contention tends to make things slower rather than faster.
So the above explains why you won't get BIG gains from Hyper-Threading on compute-bound code like a compiler. The usual wisdom for parallel make is to set the number of jobs to one more than the number of cores, on the assumption that at any given moment roughly one of the N processes will be blocked on disk I/O. That is for Unix make, of course, where each job does a fair amount of makefile processing in addition to the actual compilation.
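For what it's worth, here is a minimal sketch of that rule of thumb (assuming a Unix-style make on the PATH; the job count is just the heuristic, not a measured optimum for your machine):

```python
import os
import subprocess

# Rule of thumb: one more job than there are logical CPUs, on the
# assumption that roughly one job at a time is blocked on disk I/O.
jobs = (os.cpu_count() or 1) + 1

# Run make in the current directory with that job count.
subprocess.run(["make", f"-j{jobs}"], check=True)
```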
If you turned the knob up to 8 and saw no change in the CPU usage reported by Task Manager (note that the change might even be a drop in throughput, for the reasons above), it is probably because some projects in your solution depend on each other and are therefore compiled sequentially. If one project depends on the output of another (pre-compiled headers often cause this), that dependency limits how many build tasks can run at once: even on a 16-core machine you will never get more parallelism than the solution's dependency structure allows.
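To illustrate that last point, here is a small sketch (with purely hypothetical project names, not your solution) showing how a dependency graph caps the useful parallelism no matter how many cores you throw at it:

```python
# Toy dependency graph: pch -> core -> {gui, net} -> app.
# At most two projects can ever build at the same time here.
deps = {
    "pch": [],
    "core": ["pch"],
    "gui": ["core"],
    "net": ["core"],
    "app": ["gui", "net"],
}

def build_waves(deps):
    """Group projects into waves that could build in parallel."""
    done, waves = set(), []
    while len(done) < len(deps):
        ready = [p for p in deps
                 if p not in done and all(d in done for d in deps[p])]
        waves.append(ready)
        done.update(ready)
    return waves

for i, wave in enumerate(build_waves(deps), 1):
    print(f"wave {i}: {wave}")
# wave 1: ['pch']
# wave 2: ['core']
# wave 3: ['gui', 'net']
# wave 4: ['app']
# Peak parallelism is 2, regardless of the core count or the knob setting.
```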