There is a C# function A(arg1, arg2) that needs to be called many times. To make this as fast as possible, I use parallel programming.
Consider the following code:
long totalCalls = 2000000;
int threads = Environment.ProcessorCount;
ParallelOptions options = new ParallelOptions();
options.MaxDegreeOfParallelism = threads;
Parallel.ForEach(Enumerable.Range(1, threads), options, range =>
{
    for (long i = 0; i < totalCalls / threads; i++)
    {
        A(arg1, arg2);
    }
});
Now the problem is that this does not scale with the number of cores: on 8 cores it uses about 80% of the CPU, and on 16 cores only 40-50%. I want to utilize the processor to the maximum extent.
You can assume that A(arg1, arg2) internally contains complex calculations, but it has no I/O or network operations, and there is no thread blocking. How can I figure out which part of the code is preventing it from running 100% in parallel?
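One common cause of this symptom is hidden shared state inside A, for example a lock, a shared Random instance, or heavy allocation that contends on the GC. A minimal sketch of that effect (the workload and all names here are hypothetical stand-ins, not the real A): the same computation run freely versus wrapped in a single lock, which serializes the whole Parallel.ForEach.

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

class ContentionDemo
{
    static readonly object Gate = new object();

    // Hypothetical stand-in for A: pure computation, no shared state.
    public static double Independent(int n)
    {
        double x = n;
        for (int i = 0; i < 200_000; i++) x = Math.Sqrt(x + i);
        return x;
    }

    // The same computation behind a lock: one hidden lock, shared
    // Random, or other contended resource inside A serializes the
    // parallel loop just like this.
    public static double Locked(int n)
    {
        lock (Gate) { return Independent(n); }
    }

    // Runs f once per item across all cores and returns wall time.
    public static long TimeIt(Func<int, double> f)
    {
        var sw = Stopwatch.StartNew();
        Parallel.ForEach(Enumerable.Range(1, Environment.ProcessorCount * 8),
                         n => f(n));
        return sw.ElapsedMilliseconds;
    }

    static void Main()
    {
        // With the lock, wall time grows toward the serial sum and
        // CPU utilization drops -- the symptom described above.
        Console.WriteLine($"independent: {TimeIt(Independent)} ms");
        Console.WriteLine($"locked:      {TimeIt(Locked)} ms");
    }
}
```

If the locked timing is close to the independent one multiplied by the core count, the loop is effectively serial; a profiler (e.g. the Visual Studio Concurrency Visualizer) will show the same thing as threads waiting on synchronization.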
I also tried increasing the degree of parallelism, for example:
int threads = Environment.ProcessorCount * 2;
But it did not help.
Update 1 - if I run the same code, replacing A() with a simple function that calculates prime numbers, it uses 100% of the CPU and scales well. This confirms that the rest of the code is correct, so the problem is likely inside the original function A(). I need a way to detect what is causing this partial serialization.
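The prime-number baseline described above can be sketched as follows (the limit and helper names are my assumptions, not from the original post). A pure, state-free function like IsPrime scales across cores, which is exactly what makes it a good control: if this saturates the CPU and A() does not, the bottleneck is inside A().

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class PrimeBaseline
{
    // Naive trial-division primality test: pure CPU work with no
    // shared state, so Parallel.ForEach can keep every core busy.
    public static bool IsPrime(int n)
    {
        if (n < 2) return false;
        for (int d = 2; (long)d * d <= n; d++)
            if (n % d == 0) return false;
        return true;
    }

    // Counts primes in [2, 2 + count) in parallel; Interlocked
    // keeps the counter correct without serializing the work.
    public static int CountPrimes(int count)
    {
        int total = 0;
        Parallel.ForEach(Enumerable.Range(2, count), n =>
        {
            if (IsPrime(n)) Interlocked.Increment(ref total);
        });
        return total;
    }

    static void Main()
    {
        Console.WriteLine($"primes found: {CountPrimes(200_000)}");
    }
}
```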
c# task-parallel-library cpu-usage parallel.foreach
Ramesh soni