I have the following program (taken from http://blogs.msdn.com/b/csharpfaq/archive/2010/06/01/parallel-programming-in-net-framework-4-getting-started.aspx ), which splits the work across iterations of a Parallel.For loop:
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class Program
{
    static void Main(string[] args)
    {
        var watch = Stopwatch.StartNew();

        // Run the SumRootN calls for roots 2..19 in parallel.
        Parallel.For(2, 20, (i) =>
        {
            var result = SumRootN(i);
            Console.WriteLine("root {0} : {1} ", i, result);
        });

        Console.WriteLine(watch.ElapsedMilliseconds);
        Console.ReadLine();
    }

    public static double SumRootN(int root)
    {
        double result = 0;
        for (int i = 1; i < 10000000; i++)
        {
            result += Math.Exp(Math.Log(i) / root);
        }
        return result;
    }
}
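(For context only, and not part of the original program: a minimal sketch of timing the same work sequentially with the identical Stopwatch pattern, reusing the SumRootN method above, so the parallel numbers below have a baseline to compare against.)

var watch = Stopwatch.StartNew();
for (int i = 2; i < 20; i++)
{
    // Same work as the parallel loop, but on a single thread.
    Console.WriteLine("root {0} : {1} ", i, SumRootN(i));
}
Console.WriteLine("sequential: {0} ms", watch.ElapsedMilliseconds);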
When I run the parallel version several times, I get times like:
1992, 2140, 1783, 1863 ms, and so on.
My first question is: why do the times differ between runs? I perform exactly the same calculation every time, yet the elapsed time changes on every run.
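(A rough sketch, again reusing SumRootN from the program above, of how the same loop could be measured with one untimed warm-up pass followed by an average over several timed passes; the warm-up keeps JIT compilation and thread-pool start-up out of the numbers, and the run count of 5 is an arbitrary choice.)

// Untimed warm-up pass: JIT compilation and thread-pool ramp-up happen here.
Parallel.For(2, 20, i => SumRootN(i));

const int runs = 5;
long totalMs = 0;
for (int run = 0; run < runs; run++)
{
    var watch = Stopwatch.StartNew();
    Parallel.For(2, 20, i => SumRootN(i));
    watch.Stop();
    totalMs += watch.ElapsedMilliseconds;
}
Console.WriteLine("average over {0} runs: {1} ms", runs, totalMs / runs);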
Now, when I add the following code to use all available logical processors on my machine (8 on my CPU):
var parallelOptions = new ParallelOptions
{
    MaxDegreeOfParallelism = Environment.ProcessorCount // 8 on my machine
};

Parallel.For(2, 20, parallelOptions, (i) =>
{
    var result = SumRootN(i);
    Console.WriteLine("root {0} : {1} ", i, result);
});
I notice that the execution time actually increases! Times:
2192, 3192, 2603, 2245 ms, and so on.
Why does this increase the execution time? Am I using ParallelOptions incorrectly?
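(For completeness, a small harness, only a sketch rather than my original code, that times the default Parallel.For and the MaxDegreeOfParallelism variant back to back in the same process, so both runs see a warmed-up runtime:)

using System;
using System.Diagnostics;
using System.Threading.Tasks;

class ParallelTiming
{
    static void Main()
    {
        TimeIt("default", null);
        TimeIt("MaxDegreeOfParallelism = ProcessorCount",
               new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount });
    }

    static void TimeIt(string label, ParallelOptions options)
    {
        var watch = Stopwatch.StartNew();
        if (options == null)
            Parallel.For(2, 20, i => SumRootN(i));          // default scheduling
        else
            Parallel.For(2, 20, options, i => SumRootN(i)); // capped degree of parallelism
        Console.WriteLine("{0}: {1} ms", label, watch.ElapsedMilliseconds);
    }

    static double SumRootN(int root)
    {
        double result = 0;
        for (int i = 1; i < 10000000; i++)
            result += Math.Exp(Math.Log(i) / root);
        return result;
    }
}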