Deadlock risk in a nested Parallel.For

Take the following naive implementation of an asynchronous nested loop using ThreadPool:

    ThreadPool.SetMaxThreads(10, 10);
    CountdownEvent icnt = new CountdownEvent(1);
    for (int i = 0; i < 50; i++)
    {
        icnt.AddCount();
        ThreadPool.QueueUserWorkItem((inum) =>
        {
            Console.WriteLine("i" + inum + " scheduled...");
            Thread.Sleep(10000); // simulated i/o
            CountdownEvent jcnt = new CountdownEvent(1);
            for (int j = 0; j < 50; j++)
            {
                jcnt.AddCount();
                ThreadPool.QueueUserWorkItem((jnum) =>
                {
                    Console.WriteLine("j" + jnum + " scheduled...");
                    Thread.Sleep(20000); // simulated i/o
                    jcnt.Signal();
                    Console.WriteLine("j" + jnum + " complete.");
                }, j);
            }
            jcnt.Signal();
            jcnt.Wait();
            icnt.Signal();
            Console.WriteLine("i" + inum + " complete.");
        }, i);
    }
    icnt.Signal();
    icnt.Wait();

Now, you would never actually use this pattern (it deadlocks almost immediately at startup), but it demonstrates a specific deadlock you can cause with the ThreadPool: blocking while waiting for nested work items to complete, after the blocked work items have consumed the entire pool.

I am wondering whether there is any risk of producing similarly destructive behavior with the nested Parallel.For version of this:

    Parallel.For(1, 50, (i) =>
    {
        Console.WriteLine("i" + i + " scheduled...");
        Thread.Sleep(10000); // simulated i/o
        Parallel.For(1, 5, (j) =>
        {
            Thread.Sleep(20000); // simulated i/o
            Console.WriteLine("j" + j + " complete.");
        });
        Console.WriteLine("i" + i + " complete.");
    });

Obviously, the scheduling mechanism here is much more sophisticated (and I have not seen this version deadlock), but it seems the risk could still be lurking. Is it theoretically possible to drain the pool that Parallel.For uses to the point of creating a deadlock, given dependencies on nested work? That is, is there a limit to the number of threads Parallel.For keeps in its back pocket for tasks that get scheduled late?

1 answer

No, there is no such deadlock in Parallel.For() (or Parallel.ForEach()).

There are factors that merely reduce the risk of deadlock (for example, the dynamic number of threads used), but there is also a reason why deadlock is outright impossible: iteration also runs on the original calling thread. This means that if the ThreadPool is completely occupied, the computation simply runs fully synchronously on that thread. In that case you get no speedup from using Parallel.For(), but your code still runs to completion with no deadlock.
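A quick way to observe this is to record the id of the thread that invokes Parallel.For and check whether any iterations run on it. This is a minimal sketch (the class name and iteration count are my own choices, not from the original post):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class CallerParticipation
{
    static void Main()
    {
        // Record the id of the thread that invokes Parallel.For.
        int callerId = Thread.CurrentThread.ManagedThreadId;
        bool ranOnCaller = false;

        Parallel.For(0, 1000, i =>
        {
            // If any iteration executes on the calling thread,
            // the caller is acting as one of the workers.
            if (Thread.CurrentThread.ManagedThreadId == callerId)
                ranOnCaller = true;
        });

        Console.WriteLine("Caller thread participated: " + ranOnCaller);
    }
}
```

In practice this prints True: the calling thread always joins in as a worker, which is exactly why the loop can fall back to fully synchronous execution when the pool is saturated.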

In addition, a similar situation with Task is also resolved correctly: if you Wait() on a Task (or access its Result) that has not yet started running, it will be executed inline on the current thread. I believe this is primarily a performance optimization, but in some specific cases it can also avoid deadlocks.
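The inlining behavior can be sketched as follows. Whether the child actually gets inlined is nondeterministic (it depends on whether the child has started by the time Wait() is called), so this is an illustration of the mechanism rather than a guaranteed outcome; the class name is mine:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class WaitInlining
{
    static void Main()
    {
        Task.Run(() =>
        {
            int outerId = Thread.CurrentThread.ManagedThreadId;

            // The child task is queued but may not have started yet.
            var child = Task.Run(() => Thread.CurrentThread.ManagedThreadId);

            // If the child has not started when we Wait(), the scheduler
            // may execute it inline on this (outer) thread rather than
            // blocking us while the child waits for a free pool thread.
            child.Wait();

            Console.WriteLine(child.Result == outerId
                ? "child was inlined on the waiting thread"
                : "child ran on its own pool thread");
        }).Wait();
    }
}
```

Either message is possible on a given run; the point is that a blocked Wait() on an unstarted task does not have to stay blocked.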

But I think the question is more theoretical than practical anyway: since .NET 4, the ThreadPool has a default maximum size of around a thousand worker threads. And if at any single moment you have a thousand threads blocked, you are doing something very wrong.

