Take the following naive implementation of an asynchronous nested loop using the ThreadPool:
ThreadPool.SetMaxThreads(10, 10);
CountdownEvent icnt = new CountdownEvent(1);

for (int i = 0; i < 50; i++)
{
    icnt.AddCount();
    ThreadPool.QueueUserWorkItem((inum) =>
    {
        Console.WriteLine("i" + inum + " scheduled...");
        Thread.Sleep(10000); // simulated I/O

        CountdownEvent jcnt = new CountdownEvent(1);
        for (int j = 0; j < 50; j++)
        {
            jcnt.AddCount();
            ThreadPool.QueueUserWorkItem((jnum) =>
            {
                Console.WriteLine("j" + jnum + " scheduled...");
                Thread.Sleep(20000); // simulated I/O
                jcnt.Signal();
                Console.WriteLine("j" + jnum + " complete.");
            }, j);
        }
        jcnt.Signal();
        jcnt.Wait();

        icnt.Signal();
        Console.WriteLine("i" + inum + " complete.");
    }, i);
}
icnt.Signal();
icnt.Wait();
Now, you would never actually write code like this (it deadlocks almost immediately), but it demonstrates a specific deadlock you can cause with the ThreadPool: blocking while waiting for nested work items to complete, after the blocking work items have already consumed the entire pool.
I am wondering whether there is any risk of creating similar destructive behavior with the nested Parallel.For version of this:
Parallel.For(1, 50, (i) =>
{
    Console.WriteLine("i" + i + " scheduled...");
    Thread.Sleep(10000); // simulated I/O

    Parallel.For(1, 5, (j) =>
    {
        Thread.Sleep(20000); // simulated I/O
        Console.WriteLine("j" + j + " complete.");
    });

    Console.WriteLine("i" + i + " complete.");
});
Obviously, the scheduling mechanism here is much more sophisticated (and I have not seen this version deadlock), but it seems the main risk could still be lurking there. Is it theoretically possible to drain the pool that Parallel.For draws from to the point of creating a deadlock, given dependencies on nested work? That is, is there a limit to the number of threads that Parallel.For keeps in its back pocket for work that only gets scheduled after a delay?
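For what it's worth, one way I have been probing this is to log the available worker-thread count from inside each outer iteration and to cap the parallelism explicitly; the MaxDegreeOfParallelism cap and the shortened sleeps below are my additions for experimentation, not part of the original snippet:

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class Program
    {
        static void Main()
        {
            // Explicit cap on concurrency (my addition, to make the limit observable).
            var options = new ParallelOptions { MaxDegreeOfParallelism = 4 };

            Parallel.For(1, 10, options, (i) =>
            {
                // Snapshot of how many ThreadPool worker threads remain available.
                ThreadPool.GetAvailableThreads(out int worker, out int io);
                Console.WriteLine($"i{i} on thread {Thread.CurrentThread.ManagedThreadId}, available workers: {worker}");
                Thread.Sleep(100); // simulated I/O (shortened from the original)

                Parallel.For(1, 5, options, (j) =>
                {
                    Thread.Sleep(200); // simulated I/O (shortened from the original)
                });
            });
        }
    }

Watching the available-workers number over time at least shows whether the pool is injecting extra threads while the outer iterations block, which is what I would expect to decide whether the nested version can starve.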