ForkJoinPool, Phaser and managed blocking: how hard do they work against deadlocks?

This small piece of code never terminates on jdk8u45, while it used to complete correctly on jdk8u20:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Phaser;

    public class TestForkJoinPool {

        final static ExecutorService pool = Executors.newWorkStealingPool(8);

        private static volatile long consumedCPU = System.nanoTime();

        public static void main(String[] args) throws InterruptedException {
            final int numParties = 100;
            final Phaser p = new Phaser(1);
            final Runnable r = () -> {
                p.register();
                p.arriveAndAwaitAdvance();
                p.arriveAndDeregister();
            };
            for (int i = 0; i < numParties; ++i) {
                consumeCPU(1000000);
                pool.submit(r);
            }
            while (p.getArrivedParties() != numParties) {}
        }

        static void consumeCPU(long tokens) {
            // Taken from the JMH Blackhole
            long t = consumedCPU;
            for (long i = tokens; i > 0; i--) {
                t += (t * 0x5DEECE66DL + 0xBL + i) & (0xFFFFFFFFFFFFL);
            }
            if (t == 42) {
                consumedCPU += t;
            }
        }
    }

The Phaser javadoc claims that:

Phasers may also be used by tasks executing in a ForkJoinPool, which will ensure sufficient parallelism to execute tasks when others are blocked waiting for a phase to advance.

However, the javadoc of ForkJoinPool#managedBlock says:

If running in a ForkJoinPool, the pool may first be expanded to ensure sufficient parallelism available during the current block.

Note that it only says the pool may be expanded. So I'm not sure whether this is a bug or simply bad code that should not rely on the Phaser / ForkJoinPool contract: how hard does the Phaser / ForkJoinPool combination actually work to prevent deadlocks?
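For reference, here is a minimal sketch of the mechanism the managedBlock javadoc is talking about (the class name, pool size and latch are mine, not part of my test): blocking code running inside a ForkJoinPool can be wrapped in a ForkJoinPool.ManagedBlocker, so the pool is allowed to add a compensation worker while the current worker blocks. Phaser is supposed to do this internally when its tasks run in a ForkJoinPool.

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.TimeUnit;

    public class ManagedBlockSketch {
        public static void main(String[] args) {
            ForkJoinPool fjp = new ForkJoinPool(2);
            CountDownLatch latch = new CountDownLatch(1);

            fjp.submit(() -> {
                try {
                    // Tell the pool we are about to block, so it may compensate
                    // by starting a spare worker before we park.
                    ForkJoinPool.managedBlock(new ForkJoinPool.ManagedBlocker() {
                        @Override
                        public boolean block() throws InterruptedException {
                            latch.await();
                            return true;                  // blocking is finished
                        }

                        @Override
                        public boolean isReleasable() {
                            return latch.getCount() == 0; // no need to block anymore
                        }
                    });
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            // This second task should still get a thread even while the first
            // one is blocked, because the pool was given a chance to compensate.
            fjp.submit(latch::countDown);

            fjp.awaitQuiescence(5, TimeUnit.SECONDS);
            fjp.shutdown();
        }
    }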


My configuration:

  • Linux adc 3.14.27-100.fc19.x86_64 #1 SMP Wed Dec 17 19:36:34 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
  • 8-core i7
1 answer

It seems your problem is caused by a change in the ForkJoinPool code between JDK 8u20 and 8u45.

In u20, ForkJoin threads always stayed alive for at least 200 milliseconds (see ForkJoinPool.FAST_IDLE_TIMEOUT) before being reclaimed.

In u45, as soon as the ForkJoinPool has reached its target parallelism plus 2 extra threads, threads die as soon as they run out of work, without any timeout. You can see this change in the awaitWork method of ForkJoinPool.java (around line 1810):

    int t = (short)(c >>> TC_SHIFT);  // shrink excess spares
    if (t > 2 && U.compareAndSwapLong(this, CTL, c, prevctl))
        return false;

Your program relies on the blocked Phaser tasks to create additional workers: each blocked task spawns a new compensation worker, which then picks up the next submitted task.
However, once you reach target parallelism + 2, the compensation worker dies immediately without waiting, and is therefore unable to pick up the task that is submitted right after it.
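If you want to watch this happening, here is a rough diagnostic sketch (the monitor thread and its variable names are mine, not part of your test): placed in TestForkJoinPool.main() right after the Phaser is created, it prints the pool size while the tasks block. On a JDK with the u20 behaviour the pool size should grow well beyond the target parallelism; on u45 it stalls around parallelism + 2 and getArrivedParties() never reaches 100.

    // Assumes this is inserted into TestForkJoinPool.main() after the Phaser "p"
    // has been created; "pool" is the work-stealing pool from the test above.
    ForkJoinPool fjp = (ForkJoinPool) pool;   // newWorkStealingPool returns a ForkJoinPool
    Thread monitor = new Thread(() -> {
        while (!Thread.currentThread().isInterrupted()) {
            System.out.println("pool size = " + fjp.getPoolSize()
                    + ", running = " + fjp.getRunningThreadCount()
                    + ", arrived parties = " + p.getArrivedParties());
            try {
                Thread.sleep(500);
            } catch (InterruptedException e) {
                return;
            }
        }
    });
    monitor.setDaemon(true);
    monitor.start();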

Hope this helps.
