There are two problems with your approach. First, using a custom ForkJoinPool will not change the maximum number of individual tasks created by the Stream API, because that number is defined as follows:
static final int LEAF_TARGET = ForkJoinPool.getCommonPoolParallelism() << 2;
Thus, even if you use a custom pool, the number of parallel tasks will be limited by commonPoolParallelism * 4. (Strictly speaking, this is not a hard limit but a target, yet in many cases the number of tasks equals exactly this number.)
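To see the effect on your machine, you can compute the same value the Stream API uses internally. This is a minimal sketch; the class name is mine, and the shift mirrors the `LEAF_TARGET` formula quoted above:

```java
import java.util.concurrent.ForkJoinPool;

public class LeafTargetDemo {
    public static void main(String[] args) {
        int parallelism = ForkJoinPool.getCommonPoolParallelism();
        // Same formula as the LEAF_TARGET field: parallelism << 2 == parallelism * 4
        int leafTarget = parallelism << 2;
        System.out.println("common pool parallelism = " + parallelism);
        System.out.println("leaf task target        = " + leafTarget);
    }
}
```

Note that the result depends only on the common pool, not on whatever custom pool you run the stream in.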
The above problem can be fixed with the system property java.util.concurrent.ForkJoinPool.common.parallelism, but then you face another problem: Files.lines parallelizes very poorly. See this question for more details. In particular, for 13,000 input lines the maximum possible speedup is about 3.17x (assuming each line takes roughly the same time to process), even if you have 100 processors. My StreamEx library provides a workaround for this (create the stream via StreamEx.ofLines(path).parallel()). Another possible solution is to read the lines of the file into a List sequentially and then create a parallel stream from it:
Files.readAllLines(path).parallelStream()...
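Fleshed out, that one-liner looks like the sketch below. It writes a small temporary file so the example is self-contained; the file contents and the toUpperCase mapping are mine, purely for illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ReadAllLinesDemo {
    public static void main(String[] args) throws IOException {
        Path path = Files.createTempFile("demo", ".txt");
        Files.write(path, Arrays.asList("a", "b", "c"));

        // readAllLines loads the whole file into a List, which splits well
        // for parallel processing (unlike the Files.lines spliterator).
        List<String> upper = Files.readAllLines(path).parallelStream()
                .map(String::toUpperCase)
                .collect(Collectors.toList());

        System.out.println(upper); // collect preserves encounter order
        Files.delete(path);
    }
}
```

Because the List is random-access and of known size, the spliterator can split it into balanced chunks, which is exactly what Files.lines cannot do.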
This will work together with the system property. In general, however, the Stream API is a poor fit for parallel processing when the tasks involve I/O. A more flexible solution is to use a CompletableFuture for each line:
ForkJoinPool fjp = new ForkJoinPool(100);
List<CompletableFuture<String>> list = Files.lines(path)
    .map(line -> CompletableFuture.supplyAsync(() -> addressName(line), fjp))
    .collect(Collectors.toList());
list.stream().map(CompletableFuture::join)
    .forEach(System.out::println);
This way you do not need to set the system property, and you can use separate pools for separate tasks.
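Here is the same CompletableFuture pattern as a self-contained, runnable sketch. The addressName method is a stand-in for your real per-line work (hypothetical, since your implementation was not shown), and the input lines are hard-coded instead of read from a file:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ForkJoinPool;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class CompletableFutureDemo {
    // Stand-in for the answer's addressName(line) call (hypothetical).
    static String addressName(String line) {
        return "addr:" + line;
    }

    public static void main(String[] args) {
        // Custom pool: its parallelism is independent of the common pool.
        ForkJoinPool fjp = new ForkJoinPool(8);

        // One future per line; each runs addressName in the custom pool.
        List<CompletableFuture<String>> futures = Stream.of("a", "b", "c")
                .map(line -> CompletableFuture.supplyAsync(() -> addressName(line), fjp))
                .collect(Collectors.toList());

        // join() blocks for each result; iterating the list keeps input order.
        futures.stream().map(CompletableFuture::join)
                .forEach(System.out::println);

        fjp.shutdown(); // release the worker threads when done
    }
}
```

One detail worth noting: remember to shut the custom pool down (or reuse it), since unlike the common pool its threads are not managed for you.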