// parallel processing
int processors = Runtime.getRuntime().availableProcessors();
ExecutorService executorService = Executors.newFixedThreadPool(processors);
final List<String> albumIds2 = new ArrayList<String>();

long start2 = System.nanoTime();
for (final HColumn<String, String> column : result.get().getColumns()) {
    Runnable worker = new Runnable() {
        @Override
        public void run() {
            albumIds2.add(column.getName());
        }
    };
    executorService.execute(worker);
}
long timeTaken2 = System.nanoTime() - start2;
I have code similar to the example above that builds a List<String> of album identifiers; each column comes from a Cassandra query result. I record the time it takes to build the entire list of album IDs.
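In case it helps clarify what I am trying to do, here is a self-contained sketch of the parallel approach with the pieces I think are needed to make the timing meaningful: a thread-safe list and waiting for the executor to finish before reading the clock. The columnNames list is just a stand-in for the names I would normally read from Cassandra, and the synchronizedList / shutdown / awaitTermination parts are my assumptions, not what my current code does.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelTimingSketch {
    public static void main(String[] args) throws InterruptedException {
        // Stand-in for the column names I would normally get from the Cassandra row.
        List<String> columnNames = Arrays.asList("album-1", "album-2", "album-3");

        int processors = Runtime.getRuntime().availableProcessors();
        ExecutorService executorService = Executors.newFixedThreadPool(processors);

        // Synchronized wrapper, since a plain ArrayList is not safe for concurrent adds.
        final List<String> albumIds = Collections.synchronizedList(new ArrayList<String>());

        long start = System.nanoTime();
        for (final String name : columnNames) {
            executorService.execute(new Runnable() {
                @Override
                public void run() {
                    albumIds.add(name);
                }
            });
        }
        // Stop accepting new tasks and wait for the submitted ones to finish,
        // so the elapsed time covers the actual work, not just task submission.
        executorService.shutdown();
        executorService.awaitTermination(1, TimeUnit.MINUTES);
        long timeTaken = System.nanoTime() - start;

        System.out.println("albums: " + albumIds.size() + ", nanos: " + timeTaken);
    }
}

I am not sure these additions are correct, but without them my original measurement seems to stop as soon as the tasks are submitted.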
I did the same with a plain enhanced for loop, as shown below.
QueryResult<ColumnSlice<String, String>> result = CassandraDAO.getRowColumns(AlbumIds_CF, customerId);

long start = System.nanoTime();
for (HColumn<String, String> column : result.get().getColumns()) {
    albumIds.add(column.getName());
}
long timeTaken = System.nanoTime() - start;
I notice that no matter how many albums there are, the plain loop always takes less time than the parallel version. Am I doing something wrong, or do I need a machine with more cores? I am really new to parallel computing, so please forgive me if this is a naive question.