int[] records = job.getTargetSearchIDs();
topology.applyMatcherSearchWeight(records);
int[] mIDs = topology.getMatcherIds();
SystemResponse[] sysResponse = new SystemResponse[mIDs.length];
Map<Integer, SearchCommand> mrCmdsMap = new HashMap<Integer, SearchCommand>();
mIDs holds 250 entries and records holds 7.5 million integers. I want this loop to finish in under 3 seconds on an 8-core Intel Xeon X5355 server running 64-bit Linux (Ubuntu) with a 32-bit JVM.
for (int mID : mIDs) {
    // collect the record IDs that fall inside this matcher's range
    List<Integer> recIDsToMatch = new LinkedList<Integer>();
    Matcher matcher = topology.getMatcherById(mID);
    for (int record : records) {
        if (matcher.getRange().isInRange(record))
            recIDsToMatch.add(record);
    }
    if (recIDsToMatch.size() > 0) {
        SearchCommand command = new SearchCommand(
                job.getMatchParameters(),
                job.getRequestType(),
                job.getId(),
                job.getMatchParameters().getEngineProperties(),
                recIDsToMatch);
        command.setTimeout(searchTimeout, TimeUnit.SECONDS);
        mrCmdsMap.put(mID, command);
    }
}
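For comparison, here is a minimal sketch of one alternative direction, assuming the matcher ranges are simple inclusive [min, max] intervals over int IDs (IntRange, filterByRange, lowerBound and upperBound are hypothetical names, not part of the real Topology/Matcher API): sort records once, then locate each matcher's slice with two binary searches instead of scanning all 7.5 million entries for each of the 250 matchers.

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class RangeFilterSketch {

    // Stand-in for matcher.getRange(); the real Range API may differ.
    static final class IntRange {
        final int min, max;                       // inclusive bounds (assumed)
        IntRange(int min, int max) { this.min = min; this.max = max; }
    }

    // For each matcher ID, returns the slice of the sorted records inside its range.
    static Map<Integer, int[]> filterByRange(int[] records, int[] mIDs, IntRange[] ranges) {
        int[] sorted = records.clone();
        Arrays.sort(sorted);                      // O(n log n), done once
        Map<Integer, int[]> result = new HashMap<Integer, int[]>();
        for (int i = 0; i < mIDs.length; i++) {
            IntRange r = ranges[i];
            int lo = lowerBound(sorted, r.min);   // first index with value >= min
            int hi = upperBound(sorted, r.max);   // first index with value > max
            if (lo < hi) {
                result.put(mIDs[i], Arrays.copyOfRange(sorted, lo, hi));
            }
        }
        return result;
    }

    // First index whose value is >= key (standard lower-bound binary search).
    static int lowerBound(int[] a, int key) {
        int lo = 0, hi = a.length;
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (a[mid] < key) lo = mid + 1; else hi = mid;
        }
        return lo;
    }

    // First index whose value is strictly greater than key.
    static int upperBound(int[] a, int key) {
        int lo = 0, hi = a.length;
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (a[mid] <= key) lo = mid + 1; else hi = mid;
        }
        return lo;
    }

    public static void main(String[] args) {
        int[] records = {5, 12, 7, 30, 18, 3};
        int[] mIDs = {1, 2};
        IntRange[] ranges = {new IntRange(1, 10), new IntRange(15, 40)};
        // prints [3, 5, 7]
        System.out.println(Arrays.toString(filterByRange(records, mIDs, ranges).get(1)));
    }
}

With this layout the per-matcher work drops from a full O(n) scan to two O(log n) searches plus a copy of the matching slice, and the per-element boxing into LinkedList<Integer> disappears. Whether it applies depends on whether Range really is a contiguous interval and whether applyMatcherSearchWeight requires the original record order.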
What improvements come to mind when you read this code? What changes to the data structures and/or the algorithm would help?