I evaluate various data from a text file in a fairly large algorithm.
If the text file contains more than a certain number of data points (the minimum I need is 1.3 million values), it throws the following error:
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.regex.Matcher.<init>(Unknown Source)
    at java.util.regex.Pattern.matcher(Unknown Source)
    at java.lang.String.replaceAll(Unknown Source)
    at java.util.Scanner.processFloatToken(Unknown Source)
    at java.util.Scanner.nextDouble(Unknown Source)
This happens when I run it in Eclipse with the following settings for the installed JRE 6 (standard virtual machine):
-Xms20m -Xmx1024m -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -XX:NewSize=10m -XX:MaxNewSize=10m -XX:SurvivorRatio=6 -XX:TargetSurvivorRatio=80 -XX:+CMSClassUnloadingEnabled
Please note that it works fine if I process only part of the text file.
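For context, the reading part of my program looks roughly like the sketch below. It is simplified: the file name "data.txt" and the list are just placeholders, and the real algorithm does more with the values, but the reading pattern is the same one that appears in the stack trace.

    import java.io.File;
    import java.io.FileNotFoundException;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Scanner;

    public class ReadData {
        public static void main(String[] args) throws FileNotFoundException {
            // Read every value from the file and keep it in memory for the algorithm.
            // Each nextDouble() call goes through the regex machinery shown in the
            // stack trace, and each stored value is a boxed Double object.
            List<Double> values = new ArrayList<Double>();
            Scanner in = new Scanner(new File("data.txt"));
            while (in.hasNextDouble()) {
                values.add(in.nextDouble());
            }
            in.close();
            System.out.println("Read " + values.size() + " values");
        }
    }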
I have read a lot about this topic, and it seems that I either have a memory leak somewhere or I am storing too much data in arrays (which I think is the case).
Now my problem is: how can I get around this?
- Is it possible to change my settings so that I can still perform the calculation, or do I really need more processing power? (I don't know where I would get that.)
- I read somewhere that it is better to work with IDs and references (pointers) than to put all the data into arrays and let the processor handle it. But how would I change my code so that it only works with references?
Basically, I am looking for general recommendations on how to avoid excessive memory usage and memory leaks.
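To make the question more concrete: would a change along the following lines help, or is it beside the point? This is only a sketch (untested at 1.3 million values); it assumes one value per line and that the number of values is known in advance.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class ReadDataPrimitive {
        public static void main(String[] args) throws IOException {
            int expectedCount = 1300000;   // assumed to be known up front
            double[] values = new double[expectedCount];
            int n = 0;

            BufferedReader reader = new BufferedReader(new FileReader("data.txt"));
            try {
                String line;
                // Double.parseDouble() does not allocate Matcher objects the way
                // Scanner.nextDouble() does, and double[] stores primitives
                // instead of boxed Double objects.
                while ((line = reader.readLine()) != null && n < expectedCount) {
                    line = line.trim();
                    if (line.length() > 0) {
                        values[n++] = Double.parseDouble(line);
                    }
                }
            } finally {
                reader.close();
            }
            System.out.println("Read " + n + " values");
        }
    }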
Jean paul