I am trying to run NetLogo (a Java-based simulation) on a cluster as part of a large experiment, and I was surprised by the seemingly huge memory requirement for a relatively simple simulation. On the cluster it throws "java.lang.OutOfMemoryError: Java heap space" for any heap size below "-Xmx2500M", and a run takes 5 hours to complete. I ran the same experiment on both of my computers (an iMac and a MacBook Pro): each finished in under an hour, and "-Xmx1024" produced no errors. The cluster jobs also require "-XX:MaxPermSize=250M", whereas on my Macs no increase above the default is needed. I ran the same code with the same inputs and the same jars in all cases.
In each case, 64-bit JVMs are used (and as far as I know, they are very similar):
On the cluster:

$ java -version
java version "1.6.0_26"
Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)

On my Macs:

$ java -version
java version "1.6.0_31"
Java(TM) SE Runtime Environment (build 1.6.0_31-b04-415-10M3646)
Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01-415, mixed mode)
In all cases I start the client JVM (I originally used the server VM on the cluster; switching to the client made no difference). I also tried running on the cluster with Java 7 and hit the same huge memory and runtime problems.
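To rule out differences in the effective JVM configuration (beyond what `java -version` shows), a small diagnostic can be run on both the cluster and the Macs to print the actual heap limit and VM properties the process sees. This is a hypothetical helper, not part of the original experiment:

```java
// HeapCheck.java -- diagnostic sketch (hypothetical, not from the original
// experiment) to compare effective JVM limits across environments.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects the effective -Xmx limit for this process.
        System.out.println("max heap (MB):   " + rt.maxMemory() / (1024 * 1024));
        System.out.println("total heap (MB): " + rt.totalMemory() / (1024 * 1024));
        // vm.name distinguishes the Client VM from the Server VM.
        System.out.println("vm name:         " + System.getProperty("java.vm.name"));
        System.out.println("vm version:      " + System.getProperty("java.vm.version"));
        System.out.println("arch:            " + System.getProperty("os.arch"));
    }
}
```

Running it with the same flags used for the simulation (e.g. `java -Xmx1024M HeapCheck`) on each machine would show whether the two environments really end up with the same heap ceiling and VM type.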
I am completely puzzled, and no one I have spoken to about it can explain this. Has anyone encountered this before? Any help is much appreciated!
java heap jvm out-of-memory execution-time
user1660640