I am constantly updating a Neo4j graph through the REST API with concurrent requests. I open and close each transaction explicitly, I use the recommended garbage collector (ConcurrentMarkSweep), and my heap is large enough to hold the entire graph in the cache, yet I see the old-generation memory creep well above the size of the graph itself, reaching 8 GB with about 4 million nodes and 15 million relationships. Has anyone run into a similar problem? Since I am going through the REST API, it is hard to tell where the memory leak occurs.
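For context, this is roughly how I talk to the server. A minimal sketch, assuming Neo4j 2.x's transactional Cypher HTTP endpoint (`/db/data/transaction/commit`) on the default port; the host, port, and Cypher statement here are placeholders, not my real workload. Batching statements into a single auto-commit request like this means the server never holds a transaction open between requests:

```python
import json
import urllib.request

# Hypothetical server location; adjust for your deployment.
BASE = "http://localhost:7474/db/data/transaction"

def tx_payload(statements):
    """Build the JSON body expected by the transactional endpoint:
    a list of Cypher statements executed in one transaction."""
    return {"statements": [{"statement": s} for s in statements]}

def run_in_one_tx(statements):
    """POST all statements to /commit in a single request, so the
    transaction is opened and committed server-side in one round trip."""
    body = json.dumps(tx_payload(statements)).encode("utf-8")
    req = urllib.request.Request(
        BASE + "/commit",
        data=body,
        headers={"Content-Type": "application/json",
                 "Accept": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Even with this single-request pattern (no long-lived open transactions on the server), old-generation usage still grows.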
Additional information: I use cache_type=strong and a 16 GB heap. I added these flags:
wrapper.java.additional=-XX:MaxTenuringThreshold=15
wrapper.java.additional=-XX:SurvivorRatio=20
wrapper.java.additional=-XX:NewRatio=1
to slow promotion of objects into the old generation, but the problem occurs both with and without them.