Is it normal for Node.js' RSS (Resident Set Size) to grow with each request until it reaches a certain limit?

I noticed that the RSS (Resident Set Size) of my Node.js application grows over time, and given that my server crashes with a "JS Object Allocation Failed - Out of Memory" error, this growth seems like the likely cause.

I set up the following very simple Node application:

    var express = require('express');
    var app = express();

    app.get('/', function (req, res, next) {
      res.end(JSON.stringify(process.memoryUsage()));
    });

    app.listen(8888);

Just by holding down the refresh hotkey at http://localhost:8888/ I can watch RSS/heap/etc. grow until RSS passes 50 MB (at which point I get bored). If I wait a few minutes and come back, RSS drops sharply - presumably the GC has run.
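The same pattern can be reproduced without a server or manual refreshing. The sketch below (an assumption on my part: plain Node, no Express needed) simulates the short-lived objects that many request/response cycles create and compares `process.memoryUsage()` before and after:

```javascript
// Simulate the garbage produced by many request/response cycles,
// then compare memory usage before and after the allocations.
const before = process.memoryUsage();

let garbage = [];
for (let i = 0; i < 100000; i++) {
  garbage.push(JSON.stringify({ id: i, payload: 'x'.repeat(64) }));
}

const during = process.memoryUsage();
console.log('heapUsed before:', before.heapUsed);
console.log('heapUsed during:', during.heapUsed);
console.log('rss during:     ', during.rss);

// Dropping the reference makes everything collectable, but RSS typically
// stays elevated until the GC actually runs and pages are returned to the OS.
garbage = null;
```

heapUsed climbs immediately with the allocations; RSS follows it up much faster than it comes back down, which matches what the refresh-key experiment shows.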

I am trying to figure out whether this explains why my actual Node application crashes. My production application quickly hits 100 MB RSS, and by the time it crashes it is usually at 200-300 MB. As far as I can tell, this shouldn't be too large (Node should be able to handle around 1.7 GB, I believe), but nonetheless I'm concerned that RSS on my production server keeps trending upward (the occasional dips notwithstanding):

[chart: RSS of the production server growing over time]
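(For reference, the ~1.7 GB figure I mention is V8's default old-generation limit. If the limit itself ever becomes the problem, it can be raised when launching Node - a sketch, assuming `app.js` is the entry point; the value is in megabytes:)

```shell
# Start the server with a 4 GB old-space limit instead of the default.
# Assumption: app.js is your application's entry point.
node --max-old-space-size=4096 app.js
```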

1 answer

This question is already quite old and has not received an answer yet, so I'll throw one in that refers to a 2013 blog post by Jay Conrod, who "worked on optimizing the V8 JavaScript engine for mobile phones."

V8 tries to make garbage collection efficient, and to that end it uses incremental marking and lazy sweeping.

In general, incremental marking is responsible for tracking whether your objects can be collected.

Incremental marking begins when the heap reaches a certain threshold size.

Lazy sweeping is responsible for collecting objects marked as garbage during incremental marking and for performing other laborious tasks.

Once incremental marking is complete, lazy sweeping begins. All objects have been marked live or dead, and the heap knows exactly how much memory could be freed by sweeping. All this memory doesn't necessarily have to be freed up right away, though, and delaying the sweeping won't really hurt anything. So rather than sweeping all pages at once, the garbage collector sweeps pages on an as-needed basis until all pages have been swept. At that point, the garbage collection cycle is complete, and incremental marking is free to start again.
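The effect is easy to observe directly. The following sketch (assumptions: Node started with `--expose-gc` so that `global.gc()` is available; the allocation sizes are arbitrary) allocates a pile of objects, drops the references, and then forces a full collection instead of waiting for lazy sweeping:

```javascript
// Run with: node --expose-gc demo.js
const baseline = process.memoryUsage().heapUsed;

// Allocate roughly tens of megabytes of small objects and short strings.
let garbage = [];
for (let i = 0; i < 500000; i++) {
  garbage.push({ id: i, payload: 'x'.repeat(16) });
}
const allocated = process.memoryUsage().heapUsed;

garbage = null; // everything allocated above is now unreachable

// Force a full, immediate collection instead of waiting for lazy sweeping.
// global.gc is only defined when Node was started with --expose-gc.
if (typeof global.gc === 'function') {
  global.gc();
}
const afterGc = process.memoryUsage().heapUsed;

console.log(`heapUsed: ${baseline} -> ${allocated} -> ${afterGc}`);
```

Without the flag, `afterGc` will usually still be close to `allocated`, which is exactly the lazy behavior described above: the memory is reclaimable, just not reclaimed yet.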

I think this explains why your server keeps allocating memory until it reaches a certain mark. For a deeper understanding, I recommend reading Jay Conrod's blog post "A tour of V8: Garbage Collection".

