Node.js Http.request slows down during stress testing. Am I doing something wrong?

Here is my sample code:

    var http = require('http');
    var url = require('url');

    var options1 = { host: 'www.google.com', port: 80, path: '/', method: 'GET' };

    http.createServer(function (req, res) {
      var start = new Date();
      // core http has no req.query; parse it from the request URL
      var myCounter = url.parse(req.url, true).query.myCounter || 0;
      var isSent = false;
      http.request(options1, function (response) {
        response.setEncoding('utf8');
        response.on('data', function (chunk) {
          var end = new Date();
          console.log(myCounter + ' BODY: ' + chunk + ' time: ' + (end - start) +
                      ' Request start time: ' + start.getTime());
          if (!isSent) {
            isSent = true;
            res.writeHead(200, {'Content-Type': 'application/xml'});
            res.end(chunk);
          }
        });
      }).end();
    }).listen(3013);

    console.log('Server running at port 3013');

What I found out is that when the proxied request goes to another server (Google or anything else external), the responses get slower and slower over a few seconds. This does not happen when it connects to another node.js server on the same network.

I am using JMeter for testing: 50 concurrent threads per second with a loop count of 1000.

I do not know what the problem is...

===========================

Further research:

I am running the same script on Rackspace and on EC2 for testing. The script uses http.request to connect to Google, Facebook, and to another script of mine that simply returns data (for example, "hello world"), hosted on a separate EC2 instance.

Testing tool: just JMeter on my desktop.

Pre-node.js baseline tests:
jMeter → Google. Result: fast and consistent.
jMeter → Facebook. Result: fast and consistent.
jMeter → my simple output script. Result: fast and consistent.

Then I ran 50 parallel threads/sec with 100 loops against my node.js servers, both the Rackspace one and the EC2 one (they show roughly the same performance):
jMeter → node.js → Google. Result: from 50 ms up to 2000 ms within 200 requests.
jMeter → node.js → Facebook. Result: from 200 ms up to 3000 ms after 200 requests.
jMeter → node.js → my simple output script. Result: from 100 ms up to 1000 ms after 200 requests.
The first 10-20 requests are fast, then everything starts to slow down.

Then, when I switch to 10 parallel threads, everything changes. The response times are very consistent and do not slow down.

So something seems to limit the number of concurrent outbound connections that node.js (http.request) can handle.

------------ More details --------------

Today I did more tests, and here it is: I used http.Agent and increased maxSockets. Interestingly, on one test server (EC2) this improves things significantly and the slowdown is gone. But the other server (Rackspace) only improves a little and still shows the slowdown. I even set "Connection: close" in the request header; it only improves things by about 100 ms.

If http.request uses a connection pool, how do I increase its size?

On both servers, "ulimit -a" shows an open-files limit of 1024.
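For reference, the per-process open-file limit can be inspected and raised like this on Linux (4096 and 65536 are example values, and the permanent-change file path assumes a PAM-based distro):

```shell
# Show current soft and hard limits on open file descriptors
ulimit -Sn
ulimit -Hn

# Raise the soft limit for this shell session, up to the hard limit
# (4096 is just an example value)
ulimit -Sn 4096 2>/dev/null || echo "requested limit above hard limit"

# For a permanent change, add lines like these to /etc/security/limits.conf
# and log in again:
#   <user>  soft  nofile  65536
#   <user>  hard  nofile  65536
```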

------------- ** MORE AND MORE ** -------------------

It seems that even when I set maxSockets to a larger number, it only works up to some limit. Apparently there is a socket restriction or an internal OS limitation. How do I raise it?

------------- ** AFTER EXTENSIVE TEST ** ---------------

After reading many posts, I found out the following:



quote from: https://github.com/joyent/node/issues/877


1) If I set the headers with Connection: 'keep-alive', performance is good and can go up to maxSockets = 1024 (that is my Linux setting).

    var options1 = {
      host: 'www.google.com',
      port: 80,
      path: '/',
      method: 'GET',
      headers: { 'Connection': 'keep-alive' }
    };

If I set it to "Connection": "close" instead, the response time is 100 times slower.

Funny things happened here:

1) On EC2, if I test with Connection: keep-alive first, it takes about 20-30 ms. Then, if I switch to Connection: close OR set agent: false, the response time slows down to 300 ms. WITHOUT restarting the server, if I then switch back to Connection: keep-alive, the response time slows down even further, to 4000 ms. Either I have to restart the server or wait a while before responses return to their lightning speed of 20-30 ms.

2) If I run it with agent: false from the start, the response time slows down to 300 ms at first. But then it gets faster again and returns to "normal".

My guess is that the connection pool still has some effect even when you set agent: false. If you stick with Connection: keep-alive, though, it stays fast. Just don't switch back and forth.




Update, July 25, 2011:

I tried the latest node.js V0.4.9 with the http.js and https.js fix from https://github.com/mikeal/node/tree/http2

Performance is much better and more stable.

4 answers

I solved the problem with

 require('http').globalAgent.maxSockets = 100000 

or

    var agent = new http.Agent();
    agent.maxSockets = 1000000; // 1 million
    http.request({ agent: agent });
===========================

This will not necessarily fix your problem, but it will clear your code a bit and use the various events as you need:

    var http = require('http');
    var url = require('url');

    var options1 = { host: 'www.google.com', port: 80, path: '/', method: 'GET' };

    http.createServer(function (req, res) {
      var start = new Date();
      // core http has no req.query; parse it from the request URL
      var myCounter = url.parse(req.url, true).query.myCounter || 0;
      http.request(options1, function (response) {
        res.on('drain', function () {   // output stream buffer drained
          response.resume();            // continue receiving from the input stream
        });
        response.setEncoding('utf8');
        res.writeHead(response.statusCode, {'Content-Type': 'application/xml'});
        response.on('data', function (chunk) {
          if (!res.write(chunk)) {      // write failed: the output stream is choking
            response.pause();           // make the incoming stream wait until drained
          }
        }).on('end', function () {
          var end = new Date();
          console.log(myCounter + ' time: ' + (end - start) +
                      ' Request start time: ' + start.getTime());
          res.end();
        });
      }).end();
    }).listen(3013);

    console.log('Server running at port 3013');

I removed the logging of the body. Since we stream from one socket to the other, we cannot be sure the whole body is available at any given moment without buffering it.

EDIT: I assume that node uses a connection pool for http.request. If you have 50 concurrent connections (and therefore 50 concurrent http.request attempts), you may be running into a connection pool limit. I don't have time to look into it right now, but you should check the node documentation on http, especially the http agent.

EDIT 2: There is a thread about a very similar issue on the node.js mailing list. You should take a look at it; Mikeal's message in particular should be of interest. He suggests disabling pooling entirely by passing the option agent: false to the http.request call. I have no further hints, so if this doesn't help, try asking on the node.js mailing list.

===========================

GitHub issue 877 may be related:

https://github.com/joyent/node/issues/877

Although it's not clear to me whether that is what you are hitting. The agent: false workaround worked for me when I hit it, since I was setting the Connection: keep-alive header on the request.

===========================

I struggled with the same problem. I found that my bottleneck was DNS-related, although I don't quite understand where or why. If I make my requests to something like http://myserver.com/asd , I can barely run 50-100 rq/s, and if I go above 100 rq/s things become a disaster: response times get huge, some requests never finish and wait indefinitely, and I have to kill -9 my server. If I make the requests to the server's IP address instead, everything is stable at 500 rq/s, although not completely smooth, and the graph (I have a real-time graph) shows spikes.

And beware that the number of open files in Linux is still limited; I managed to hit that once.

Another observation is that a single node process cannot handle 500 rq/s smoothly. But I can start 4 node processes, each serving 200 rq/s, and I get a very smooth graph, consistent CPU/network load, and very short response times. This is node 0.10.22.
