NodeJS on multiple processors (PM2, Cluster, Recluster, Naught)

I am exploring options for running node in a multi-core environment.

I am trying to determine the best approach, and so far I have seen these options:

  • Use the built-in cluster module and handle worker signals myself
  • Use PM2, but pm2 -i (cluster mode) is listed as beta
  • Naught
  • Recluster

Are there other alternatives? What do people use in production?

+7
3 answers

I use the built-in cluster module and it works very well. I have had over 10,000 concurrent connections (multiple workers on multiple servers), and it holds up very well.

The suggested approach is to use cluster together with the domain module for error handling.

This is taken almost directly from http://nodejs.org/api/domain.html. I changed it to create a new worker for each CPU core of your machine, got rid of the if/else, and added Express.

```javascript
var cluster = require('cluster'),
    http = require('http'),
    PORT = process.env.PORT || 1337,
    os = require('os'),
    server;

function forkClusters () {
  var cpuCount = os.cpus().length;
  // Create a worker for each CPU
  for (var i = 0; i < cpuCount; i += 1) {
    cluster.fork();
  }
}

// Master process
if (cluster.isMaster) {
  // You can also of course get a bit fancier about logging, and
  // implement whatever custom logic you need to prevent DoS
  // attacks and other bad behavior.
  //
  // See the options in the cluster documentation.
  //
  // The important thing is that the master does very little,
  // increasing our resilience to unexpected errors.
  forkClusters();

  cluster.on('disconnect', function (worker) {
    console.error('disconnect!');
    cluster.fork();
  });
}

function handleError (d) {
  d.on('error', function (er) {
    console.error('error', er.stack);

    // Note: we're in dangerous territory!
    // By definition, something unexpected occurred,
    // which we probably didn't want.
    // Anything can happen now! Be very careful!
    try {
      // Make sure we close down within 30 seconds
      var killtimer = setTimeout(function () {
        process.exit(1);
      }, 30000);
      // But don't keep the process open just for that!
      killtimer.unref();

      // Stop taking new requests.
      server.close();

      // Let the master know we're dead. This will trigger a
      // 'disconnect' in the cluster master, and then it will fork
      // a new worker.
      cluster.worker.disconnect();
    } catch (er2) {
      // Oh well, not much we can do at this point.
      console.error('Error shutting down!', er2.stack);
    }
  });
}

// Worker process
if (cluster.isWorker) {
  var domain = require('domain');
  var express = require('express');
  var app = express();
  app.set('port', PORT);

  // See the cluster documentation for more details about using
  // worker processes to serve requests: how it works, caveats, etc.
  var d = domain.create();
  handleError(d);

  // Now run the handler function in the domain.
  //
  // Put all code here: any code outside of d.run() will not have
  // its errors handled at the domain level, and will crash the app.
  d.run(function () {
    // This is where we start our server
    server = http.createServer(app).listen(app.get('port'), function () {
      console.log('Cluster %s listening on port %s', cluster.worker.id, app.get('port'));
    });
  });
}
```
+5

We use Supervisor to control our Node.js processes: it starts them at boot time and acts as a watchdog in case a process crashes.
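As a sketch, a supervisord program section for this kind of setup might look like the following. The paths, program name, and the four-process port scheme are illustrative assumptions, not details from the answer:

```ini
; /etc/supervisor/conf.d/node-app.conf (illustrative path)
[program:node-app]
command=node /srv/app/app.js
process_name=%(program_name)s_%(process_num)d
numprocs=4                          ; one process per port
environment=PORT=800%(process_num)d ; expands to 8000-8003
autostart=true                      ; start at boot with supervisord
autorestart=true                    ; respawn on crash (the watchdog role)
stdout_logfile=/var/log/node-app.log
redirect_stderr=true
```

Each app instance would then read its own port from the PORT environment variable.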

We use Nginx as a reverse proxy to load-balance between the processes, which listen on different ports.

This way each process is isolated from the others.

For example: Nginx listens on port 80 and forwards traffic to ports 8000-8003.
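A minimal Nginx sketch of that port scheme could look like this (the upstream name and header choices are illustrative, not from the answer):

```nginx
# Upstream matching the 8000-8003 example above
upstream node_app {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
}

server {
    listen 80;

    location / {
        proxy_pass http://node_app;
        # Preserve client information for the Node processes
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

By default Nginx round-robins requests across the upstream servers, so a crashed process (until Supervisor restarts it) only affects a fraction of traffic.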

0

I used PM2 for quite some time, but their pricing is too expensive for my needs: I have my own analytics setup and I do not need their support, so I decided to experiment with alternatives. In my case, forever did the trick; it is very simple, actually:

 forever -m 5 app.js 

Another useful example:

 forever start app.js -p 8080 
0
