What is the correct way to use the node.js postgresql module?

6 answers

I am the author of node-postgres. First, I apologize that the documentation failed to make the right option clear: that's my fault. I will try to improve it. I wrote a Gist to explain this, because the conversation grew too long for Twitter.

Using pg.connect is the way to go in a web environment.

A PostgreSQL server can only process one query at a time per connection. This means that if you have one global new pg.Client() connected to your backend, your entire application is bottlenecked by how fast postgres can respond to queries. It literally lines everything up, queuing every query. Yes, it is asynchronous, so that is alright ... but wouldn't you rather multiply your throughput by 10x? Use pg.connect and set pg.defaults.poolSize to something sane (we do 25-100, not sure what the right number is yet).
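
For illustration only, here is a minimal sketch of that pooled approach using the callback-style pg.connect API this answer describes (the pool size and query are placeholders):

 var pg = require('pg');

 // something sane; tune for your workload
 pg.defaults.poolSize = 25;

 // check a client out of the built-in pool, run one query, and return it
 pg.connect(function(err, client, done) {
   if (err) { return console.error('could not check out a client', err); }
   client.query('SELECT NOW() AS now', function(err, result) {
     done(); // return the client to the pool
     if (err) { return console.error('query failed', err); }
     console.log(result.rows[0].now);
   });
 });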

new pg.Client is for when you know what you are doing. When you need a single long-lived client for some reason, or need to very carefully control the life cycle. A good example of this is using LISTEN/NOTIFY. The listening client needs to stay around, connected, and not shared, so it can properly handle NOTIFY messages. Another example would be opening a one-off client to kill some hung stuff, or in command-line scripts.
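
As a rough sketch of that LISTEN/NOTIFY case (the connection string and channel name below are made up), a dedicated long-lived client might look like this:

 var pg = require('pg');

 // one dedicated, non-shared client kept around just to receive notifications
 var listener = new pg.Client('postgres://user:password@localhost/mydb');

 listener.connect(function(err) {
   if (err) { return console.error('could not connect listener', err); }
   listener.on('notification', function(msg) {
     console.log('NOTIFY on', msg.channel, 'payload:', msg.payload);
   });
   listener.query('LISTEN my_channel');
 });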

One very helpful thing is to centralize all access to your database in your application into one file. Do not litter pg.connect calls or new clients throughout the app. Have a file like db.js that looks something like this:

 var pg = require('pg');

 module.exports = {
   query: function(text, values, cb) {
     pg.connect(function(err, client, done) {
       client.query(text, values, function(err, result) {
         done();
         cb(err, result);
       });
     });
   }
 };

This way you can change your implementation from pg.connect to a custom pool of clients or whatever, and only have to change things in one place.
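
For example, the rest of the application would only ever require that file (the file name and query here are just for illustration):

 var db = require('./db');

 db.query('SELECT * FROM users WHERE id = $1', [1], function(err, result) {
   if (err) { return console.error(err); }
   console.log(result.rows);
 });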

Take a look at the node-pg-query module, which does just that.

+153
Oct 9 '13 at 21:02

I am the author of pg-promise, which simplifies the use of node-postgres via promises.

It addresses the issues of connecting to and disconnecting from the database correctly, using the connection pool implemented by node-postgres, among other things, such as automated transactions.

An individual query with pg-promise boils down to only what is relevant to your business logic:

 db.any('SELECT * FROM users WHERE status = $1', ['active'])
     .then(data => {
         console.log('DATA:', data);
     })
     .catch(error => {
         console.log('ERROR:', error);
     });

i.e. you do not need to deal with the connection logic when executing queries, because you only establish a connection once, globally, for example:

 const pgp = require('pg-promise')(/*options*/);

 const cn = {
     host: 'localhost', // server name or IP address;
     port: 5432,
     database: 'myDatabase',
     user: 'myUser',
     password: 'myPassword'
 };
 // alternative:
 // const cn = 'postgres://username:password@host:port/database';

 const db = pgp(cn); // database instance;
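
And as a rough sketch of the automated transactions mentioned above (the table and column names are hypothetical), a pg-promise transaction could look like this:

 // db is the pg-promise database instance created above
 db.tx(t => {
     return t.batch([
         t.none('INSERT INTO users(name) VALUES($1)', ['John']),
         t.none('INSERT INTO audit(event) VALUES($1)', ['user added'])
     ]);
 })
     .then(data => {
         console.log('Transaction committed');
     })
     .catch(error => {
         console.log('Transaction rolled back:', error);
     });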

You can find many more examples in Learn by Example, or on the project's main page.

+22
May 16 '15 at 18:36

As you can see from the documentation, both options are valid, so choose whichever you prefer. Like you, I would go with the second option.

0
Dec 13 '11 at 8:23

It is better to create a pg Pool globally, and every time you need to perform a db operation, use a client from the pool and then release it back to the pool. Once all db operations are complete, end the pool with pool.end().

Code example:

 const pg = require('pg');
 const async = require('async'); // used below to iterate over the rows

 let pool = new pg.Pool(dbConfig);

 // 'callback' is assumed to be provided by the enclosing function
 pool.connect(function(err, client, done) {
   if (err) {
     console.error('Error connecting to pg server ' + err.stack);
     callback(err);
   } else {
     console.log('Connection established with pg db server');
     client.query("select * from employee", (err, res) => {
       if (err) {
         console.error('Error executing query on pg db ' + err.stack);
         callback(err);
       } else {
         console.log('Got query results : ' + res.rows.length);
         async.each(res.rows, function(empRecord) {
           console.log(empRecord.name);
         });
       }
       client.release(); // return the client to the pool
     });
   }
 });

Source
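
For completeness, here is a small sketch of the shutdown step mentioned above; pool is the instance created in the example, and where you call this depends on your application:

 // once every db operation has finished, drain the pool so the process can exit
 pool.end(function() {
   console.log('pool has ended');
 });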

0
Nov 26 '17 at 19:01

I wanted a very simple handler, so I made my own without over-complicating it. I am under no illusions that it is anything but super basic, but it may help some people get started. Basically, it connects, runs your queries, and handles errors for you.

 var pg = require('pg');

 // postgresDatabase.url is assumed to hold your connection string
 function runQuery(queryString, callback) {
   // connect to postgres database
   pg.connect(postgresDatabase.url, function(err, client, done) {
     // if error, stop here
     if (err) { console.error(err); done(); callback(); return; }
     // execute queryString
     client.query(queryString, function(err, result) {
       // if error, stop here
       if (err) { console.error(err + '\nQuery: ' + queryString); done(); callback(); return; }
       // release the connection
       done();
       // callback with results
       callback(result.rows);
     });
   });
 }

Then you can use it by calling it like this:

 runQuery("SELECT * FROM table", function(result) { // Whatever you need to do with 'result' } 
-1
Aug 10 '16 at 20:55

This is how I do it, a kind of "all of the above" approach:

 Promise = require 'bluebird'
 pg = module.exports = require 'pg'

 Promise.promisifyAll pg.Client.prototype
 Promise.promisifyAll pg.Client
 Promise.promisifyAll pg.Connection.prototype
 Promise.promisifyAll pg.Connection
 Promise.promisifyAll pg.Query.prototype
 Promise.promisifyAll pg.Query
 Promise.promisifyAll pg

 connectionString = process.env.DATABASE_URL

 module.exports.queryAsync = (sql, values) ->
   pg.connectAsync connectionString
   .spread (connection, release) ->
     connection.queryAsync sql, values
     .then (result) ->
       console.log result.rows[0]
     .finally ->
       release()
-2
Oct 02 '15 at 1:58


