Using the asynchronous PostgreSQL API with Ruby/EventMachine

I am using Goliath (which runs on EventMachine) and the pg gem for PostgreSQL. Currently I am using the pg gem in blocking fashion: conn.exec('SELECT * FROM products') (for example), and I wonder if there is a better (non-blocking) way to talk to a PostgreSQL database?

+8
ruby asynchronous postgresql eventmachine goliath
4 answers

The pg library provides full support for the PostgreSQL asynchronous API. I've added an example of how to use it in the samples/ directory:

    #!/usr/bin/env ruby

    require 'pg'

    # This is an example of how to use the asynchronous API to query the
    # server without blocking other threads. It's intentionally low-level;
    # if you hooked up the PGconn#socket to some kind of reactor, you
    # could make this much nicer.

    TIMEOUT = 5.0 # seconds to wait for an async operation to complete
    CONN_OPTS = {
      :host     => 'localhost',
      :dbname   => 'test',
      :user     => 'jrandom',
      :password => 'banks!stealUR$',
    }

    # Print 'x' continuously to demonstrate that other threads aren't
    # blocked while waiting for the connection, for the query to be sent,
    # for results, etc. You might want to sleep inside the loop or
    # comment this out entirely for cleaner output.
    progress_thread = Thread.new { loop { print 'x' } }

    # Output progress messages
    def output_progress( msg )
      puts "\n>>> #{msg}\n"
    end

    # Start the connection
    output_progress "Starting connection..."
    conn = PGconn.connect_start( CONN_OPTS ) or
      abort "Unable to create a new connection!"
    abort "Connection failed: %s" % [ conn.error_message ] if
      conn.status == PGconn::CONNECTION_BAD

    # Now grab a reference to the underlying socket so we know when the
    # connection is established
    socket = IO.for_fd( conn.socket )

    # Track the progress of the connection, waiting for the socket to
    # become readable/writable before polling it
    poll_status = PGconn::PGRES_POLLING_WRITING
    until poll_status == PGconn::PGRES_POLLING_OK ||
          poll_status == PGconn::PGRES_POLLING_FAILED

      # If the socket needs to read, wait 'til it becomes readable to
      # poll again
      case poll_status
      when PGconn::PGRES_POLLING_READING
        output_progress "  waiting for socket to become readable"
        select( [socket], nil, nil, TIMEOUT ) or
          raise "Asynchronous connection timed out!"

      # ...and the same for when the socket needs to write
      when PGconn::PGRES_POLLING_WRITING
        output_progress "  waiting for socket to become writable"
        select( nil, [socket], nil, TIMEOUT ) or
          raise "Asynchronous connection timed out!"
      end

      # Output a status message about the progress
      case conn.status
      when PGconn::CONNECTION_STARTED
        output_progress "  waiting for connection to be made."
      when PGconn::CONNECTION_MADE
        output_progress "  connection OK; waiting to send."
      when PGconn::CONNECTION_AWAITING_RESPONSE
        output_progress "  waiting for a response from the server."
      when PGconn::CONNECTION_AUTH_OK
        output_progress "  received authentication; waiting for " +
          "backend start-up to finish."
      when PGconn::CONNECTION_SSL_STARTUP
        output_progress "  negotiating SSL encryption."
      when PGconn::CONNECTION_SETENV
        output_progress "  negotiating environment-driven " +
          "parameter settings."
      end

      # Check to see if it's finished or failed yet
      poll_status = conn.connect_poll
    end

    abort "Connect failed: %s" % [ conn.error_message ] unless
      conn.status == PGconn::CONNECTION_OK

    output_progress "Sending query"
    conn.send_query( "SELECT * FROM pg_stat_activity" )

    # Fetch results until there aren't any more
    loop do
      output_progress "  waiting for a response"

      # Buffer any incoming data on the socket until a full result
      # is ready.
      conn.consume_input
      while conn.is_busy
        select( [socket], nil, nil, TIMEOUT ) or
          raise "Timeout waiting for query response."
        conn.consume_input
      end

      # Fetch the next result. If there isn't one, the query is
      # finished
      result = conn.get_result or break

      puts "\n\nQuery result:\n%p\n" % [ result.values ]
    end

    output_progress "Done."
    conn.finish

    if defined?( progress_thread )
      progress_thread.kill
      progress_thread.join
    end

I would recommend that you read the documentation for the PQconnectStart function and the Asynchronous Command Processing section of the PostgreSQL manual, and then compare those with the example above.

I haven't used EventMachine before, but if it lets you register a socket and set callbacks for when it becomes readable/writable, I'd think it would be fairly easy to integrate database calls into it.
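A minimal stdlib-only sketch of the pattern such an integration would follow: register an IO and fire a callback when it becomes readable. With the pg async API you would register IO.for_fd(conn.socket) and call conn.consume_input / conn.get_result inside the callback; here a plain pipe stands in for the database socket, and the class and method names are illustrative, not from any library:

```ruby
# Tiny select()-based reactor: register IOs with callbacks, fire the
# callback when the IO becomes readable.
class TinyReactor
  def initialize
    @readables = {}  # IO => callback
  end

  def watch_readable(io, &callback)
    @readables[io] = callback
  end

  # Run one iteration of the loop: wait for any watched IO to become
  # readable and invoke its callback.
  def run_once(timeout = 1.0)
    ready, = IO.select(@readables.keys, nil, nil, timeout)
    (ready || []).each { |io| @readables[io].call(io) }
  end
end

# Demonstrate with a pipe instead of a live database connection.
r, w = IO.pipe
reactor = TinyReactor.new
got = nil
reactor.watch_readable(r) { |io| got = io.read_nonblock(100) }
w.write('hello')
reactor.run_once
got  # => "hello"
```

An EventMachine version would do the same thing with EM.watch plus a notify_readable handler, letting EM's loop replace the manual run_once call.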

I've been meaning to use the ideas in Ilya Grigorik's article on untangling evented code with Fibers to make the asynchronous API easier to use, but that's a ways off. I do have a ticket open to track it, though, if you're interested/motivated to do it yourself.
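To illustrate that Fiber technique: a callback-style asynchronous call can be wrapped so the calling code reads synchronously. This is a toy sketch, assuming a stand-in async_query and a PENDING queue that simulate a reactor firing completion callbacks; none of these names are pg or EventMachine API:

```ruby
# Simulated event loop: completion callbacks queued here run after the
# fiber has yielded, the way a reactor would fire them later.
PENDING = []

# Stand-in for a non-blocking query: registers a completion callback
# instead of blocking.
def async_query(sql, &on_result)
  PENDING << lambda { on_result.call("rows for #{sql}") }
end

# Synchronous-looking wrapper: suspend the current fiber until the
# completion callback resumes it with the result.
def query(sql)
  fiber = Fiber.current
  async_query(sql) { |result| fiber.resume(result) }
  Fiber.yield
end

results = []
Fiber.new do
  # Reads like blocking code, but yields to the event loop each time.
  results << query('SELECT 1')
  results << query('SELECT 2')
end.resume

# Drain the simulated event loop; each callback resumes the fiber,
# which registers its next query before yielding again.
PENDING.shift.call until PENDING.empty?

results  # => ["rows for SELECT 1", "rows for SELECT 2"]
```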

+15

Yes, you can access postgres in a non-blocking way from goliath. I had the same need, and I put together this proof of concept: https://github.com/levicook/goliath-postgres-spike

+3

I am not (any more) very familiar with Pg, but I haven't heard that any popular database can handle its connections asynchronously. So you still have to maintain a database connection for the duration of each query, which means something still has to block somewhere down the stack.

Depending on your application, you may already be doing it in the best way.

But when you are dealing with a polling kind of application (where the same client sends many requests in a short time), and it is more important to get a response quickly, even if it is empty, then you can write a Ruby Fiber, a full-blown thread, or a separate process that is long-lived, proxies queries to the database, and caches the results.

For example: a request comes in from client A. The Goliath app dispatches the query to the database process with some unique identifier and answers the request with "no data yet". The database process finishes the query and stores the results in a cache under that identifier. When the next request comes in from the same client, Goliath sees that results are already waiting, pulls them out of the cache, and answers the client. At the same time it schedules the next query with the database process, so the data is fresher next time. If the next request arrives before the previous query has finished, no new query is scheduled (so identical requests are not multiplied).
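The scheme above can be sketched with plain threads. This is a minimal illustration, assuming a hash-backed cache and a block standing in for the long-lived database worker; the class and method names are mine, not from any library:

```ruby
class QueryCache
  PENDING = :pending  # sentinel: query scheduled but not finished

  # run_query stands in for the long-lived database worker.
  def initialize(&run_query)
    @run_query = run_query
    @results = {}
    @lock = Mutex.new
  end

  # Returns :pending while the query runs. A query is scheduled only
  # once per id, so rapid repeat requests do not multiply queries.
  def fetch(id, sql)
    @lock.synchronize do
      case @results[id]
      when nil
        @results[id] = PENDING
        schedule(id, sql)
        PENDING
      when PENDING
        PENDING
      else
        # Hand out the result once; the next fetch for this id will
        # schedule a fresh query (a simplification of "schedule the
        # next query immediately").
        @results.delete(id)
      end
    end
  end

  private

  def schedule(id, sql)
    Thread.new do
      value = @run_query.call(sql)
      @lock.synchronize { @results[id] = value }
    end
  end
end

cache = QueryCache.new { |sql| "rows for #{sql}" }
first = cache.fetch('client-A', 'SELECT * FROM products')
# first == :pending on the initial request; poll until the result lands
answer = first
50.times do
  break unless answer == :pending
  sleep 0.05
  answer = cache.fetch('client-A', 'SELECT * FROM products')
end
```

A real version would replace the thread with the dedicated database process and add an eviction policy, but the "answer immediately, fill the cache in the background" shape is the same.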

This way your responses are fast and non-blocking, while still serving data from the database that is as fresh as possible. Of course the responses may be slightly out of sync with the actual data, but again, depending on the application, that may not be a problem.

+1

The idea is to use an asynchronous adapter to the database (PostgreSQL in your case) in combination with an evented web server (Goliath) to gain performance. Mike Perham wrote an asynchronous ActiveRecord adapter for PostgreSQL (for Rails 2.3) last year. Maybe you can use that.

As another example, Ilya Grigorik released this demo of an async Rails stack. In that case the evented server is Thin and the database is MySQL. Install the demo and run the benchmark with and without an EM-aware driver. The difference is dramatic.

+1
