Am I preloading the app on Heroku + Unicorn?

When using Unicorn on Heroku, scaling up is problematic: a newly scaled web dyno can start receiving requests while it is still loading the application, which mostly results in timeout errors.

I read a little at http://codelevy.com/2010/02/09/getting-started-with-unicorn.html and https://github.com/blog/517-unicorn

Both articles suggest setting preload_app true together with after_fork and before_fork blocks.

Is the code in before_fork still required in Rails 3+? I read elsewhere that it may not be. Has anyone tuned this before and would like to share?

Have I missed anything else? Am I preloading the app correctly?

    # config/initializers/unicorn.rb
    # Read from:
    # http://michaelvanrooijen.com/articles/2011/06/01-more-concurrency-on-a-single-heroku-dyno-with-the-new-celadon-cedar-stack/

    worker_processes 3 # amount of unicorn workers to spin up
    timeout 30         # restarts workers that hang for 30 seconds

    # Noted from http://codelevy.com/2010/02/09/getting-started-with-unicorn.html
    # and https://github.com/blog/517-unicorn
    preload_app true

    after_fork do |server, worker|
      ActiveRecord::Base.establish_connection
    end

    before_fork do |server, worker|
      ##
      # When sent a USR2, Unicorn will suffix its pidfile with .oldbin and
      # immediately start loading up a new version of itself (loaded with a new
      # version of our app). When this new Unicorn is completely loaded
      # it will begin spawning workers. The first worker spawned will check to
      # see if an .oldbin pidfile exists. If so, this means we've just booted up
      # a new Unicorn and need to tell the old one that it can now die. To do so
      # we send it a QUIT.
      #
      # Using this method we get 0 downtime deploys.
      # (Note: Rails.root + '/tmp/...' would discard Rails.root, since joining
      # an absolute path onto a Pathname replaces it; use Rails.root.join.)
      old_pid = Rails.root.join('tmp/pids/unicorn.pid.oldbin').to_s
      if File.exists?(old_pid) && server.pid != old_pid
        begin
          Process.kill("QUIT", File.read(old_pid).to_i)
        rescue Errno::ENOENT, Errno::ESRCH
          # someone else did our job for us
        end
      end
    end
3 answers

What you are seeing here is expected. The moment you scale out, the Heroku platform deploys your app to a new dyno that is completely isolated from your other dynos (i.e., your other Unicorn masters).

Once that dyno is deployed and its process starts, the routing mesh will begin sending requests to it straight away, that is, while Rails is still booting inside Unicorn (or whatever server you have set up).

However, from the moment a request arrives, there is a 30-second window in which to return data before the request is timed out at the routing mesh (an H12 error).

So to summarize: your problem is not with forking, but with the fact that your application takes more than 30 seconds to boot, hence the early timeouts. The forking and PID-file handling is something you do not need to worry about on the Heroku platform.
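For reference, a pared-down config along those lines might look like the sketch below. It drops the .oldbin/PID-file dance entirely (that only matters for hot restarts on servers you manage yourself); the worker count and timeout are assumptions to tune for your own dyno:

    # config/unicorn.rb -- a minimal sketch for Heroku; values are assumptions to tune
    worker_processes 3   # fit to your dyno's memory
    timeout 30           # match Heroku's 30-second router window
    preload_app true     # boot the app once in the master, then fork workers

    after_fork do |server, worker|
      # each forked worker needs its own database connection
      ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
    end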


Only a partial answer, but I was able to reduce these nasty scaling timeouts with this Unicorn configuration:

    worker_processes 3 # amount of unicorn workers to spin up
    timeout 30         # restarts workers that hang for 30 seconds
    preload_app true

    # hack: traps the TERM signal, preventing unicorn from receiving it and
    # performing its quick shutdown. My signal handler then sends QUIT back
    # to itself to trigger the unicorn graceful shutdown instead.
    # http://stackoverflow.com/a/9996949/235297
    before_fork do |_server, _worker|
      Signal.trap 'TERM' do
        puts 'intercepting TERM and sending myself QUIT instead'
        Process.kill 'QUIT', Process.pid
      end
    end

    # Fix PostgreSQL SSL error
    # http://stackoverflow.com/a/8513432/235297
    after_fork do |server, worker|
      defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
    end

In addition, I use heroku labs:enable preboot (see https://devcenter.heroku.com/articles/labs-preboot/ ). Unfortunately, I still see some timeouts when scaling up web dynos.

Here's a discussion I started on the HireFire support forum: http://hirefireapp.tenderapp.com/discussions/problems/205-scaling-up-and-down-too-quickly-provoking-503s


preload_app true helped our application, so give it a shot if you see timeout problems during deploys/restarts. The comments saying it didn't help almost convinced me it wasn't worth trying, but it turned out to be exactly the fix we needed.

Our situation was a slow-booting Rails application using preboot . On some deploys and restarts we would get many timeouts before the site recovered, according to our uptime monitoring.

We realized that with preload_app false , Unicorn binds its port first and only then loads the application. As soon as the port is bound, Heroku starts sending traffic, but the slow application is still loading, so those requests time out.

This is easy to verify by running Unicorn in development, trying to access the site immediately after starting Unicorn, and checking whether you get a "connection refused" error (desirable) or a very slow response (undesirable); a rough probe script is sketched below.
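If you'd rather script that check than eyeball it, here is a rough sketch; the port (8080) and path are assumptions, so adjust them to however you start Unicorn locally:

    # probe.rb -- rough sketch; run right after starting Unicorn locally
    # (assumes Unicorn listens, or is about to listen, on localhost:8080)
    require 'net/http'

    begin
      started_at = Time.now
      response = Net::HTTP.start('localhost', 8080,
                                 open_timeout: 2, read_timeout: 60) do |http|
        http.get('/')
      end
      # Port was already bound: with preload_app false you land here, slowly.
      puts "got #{response.code} after #{(Time.now - started_at).round(1)}s"
    rescue Errno::ECONNREFUSED
      # Nothing bound yet: with preload_app true the port opens only after boot.
      puts 'connection refused -- port not bound yet (the desirable case)'
    end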

When we set preload_app true instead, it takes longer for Unicorn to bind the port, but once it does and Heroku starts sending traffic, the workers are ready to respond.

