Puma reload error on reboot using EC2 + Rails + Nginx + Capistrano

I have successfully used Capistrano to deploy my Rails application to an Ubuntu EC2 instance. Everything works great once deployed. The Rails application is named deseov12. My problem is that Puma does not start at boot, which is essential, since EC2 instances will be created on demand. Puma starts when deployed through Capistrano, and it also starts when I run

cap production puma:start 

from my local machine.

It will also start on the server after a reboot if I run the following commands:

 su - deploy
 [enter password]
 cd /home/deploy/deseov12/current && ( export RACK_ENV="production" ; ~/.rvm/bin/rvm ruby-2.2.4 do bundle exec puma -C /home/deploy/deseov12/shared/puma.rb --daemon )

I followed the pointers from the Puma Jungle tools to launch Puma on boot using Upstart, as follows:

Contents of /etc/puma.conf

 /home/deploy/deseov12/current 

Contents of /etc/init/puma.conf and /home/deploy/puma.conf

 # /etc/init/puma.conf - Puma config
 #
 # This example config should work with Ubuntu 12.04+. It
 # allows you to manage multiple Puma instances with
 # Upstart, Ubuntu's native service management tool.
 #
 # See workers.conf for how to manage all Puma instances at once.
 #
 # Save this config as /etc/init/puma.conf then manage puma with:
 #   sudo start puma app=PATH_TO_APP
 #   sudo stop puma app=PATH_TO_APP
 #   sudo status puma app=PATH_TO_APP
 #
 # or use the service command:
 #   sudo service puma {start,stop,restart,status}

 description "Puma Background Worker"

 # no "start on", we don't want to automatically start
 stop on (stopping puma-manager or runlevel [06])

 # change "deploy" to match your deployment user if you want to use this as a less privileged user
 setuid deploy
 setgid deploy

 respawn
 respawn limit 3 30

 instance ${app}

 script
 # this script runs in /bin/sh by default
 # respawn as bash so we can source in rbenv/rvm
 # quoted heredoc to tell /bin/sh not to interpret
 # variables

 # source ENV variables manually as Upstart doesn't, eg:
 #. /etc/environment

 exec /bin/bash <<'EOT'
   # set HOME to the setuid user's home, there doesn't seem to be a better, portable way
   export HOME="$(eval echo ~$(id -un))"

   if [ -d "/usr/local/rbenv/bin" ]; then
     export PATH="/usr/local/rbenv/bin:/usr/local/rbenv/shims:$PATH"
   elif [ -d "$HOME/.rbenv/bin" ]; then
     export PATH="$HOME/.rbenv/bin:$HOME/.rbenv/shims:$PATH"
   elif [ -f /etc/profile.d/rvm.sh ]; then
     source /etc/profile.d/rvm.sh
   elif [ -f /usr/local/rvm/scripts/rvm ]; then
     source /usr/local/rvm/scripts/rvm
   elif [ -f "$HOME/.rvm/scripts/rvm" ]; then
     source "$HOME/.rvm/scripts/rvm"
   elif [ -f /usr/local/share/chruby/chruby.sh ]; then
     source /usr/local/share/chruby/chruby.sh
     if [ -f /usr/local/share/chruby/auto.sh ]; then
       source /usr/local/share/chruby/auto.sh
     fi
     # if you aren't using auto, set your version here
     # chruby 2.0.0
   fi

   cd $app
   logger -t puma "Starting server: $app"

   exec bundle exec puma -C current/config/puma.rb
 EOT
 end script

Contents of /etc/init/puma-manager.conf and /home/deploy/puma-manager.conf

 # /etc/init/puma-manager.conf - manage a set of Pumas
 #
 # This example config should work with Ubuntu 12.04+. It
 # allows you to manage multiple Puma instances with
 # Upstart, Ubuntu's native service management tool.
 #
 # See puma.conf for how to manage a single Puma instance.
 #
 # Use "stop puma-manager" to stop all Puma instances.
 # Use "start puma-manager" to start all instances.
 # Use "restart puma-manager" to restart all instances.
 # Crazy, right?

 description "Manages the set of puma processes"

 # This starts upon bootup and stops on shutdown
 start on runlevel [2345]
 stop on runlevel [06]

 # Set this to the path of the file listing your apps
 env PUMA_CONF="/etc/puma.conf"

 pre-start script
   for i in `cat $PUMA_CONF`; do
     app=`echo $i | cut -d , -f 1`
     logger -t "puma-manager" "Starting $app"
     start puma app=$app
   done
 end script
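For reference, the pre-start loop above expects /etc/puma.conf to list one app path per line; `cut` keeps only the first comma-separated field, so any extra fields are ignored. A small standalone sketch of that parsing, using a throwaway temp file instead of the real /etc/puma.conf:

```shell
# Simulate puma-manager's pre-start loop against a temporary list file.
conf=$(mktemp)
printf '%s\n' '/home/deploy/deseov12/current' > "$conf"

for i in `cat "$conf"`; do
  # keep only the first comma-separated field, as the real job does
  app=`echo $i | cut -d , -f 1`
  echo "would run: start puma app=$app"
done
# → would run: start puma app=/home/deploy/deseov12/current

rm -f "$conf"
```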

Contents of /home/deploy/deseov12/shared/puma.rb

 #!/usr/bin/env puma
 directory '/home/deploy/deseov12/current'
 rackup "/home/deploy/deseov12/current/config.ru"
 environment 'production'
 pidfile "/home/deploy/deseov12/shared/tmp/pids/puma.pid"
 state_path "/home/deploy/deseov12/shared/tmp/pids/puma.state"
 stdout_redirect '/home/deploy/deseov12/shared/log/puma_error.log', '/home/deploy/deseov12/shar$
 threads 0,8
 bind 'unix:///home/deploy/deseov12/shared/tmp/sockets/puma.sock'
 workers 0
 activate_control_app
 prune_bundler
 on_restart do
   puts 'Refreshing Gemfile'
   ENV["BUNDLE_GEMFILE"] = "/home/deploy/deseov12/current/Gemfile"
 end

However, Puma does not start automatically after rebooting the server. It simply does not start.

Any help would be appreciated.

EDIT: I just noticed something that might be the key:

When I run the following command as the deploy user:

 sudo start puma app=/home/deploy/deseov12/current 

ps aux shows the Puma process for a few seconds before it disappears:

 deploy 4312 103 7.7 183396 78488 ? Rsl 03:42 0:02 puma 2.15.3 (tcp://0.0.0.0:3000) [20160106224332] 

This Puma process is different from the one launched by Capistrano:

 deploy 5489 10.0 12.4 858088 126716 ? Sl 03:45 0:02 puma 2.15.3 (unix:///home/deploy/deseov12/shared/tmp/sockets/puma.sock) [20160106224332] 
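One way to see why the Upstart-launched worker dies is to read the per-instance job log. This is a sketch: the log directory and the slash-to-underscore naming are my assumption about how Upstart names logs for instance jobs, so verify the exact filename on your system.

```shell
# Compute the likely Upstart log path for the puma job instance.
# Upstart typically writes job output to /var/log/upstart/<job>-<instance>.log,
# with "/" in the instance name replaced by "_" (assumption -- check your box).
job="puma"
app="/home/deploy/deseov12/current"
log="/var/log/upstart/${job}-$(echo "$app" | tr '/' '_').log"
echo "$log"
# → /var/log/upstart/puma-_home_deploy_deseov12_current.log
```

Tailing that file right after `sudo start puma app=...` usually shows the bundler or environment error that kills the process.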
1 answer

I finally solved this after a lot of investigation. It turns out the problem was threefold:

1) the correct environment was not set when the script ran;

2) when using Capistrano, the actual puma.rb configuration file lives in the /home/deploy/deseov12/shared directory, not in /current;

3) the Puma server was not being daemonized correctly.

To solve these problems:

1) This line should be added at the top of the script in /etc/init/puma.conf and /home/deploy/puma.conf:

 env RACK_ENV="production" 
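Without that stanza, RACK_ENV is unset inside the Upstart job, and Rack-based servers fall back to "development". A minimal shell illustration of that fallback:

```shell
# The ${VAR:-default} expansion mirrors how Rack-based servers pick an
# environment when RACK_ENV is not exported.
unset RACK_ENV
echo "booting in ${RACK_ENV:-development}"   # → booting in development

export RACK_ENV="production"   # what the `env` stanza provides to the job
echo "booting in ${RACK_ENV:-development}"   # → booting in production
```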

2) and 3) this line

 exec bundle exec puma -C current/config/puma.rb 

should be replaced with this

 exec bundle exec puma -C /home/deploy/deseov12/shared/puma.rb --daemon 

After these changes, the Puma server starts correctly on reboot or when a new instance is spun up. I hope this saves someone hours of troubleshooting.
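A quick sanity check after a reboot (the paths are the ones from this deployment; adjust them for your own app):

```shell
# -S tests for a unix domain socket; the bind line in shared/puma.rb
# creates this socket when Puma is up.
sock="/home/deploy/deseov12/shared/tmp/sockets/puma.sock"
if [ -S "$sock" ]; then
  echo "puma is listening on $sock"
else
  echo "no puma socket at $sock"
fi
```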
