Tornado code deployment

Is there a canonical deployment strategy for Tornado-based web applications? Our current configuration is 4 Tornado processes running behind Nginx. (Our specific use case is on EC2.)

We currently have a solution that works quite well: we start four Tornado processes and save their PIDs to files in /tmp/. When deploying new code, we run the following scripted sequence (a rough sketch of such a script follows the list):

  • Do a git pull from the prod branch.
  • Remove the machine from the load balancer.
  • Wait (sleep) for all in-flight connections to finish.
  • Kill all Tornado processes listed in the PID files and delete all *.pyc files.
  • Restart the Tornado processes.
  • Add the machine back to the load balancer.
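
Not part of the original question, but here is a rough sketch of how that sequence might be scripted with Fabric 1.x. The paths, PID-file pattern, start script, and load-balancer commands (lb-cli) are placeholders, not the actual setup:

    import time

    from fabric.api import cd, run, task

    @task
    def deploy():
        # 1. Pull the latest code from the prod branch.
        with cd("/opt/app"):                              # hypothetical checkout path
            run("git pull origin prod")
        # 2. Take this machine out of the load balancer (placeholder command).
        run("lb-cli remove $(hostname)")
        # 3. Give in-flight connections time to finish.
        time.sleep(30)
        # 4. Kill the Tornado processes recorded in the PID files and drop *.pyc files.
        run("for f in /tmp/tornado-*.pid; do kill $(cat $f); done")
        run("find /opt/app -name '*.pyc' -delete")
        # 5. Restart the Tornado processes (placeholder start script).
        run("/opt/app/start_tornados.sh")
        # 6. Put the machine back behind the load balancer (placeholder command).
        run("lb-cli add $(hostname)")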

We drew some inspiration from this: http://agiletesting.blogspot.com/2009/12/deploying-tornado-in-production.html

Are there any other complete solutions?

2 answers

We run Tornado + Nginx, with supervisord as the process supervisor.

Configuration example (names changed):

    [program:server]
    process_name=server-%(process_num)s
    command=/opt/current/vrun.sh /opt/current/app.py --port=%(process_num)s
    stdout_logfile=/var/log/server/server.log
    stderr_logfile=/var/log/server/server.err
    numprocs=6
    numprocs_start=7000
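
With numprocs=6 and numprocs_start=7000, supervisord expands %(process_num)s to 7000 through 7005, so each of the six Tornado instances listens on its own port for Nginx to proxy to. After a deploy, restarting all of them is then typically just supervisorctl restart server:* (assuming the default group name).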

I have not yet found the "best" way to handle restarts. What I will probably end up doing is have an "active" file, served by Nginx and checked by HAProxy, that we update to signal that we are messing with the configuration; then wait a bit, swap in the new code, and flip it back on.
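
This is not from the answer, but one common way to implement that kind of "active" flag is a health-check endpoint that the load balancer polls (for HAProxy, via option httpchk) and that reports the node as down while a maintenance file exists. A minimal Tornado sketch, with the flag path and port chosen arbitrarily:

    import os.path

    import tornado.ioloop
    import tornado.web

    MAINTENANCE_FLAG = "/tmp/maintenance"    # hypothetical flag-file path

    class HealthHandler(tornado.web.RequestHandler):
        def get(self):
            # Report 503 while the flag file exists so the checker
            # takes this node out of rotation during a deploy.
            if os.path.exists(MAINTENANCE_FLAG):
                self.set_status(503)
                self.write("maintenance")
            else:
                self.write("ok")

    if __name__ == "__main__":
        app = tornado.web.Application([(r"/health", HealthHandler)])
        app.listen(7000)
        tornado.ioloop.IOLoop.instance().start()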

We use Capistrano (moving to Fabric is on our backlog), but instead of dealing with deleting *.pyc files, we point a /opt/current symlink at the directory for the release identifier.
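
Purely as an illustration of that symlink approach (the paths and function name here are made up): creating the new link under a temporary name and then renaming it over /opt/current makes the switch atomic, so there is never a moment with a missing or half-updated link.

    import os

    def activate_release(release_id,
                         releases_dir="/opt/releases",
                         current="/opt/current"):
        # Point a temporary link at the new release directory...
        target = os.path.join(releases_dir, release_id)
        tmp_link = current + ".tmp"
        if os.path.lexists(tmp_link):
            os.remove(tmp_link)
        os.symlink(target, tmp_link)
        # ...then rename it over the old symlink (atomic on POSIX filesystems).
        os.rename(tmp_link, current)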


I have not used Tornado in production, but I have played with Gevent + Nginx and used supervisord for process control (start/stop/restart), logging, and monitoring; supervisord is very useful for that. As I said, it is not a deployment solution, but it is perhaps a tool worth using.

