As others have mentioned, using the dist command is the easiest way to deploy Play for a one-off deployment. That said, here are a few other options and my experience with each:
When I have an application that I update often, I install Play on the server itself and push updates through Git. After each update I run play stop (to stop the running server), sometimes play clean (to clear out any stale libraries or compiled artifacts), then play stage (to make sure all prerequisites are present and everything compiles), and finally play start to bring the updated application back up. It sounds like a lot of steps, but it is easy to automate with a quick bash script, something like the sketch below.
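A minimal sketch of such a script (the path, branch name, and layout are assumptions for illustration; the commands are just the ones described above):

    #!/usr/bin/env bash
    # Hypothetical update script for a git-based Play deployment.
    # Assumes Play is installed on the server and the app lives in /srv/myapp.
    set -e
    cd /srv/myapp

    play stop               # stop the running server
    git pull origin master  # pull in the latest changes
    play clean              # optional: clear out stale artifacts
    play stage              # resolve dependencies and compile in place
    play start              # start the updated application
    # If running non-interactively, you may prefer the staged launcher instead:
    #   nohup target/start &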
Another option is to deploy Play behind a front-end web server such as Apache or Nginx. This is mostly useful if you want to do some kind of load balancing, but it is not required, since Play ships with its own embedded server. Docs: http://www.playframework.com/documentation/2.1.1/HTTPServer
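For illustration only, a bare-bones Nginx front end proxying to Play on its default port 9000 might look like this (the file path, server name, and port are assumptions):

    # hypothetical /etc/nginx/conf.d/myapp.conf
    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://127.0.0.1:9000;   # Play's default HTTP port
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }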
Creating a WAR archive with the play2war plugin is another deployment method, but I would not recommend it unless you are handing the application to someone who already has infrastructure built around the servlet containers you mentioned (as many large companies do). Servlet containers add a layer of complexity that Play is designed to remove (hence the embedded server), and as far as I know there is no noticeable performance gain from this method over the two described above.
Of course, there is always play dist, which builds a self-contained package that you upload to your server, unzip, and start from there. This is probably the easiest option. Docs: http://www.playframework.com/documentation/2.1.1/ProductionDist
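Roughly, the workflow looks like this (the package name, paths, and host are placeholders):

    # on the development machine
    play dist                              # produces a zip under dist/, e.g. dist/myapp-1.0.zip
    scp dist/myapp-1.0.zip user@myserver:/srv/

    # on the server
    cd /srv && unzip myapp-1.0.zip
    cd myapp-1.0
    chmod +x start                         # may be needed if the zip lost the executable bit
    ./start                                # self-contained launcher bundled in the package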
As for performance and scalability, the Netty server bundled with Play will perform more than adequately for you. Here's a popular benchmark showing raw Netty with the highest performance of all frameworks tested, and the "stock" Play application landing somewhere in the middle of the pack but still well ahead of Rails/Django: http://www.techempower.com/blog/2013/04/05/frameworks-round-2/
Keep in mind that you can always change your deployment architecture down the road to sit behind a front-end server, as described above, if you need extra load balancing or high availability; that is a trivial change with Play. I still would not recommend the WAR deployment option unless, as I said, you already have a large installed base of servlet containers and someone is forcing you to use them to serve your application.
Scalability and performance also have much more to do with other factors: use of caching, database configuration, use of concurrency (which Play is good at), and the quality of the underlying hardware or cloud platform. For example, Instagram and Pinterest serve millions of people a day on a Python/Django stack, which shows only average performance in most popular benchmarks. They compensate with lots of caching and very fast databases (which are usually the bottleneck in large applications).
At the risk of making this answer too long, I will just add one last thing. I, too, used to obsess over performance and scalability, thinking I needed the most powerful stack and configuration to run my applications. That simply is not the case any more, unless you are, say, Google or Facebook, where every algorithm has to be finely tuned because it will be hit a billion times a day. Hardware (or cloud) resources are cheap, but developer/sysadmin time is not. Weigh ease of deployment and maintenance against raw performance; with Play, the best-performing setup may well also be the easiest one to deploy.