Paperclip, large file uploads and AWS

So, I use Paperclip and AWS-S3, which is awesome, and it works great. Only one problem: I need to upload really large files, as in more than 50 megabytes, so nginx is dying. Apparently Paperclip stages things on disk before moving them to S3?

I found this really cool article, but it seems to go to disk first and then do the rest in the background.

Ideally, I could upload the file in the background ... I have a little experience doing this with PHP, but none with Rails yet. Can someone point me in a general direction?

+5
4 answers

Perhaps you just need to increase the timeout in your nginx configuration?
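For reference, a sketch of the nginx directives that usually matter for large uploads; the values are examples, not recommendations, and client_max_body_size (not only the timeouts) is often the real culprit:

    # Example values only -- tune to your environment.
    http {
        # nginx rejects bodies over this size with a 413; the default is 1m,
        # so raise it well above your largest expected upload.
        client_max_body_size 200m;

        # Give slow clients more time to send the request body.
        client_body_timeout 300s;

        # If nginx proxies to a Rails app server, raise the proxy timeouts too.
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
    }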

+5

You can bypass the server entirely and upload directly to S3, which will prevent the timeout. The same thing happens on Heroku. If you are using Rails 3, check out my sample projects:

Sample project using Rails 3, Flash and the MooTools-based FancyUploader to upload directly to S3: https://github.com/iwasrobbed/Rails3-S3-Uploader-FancyUploader

Sample project using Rails 3, Flash / Silverlight / GoogleGears / BrowserPlus and the jQuery-based Plupload to upload directly to S3: https://github.com/iwasrobbed/Rails3-S3-Uploader-Plupload

By the way, you can still do post-processing with Paperclip afterwards, along the lines of what this blog post describes:

http://www.railstoolkit.com/posts/fancyupload-amazon-s3-uploader-with-paperclip
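For a rough idea of what "upload directly to S3" looks like, here is a minimal sketch using the aws-sdk-s3 gem (a more modern route than the Flash-based uploaders above); the region, bucket name and key prefix are placeholders for this sketch:

    require "aws-sdk-s3"

    # Placeholders: region and bucket name are assumptions for this sketch.
    s3 = Aws::S3::Resource.new(region: "us-east-1")
    bucket = s3.bucket("my-upload-bucket")

    # Generate a presigned POST policy; the browser then POSTs the file
    # straight to S3, so the Rails/nginx stack never handles the bytes.
    post = bucket.presigned_post(
      key: "uploads/${filename}",                    # S3 substitutes the filename
      success_action_status: "201",
      content_length_range: 0..(200 * 1024 * 1024)   # allow up to ~200 MB
    )

    # Render post.url and post.fields into a plain <form> (or hand them to a
    # JS uploader such as Plupload); afterwards you can point Paperclip at the
    # resulting S3 object for any post-processing.
    puts post.url
    puts post.fields.inspect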

+5

I had a similar problem, but with Paperclip and Apache.
Like nginx, Apache has a Timeout directive in its config; increasing it solved my problem.

There is also something interesting that happens with large uploads.
Anything over 8k gets written to /tmp/, and if Apache doesn't have permission to read and write to that directory you get 500 errors.

Here's the article I found about it:
http://tinyw.in/fwVB
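For completeness, the Apache-side settings look roughly like this (example values; the temp-directory permissions have to be fixed on the filesystem, not in the config):

    # Raise the request timeout (seconds) so large uploads aren't cut off.
    Timeout 300

    # Cap on the request body size in bytes; 0 means unlimited.
    LimitRequestBody 209715200

    # Separately, make sure the user Apache runs as can read and write the
    # temp directory (e.g. /tmp) where uploads are buffered.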

0
