CarrierWave + S3 storage + counter_cache taking too long

I have a simple application that receives images POSTed to its API and uploads them to S3 via CarrierWave. My Photos table also has a counter_cache.
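
For context, the setup is roughly this (the model and uploader names below are illustrative, not my actual code):

    # app/uploaders/photo_uploader.rb -- a typical CarrierWave uploader backed by fog/S3
    class PhotoUploader < CarrierWave::Uploader::Base
      storage :fog   # S3 credentials configured in config/initializers/carrierwave.rb
    end

    # app/models/photo.rb
    class Photo < ActiveRecord::Base
      belongs_to :user, counter_cache: true   # keeps users.photos_count up to date
      mount_uploader :image, PhotoUploader    # the upload runs when the record is saved
    end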

In 80% of cases the transaction time is HUGE, for example 60 seconds or more, and more than 90% of that time is spent uploading the image to S3 and updating the counter_cache.

Does anyone know why the upload takes so long, and why the counter_cache queries appear to take so long?

New Relic report

Transaction trace

SQL Trace

Just added some photos at http://carrierwave-s3-upload-test.herokuapp.com

The behavior was similar: [screenshot]

Just removed the counter_cache from my code and made a few more uploads... odd behavior again. [screenshot]


EDIT 1

Logs from the latest batch of uploads, with EXCON_DEBUG set to true: https://gist.github.com/rafaelcgo/561f516a85823e30fbad


EDIT 2

My logs were not showing any EXCON information, which is how I realized I was on fog 1.3.1. Updated to fog 1.19.0 (which pulls in a newer version of the excon gem) and now everything works fine. [screenshot]

Tip: if you need to debug external connections, use a recent version of excon and set the env var EXCON_DEBUG=true to see detailed request/response output, for example: https://gist.github.com/geemus/8097874
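
Concretely, the change was roughly this (the Gemfile pin uses the versions mentioned above, and the Heroku command is just how I set the env var there):

    # Gemfile -- move off fog 1.3.1 to a release that pulls in a recent excon
    gem 'fog', '~> 1.19.0'

    # Then enable excon's debug output on Heroku (shell command, not Ruby):
    #   heroku config:set EXCON_DEBUG=true
    # and run `bundle update fog excon` before redeploying.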


EDIT 3

Updated the fog gem and now it's running smoothly. I don't know why the older versions of fog and excon showed this odd performance.

amazon-s3 heroku carrierwave excon
1 answer

Three tips, but not the whole story:

  • CarrierWave transfers the file to S3 inside a database transaction. Since the counter_cache update also happens inside that transaction, it's possible that your instrumentation thinks the counter update is what takes forever, when in fact it's the file transfer that takes forever (one way to move the transfer out of the transaction is sketched after this list).

  • Last I checked, it wasn't even possible for a Heroku app to keep a connection open for as long as you're seeing. You should see H12 or H15 errors in your logs if you have synchronous uploads running past about 30 seconds. Read more about Heroku timeouts here.

  • Have you tried updating fog? 1.3.1 is about a year and a half old, and they may well have fixed a bug like this since then.
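
To illustrate the first bullet: one commonly used way to keep the slow S3 transfer out of the save transaction is the carrierwave_backgrounder gem, which defers the store to a worker. This is a rough sketch using the illustrative Photo/PhotoUploader names from the question, not your code; double-check the gem's README for the exact setup (it also needs an image_tmp column on the table):

    # Gemfile
    gem 'carrierwave_backgrounder'

    # config/initializers/carrierwave_backgrounder.rb
    CarrierWave::Backgrounder.configure do |c|
      c.backend :sidekiq, queue: :carrierwave
    end

    # app/uploaders/photo_uploader.rb
    class PhotoUploader < CarrierWave::Uploader::Base
      include ::CarrierWave::Backgrounder::Delay
      storage :fog
    end

    # app/models/photo.rb
    class Photo < ActiveRecord::Base
      belongs_to :user, counter_cache: true
      mount_uploader :image, PhotoUploader
      store_in_background :image   # S3 transfer happens in a worker, not in the save transaction
    end

With something like that in place, the save (and the counter_cache UPDATE) is just a couple of fast queries, and the transfer time shows up in the worker instead of the web transaction.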

Past that, the only other thing that comes to mind is that you're uploading epically large files. I've been disappointed with both the latency and the bandwidth I've been able to get from Heroku to S3, so that could be involved as well.

Obligatory: you aren't letting users upload directly to your dyno, are you?
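
For completeness, a sketch of what direct-to-S3 uploads can look like: the API hands the client a short-lived presigned URL and the client PUTs the bytes straight to S3, so the large transfer never touches the dyno or a database transaction. This example uses the aws-sdk-s3 gem rather than fog, and the bucket name, region, and key scheme are made up:

    require 'aws-sdk-s3'
    require 'securerandom'

    # Hypothetical helper: returns a URL the client can PUT the image to directly.
    def presigned_upload_url(bucket: 'my-photos-bucket', expires_in: 15 * 60)
      object = Aws::S3::Resource.new(region: 'us-east-1')
                 .bucket(bucket)
                 .object("uploads/#{SecureRandom.uuid}")
      object.presigned_url(:put, expires_in: expires_in)
    end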

