Rails & Heroku: How Many Workers / Dynos Do I Need?

I have a Tinder-style app that lets users rate events. After a user rates an event, a background job is enqueued that re-ranks the other events based on the user's feedback.

This background job takes about 10 seconds and runs about 20 times per minute per user.

To use a simple example: if I have 10 users using the application at any given time, and I never want jobs to sit pending in the queue, what is the best way to handle this?

I am confused about dynos, resque-pool, and Redis connections. Can someone help me understand the difference? Is there any way to calculate this?

+7
ruby-on-rails heroku resque
4 answers

Not sure you are asking the right question. Your real question is "how can I get better performance?", not "how many dynos?" Just adding dynos will not necessarily give you better performance. More dynos give you more memory... so if your application is running slowly because you are running out of available memory (i.e. you are hitting swap), then the answer may be more dynos. If these jobs take 10 seconds to run, though, memory is probably not your real problem. If you want to track your memory usage, check out a monitoring tool like New Relic.

There are many approaches to solving your problem, but I would start with the code you wrote. Posting some code on SO could help explain why this job takes 10 seconds (post the code!). 10 seconds is a long time, so query optimization within that job will almost certainly help.

Another piece of low-hanging fruit: switch from Resque to Sidekiq for your background jobs. It is really easy to use, you will use less memory, and you should see an instant performance bump.

+4

Dynos: These are individual virtual/physical servers. Think of them as the equivalent of EC2 instances.

Redis Connections: Individual connections to a Redis instance.

Resque Pool: a gem that lets you run multiple Resque workers concurrently on the same dyno/instance.
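To make the resque-pool piece concrete: if you stay with Resque, resque-pool is driven by a YAML file that maps queue names to worker counts. A minimal sketch (the queue names here are hypothetical, not from the question):

```yaml
# config/resque-pool.yml -- hypothetical queue names for illustration
ranking: 2            # two workers dedicated to a re-ranking queue
"mailers,default": 1  # one worker shared across two other queues
```

Each entry forks that many workers inside a single dyno, which is how you get concurrency without paying for more dynos.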

0

First of all, you should look for ways to improve the performance of the job itself. You may be able to get it under ten seconds with some low-level model caching or by optimizing your algorithm.

In terms of working out the number of workers you will need: multiply the number of runs per minute (20) by the number of seconds each run takes (10) by the number of users (10). This gives you the number of seconds of work generated per minute for a single worker: 20 * 10 * 10 = 2000. Divide that by 60 and you have the number of worker-minutes needed per minute: 33.3. So with 34 workers, assuming these numbers hold steady, you should be able to keep on top of things.
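The same back-of-envelope arithmetic as a runnable sketch, using the numbers from the question:

```ruby
# Worker capacity estimate: seconds of work generated per minute,
# divided by the 60 seconds one worker can provide per minute.
runs_per_minute = 20   # jobs enqueued per user per minute
seconds_per_run = 10   # how long one job takes
users           = 10   # concurrent users

work_seconds_per_minute = runs_per_minute * seconds_per_run * users
workers_needed = (work_seconds_per_minute / 60.0).ceil

puts work_seconds_per_minute  # => 2000
puts workers_needed           # => 34
```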

However, you should not be in a position where you need to run 34 or more workers for a total of 10 concurrent users just for a ranking algorithm. That would be very expensive.

Optimize your algorithm, try adding more caching, and try Sidekiq. In my experience, Sidekiq can process a queue 10 times faster than Resque. It depends on what your job does and how you use each tool, but it is worth checking out. See Sidekiq vs Resque.

0

Re-ranking all the other events on every rating is a bad idea.

You should instead consider adding total_points and average_points columns to the events table, and let the ranking be determined by an order clause in the query. Like this:

```ruby
class Event < ActiveRecord::Base
  has_many :feedbacks

  scope :rank_by_total,   -> { order(:total_points) }
  scope :rank_by_average, -> { order(:average_points) }
end

class Feedback < ActiveRecord::Base
  belongs_to :event

  after_create :update_points

  def update_points
    total = event.feedbacks.sum(:points)
    avg   = event.feedbacks.average(:points)
    event.update(total_points: total, average_points: avg)
  end
end
```

So, how many workers / dynos do you need?

You do not need to worry about dynos or workers for this problem. No matter how many dynos with higher processing power you throw at it, re-ranking everything will take a long time once the events table becomes huge. So try changing your approach as described above.

0
