What are the "average" requests per second for a production web application?

I do not have a frame of reference for what is considered “fast”; I have always wondered about it, but never found a direct answer ...

+84
optimization
Dec 16 '08 at 23:10
8 answers

OpenStreetMap handles 10-20 requests per second.

Wikipedia seems to handle 30,000 to 70,000 requests per second spread across 300 servers (100 to 200 requests per second per machine, most of which are served from cache).

Geograph gets 7,000 images per week (roughly 1 upload every 95 seconds).

+66
Dec 16 '08 at 23:42

Not sure if anyone is still interested, but this information was posted about Twitter (and here) :

Statistics

  • Over 350,000 users. The actual numbers are, as always, very super top secret.
  • 600 requests per second.
  • An average of 200-300 connections per second, spiking up to 800 connections per second.
  • MySQL handles 2,400 requests per second.
  • 180 Rails instances. Uses Mongrel as the "web server".
  • 1 MySQL server (one big 8-core box) and 1 slave. The slave is read-only, used for statistics and reporting.
  • 30+ processes for handling the odd jobs.
  • 8 Sun X4100s.
  • Requests are processed in 200 milliseconds in Rails.
  • The average time spent in the database is 50-100 milliseconds.
  • Over 16 GB of memcached.
+31
Jan 22

When I go to my web host's control panel, open phpMyAdmin, and click "Show MySQL runtime information", I get:

This MySQL server has been running for 53 days, 15 hours, 28 minutes and 53 seconds. It started on October 24, 2008 at 04:03.

Query statistics: since its startup, 3,444,378,344 queries have been sent to the server.

Total 3,444 M
per hour 2.68 M
per minute 44.59 k
per second 743.13

That's an average of 743 MySQL queries every second over the last 53 days!
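Just as a sanity check, that figure is simply the total query count divided by the uptime in seconds; a minimal Python sketch, using the numbers from the phpMyAdmin output above:

```python
# Reproduce phpMyAdmin's "per second" figure from the raw counters above.
total_queries = 3_444_378_344
uptime_seconds = 53 * 86_400 + 15 * 3_600 + 28 * 60 + 53  # 53 d, 15 h, 28 min, 53 s

per_second = total_queries / uptime_seconds
per_minute = per_second * 60
per_hour = per_minute * 60

print(f"per second: {per_second:.2f}")          # ~743.13
print(f"per minute: {per_minute / 1e3:.2f} k")  # ~44.59 k
print(f"per hour:   {per_hour / 1e6:.2f} M")    # ~2.68 M
```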

I don't know about you, but to me that's fast! Very fast!

+12
Dec 16 '08 at 23:35

Personally, I like seeing every analysis done both ways: requests/second and average time/request, and I also love seeing the maximum request time on top of that. It's easy to flip between them: if you handle 61 requests per second, you can just flip that into 1000 ms / 61 requests.
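A tiny sketch of that inversion (it assumes requests are handled strictly one after another; with concurrent workers, throughput and per-request latency are no longer simple reciprocals):

```python
def ms_per_request(requests_per_second: float) -> float:
    # Flip a throughput figure into an average time per request.
    return 1000.0 / requests_per_second

print(ms_per_request(61))   # ~16.4 ms per request
print(ms_per_request(150))  # ~6.7 ms per request
```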

To answer your question, we have done extensive load testing ourselves and found that the range varies across the different Amazon hardware we use (the best value was the 32-bit medium CPU when it came down to $$ / event / second), and our throughput ranged from 29 requests/second/node up to 150 requests/second/node.

Better hardware, of course, gives better throughput, but not the best return on investment. Anyway, this post was great, as I was looking for some parallels to see whether my numbers were in the same ballpark, and to share mine in case anyone else is looking. Mine was purely a test of how high I could push the load.

NOTE: Thanks to the requests/second analysis (not ms/request), we discovered a major problem on Linux that we are trying to solve, where Linux (we tested a server written in C and in Java) freezes all the calls into the socket libraries when the load gets too heavy, which seems very odd. The full post can be found here: http://ubuntuforums.org/showthread.php?p=11202389

We are still trying to solve this, because it would give us a huge performance boost: our test goes from 2 minutes 42 seconds down to 1 minute 35 seconds when this is fixed, so we would see a 33% performance improvement .... not to mention that the worse the DoS attack, the longer these pauses get, so that all the CPUs drop to zero and stop processing ... in my opinion, server processing should keep going in the face of a DoS, but for some reason it freezes every once in a while during the DoS, sometimes for up to 30 seconds!!!

ADDITION: We found out that it was actually a JDK race-condition bug .... it was hard to isolate on large clusters, but when we ran a 1 server / 1 data node setup, with 10 of those, we could reproduce it every time and just look at whichever server/data node it happened on. Switching the JDK to an earlier release fixed the problem. We were on jdk1.6.0_26, I believe.

+7
Aug 31 '11 at 19:03

This is a very open-ended, apples-to-oranges kind of question.

You are asking two things: 1. the average request load of a production application, and 2. what is considered fast.

They are not necessarily related.

The average number of requests per second is determined by (a rough sketch of how these combine follows the list):

a. the number of concurrent users

b. the average number of page requests they make per second

c. the number of additional requests (i.e. ajax calls, etc.)
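As a rough back-of-the-envelope sketch of how those three factors combine (all of the numbers below are made up purely for illustration):

```python
def estimated_rps(concurrent_users: int,
                  pages_per_user_per_second: float,
                  extra_requests_per_page: float) -> float:
    # users * page rate * (the page itself + its ajax/asset calls)
    return concurrent_users * pages_per_user_per_second * (1 + extra_requests_per_page)

# e.g. 500 concurrent users, each loading a page every 20 s (0.05 pages/s),
# with 3 additional ajax calls per page -> about 100 requests per second
print(estimated_rps(500, 0.05, 3))  # 100.0
```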

As for what is considered fast ... do you mean how many requests a site can handle? Or whether a piece of hardware is considered fast if it can handle xyz requests per second?

+3
Dec 16 '08 at 23:22

Note that request-rate graphs will follow a sinusoidal pattern with "peak hours", maybe 2x or 3x the rate you get while your users are asleep. (This can be useful when scheduling daily batch processing on the servers.)

You can see the effect even on "international" (multilingual, localized) sites such as Wikipedia.
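If all you have is an average rate, one hedged way to allow for that daily swing when sizing capacity is to multiply by an assumed peak factor (the 2x-3x figure above); a tiny illustration:

```python
def peak_rps(average_rps: float, peak_factor: float = 3.0) -> float:
    # Size for the busiest hour rather than the 24-hour average;
    # peak_factor is an assumption, not a measured value.
    return average_rps * peak_factor

print(peak_rps(100))       # plan for ~300 req/s if the daily average is 100
print(peak_rps(100, 2.0))  # ~200 req/s with a gentler 2x peak
```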

+1
Dec 17 '08 at 0:09

Less than 2 seconds per user, usually - i.e. users who see responses slower than that will consider the system slow.

So now you tell me how many users you have connected.

+1
Dec 17 '08 at 0:24

You can search for "slashdot effect analysis" for graphs of what you would see if some aspect of the site suddenly became popular in the news, for example this graph on the wiki.

The web applications that survive are typically the ones that can generate static pages instead of putting every request through a processing language.
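As a generic sketch of that idea (not what any particular site mentioned above actually does): render the page once, write it out as a static file, and let the web server serve that file directly so a traffic spike never reaches the application code.

```python
import pathlib

def render_article(article_id: int) -> str:
    # Stand-in for the expensive dynamic rendering step (hypothetical).
    return f"<html><body><h1>Article {article_id}</h1></body></html>"

def publish_static(article_id: int, out_dir: str = "public") -> pathlib.Path:
    # Pre-render to disk; nginx/Apache can then serve the file with no app server involved.
    path = pathlib.Path(out_dir) / f"article-{article_id}.html"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(render_article(article_id), encoding="utf-8")
    return path

print(publish_static(42))  # public/article-42.html
```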

There was a great video (I think it might have been on ted.com? I think it might have been from the flickr web team? Does anyone know the link?) with ideas on how to scale websites beyond a single server, for example how to allocate connections among read-only, read-write, and write-back server connections to get the best effect for the different types of users.

+1
Dec 17 '08 at 12:30 a.m.


