I personally like seeing the analysis done both ways every time .... requests/second and average time/request, and I love seeing the maximum request time on top of that. It's easy to flip between them: if you have 61 requests per second, you can just flip it to 1000 ms / 61 requests, or about 16 ms per request.
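For what it's worth, here is a tiny sketch of that flip (the class and method names are my own, not from the post). One caveat worth a comment: the reciprocal only holds when a single request is in flight at a time; with concurrent clients, throughput and average latency are no longer simple inverses.

```java
// Hypothetical helper (my names, not from the post): requests/second and
// average ms/request are reciprocals scaled by 1000 -- but only for a
// single request in flight at a time. With N concurrent clients the two
// numbers decouple.
public class ThroughputMath {

    // 61 req/s  ->  1000.0 / 61  ~= 16.4 ms per request on average
    static double msPerRequest(double requestsPerSecond) {
        return 1000.0 / requestsPerSecond;
    }

    static double requestsPerSecond(double msPerRequest) {
        return 1000.0 / msPerRequest;
    }

    public static void main(String[] args) {
        System.out.printf("61 req/s = %.1f ms/request%n", msPerRequest(61));
    }
}
```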
To answer your question, we run a huge load test ourselves and chart its range across the various Amazon hardware we use (the best value was the 32-bit medium-CPU instance when it came down to $$ per event per second), and our throughput ranged from 29 requests/second/node to 150 requests/second/node.
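The post doesn't show the harness itself, so here is a minimal single-threaded sketch of how you might collect the three numbers discussed above (req/s, avg ms/request, max request time). `doRequest()` is a hypothetical stand-in for whatever call is actually being benchmarked.

```java
import java.util.concurrent.TimeUnit;

// Minimal single-node load-test sketch (my own illustration, not the
// harness from the post). doRequest() stands in for the real request.
public class LoadTest {

    static void doRequest() throws Exception {
        TimeUnit.MILLISECONDS.sleep(10);  // pretend work
    }

    public static void main(String[] args) throws Exception {
        final int total = 1000;
        long maxNanos = 0;
        long start = System.nanoTime();

        for (int i = 0; i < total; i++) {
            long t0 = System.nanoTime();
            doRequest();
            maxNanos = Math.max(maxNanos, System.nanoTime() - t0);
        }

        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.printf("%.1f req/s, avg %.1f ms/request, max %.1f ms%n",
                total / seconds,          // throughput
                1000.0 * seconds / total, // average latency
                maxNanos / 1e6);          // worst-case latency
    }
}
```

A real test would run many of these clients in parallel per node, but even this serial version surfaces the max-request-time outliers that per-second averages hide.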
Buying better hardware, of course, gives better results, but not the best return on investment. In any case, this post was great, as I was looking for some parallels to see if my numbers were in the ballpark, and I'm sharing mine in case someone else comes looking too. Mine is purely loading it as high as I can go.
NOTE: Thanks to the requests/second analysis (not ms/request), we found a major Linux issue we are trying to solve, where Linux (we tested with a server in C and in Java) freezes all calls into the socket libraries when under too heavy a load, which seems very weird. The full post can be found here .... http://ubuntuforums.org/showthread.php?p=11202389
We are still trying to solve this, because fixing it gives us a huge performance boost: our test goes from 2 minutes 42 seconds to 1 minute 35 seconds when it is fixed, so we see a 33% performance improvement .... not to mention that the worse the DoS attack, the longer these pauses get, where all CPUs drop to zero and stop processing ... in my opinion a server should keep processing in the face of a DoS, but for some reason it freezes every so often during the DoS, sometimes for up to 30 seconds!!!
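One way to spot those zero-throughput pauses from the client side is a watchdog that samples a shared completion counter once a second and flags any interval where it didn't move. A minimal sketch (my own illustration, not code from the forum thread):

```java
import java.util.concurrent.atomic.AtomicLong;

// Client-side stall detector (illustrative sketch): the request loop bumps
// a counter after each completed request, and a watchdog thread flags any
// second where the counter didn't move -- i.e., every call into the socket
// layer appears frozen for that interval.
public class StallWatchdog {

    static final AtomicLong completed = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        Thread watchdog = new Thread(() -> {
            long last = completed.get();
            while (true) {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    return;
                }
                long now = completed.get();
                if (now == last) {
                    System.err.println("no requests completed in the last second -- possible freeze");
                }
                last = now;
            }
        });
        watchdog.setDaemon(true);
        watchdog.start();

        // Simulated request loop; a real test would do socket I/O here.
        for (int i = 0; i < 100; i++) {
            Thread.sleep(50);             // pretend request
            completed.incrementAndGet();  // record completion
        }
    }
}
```

Logging timestamps alongside the flags would also let you measure how long each freeze lasts (the post reports pauses of up to 30 seconds under DoS load).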
EDIT: We found out it was a JDK race-condition bug .... it is difficult to isolate on large clusters, but when we ran 1 server / 1 data node per machine, but 10 of them, we could reproduce it every time and just look at the server/datanode on which it occurred. Switching the JDK to an earlier release fixed the problem. We were on jdk1.6.0_26, I believe.
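If you hit something similar and want to confirm which JDK each node is actually running before and after pinning a release, the standard system properties are enough (nothing here is specific to the poster's setup):

```java
// Prints the JDK version the current JVM is running, e.g. 1.6.0_26.
// Useful for verifying every node in a cluster is on the pinned release.
public class JdkVersion {
    public static void main(String[] args) {
        System.out.println(System.getProperty("java.version"));
        System.out.println(System.getProperty("java.vm.version"));
    }
}
```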
Dean Hiller Aug 31 '11 at 19:03