What relation should processes and threads have to my web server?

If I run a Ruby application on a web server such as Puma, which lets me combine processes and threads, what ratio of processes to threads should I use? Assume, of course, that my code is thread safe (and that I am running a Ruby implementation that supports native threads). I am not asking for specific numbers, just which general relationships are theoretically better than others.

If threads are much cheaper because they use little memory, should I use only threads? Then again, I have heard that hybrid models (combinations of threads and processes) are the best way to go. I have also heard that the number of processes should correspond to the number of cores; is that true?
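For context, the hybrid (multi-process, multi-thread) model is exactly what Puma's `-w` and `-t` flags control. The numbers below are purely illustrative, not a recommendation:

```shell
# Start Puma with 4 worker processes (often set near the core count)
# and 1..5 threads per worker process -- values are illustrative only
puma -w 4 -t 1:5 config.ru
```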

1 answer

This is probably not what you want to hear, but: it depends. In my opinion, even giving a rough ratio amounts to hand-waving. There are too many variables that can affect performance (hardware is probably the biggest: amount of RAM, number of cores, etc.).

I was in a similar situation, where I wanted to find out how many unicorn workers (application workers) and how many nginx workers would be best for my application (and what impact caching would have).

The best way to find the answer to this question is to run several tests. Here is more or less what I did:

I wrote a small script that uses apache bench (ab) to run tests against 3 URLs, writing the results to files:

function run_apache_bench() {
  # N = total number of requests, C = concurrency level (set by the caller);
  # ID labels the test run, $1 labels the configuration;
  # url1..url3 are placeholders for the three URLs under test
  ab -n $N -c $C -g "results/${ID}/root_${1}.tsv"  url1
  ab -n $N -c $C -g "results/${ID}/index_${1}.tsv" url2
  ab -n $N -c $C -g "results/${ID}/show_${1}.tsv"  url3
}
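The answer does not show the outer loop, but the driver around such a function presumably looked something like this (the variable names and worker counts are my assumptions; `echo` stands in for restarting the app server and calling `run_apache_bench`):

```shell
# Sketch of the outer test loop -- assumed structure, not the original script.
N=1000; C=50                  # total requests per URL / concurrency level
for workers in 1 2 4 8; do
  ID="unicorn_${workers}w"    # one results directory per configuration
  mkdir -p "results/${ID}"
  # here you would (re)start the app server with $workers workers,
  # then call: run_apache_bench "$workers"
  echo "config ${ID}: N=${N} C=${C} -> results/${ID}/"
done
```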

I then ran the tests with several different configurations (varying the number of workers, with and without caching, etc.) and used gnuplot to turn the results into plots:
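ab's `-g` option writes gnuplot-friendly TSV files; column 5 ("ttime") is the total time per request in milliseconds. A minimal plot script along these lines (my sketch, with hypothetical file names, not the author's actual script) can render one run:

```shell
# Sketch: render one "ab -g" result file as a PNG (file names are hypothetical).
# "every ::1" skips the header row; column 5 is ttime, total ms per request.
gnuplot <<'EOF'
set terminal png size 800,600
set output "results/run1/root_1.png"
plot "results/run1/root_1.tsv" every ::1 using 5 with points title "ttime (ms)"
EOF
```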

1 unicorn worker and 1 nginx worker:

[plot]

4 unicorn workers and 4 nginx workers:

[plot]

And finally, the "big picture" :)

[plot]

A few things I learned from the tests:

  • Caching makes a big difference.
  • apache bench requests only the html itself, not the javascript/css assets a real browser would also fetch.
  • The results depend heavily on the hardware (number of cores, etc.).

Hope this helps.
