Multiple server processes using nginx and uWSGI

I noticed that you can run multiple worker processes in a single uWSGI instance behind nginx:

uwsgi --processes 4 --socket /tmp/uwsgi.sock 
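For context, the nginx side of this first setup simply points at that one socket; a minimal sketch, assuming the socket path from the command above:

    location / {
        include uwsgi_params;              # standard nginx parameter set for the uwsgi protocol
        uwsgi_pass unix:/tmp/uwsgi.sock;   # single socket; the 4 workers compete for accept()
    }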

Or you can run multiple uWSGI instances on different sockets and balance the load between them using nginx:

    upstream my_servers {
        server unix:///tmp/uwsgi1.sock;
        server unix:///tmp/uwsgi2.sock;
        # ...
    }
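The upstream is then referenced from a location block; a minimal sketch, assuming the upstream name above:

    location / {
        include uwsgi_params;
        uwsgi_pass my_servers;   # nginx chooses a backend per request (round-robin by default)
    }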

What is the difference between these two strategies, and is one preferable over the other?

How does the load balancing performed by uWSGI (in the first case) differ from the load balancing performed by nginx (in the second case)?

nginx can manage servers on multiple hosts. Can uWSGI do this within a single instance? Do some uWSGI features only work within one uWSGI instance (e.g., shared memory / the cache)? If so, it could be difficult to scale from the first approach to the second....
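For illustration, scaling the second approach across hosts would mean running each uWSGI instance on a TCP socket and listing host:port pairs in the upstream; a sketch with hypothetical addresses:

    # on each application host (addresses below are made up)
    uwsgi --processes 4 --socket 0.0.0.0:3031

    # in the nginx configuration
    upstream my_servers {
        server 10.0.0.1:3031;
        server 10.0.0.2:3031;
    }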

1 answer

The difference is that in the uWSGI case there is no "real" load balancing. The first free process will always respond, so this approach is better than having nginx spread the load between multiple instances (this is obviously true only for local instances). What you have to take into account is the thundering herd problem. Its implications are discussed here: http://uwsgi-docs.readthedocs.org/en/latest/articles/SerializingAccept.html
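As an illustration of what the linked article covers, uWSGI can serialize accept() across its workers with the --thunder-lock option; a minimal sketch, reusing the socket path from the question:

    # serialize accept() across the workers to mitigate the thundering herd
    uwsgi --processes 4 --socket /tmp/uwsgi.sock --thunder-lock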

Finally, the whole uWSGI stack is multi-process / multi-thread (and greenthread) aware, so the cache (for example) is shared by all of the processes.
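For example, a shared cache is declared once per instance and is then visible to every worker process; a minimal sketch using the --cache2 option (the cache name and size here are arbitrary):

    # one in-memory cache shared by all 4 worker processes of this instance
    uwsgi --processes 4 --socket /tmp/uwsgi.sock --cache2 name=mycache,items=100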
