uWSGI does not free memory

I am experimenting with an extremely small Django application that serves mainly HTML and static content, with no database operations. The application runs behind nginx and uWSGI. I also have Postgres installed, but for this problem I did not perform any database operations.

I find that memory is not freed by the uWSGI process. In this graph from New Relic, you can see that the memory occupied by the uWSGI process stays flat at ~100 MB, even though there was absolutely no activity on the website/application during that period.

Also FYI: when it started, the app/uWSGI process consumed only 56 MB. It reached ~100 MB after I load-tested it with ab (ApacheBench), hitting it with -n 1000 -c 10 or thereabouts.

[New Relic graph: uWSGI process memory usage, flat at ~100 MB]

Nginx conf

server {
    listen 80;
    server_name <ip_address>;

    root /var/www/mywebsite.com/;

    access_log /var/www/logs/nginx_access.log;
    error_log /var/www/logs/nginx_error.log;

    charset utf-8;
    default_type application/octet-stream;
    tcp_nodelay off;
    gzip on;

    location /static/ {
        alias /var/www/mywebsite.com/static/;
        expires 30d;
        access_log off;
    }

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/var/www/mywebsite.com/django.sock;
    }
}

app_uwsgi.ini

[uwsgi]
plugins = python

; define variables to use in this script
project = myapp
base_dir = /var/www/mywebsite.com
app = reloc

uid = www-data
gid = www-data

; process name for easy identification in top
procname = %(project)

no-orphans = true
vacuum = true
master = true
harakiri = 30
processes = 2

pythonpath = %(base_dir)/
pythonpath = %(base_dir)/src
pythonpath = %(base_dir)/src/%(project)

logto = /var/www/logs/uwsgi.log
chdir = %(base_dir)/src/%(project)
module = reloc.wsgi:application

socket = /var/www/mywebsite.com/django.sock
chmod-socket = 666
chown-socket = www-data

Update 1: it seems this is not uWSGI itself, but the Python worker processes holding on to certain data structures in memory for faster processing.

2 answers

Typically, web frameworks load their code into memory. Generally this is not a problem, but it is a good idea to put a cap on the total memory consumption of your workers, since an individual worker's memory usage can grow over many requests.

When a worker reaches or exceeds the cap, it is restarted after finishing the current request. This is done with the reload-on-rss option.

The value you want to set depends on the memory available on your server and the number of workers.
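As a sketch, assuming the config file shown in the question (the 512 MB threshold is an arbitrary example, not a recommendation), this would go in the [uwsgi] section of your .ini:

```ini
[uwsgi]
; restart a worker once its resident set size exceeds 512 MB
; (value is in megabytes; pick a limit based on your server's
; available RAM divided by the number of workers)
reload-on-rss = 512
```

With processes = 2 as in the question, a 512 MB limit means the workers can use up to ~1 GB between them before recycling.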


You can also limit the maximum number of requests per worker with the max-requests option in your .ini. Once a worker has handled that many requests, uWSGI kills it and spawns a fresh one, releasing whatever memory the old worker had accumulated.
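For example (the value 1000 is arbitrary, chosen only for illustration):

```ini
[uwsgi]
; recycle each worker after it has served 1000 requests
max-requests = 1000
```

This trades a small amount of restart overhead for a guarantee that slow memory growth in a worker cannot accumulate indefinitely.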

