uWSGI broken pipe - Django, nginx

I occasionally (and seemingly at random) get a broken pipe in uWSGI, as shown below. Any idea what could be causing this, or how I can debug it?

I am running Django (with Tastypie), uWSGI and nginx on an AWS m3.medium instance (Ubuntu 14.04).

    [pid: 1516|app: 0|req: 548/1149] 10.0.0.204 () {42 vars in 1039 bytes} [Wed Jun 18 16:11:11 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 20 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0)
    [pid: 1517|app: 0|req: 594/1150] 10.0.0.204 () {42 vars in 1039 bytes} [Wed Jun 18 16:11:12 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 15 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0)
    [pid: 1516|app: 0|req: 549/1151] 10.0.0.204 () {42 vars in 1039 bytes} [Wed Jun 18 16:11:13 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 15 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0)
    [pid: 1516|app: 0|req: 550/1152] 10.0.0.204 () {42 vars in 1039 bytes} [Wed Jun 18 16:11:13 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 14 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0)
    [pid: 1517|app: 0|req: 595/1153] 10.0.0.204 () {42 vars in 1039 bytes} [Wed Jun 18 16:11:14 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 15 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0)
    [pid: 1516|app: 0|req: 551/1154] 10.0.0.204 () {42 vars in 1039 bytes} [Wed Jun 18 16:11:14 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 14 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0)
    [pid: 1517|app: 0|req: 596/1155] 10.0.0.204 () {42 vars in 1039 bytes} [Wed Jun 18 16:11:15 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 12 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0)
    [pid: 1516|app: 0|req: 552/1156] 10.0.0.204 () {42 vars in 1039 bytes} [Wed Jun 18 16:11:15 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 12 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0)
    Wed Jun 18 16:11:17 2014 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 287] during GET /api/v1/clock/?format=json (10.0.0.204)
    IOError: write error
    [pid: 1512|app: 0|req: 1/1157] 10.0.0.204 () {42 vars in 1039 bytes} [Wed Jun 18 16:11:16 2014] GET /api/v1/clock/?format=json => generated 0 bytes in 1460 msecs (HTTP/1.1 200) 4 headers in 0 bytes (0 switches on core 0)
    announcing my loyalty to the Emperor...
    Wed Jun 18 20:11:17 2014 - [emperor] vassal api.ini is now loyal
    [pid: 1516|app: 0|req: 553/1158] 10.0.0.159 () {42 vars in 1039 bytes} [Wed Jun 18 16:11:33 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 14 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0)
    [pid: 1516|app: 0|req: 554/1159] 10.0.0.204 () {46 vars in 908 bytes} [Wed Jun 18 16:11:41 2014] GET /api/v1/clock/ => generated 1298 bytes in 14 msecs (HTTP/1.0 200) 4 headers in 119 bytes (1 switches on core 0)

I also notice that the per-worker request counter sometimes drops to a very low number. Note the second request below, 2/1303: it came from a worker that had handled almost no requests, and it was noticeably slower than the others.

    [pid: 1516|app: 0|req: 624/1302] 10.0.0.204 () {42 vars in 1039 bytes} [Wed Jun 18 16:41:09 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 12 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0)
    [pid: 1512|app: 0|req: 2/1303] 10.0.0.204 () {42 vars in 1039 bytes} [Wed Jun 18 16:41:10 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 50 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0)
    [pid: 1516|app: 0|req: 625/1304] 10.0.0.159 () {42 vars in 1039 bytes} [Wed Jun 18 16:41:29 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 17 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0)
    [pid: 1517|app: 0|req: 668/1305] 10.0.0.204 () {46 vars in 908 bytes} [Wed Jun 18 16:41:31 2014] GET /api/v1/clock/ => generated 1298 bytes in 18 msecs (HTTP/1.0 200) 4 headers in 119 bytes (1 switches on core 0)
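For extra visibility while this is happening, the vassal config further down exposes a uWSGI stats socket; dumping it shows per-worker state (requests served, exceptions, respawns). A minimal sketch, assuming the socket path from my config and an OpenBSD-style netcat:

    # Dump the uWSGI stats JSON (the stats server writes JSON and closes the connection).
    nc -U /tmp/domain.stats.sock | python -m json.tool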

UPDATE: nginx.conf

    user www-data;
    worker_processes 1;
    pid /run/nginx.pid;

    events {
        worker_connections 1024;
        # multi_accept on;
    }

    http {
        client_body_timeout 12;
        client_header_timeout 12;
        keepalive_timeout 15;
        send_timeout 10;
        client_max_body_size 8m;

        ##
        # Basic Settings
        ##
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        types_hash_max_size 2048;
        # server_tokens off;
        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # Logging Settings
        ##
        #access_log off;
        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        ##
        # Gzip Settings
        ##
        gzip on;
        gzip_disable "msie6";
        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

        ##
        # nginx-naxsi config
        ##
        # Uncomment it if you installed nginx-naxsi
        ##
        #include /etc/nginx/naxsi_core.rules;

        ##
        # nginx-passenger config
        ##
        # Uncomment it if you installed nginx-passenger
        ##
        #passenger_root /usr;
        #passenger_ruby /usr/bin/ruby;

        ##
        # Virtual Host Configs
        ##
        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
    }

The virtual host configuration for this particular site:

    upstream django {
        server unix:/tmp/domain.sock;
    }

    server {
        listen 80;
        server_name domain.com;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443;
        server_name domain.com;

        location /static {
            alias /home/ubuntu/domain/static;
        }

        location / {
            proxy_set_header X-Forwarded-Proto https;
            uwsgi_pass django;
            include /etc/nginx/uwsgi_params;
        }
    }
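One thing I have been experimenting with (not sure it is the right fix) is making nginx more tolerant on the uWSGI side of this location, since a client abort or timeout makes nginx close the upstream socket, and uWSGI then logs the broken pipe. A sketch of the same location block with those directives added; the timeout values are guesses:

    location / {
        proxy_set_header X-Forwarded-Proto https;
        uwsgi_pass django;
        include /etc/nginx/uwsgi_params;

        # Keep reading the response from uWSGI even if the client goes away,
        # so uWSGI does not write into a closed socket.
        uwsgi_ignore_client_abort on;

        # Give slow responses more room before nginx drops the upstream connection.
        uwsgi_read_timeout 60;
        uwsgi_send_timeout 60;
    }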

uWSGI config (vassal):

    [uwsgi]
    env = DEBUG=False
    env = DB_ENVIRONMENT=production
    env = NEW_RELIC_CONFIG_FILE=config/newrelic.ini
    env = NEW_RELIC_ENVIRONMENT=production
    chdir = /home/ubuntu/domain
    home = /home/ubuntu/domain/venv
    module = domain.wsgi
    processes = 20
    uid = www-data
    gid = www-data
    chmod-socket = 666
    socket = /tmp/domain.sock
    stats = /tmp/domain.stats.sock
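If these turn out to be harmless client disconnects, uWSGI can also be told not to raise or log them. These are standard uWSGI options, but whether silencing the errors is appropriate here is my assumption:

    ; Possible additions to the [uwsgi] section above to quiet benign write errors
    ; caused by the peer (nginx or the client) closing the connection early.
    ignore-sigpipe = true
    ignore-write-errors = true
    disable-write-exception = true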

Contents of /etc/rc.local, which starts the uWSGI Emperor process on boot:

    #!/bin/sh -e
    /usr/local/bin/uwsgi --emperor /etc/uwsgi/vassals --logto /var/log/uwsgi/emperor.log
    exit 0
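Since this is Ubuntu 14.04, I have also considered replacing rc.local with an Upstart job so the Emperor is respawned if it ever dies. A sketch; the job name and file path are mine:

    # /etc/init/uwsgi-emperor.conf (hypothetical file name)
    description "uWSGI Emperor"
    start on runlevel [2345]
    stop on runlevel [016]
    respawn
    exec /usr/local/bin/uwsgi --emperor /etc/uwsgi/vassals --logto /var/log/uwsgi/emperor.log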
Tags: python, django, nginx, uwsgi, tastypie
1 answer

You can safely ignore them; they are raised when the client (or nginx) disconnects in the middle of a request. Since the response time is really low, this is most likely the client's fault. By the way, post your nginx and uWSGI configuration to be sure.
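If you want to convince yourself, you can reproduce the message by aborting a request from the client side while watching the log. A rough sketch using your URL and log path (adjust as needed):

    # Terminal 1: follow the uWSGI log
    tail -f /var/log/uwsgi/emperor.log

    # Terminal 2: start a request and kill it almost immediately;
    # uWSGI should log the same "Broken pipe ... during GET ..." line.
    timeout 0.2 curl -sk "https://domain.com/api/v1/clock/?format=json" > /dev/null

Since your responses normally finish in about 15 ms, you may need to make the view artificially slow (for example with a short sleep) for the abort to land mid-response.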

