Bad gateway 502 after testing a small load on fastcgi-mono-server via nginx and ServiceStack

I am trying to start webservice API with ServiceStack under nginx and fastcgi-mono-server.

The server starts normally and the API is up and running. Through the ServiceStack profiler I can see response times in the browser of up to 10 ms.

But as soon as I run a small load test with siege (500 requests in total over 10 concurrent connections), I start receiving 502 Bad Gateway, and to recover I have to restart fastcgi-mono-server.
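For reference, a load test like the one described can be reproduced with something like the following sketch (it assumes siege is installed and uses the hostname from the nginx config below; adjust both to your setup):

```shell
# Sketch of the load test described above: 500 requests in total
# over 10 concurrent connections.
concurrency=10
repetitions=50
echo "total requests: $((concurrency * repetitions))"

if command -v siege >/dev/null 2>&1; then
    # -c: concurrent users, -r: repetitions per user
    siege -c "$concurrency" -r "$repetitions" http://local-api.acme.com/
else
    echo "siege not installed; skipping actual run"
fi
```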

nginx itself is fine; it is fastcgi-mono-server that stops responding after this small load.

I tried both TCP and Unix sockets (I am aware of the permissions issue with Unix sockets, but I have already fixed that).

Here are my settings:

    server {
        listen 80;
        listen local-api.acme.com:80;
        server_name local-api.acme.com;

        location / {
            root /Users/admin/dev/acme/Acme.Api/;
            index index.html index.htm default.aspx Default.aspx;
            fastcgi_index Default.aspx;
            fastcgi_pass unix:/tmp/fastcgi.socket;
            include /usr/local/etc/nginx/fastcgi_params;
        }
    }

To start fastcgi-mono-server:

    sudo fastcgi-mono-server4 /applications=local-api.acme.com:/:/Users/admin/dev/acme/Acme.Api/ \
        /socket=unix:/tmp/fastcgi.socket /multiplex=True /verbose=True /printlog=True

EDIT: I forgot to mention an important detail: I am running this on Mac OS X.

I also tested every web server configuration available for Mono: console application, Apache mod_mono, nginx fastcgi, and proxy_pass. All of them showed the same crash after several requests with Mono 3.2.3 on Mac OS X.

I was able to test the same configuration on a Linux machine and there was no problem.

So it seems that this is a problem with Mono / ASP.NET when working on Mac OS X.

1 answer

EDIT: The original question says there were no problems on Linux; however, I have run into the same difficulties on Linux under higher load (e.g. 50+ concurrent requests), so this may well apply to OS X too...

I dug into this problem and found a solution for my setup: I no longer get 502 Bad Gateway errors when load testing my simple hello-world application. I tested everything on Ubuntu 13.10 with a freshly compiled Mono 3.2.3 installed in /opt/mono.

When you start fastcgi-mono-server4 with "/verbose=True /printlog=True", you will see output like this:

    Root directory: /some/path/you/defined
    Parsed unix:/tmp/nginx-1.sockets as URI unix:/tmp/nginx-1.sockets
    Listening on file /tmp/nginx-1.sockets with default permissions
    Max connections: 1024
    Max requests: 1024

The important lines are "Max connections" and "Max requests". These tell you how many active TCP connections and requests this fastcgi-mono-server instance will handle; in this case, 1024.

My NGINX configuration reads:

    worker_processes 4;

    events {
        worker_connections 1024;
    }

So I have 4 workers, each of which can hold 1024 connections. nginx will therefore happily accept 4096 concurrent connections, which are then forwarded to mono-fastcgi (which is only willing to handle 1024). So mono-fastcgi "protects itself" and stops serving requests. There are two solutions for this:

  • Reduce the number of connections that nginx will accept
  • Increase the size of your fastcgi upstream pool.
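The mismatch is easy to see with back-of-the-envelope arithmetic, using the numbers from the nginx config and the fastcgi-mono-server log above:

```shell
# nginx capacity vs. fastcgi-mono-server capacity, as described above.
nginx_workers=4               # from worker_processes
worker_connections=1024       # from the nginx "events" block
fastcgi_max_connections=1024  # from the fastcgi-mono-server log

nginx_capacity=$((nginx_workers * worker_connections))
overflow=$((nginx_capacity - fastcgi_max_connections))

echo "nginx can accept:   $nginx_capacity connections"
echo "fastcgi can handle: $fastcgi_max_connections connections"
echo "excess connections: $overflow"
```

This prints an excess of 3072 connections that mono-fastcgi has no capacity for.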

Option 1 is trivially solved by changing the nginx configuration to read something like:

    worker_processes 4;   # <-- or 1 here

    events {
        worker_connections 256;   # <-- if 1 above, then 1024 here
    }

However, this most likely means you will not be able to fully utilize the resources of your machine.

The solution for option 2 is a bit trickier. First, fastcgi-mono-server has to be started multiple times. To do this I created the following script (inside the website that should be running):

    function startFastcgi {
        /opt/mono/bin/fastcgi-mono-server4 /loglevels=debug /printlog=true /multiplex=false /applications=/:`pwd` /socket=$1 &
    }

    startFastcgi 'unix:/tmp/nginx-0.sockets'
    startFastcgi 'unix:/tmp/nginx-1.sockets'
    startFastcgi 'unix:/tmp/nginx-2.sockets'
    startFastcgi 'unix:/tmp/nginx-3.sockets'

    chmod 777 /tmp/nginx-*
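After running the script, it is worth confirming that all four sockets actually exist before pointing nginx at them. A small check, matching the hypothetical paths used above:

```shell
# Verify each socket the startup script should have created.
# test -S succeeds only if the path exists and is a socket.
for i in 0 1 2 3; do
    if test -S "/tmp/nginx-$i.sockets"; then
        echo "nginx-$i.sockets: ok"
    else
        echo "nginx-$i.sockets: missing"
    fi
done
```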

This launches 4 fastcgi-mono-server instances, each able to accept 1024 connections. nginx should then be configured something like this:

    upstream servercom {
        server unix:/tmp/nginx-0.sockets;
        server unix:/tmp/nginx-1.sockets;
        server unix:/tmp/nginx-2.sockets;
        server unix:/tmp/nginx-3.sockets;
    }

    server {
        listen 80;

        location / {
            fastcgi_buffer_size 128k;
            fastcgi_buffers 4 256k;
            fastcgi_busy_buffers_size 256k;
            fastcgi_pass servercom;
            include fastcgi_params;
        }
    }

This configures nginx with a pool of 4 upstream workers, which it will use in round-robin fashion. Now, when I hammer my server with Boom at concurrency 200 for 1 minute, everything is fine (i.e. no 502s).
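As a lighter-weight sanity check than a full load run, you can confirm that repeated requests through the pool all succeed (hypothetical: assumes nginx is listening on localhost; adjust host and port to your setup):

```shell
# Fire a handful of sequential requests; with the round-robin pool
# above they should be spread across the four backends and all
# return HTTP 200. curl prints the status code via -w even when
# the connection fails (it shows 000 in that case).
for i in 1 2 3 4 5 6 7 8; do
    curl -s -o /dev/null --max-time 2 \
         -w "request $i: HTTP %{http_code}\n" \
         http://localhost/
done
```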

Hope you can somehow apply this to your code and get the job done :)

PS:

You can download my Hello World ServiceStack code, which I used for testing, here.

And you can download my full NGINX.config here.

There are a few paths you will need to adjust, but it should serve as a good base.

