EDIT: In the original question I said there were no problems on Linux; however, I did run into difficulties on Linux under high load (e.g. 50+ concurrent requests), which may also be the case on OS X ...
I dug into this problem and found a solution for my setup - I no longer get 502 Bad Gateway errors when load-testing my simple hello-world application. I tested everything on Ubuntu 13.10 with a fresh Mono 3.2.3 installed in /opt/mono.
When you start the fastcgi-mono-server4 with "/verbose=True /printlog=True", you will see output like this:
```
Root directory: /some/path/you/defined
Parsed unix:/tmp/nginx-1.sockets as URI unix:/tmp/nginx-1.sockets
Listening on file /tmp/nginx-1.sockets with default permissions
Max connections: 1024
Max requests: 1024
```
The important lines are "Max connections" and "Max requests". These determine how many active TCP connections and requests the mono-fastcgi server will handle - in this case, 1024.
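If I remember correctly, these limits can also be raised on the mono-fastcgi side via the /maxconns and /maxreqs options - verify the flag names against `fastcgi-mono-server4 /help` on your version before relying on them:

```shell
# Hypothetical example: start the server with raised limits.
# /maxconns and /maxreqs are assumptions - check `fastcgi-mono-server4 /help`.
/opt/mono/bin/fastcgi-mono-server4 /verbose=True /printlog=True \
    /maxconns=2048 /maxreqs=2048 \
    /applications=/:`pwd` /socket=unix:/tmp/nginx-1.sockets &
```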
My NGINX configuration reads:
```
worker_processes 4;

events {
    worker_connections 1024;
}
```
So I have 4 workers, each of which can hold 1024 connections. NGINX therefore happily accepts 4096 concurrent connections, which are then forwarded to mono-fastcgi (which only wants to handle 1024). mono-fastcgi thus "protects itself" and stops serving requests. There are two solutions to this:
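The capacity math is worth spelling out. A throwaway sketch of the mismatch, using the numbers from my configs above (substitute your own values):

```shell
#!/bin/sh
# Capacity math: what nginx can accept vs. what mono-fastcgi will handle.
# Values taken from the configs above - adjust to your own setup.
NGINX_WORKERS=4
NGINX_CONNS_PER_WORKER=1024
FASTCGI_MAXCONNS=1024

NGINX_TOTAL=$((NGINX_WORKERS * NGINX_CONNS_PER_WORKER))
echo "nginx accepts up to:  $NGINX_TOTAL"
echo "mono-fastcgi handles: $FASTCGI_MAXCONNS"

# If nginx can take in more than the fastcgi backend will serve, overload is possible.
[ "$NGINX_TOTAL" -gt "$FASTCGI_MAXCONNS" ] && echo "overload possible"
```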
- Decrease the number of connections that NGINX will accept
- Increase the fastcgi upstream pool
Option 1 is trivially solved by changing the NGINX configuration to read something like:
```
worker_processes 4;   # <-- or 1 here

events {
    worker_connections 256;   # <-- if 1 above, then 1024 here
}
```
However, this likely means that you can't max out the resources on your machine.
The solution to option 2 is a bit trickier. First, mono-fastcgi has to be started several times. For this I created the following script (inside the website that should be run):
```
function startFastcgi {
    /opt/mono/bin/fastcgi-mono-server4 /loglevels=debug /printlog=true /multiplex=false /applications=/:`pwd` /socket=$1 &
}

startFastcgi 'unix:/tmp/nginx-0.sockets'
startFastcgi 'unix:/tmp/nginx-1.sockets'
startFastcgi 'unix:/tmp/nginx-2.sockets'
startFastcgi 'unix:/tmp/nginx-3.sockets'

chmod 777 /tmp/nginx-*
```
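After running the script, it doesn't hurt to sanity-check that all four sockets actually came up. A small sketch (the paths match the script above; nothing here is mono-specific):

```shell
#!/bin/sh
# Count how many of the expected unix sockets from the script above exist.
check_sockets() {
    ok=0
    for i in 0 1 2 3; do
        # -S tests whether the path exists and is a socket
        [ -S "/tmp/nginx-$i.sockets" ] && ok=$((ok + 1))
    done
    echo "$ok of 4 fastcgi sockets are up"
}

check_sockets
```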
This starts 4 mono-fastcgi workers, each of which can accept 1024 connections. Then NGINX needs to be configured something like this:
```
upstream servercom {
    server unix:/tmp/nginx-0.sockets;
    server unix:/tmp/nginx-1.sockets;
    server unix:/tmp/nginx-2.sockets;
    server unix:/tmp/nginx-3.sockets;
}

server {
    listen 80;

    location / {
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_pass servercom;
        include fastcgi_params;
    }
}
```
This configures NGINX with a pool of 4 "upstream workers" which it uses in round-robin fashion. Now, when I hammer my server with Boom at concurrency 200 for 1 minute, everything holds up (i.e. no 502s).
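If you don't have Boom handy, ApacheBench (from the apache2-utils package on Ubuntu) can drive a similar test; the URL is a placeholder for your own server:

```shell
# 200 concurrent clients for 60 seconds against the nginx frontend.
ab -c 200 -t 60 http://localhost/
```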
Hope you can somehow apply this to your code and get the job done :)
PS:
You can download my Hello World ServiceStack code, which I used for testing, here.
And you can download my full NGINX.config here.
There are some paths you will need to adjust, but it should serve as a good base.