CouchDB / MochiWeb: the negative effect of persistent connections

I have a fairly simple CouchDB setup on my Mint/Debian box. My Java webapp was experiencing rather long delays when querying CouchDB, so I started looking for the reasons.

EDIT: The query pattern is lots of small requests with small JSON objects (for example, 300 bytes / 1 KB down).

Wireshark dumps look pretty good, mostly showing 3-5 millisecond turnarounds. Sampling JVM thread stacks showed me that the socket code (the client side of the Couch requests) is somewhat busy, but nothing remarkable. Then I tried to profile the same thing with ApacheBench and oops: I now see that keep-alive introduces a consistent extra 39 ms of latency compared to the non-persistent setup.

Does anyone know how to explain this? Perhaps persistent connections increase the congestion window at the TCP level and then stall due to TCP_WAIT and the small request/response sizes, or something like that? Should that option (TCP_WAIT) ever be on for TCP loopback connections?
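To see where the time actually goes from inside the JVM, a raw-socket probe along these lines can reproduce the ab numbers below. This is only a sketch, not the actual webapp code: it assumes CouchDB is listening on 127.0.0.1:5984 and that GET / returns a short JSON body with a Content-Length header, as the curl output below shows.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Sketch: send N GET requests over one persistent connection to CouchDB on
// localhost and print how long each response takes end to end.
public class KeepAliveProbe {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket("127.0.0.1", 5984)) {
            s.setTcpNoDelay(true); // client-side Nagle off; the server side is a CouchDB setting
            OutputStream out = s.getOutputStream();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(s.getInputStream(), StandardCharsets.US_ASCII));
            String req = "GET / HTTP/1.1\r\nHost: 127.0.0.1:5984\r\nConnection: keep-alive\r\n\r\n";
            for (int i = 0; i < 20; i++) {
                long t0 = System.nanoTime();
                out.write(req.getBytes(StandardCharsets.US_ASCII));
                out.flush();
                int contentLength = 0;
                String line;
                // Read the status line and headers up to the blank line.
                while (!(line = in.readLine()).isEmpty()) {
                    if (line.toLowerCase().startsWith("content-length:")) {
                        contentLength = Integer.parseInt(line.substring(15).trim());
                    }
                }
                // Read exactly the advertised body length (the JSON greeting).
                char[] body = new char[contentLength];
                int read = 0;
                while (read < contentLength) {
                    read += in.read(body, read, contentLength - read);
                }
                System.out.printf("request %2d: %.1f ms%n", i, (System.nanoTime() - t0) / 1e6);
            }
        }
    }
}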

w@mint ~ $ uname -a
Linux mint 2.6.39-2-486 #1 Tue Jul 5 02:52:23 UTC 2011 i686 GNU/Linux
w@mint ~ $ curl http://127.0.0.1:5984/
{"couchdb":"Welcome","version":"1.1.1"}

Run with keep-alive: about 40 ms per request on average.

w@mint ~ $ ab -n 1024 -c 1 -k http://127.0.0.1:5984/
>>>snip
Server Software:        CouchDB/1.1.1
Server Hostname:        127.0.0.1
Server Port:            5984

Document Path:          /
Document Length:        40 bytes

Concurrency Level:      1
Time taken for tests:   41.001 seconds
Complete requests:      1024
Failed requests:        0
Write errors:           0
Keep-Alive requests:    1024
Total transferred:      261120 bytes
HTML transferred:       40960 bytes
Requests per second:    24.98 [#/sec] (mean)
Time per request:       40.040 [ms] (mean)
Time per request:       40.040 [ms] (mean, across all concurrent requests)
Transfer rate:          6.22 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:     1   40   1.4     40      48
Waiting:        0    1   0.7      1       8
Total:          1   40   1.3     40      48

Percentage of the requests served within a certain time (ms)
  50%     40
>>>snip
  95%     40
  98%     41
  99%     44
 100%     48 (longest request)

Without keep-alive, voilà: about 1 ms per request on average.

w@mint ~ $ ab -n 1024 -c 1 http://127.0.0.1:5984/
>>>snip
Time taken for tests:   1.080 seconds
Complete requests:      1024
Failed requests:        0
Write errors:           0
Total transferred:      236544 bytes
HTML transferred:       40960 bytes
Requests per second:    948.15 [#/sec] (mean)
Time per request:       1.055 [ms] (mean)
Time per request:       1.055 [ms] (mean, across all concurrent requests)
Transfer rate:          213.89 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:     1    1   1.0      1      11
Waiting:        1    1   0.9      1      11
Total:          1    1   1.0      1      11

Percentage of the requests served within a certain time (ms)
  50%      1
>>>snip
  80%      1
  90%      2
  95%      3
  98%      5
  99%      6
 100%     11 (longest request)

OK, now with keep-alive on, but also asking to close the connection via the HTTP header. Again about 1 ms per request.

w@mint ~ $ ab -n 1024 -c 1 -k -H 'Connection: close' http://127.0.0.1:5984/
>>>snip
Time taken for tests:   1.131 seconds
Complete requests:      1024
Failed requests:        0
Write errors:           0
Keep-Alive requests:    0
Total transferred:      236544 bytes
HTML transferred:       40960 bytes
Requests per second:    905.03 [#/sec] (mean)
Time per request:       1.105 [ms] (mean)
Time per request:       1.105 [ms] (mean, across all concurrent requests)
Transfer rate:          204.16 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:     1    1   1.2      1      14
Waiting:        0    1   1.1      1      13
Total:          1    1   1.2      1      14

Percentage of the requests served within a certain time (ms)
  50%      1
>>>snip
  80%      1
  90%      2
  95%      3
  98%      6
  99%      7
 100%     14 (longest request)
1 answer

Yes, this is down to the TCP socket options. The ~40 ms stall on a persistent connection is the classic interaction between Nagle's algorithm and delayed ACKs; disabling Nagle on CouchDB's socket makes it go away. With the configuration below, all three cases now come in at about 1 ms per request.

[httpd]
socket_options = [{nodelay, true}]

See http://wiki.apache.org/couchdb/Performance#Network for details
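If changing the server configuration is not an option, the third ab run above points at a client-side workaround: simply don't reuse connections. With the stock java.net.HttpURLConnection client that can be done globally via the http.keepAlive system property. The following is only a sketch under that assumption; every request then pays the full connection-setup cost, so it is a stopgap rather than a substitute for the nodelay setting.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: disable HTTP keep-alive in the default URL handler so every request
// uses a fresh connection, mirroring the "ab -H 'Connection: close'" run above.
// Assumes CouchDB at 127.0.0.1:5984.
public class NoKeepAliveClient {
    public static void main(String[] args) throws Exception {
        // Must be set before the first HTTP connection is opened.
        System.setProperty("http.keepAlive", "false");

        URL url = new URL("http://127.0.0.1:5984/");
        for (int i = 0; i < 5; i++) {
            long t0 = System.nanoTime();
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String body = in.readLine(); // the one-line JSON greeting
                System.out.printf("request %d: %s (%.1f ms)%n",
                        i, body, (System.nanoTime() - t0) / 1e6);
            } finally {
                conn.disconnect();
            }
        }
    }
}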

