As it turned out, increasing max_dbs_open was at best a partial answer and at worst misleading. In our case, the problem was not the number of open databases but, apparently, the number of HTTP connections used by our many replications.
Each replication can use up to min(worker_processes + 1, http_connections) connections, where worker_processes is the number of workers assigned to each replication and http_connections is the maximum number of HTTP connections allocated to each replication, as described in this document.
Thus, the total number of connections used is:
number of replications * min(worker_processes + 1, http_connections)
The default value of worker_processes is 4, and the default value of http_connections is 20, so with 100 replications the total number of HTTP connections used by replication is 100 * min(4 + 1, 20) = 500. A separate setting, max_connections, determines the maximum number of HTTP connections the CouchDB server will allow, as described in this document. Its default value is 2048.
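For reference, here is a minimal sketch of where these defaults live in the ini configuration (assuming a 1.x-style local.ini; section names can differ between CouchDB versions):

```ini
; local.ini — defaults relevant to the connection math above (sketch)
[replicator]
worker_processes = 4      ; workers per replication
http_connections = 20     ; max HTTP connections per replication

[httpd]
max_connections = 2048    ; max HTTP connections the server will accept
```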
In our case, each user has two replications: one from the user's database to the master database, and one from the master database back to the user's database. So with the default settings, every user we added consumed another 10 HTTP connections, which eventually exhausted the default max_connections.
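For illustration, the per-user pair might look something like this as _replicator documents (a sketch; the database names, the use of the _replicator database rather than the _replicate endpoint, and an unauthenticated local server are all assumptions):

```sh
# Two continuous replications per user: user -> master and master -> user
curl -X PUT http://localhost:5984/_replicator/user42_to_master \
     -H "Content-Type: application/json" \
     -d '{"source": "userdb-42", "target": "master", "continuous": true}'
curl -X PUT http://localhost:5984/_replicator/master_to_user42 \
     -H "Content-Type: application/json" \
     -d '{"source": "master", "target": "userdb-42", "continuous": true}'
```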
Since our replications are lightweight, with only a small amount of data moving from user to master and from master to user, we dialed down worker_processes and http_connections, raised max_connections, and all is well.
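What we ended up with looked roughly like this (the specific numbers are illustrative, not a recommendation; tune them to your own workload):

```ini
[replicator]
worker_processes = 1      ; fewer workers per replication, since each moves little data
http_connections = 2      ; cap HTTP connections per replication

[httpd]
max_connections = 4096    ; raise the server-wide connection ceiling
```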
UPDATE
A few other findings:
We had to raise the ulimit on open files for the CouchDB process so that it could hold more open connections.
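For example, something along these lines before starting CouchDB (the numbers, and whether you use a shell ulimit or /etc/security/limits.conf, depend on your platform):

```sh
# raise the open-file limit for the shell that launches CouchDB (sketch)
ulimit -n 8192
# or make it persistent for the couchdb user via /etc/security/limits.conf:
#   couchdb  soft  nofile  8192
#   couchdb  hard  nofile  8192
```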
Creating replications too quickly also caused problems. Throttling how quickly I created new replications helped alleviate the problem as well. YMMV.
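As a sketch of what that throttling meant in practice, we paused briefly between creating each user's replication pair (the delay value, database names, and use of the _replicator database are assumptions):

```sh
# create replication pairs with a pause between users to avoid a burst of new connections
for user in userdb-1 userdb-2 userdb-3; do
  curl -X PUT "http://localhost:5984/_replicator/${user}_to_master" \
       -H "Content-Type: application/json" \
       -d "{\"source\": \"${user}\", \"target\": \"master\", \"continuous\": true}"
  curl -X PUT "http://localhost:5984/_replicator/master_to_${user}" \
       -H "Content-Type: application/json" \
       -d "{\"source\": \"master\", \"target\": \"${user}\", \"continuous\": true}"
  sleep 2   # throttle how quickly new replications start
done
```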