I suggest you read this page and watch the attached video. In it, the Oracle Real-World Performance group demonstrates how an application with a pool of 96 connections easily handles 10,000 front-end users performing 20,000 transactions per second.
PostgreSQL recommends the formula:
connections = ((core_count * 2) + effective_spindle_count)
where core_count is the number of processor cores and effective_spindle_count is the number of disks in the RAID array. For many servers, this formula yields a maximum pool size of only 10-20 connections.
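As a concrete illustration, here is a minimal sketch of that formula in Java; the hardware numbers (4 cores, a 2-disk RAID) are hypothetical, chosen only to show the arithmetic:

```java
public class PoolSizing {
    public static void main(String[] args) {
        int coreCount = 4;              // hypothetical: 4 CPU cores
        int effectiveSpindleCount = 2;  // hypothetical: 2 disks in the RAID

        // connections = ((core_count * 2) + effective_spindle_count)
        int connections = (coreCount * 2) + effectiveSpindleCount;

        System.out.println("Suggested max pool size: " + connections); // 10
    }
}
```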
Most likely, even with 100 connections, you are heavily over-saturating your database. Do you have 50 processor cores? If your disks are spinning platters rather than SSDs, each head can only be in one place at a time, and unless the entire data set fits in memory there is no way the database can service that many (100-200) queries at once.
UPDATE: To directly answer the question about the size of a fixed pool. You will probably get maximum performance from your application with a pool whose maximum number of connections sits just to the left of the "knee", or peak, of the performance curve your database can sustain. This is probably a small number. If you have spike demand, as many applications do, trying to open new connections to grow the pool at the moment of the spike is counterproductive (it imposes a lot of load on the server precisely when it is busiest). A small, fixed pool will give you predictable performance.
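If you happen to be using HikariCP (one common JDBC pool; the question does not say which pool is in use), a fixed pool is configured by setting the minimum idle count equal to the maximum size. This is only a sketch; the JDBC URL, credentials, and pool size below are hypothetical:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class FixedPool {
    public static void main(String[] args) {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // hypothetical URL
        config.setUsername("app");    // hypothetical credentials
        config.setPassword("secret");

        // A small, fixed pool: minimumIdle == maximumPoolSize means the pool
        // never grows or shrinks, so there is no connection churn during spikes.
        config.setMaximumPoolSize(10);
        config.setMinimumIdle(10);

        try (HikariDataSource ds = new HikariDataSource(config)) {
            // hand ds to your application's data-access layer
        }
    }
}
```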