How to handle too many concurrent connections even after using a connection pool?

Scenario

Say you have a website or application with a ton of traffic. Even when using a database connection pool, performance becomes a real problem (the site / application may even crash) because there are too many simultaneous connections.

Question

What are the possible solutions to this problem?

My thoughts

I thought that someone with this problem could create several databases (maybe on different machines, though I'm not sure that is necessary), each holding the same information and updated at the same time, which would multiply the number of connections a single database could handle. But if the database is large, that does not seem like a very viable solution.

+6
source
7 answers

The question is not specific enough to give a solid recommendation, but here is a fairly complete list of what can be done:

  • Database clustering . Suitable for situations where you do not want to change your application layer or your database; everything stays as it is. There is a limit to how far a database cluster can scale, though, and if your query volume keeps growing this solution will eventually fail too. The good news is that you keep all the functionality of a regular single-instance MySQL.
  • Sharding . Since your question is tagged MySQL, and MySQL does not support sharding natively, you would have to implement it at your application level. In this solution you distribute your data logically across several databases (preferably separate MySQL instances on separate hardware). You are responsible for finding the right database for each piece of data. This is one of the most effective solutions there is, but it is not always applicable: its biggest drawback is that data spread across two or more databases cannot be joined in a single transaction. A minimal routing sketch follows this list.
  • Replication . Depending on your scenario, you can enable replication, keep copies of your data on other servers, and connect to those copies instead of the main database to reduce its load. The default replication configuration is master/slave, in which data flows one way, from master to slave: changes made on a slave are not propagated back to the master. There is also a master/master configuration in which data flows in both directions, but you cannot guarantee atomic consistency for simultaneous changes made on both masters. Ultimately, this solution works best in master/slave mode, with the slaves used for read-only access. A read/write-splitting sketch also follows this list.
  • Caching . Perhaps this should not be listed here, but since your question does not rule it out, here it is. One way to reduce database load is to cache data after retrieving it. This can help a lot, especially when retrieving the data is expensive. There are many cache servers, such as memcached or redis. This way you avoid many database connections, though only for data retrieval. A cache-aside sketch follows this list as well.
  • Other storage engines . You can always switch to a more powerful engine if your current one does not give you what you need; of course, only do this if you actually need to. There are NoSQL engines today that are far more scalable than an RDBMS, support sharding natively, and can be scaled almost linearly with minimal effort. There are also Lucene-based solutions with powerful full-text search capabilities that give you the same automatic sharding. In fact, about the only reason to keep using a traditional RDBMS is the atomic behavior of transactions; if transactions are not a must, there are far more scalable options.
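Since MySQL has no built-in sharding, the routing lives in application code. A minimal sketch of the idea in TypeScript with the mysql2 library, where the shard hosts, credentials, table, and hashing scheme are all illustrative assumptions:

    import mysql, { Pool } from 'mysql2/promise';
    import { createHash } from 'crypto';

    // One pool per shard; the hostnames are hypothetical.
    const shards: Pool[] = [
      mysql.createPool({ host: 'db-shard-0', user: 'app', password: 'secret', database: 'app' }),
      mysql.createPool({ host: 'db-shard-1', user: 'app', password: 'secret', database: 'app' }),
    ];

    // Hash the shard key so every query for the same user lands
    // on the same database instance.
    function shardFor(userId: string): Pool {
      const digest = createHash('md5').update(userId).digest();
      return shards[digest.readUInt32BE(0) % shards.length];
    }

    export async function getUser(userId: string) {
      const [rows] = await shardFor(userId).query('SELECT * FROM users WHERE id = ?', [userId]);
      return rows;
    }

Note that joins and transactions across shards are exactly what this scheme gives up, as described above.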
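The replication option maps to a read/write split in the application: writes go to the master, reads are spread over the slaves. A sketch under the same assumptions (hypothetical hostnames and schema):

    import mysql, { Pool } from 'mysql2/promise';

    const master = mysql.createPool({ host: 'db-master', user: 'app', password: 'secret', database: 'app' });
    const replicas: Pool[] = [
      mysql.createPool({ host: 'db-replica-1', user: 'app', password: 'secret', database: 'app' }),
      mysql.createPool({ host: 'db-replica-2', user: 'app', password: 'secret', database: 'app' }),
    ];

    // Round-robin reads over the replicas; all writes hit the master.
    let next = 0;
    const readPool = (): Pool => replicas[next++ % replicas.length];

    export async function findOrders(userId: number) {
      const [rows] = await readPool().query('SELECT * FROM orders WHERE user_id = ?', [userId]);
      return rows;
    }

    export async function addOrder(userId: number, total: number) {
      await master.query('INSERT INTO orders (user_id, total) VALUES (?, ?)', [userId, total]);
    }

Keep in mind that replication is asynchronous by default, so a read that immediately follows a write may briefly see stale data.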
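The caching option is usually implemented as cache-aside: check the cache, fall back to the database on a miss, then populate the cache. A sketch with ioredis and mysql2, where the key format, the 60-second TTL, and the products table are assumptions:

    import Redis from 'ioredis';
    import mysql from 'mysql2/promise';

    const redis = new Redis({ host: 'cache-1' }); // hypothetical cache host
    const pool = mysql.createPool({ host: 'db-main', user: 'app', password: 'secret', database: 'app' });

    export async function getProduct(id: number) {
      const key = `product:${id}`;
      const hit = await redis.get(key);
      if (hit !== null) return JSON.parse(hit); // served without touching MySQL

      const [rows] = await pool.query('SELECT * FROM products WHERE id = ?', [id]);
      const product = (rows as any[])[0] ?? null;
      // Cache the row for 60 seconds so repeated reads skip the database.
      await redis.set(key, JSON.stringify(product), 'EX', 60);
      return product;
    }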
+10
source

If you have not already done so, you could try running your application on an application server, to get some middleware between your application and the database. Most application servers do their own connection pooling (because opening a fresh connection from a web application to the database is still genuinely expensive). In addition, you should be able to configure the application server to use shared connections, which, as the name implies, lets requests share connections whenever possible.

In short: use an application server. If you already do, tell us which one, and we can look at how to optimize its configuration.
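Application servers differ in how this is configured, so here is a rough sketch of the same idea in a Node stack: one bounded pool shared by the whole process, sized so the database is never asked for more connections than it can handle (all connection parameters are hypothetical):

    import mysql from 'mysql2/promise';

    // A single shared pool: at most 20 connections are ever opened;
    // extra requests wait in a queue instead of opening new connections.
    const pool = mysql.createPool({
      host: 'db-host',          // hypothetical
      user: 'app',
      password: 'secret',
      database: 'app',
      connectionLimit: 20,      // hard cap on concurrent connections
      waitForConnections: true, // queue requests instead of failing
      queueLimit: 0,            // 0 = unbounded queue
    });

    export default pool;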

+3
source

Replication - a master plus any number of read replicas. This gives you "unlimited" read scaling.

Disconnect - a connection should not stay open longer than necessary (see the sketch after this list).

Unix, not Windows - need I elaborate?

InnoDB - Use InnoDB, not MyISAM.

SlowLog - set long_query_time to 1 and look at the top few queries; optimize them. See pt-query-digest for help summarizing the slow log.
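For the Disconnect point, a minimal sketch of the borrow-use-release pattern with mysql2 (pool parameters and the query are placeholders): the finally block guarantees the connection goes back to the pool even when the work throws.

    import mysql, { PoolConnection } from 'mysql2/promise';

    const pool = mysql.createPool({ host: 'db-host', user: 'app', password: 'secret', database: 'app', connectionLimit: 20 });

    // Borrow a connection only for as long as the work needs it.
    async function withConnection<T>(work: (conn: PoolConnection) => Promise<T>): Promise<T> {
      const conn = await pool.getConnection();
      try {
        return await work(conn);
      } finally {
        conn.release(); // always return the connection to the pool
      }
    }

    // Usage: the connection is held only for the duration of the query.
    const users = await withConnection((c) => c.query('SELECT * FROM users LIMIT 10'));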

+3
source

This is a typical application scaling problem, and many solutions have been developed for it - for example, Google BigTable and Amazon's Elastic services. If moving to the cloud and using the auto-scaling options they all provide is not an option for you, then you will need to build your own setup. Take a look at the docs for Postgres and MySQL , and you will find that the ideas are quite similar, including the concepts of

  • Sharding: distribute your client data across multiple databases and direct each client request to the right database instance.

  • Load balancing: deploy your application on multiple servers and use middleware to route requests based on server load. Keeping the databases in sync will require some kind of DB data synchronization tool, such as SymmetricDS .

This is not a complete overview of all your options, but it can help you get started.

+2
source

There are many things you should investigate for this problem:

- How many simultaneous connections are there? You can always add more RAM and raise the maximum number of connections; MySQL can be configured to allow a very large number of them (a sketch of checking and raising the limit follows this list).

- Make sure your application closes connections. Even with a pool, the application must return connections to the pool.

- Run the database on a separate server.

- Make sure your queries are optimized. One slow query can bog down the whole system.

- If other approaches fail, consider MySQL Cluster. With a high-traffic site, you may want it anyway to avoid a single point of failure.
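For the first point, a sketch of checking the current limit and usage and then raising the cap. The value 500 is an arbitrary example, SET GLOBAL requires admin privileges, and the change does not survive a restart, so persist max_connections in my.cnf as well:

    import mysql from 'mysql2/promise';

    const admin = await mysql.createConnection({ host: 'db-host', user: 'root', password: 'secret' }); // hypothetical admin credentials

    const [limit] = await admin.query("SHOW VARIABLES LIKE 'max_connections'");
    const [inUse] = await admin.query("SHOW STATUS LIKE 'Threads_connected'");
    console.log(limit, inUse); // compare the configured limit with current usage

    await admin.query('SET GLOBAL max_connections = 500'); // runtime change only
    await admin.end();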

0
source

I had a similar problem: although the application supposedly closed its connections, I could see them piling up on the SQL server as sleeping connections. After looking into the problem, I added the following setting to the connection string in web.config as a test:

 Connection Lifetime=600 

This should have killed any sleeping connections after 10 minutes - but it didn't...
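For context, Connection Lifetime is a standard ADO.NET pooling keyword: when a connection is returned to the pool, it is destroyed if it has been open for more than the given number of seconds. A sketch of a full connection string using it, with placeholder server, database, and credentials:

    Server=myServer;Database=myDb;User Id=appUser;Password=...;Max Pool Size=100;Connection Lifetime=600;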

On further review, I found pending Windows updates on both my web server and my SQL server. Once those were installed, like magic, the problem was gone!

I wish I had a more specific answer for you, but somewhere between adding "Connection Lifetime" and patching my web and SQL servers, the problem was completely fixed for me. It has been clean for 3 weeks, no problems.

0
source

In our case, we ran into the same problem when concurrent MySQL connections reached 100.

Finally, we found the excellent npm module express-myconnection ( https://www.npmjs.com/package/express-myconnection ). It automatically releases connections when a request completes, and it supports both single and pool connection strategies. A usage sketch follows.
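A minimal sketch of how the module is wired up, following its README (host, credentials, and the query are placeholders):

    import express from 'express';
    // The module ships without type definitions, so plain requires:
    const mysql = require('mysql');
    const myConnection = require('express-myconnection');

    const app = express();

    // 'pool' strategy: connections come from a pool and are released
    // automatically when the response ends.
    app.use(myConnection(mysql, {
      host: 'localhost', // placeholder
      user: 'app',
      password: 'secret',
      database: 'test',
    }, 'pool'));

    app.get('/ping', (req: any, res: any, next: any) => {
      req.getConnection((err: Error, connection: any) => {
        if (err) return next(err);
        connection.query('SELECT 1 AS ok', (err2: Error, rows: any) => {
          if (err2) return next(err2);
          res.json(rows);
        });
      });
    });

    app.listen(3000);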

It works great.

0
source
