How to automatically kill slow MongoDB queries?

Is there a way to protect the application from slow queries in MongoDB? My application has many filtering capabilities, and I control all of these queries, but at the same time I do not want performance to suffer because of a missing index definition.

+4
4 answers

The "notablescan" option, as indicated by @ghik, will prevent you from performing slow queries due to the lack of an index. However, this setting is global for the server and is not suitable for use in a production environment. It will also not protect you from any other source of slow queries other than table scans.

Unfortunately, I don't think there is a way to do what you want right now. There is a JIRA ticket proposing to add a $maxTime or $maxScan query parameter, which sounds like it would help you, so please vote for it: https://jira.mongodb.org/browse/SERVER-2212

+2

As of version 2.6, this is possible. In the 2.6 release notes you can see the following:

with MaxTimeMS, operators and developers can specify auto-cancellation of queries, providing better control over resource utilization;

Therefore, with MaxTimeMS you can specify how long you allow your query to run. For example, say I do not want a particular query to run for more than 200 ms:

 db.collection.find({ /* my query */ }).maxTimeMS(200)

What's cool is that you can specify different timeouts for different operations.

To answer the OP's question from the comments: there is no global setting for this. One reason is that different queries may have different maximum allowed times. For example, you might have a query that finds a user by its id. This is a very common operation and should run very fast (otherwise we are doing something wrong), so we cannot allow it to run longer than 200 ms.

But we may also have an aggregation query that we run once a day. For that operation, 4 seconds is normal, but we cannot tolerate it taking more than 10 seconds, so we can pass 10,000 as maxTimeMS.
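
As a sketch of those two cases (the collection names and pipeline here are illustrative assumptions, not from the original answer):

 // Hot path: a lookup by id should be nearly instant; cap it at 200 ms.
 db.users.find({ _id: userId }).maxTimeMS(200)

 // Daily batch: an aggregation that normally takes ~4 s; cap it at 10 s
 // via the maxTimeMS option of aggregate().
 db.events.aggregate(
     [ { $group: { _id: "$type", count: { $sum: 1 } } } ],
     { maxTimeMS: 10000 }
 )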

+2

There is a client-side option (maxTimeMS, starting in version 2.6).

There is no appealing global server-side option, because it would affect all databases and all operations, even those the system needs to keep running for a long time for internal work (for example, tailing the oplog for replication). Besides, some of your queries may be long-running by design.

The correct way to solve this is to monitor the currently running queries with a script and kill the ones that have been running for a long time and were initiated by a user/client. You can make exceptions for queries that are long-running by design, use different thresholds for different queries/collections, and so on.

You can then use the db.currentOp() method (in the shell) to see all current operations. The secs_running field indicates how long an operation has been running. Be careful not to kill long-running operations that were not triggered by your application/client; they may be necessary system operations, such as chunk migration in a sharded cluster (to name just one example).
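
A minimal mongo-shell sketch of that approach (the 30-second threshold and the filters are illustrative assumptions; tune them for your workload and whitelist operations that are slow by design):

 var thresholdSecs = 30;
 db.currentOp().inprog.forEach(function (op) {
     // Only consider client-issued reads that exceeded the threshold,
     // and skip internal namespaces such as the local database (oplog).
     if (op.secs_running > thresholdSecs &&
         (op.op === "query" || op.op === "getmore") &&
         op.ns && op.ns.indexOf("local.") !== 0) {
         print("Killing opid " + op.opid + " on " + op.ns +
               " (running " + op.secs_running + "s)");
         db.killOp(op.opid);
     }
 });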

+2

As far as I know, there is currently no support for killing a query by passing a time argument. However, on your development side you can set the profiler level to 2, which logs every query that is issued; from there you can see which queries take how long. I know this is not what you asked for, but it helps you find the heavyweight queries, and then in your application logic you can handle the cases where those queries arise gracefully. I usually take this approach, and it helps.
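
For example (the 100 ms cut-off below is just an illustration):

 // Profile every operation on the current database (level 2 = log all).
 db.setProfilingLevel(2)

 // Later, inspect the profile collection for slow operations,
 // e.g. anything that took longer than 100 ms, newest first.
 db.system.profile.find({ millis: { $gt: 100 } }).sort({ ts: -1 })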

0
