MySQL query timeout in Python (MySQLdb)

I am trying to set a query time limit in Python's MySQLdb. I have a situation where I have no control over the queries, but I must make sure that they do not run longer than a set time limit. I tried using signal.SIGALRM to interrupt the call to execute(), but this does not seem to work: the signal gets sent, but it is not handled until the call to execute() finishes.

I wrote a test case to demonstrate this behavior:

 #!/usr/local/bin/python2.6

 import time
 import signal

 from somewhere import get_dbc

 class Timeout(Exception):
     """ Time Exceeded """

 def _alarm_handler(*args):
     raise Timeout

 dbc = get_dbc()
 signal.signal(signal.SIGALRM, _alarm_handler)
 signal.alarm(1)
 try:
     print "START: ", time.time()
     dbc.execute("SELECT SLEEP(10)")
 except Timeout:
     print "TIMEOUT!", time.time()

"SELECT SLEEP (10)" simulates a slow query, but I see the same behavior with the actual slow query.

Result:

 START:   1254440686.69
 TIMEOUT! 1254440696.69

As you can see, it sleeps for the full 10 seconds before I get the timeout exception.
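For contrast, the same alarm pattern interrupts a pure-Python time.sleep() almost immediately (on POSIX systems), which suggests the delay is specific to the blocking C call inside the MySQL client, not to signal handling itself. A minimal sketch:

```python
import signal
import time

class Timeout(Exception):
    pass

def _alarm_handler(signum, frame):
    raise Timeout

signal.signal(signal.SIGALRM, _alarm_handler)
signal.alarm(1)           # deliver SIGALRM after 1 second
start = time.time()
try:
    time.sleep(10)        # interruptible, unlike the blocking mysql C call
    elapsed = time.time() - start
except Timeout:
    elapsed = time.time() - start   # roughly 1 second, not 10
finally:
    signal.alarm(0)       # cancel any pending alarm
```

Here the handler runs as soon as the alarm fires, so `elapsed` is about 1 second rather than 10.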

Questions:

  • Why do I not receive the signal until execute() completes?
  • Is there another reliable way to limit query execution time?
6 answers
Accepted answer

@nosklo's Twisted-based solution is elegant and workable, but if you want to avoid the dependency on Twisted, the task is still doable, e.g.:

 import multiprocessing

 class TimeoutError(Exception):  # not a builtin before Python 3.3
     pass

 def do_query(dbc, query, conn, *a, **k):
     cu = dbc.cursor()
     cu.execute(query, *a, **k)
     conn.send(cu.fetchall())    # send the result back through the pipe

 def query_with_timeout(dbc, timeout, query, *a, **k):
     conn1, conn2 = multiprocessing.Pipe(False)
     subproc = multiprocessing.Process(target=do_query,
                                       args=(dbc, query, conn2) + a,
                                       kwargs=k)
     subproc.start()
     subproc.join(timeout)
     if conn1.poll():
         return conn1.recv()
     subproc.terminate()
     raise TimeoutError("Query %r ran for >%r seconds" % (query, timeout))
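The same pattern can be exercised without a database by substituting a sleep for the slow query; `run_with_timeout`, `_slow_task`, and `QueryTimeoutError` below are hypothetical names for this sketch:

```python
import multiprocessing
import time

class QueryTimeoutError(Exception):
    pass

def _slow_task(seconds, conn):
    # Stand-in for do_query: pretend to run a query, then report back
    time.sleep(seconds)
    conn.send("result after %.1fs" % seconds)

def run_with_timeout(seconds, timeout):
    recv_end, send_end = multiprocessing.Pipe(False)
    proc = multiprocessing.Process(target=_slow_task, args=(seconds, send_end))
    proc.start()
    proc.join(timeout)              # wait at most `timeout` seconds
    if recv_end.poll():             # child finished and sent a result
        return recv_end.recv()
    proc.terminate()                # otherwise kill it and raise
    raise QueryTimeoutError("task ran for more than %r seconds" % timeout)
```

A fast task (`run_with_timeout(0.1, 2.0)`) returns its result normally; a slow one (`run_with_timeout(5.0, 0.3)`) is terminated and raises `QueryTimeoutError`.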

I tried using signal.SIGALRM to interrupt the call to execute(), but this does not seem to work. The signal gets sent, but it is not handled until the call to execute() finishes.

The MySQL client library handles interrupted system calls internally, so you won't see any side effect of the SIGALRM until the API call completes (short of killing the current thread or process).

You can try patching MySQL-Python to use the MYSQL_OPT_READ_TIMEOUT option (added in MySQL 5.0.25).


Why do I not receive the signal until execute() completes?

The query is executed inside a C library function, which blocks the Python VM from executing bytecode (and thus from running signal handlers) until the C call returns.

Is there another reliable way to limit query execution time?

This is (IMO) an ugly solution, but it does work: run the query in a separate process (either via fork() or the multiprocessing module), start an alarm timer in the main process, and when the alarm fires, send SIGINT or SIGKILL to the child process. If you use multiprocessing, you can use the Process.terminate() method.
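A minimal sketch of the terminate-the-child approach, using a sleep as a stand-in for the slow query (`run_with_kill` and `slow_query_stand_in` are hypothetical names for illustration):

```python
import multiprocessing
import signal
import time

def slow_query_stand_in():
    time.sleep(30)               # pretend this is a long-running query

def run_with_kill(timeout):
    proc = multiprocessing.Process(target=slow_query_stand_in)
    proc.start()
    proc.join(timeout)           # plays the role of the alarm timer
    if proc.is_alive():
        proc.terminate()         # sends SIGTERM; os.kill(proc.pid, signal.SIGKILL) also works
        proc.join()
    return proc.exitcode         # negative value = killed by that signal
```

After `proc.terminate()`, the child's exit code is the negated signal number (e.g. -15 for SIGTERM), which lets the parent distinguish a killed query from a completed one.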


Use adbapi. It allows you to make the database call asynchronously.

 from twisted.internet import reactor
 from twisted.enterprise import adbapi

 def bogusQuery():
     return dbpool.runQuery("SELECT SLEEP(10)")

 def printResult(l):
     # function that would be called if it didn't time out
     for item in l:
         print item

 def handle_timeout():
     # function that will be called when it times out
     reactor.stop()

 dbpool = adbapi.ConnectionPool("MySQLdb", user="me", password="myself",
                                host="localhost", database="async")
 bogusQuery().addCallback(printResult)
 reactor.callLater(4, handle_timeout)
 reactor.run()

General notes

Recently I had the same problem, with several conditions that the solution had to satisfy:

  • the solution must be thread-safe
  • multiple database connections from the same machine may be active at the same time; kill the exact connection/query
  • the application contains connections to many different databases: one portable handler per database host

We had the following class layout (unfortunately, I cannot post real sources):

 class AbstractModel:
     pass

 class FirstDatabaseModel(AbstractModel):
     pass  # Connection to one DB host

 class SecondDatabaseModel(AbstractModel):
     pass  # Connection to one DB host

And several threads were created for each model.


Python 3.2 Solution

In our application, one model = one database. So I created a "service connection" for each model (so that we can execute KILL over a parallel connection). So if one instance of FirstDatabaseModel was created, 2 database connections were used; if 5 instances were created, only 6 connections were used:

 class AbstractModel:

     _service_connection = None  # Formal declaration

     def __init__(self):
         ''' Somehow load config and create connection '''
         self.config = ...  # load configuration here
         self.connection = MySQLFromConfig(self.config)
         self._init_service_connection()

         # Get connection ID (pseudocode)
         self.connection_id = self.connection.FetchOneCol('SELECT CONNECTION_ID()')

     def _init_service_connection(self):
         ''' Initialize one singleton connection for model '''
         cls = type(self)
         if cls._service_connection is not None:
             return

         cls._service_connection = MySQLFromConfig(self.config)
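The connection arithmetic above (5 instances → 6 connections) can be checked with a stripped-down stand-in for MySQLFromConfig; `FakeConnection` and `DemoModel` are hypothetical names used only for this sketch:

```python
class FakeConnection:
    """Stand-in for MySQLFromConfig; just counts how many get created."""
    created = 0

    def __init__(self):
        FakeConnection.created += 1

class DemoModel:
    _service_connection = None  # one shared "service" connection per class

    def __init__(self):
        self.connection = FakeConnection()      # one connection per instance
        if type(self)._service_connection is None:
            type(self)._service_connection = FakeConnection()

models = [DemoModel() for _ in range(5)]
```

Five instances share a single service connection, so six connections exist in total.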

Now we need a killer:

 def _kill_connection(self):
     # Add your own mysql data escaping
     sql = 'KILL CONNECTION {}'.format(self.connection_id)

     # Do your own connection check and renewal
     type(self)._service_connection.execute(sql)

Note: connection.execute = create cursor, execute, close cursor.

And make access to the service connection thread-safe with threading.Lock:

 def _init_service_connection(self):
     ''' Initialize one singleton connection for model '''
     cls = type(self)
     if cls._service_connection is not None:
         return

     cls._service_connection = MySQLFromConfig(self.config)
     cls._service_connection_lock = threading.Lock()

 def _kill_connection(self):
     # Add your own mysql data escaping
     sql = 'KILL CONNECTION {}'.format(self.connection_id)
     cls = type(self)

     # Do your own connection check and renewal
     try:
         cls._service_connection_lock.acquire()
         cls._service_connection.execute(sql)
     finally:
         cls._service_connection_lock.release()

Finally, add the timed execute method using threading.Timer :

 def timed_query(self, sql, timeout=5):
     kill_query_timer = threading.Timer(timeout, self._kill_connection)
     kill_query_timer.start()
     try:
         self.connection.execute(sql)  # the potentially long-running query
     finally:
         kill_query_timer.cancel()
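The timer mechanics can be demonstrated without a database: below, a stop flag plays the role of KILL CONNECTION, and `threading.Timer` arms the "kill" just as in timed_query. The names `interruptible_work` and `timed_call` are hypothetical:

```python
import threading
import time

def interruptible_work(stop_event, duration, step=0.05):
    # Stand-in for a long query; polls a flag the way KILL would abort it
    waited = 0.0
    while waited < duration:
        if stop_event.is_set():
            return "killed"
        time.sleep(step)
        waited += step
    return "finished"

def timed_call(duration, timeout):
    stop = threading.Event()
    killer = threading.Timer(timeout, stop.set)  # plays the role of _kill_connection
    killer.start()
    try:
        return interruptible_work(stop, duration)
    finally:
        killer.cancel()   # query finished in time: disarm the killer
```

A short task (`timed_call(0.1, 5.0)`) finishes normally and the timer is cancelled; a long one (`timed_call(10.0, 0.2)`) is "killed" when the timer fires.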

Why do I not receive the signal until execute() completes?

A process that is waiting on network I/O is in an uninterruptible sleep (a UNIX thing, not related to Python or MySQL). The signal is only acted on after the system call completes (possibly surfacing as an EINTR error code, although I am not sure).

Is there another reliable way to limit query execution time?

I think this is usually done with an external tool such as mkill, which monitors MySQL for long-running queries and kills them.

-1
