Can I write to many different log files using a single logger?

My application manages the state of several long-lived objects called Requests. Each Request has a unique identifier and goes through its own life cycle, and new Requests enter the system over time.

I would like to write a separate log file for each Request, tracking every interesting change in that Request's state. That way, if I ever want the full history of Request X, I can just go and look at X.log.

Obviously, I could roll my own solution with plain files, but I would like to do this through Python's logging framework. One approach would be to create a new logger instance for each unique Request, point it at the right file, and then discard it (sketched below). But this seems like the wrong design: it creates lots of logger objects that are never garbage collected, and their number grows without bound as new Requests keep arriving.
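To make the approach I want to avoid concrete, here is a rough sketch of what I mean (the function name and file-naming scheme are made up for illustration):

    import logging

    def get_request_logger(request_id):
        # logging.getLogger() caches every logger it creates in a
        # module-level manager, so these per-request loggers are never
        # garbage collected and accumulate without bound.
        logger = logging.getLogger('request.%s' % request_id)
        if not logger.handlers:
            logger.addHandler(logging.FileHandler('%s.log' % request_id))
        return logger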

I was hoping to somehow configure a single logger, perhaps with a custom handler, so that output could be redirected to different files depending on the identifier of the incoming Request. I have looked through the docs, but everything I find seems to operate on records coming in, rather than on where the output goes.

Is this possible?

2 answers

Looking at the RotatingFileHandler code in logging.handlers finally gave me enough hints to solve this. The key realization is that when logging a message you can pass an optional extra keyword argument, a dictionary of attributes that get stored on the LogRecord. Those attributes are accessible from the Handler, so inside the Handler we can switch the output stream based on a value supplied by the caller.

    import logging

    class MultiFileHandler(logging.FileHandler):
        # A FileHandler that can switch its output file per record.

        def __init__(self, filename, mode, encoding=None, delay=0):
            logging.FileHandler.__init__(self, filename, mode, encoding, delay)

        def emit(self, record):
            if self.should_change_file(record):
                self.change_file(record.file_id)
            logging.FileHandler.emit(self, record)

        def should_change_file(self, record):
            # Only switch when the record carries a file_id that differs
            # from the file currently being written to.
            if not hasattr(record, 'file_id') or record.file_id == self.baseFilename:
                return False
            return True

        def change_file(self, file_id):
            # Close the current stream and reopen on the new file.
            self.stream.close()
            self.baseFilename = file_id
            self.stream = self._open()

    if __name__ == '__main__':
        logger = logging.getLogger('request_logger')
        logger.setLevel(logging.DEBUG)
        handler = MultiFileHandler(filename='out.log', mode='a')
        handler.setLevel(logging.DEBUG)
        logger.addHandler(handler)

        # Log some messages to the original file
        logger.debug('debug message')
        logger.info('info message')

        # Log some messages to a different file
        logger.debug('debug message', extra={'file_id': 'changed.log'})
        logger.info('info message', extra={'file_id': 'changed.log'})
        logger.warning('warning message', extra={'file_id': 'changed.log'})
        logger.error('error message', extra={'file_id': 'changed.log'})
        logger.critical('critical message', extra={'file_id': 'changed.log'})
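One design consequence visible in change_file: every time a record arrives with a file_id different from the current file, the handler closes its stream and reopens a new one. That is cheap when one Request logs in bursts, but if records for many Requests arrive interleaved, the handler will spend a lot of time closing and reopening files.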

It sounds like you are looking for a rather different logging system, one that doesn't rely on global state. Have you looked at Logbook?

Alternatively, if you must avoid third-party dependencies, you can use logging.addLevelName to register a level for each request, and attach one handler per request whose filter rejects every log record that doesn't belong to that request. When the request goes out of scope, you can call the handler's close method and remove it from the logger.

This will most likely not scale well, since every one of those per-request filters will be run for every log message.
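For what it's worth, here is a minimal sketch of the handler-plus-filter idea, using a logging.Filter keyed on a request_id attribute rather than custom levels (all names here are hypothetical):

    import logging

    class RequestFilter(logging.Filter):
        # Pass only records tagged (via extra=) with this request's id.
        def __init__(self, request_id):
            logging.Filter.__init__(self)
            self.request_id = request_id

        def filter(self, record):
            return getattr(record, 'request_id', None) == self.request_id

    logger = logging.getLogger('request_logger')
    logger.setLevel(logging.DEBUG)

    def open_request_log(request_id):
        # One handler per live request; its filter drops everything else.
        handler = logging.FileHandler('%s.log' % request_id)
        handler.addFilter(RequestFilter(request_id))
        logger.addHandler(handler)
        return handler

    def close_request_log(handler):
        # Call this when the request goes out of scope.
        logger.removeHandler(handler)
        handler.close()

    handler_x = open_request_log('X')
    logger.info('interesting state change', extra={'request_id': 'X'})
    close_request_log(handler_x)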

