My application manages the state of several long-lived objects called Requests. Each Request has a unique identifier and goes through its own life cycle, and new Requests enter the system over time.
I would like to write a separate log file for each Request, tracking every interesting change in that Request's state. Then, if I want the full history of Request X, I can simply look at X.log.
Obviously, I could roll this by hand with plain files, but I would rather use Python's logging framework. One approach would be to create a new logger instance for each unique Request, configure it to point at the desired file, and then forget about it. That seems like a bad design, though: it accumulates loggers that are never garbage collected, and it grows without bound as new Requests keep arriving.
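To make that concrete, here is a minimal sketch of the approach I mean (the function name and the `<id>.log` naming scheme are just illustrative):

```python
import logging

def make_request_logger(request_id):
    # logging.getLogger caches loggers for the life of the process,
    # so one logger per Request accumulates without bound
    logger = logging.getLogger(f"request.{request_id}")
    if not logger.handlers:
        handler = logging.FileHandler(f"{request_id}.log")
        handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
        logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger

make_request_logger("X").info("created")  # appends to X.log
```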
Instead, I was hoping to configure a single logger, perhaps with a custom handler, so that I could redirect output to different files depending on the identifier attached to the incoming record. I have looked through the docs, but everything I see seems to operate on filtering incoming records rather than routing them to different output destinations.
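Something along these lines is what I was imagining (`RequestFileHandler` and the `request_id` record attribute are hypothetical names of mine, not part of the logging module):

```python
import logging

class RequestFileHandler(logging.Handler):
    """Route each record to <request_id>.log, keyed on a record attribute."""

    def __init__(self):
        super().__init__()
        self._handlers = {}  # request_id -> FileHandler

    def emit(self, record):
        # the caller attaches request_id via the `extra` argument
        request_id = getattr(record, "request_id", "unknown")
        handler = self._handlers.get(request_id)
        if handler is None:
            handler = logging.FileHandler(f"{request_id}.log")
            self._handlers[request_id] = handler
        handler.emit(record)

log = logging.getLogger("requests")
log.setLevel(logging.INFO)
log.addHandler(RequestFileHandler())
log.info("state -> ACTIVE", extra={"request_id": "X"})  # lands in X.log
```

The open question for me is whether something like this is idiomatic, or whether the framework already provides a routing mechanism I have missed.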
Is it possible?