Log4j: one log file per request

We have a WebLogic batch application that processes several requests from consumers at the same time. We use log4j for logging. At the moment the logs for all requests go into a single log file, which makes it tedious to debug a problem for a particular request, since everything is mixed together in one file.

So we plan to have one log file per request. The consumer sends the request identifier for which processing has to be performed, and in practice there can be many consumers sending request identifiers to our application at the same time. The question is how to segregate the log files based on the request.

We cannot start and stop the production server for each request, which rules out the approach of overriding the file appender with a timestamp or request identifier, as explained in the following article: http://veerasundar.com/blog/2009/08/how-to-create-a-new-log-file-for-each-time-the-application-runs/

I also tried playing around with these alternatives:

http://cognitivecache.blogspot.com/2008/08/log4j-writing-to-dynamic-log-file-for.html

http://www.mail-archive.com/log4j-user@logging.apache.org/msg05099.html

This approach gives the desired result, but it does not work properly when multiple requests are sent at the same time: because of a concurrency issue, log entries end up in the wrong files.
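For reference, the technique in those posts boils down to attaching a FileAppender per request at runtime. A minimal sketch of that idea in log4j 1.x might look like the following (the class name, file names and pattern are my own illustration, not the exact code from those posts):

import org.apache.log4j.FileAppender;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class PerRequestAppenderDemo {

    private static final Logger log = Logger.getLogger(PerRequestAppenderDemo.class);

    public void process(String requestId) throws Exception {
        // Create a file appender named after the request and attach it on the fly
        FileAppender appender = new FileAppender(
                new PatternLayout("%d [%t] %-5p %c - %m%n"),
                requestId + ".log", false);
        appender.setName("request-" + requestId);
        log.addAppender(appender);
        try {
            log.debug("Processing request " + requestId);
            // ... actual processing ...
        } finally {
            // Detach and close the appender once the request is done
            log.removeAppender(appender);
            appender.close();
        }
    }
}

Because the appender is attached to a logger that all worker threads share, every appender added this way receives events from every thread while it is attached, which is exactly the kind of cross-contamination described above.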

Any help would be appreciated. Thanks in advance.

+3
3 answers

Here's my question on the same topic: dynamically creating and destroying logging appenders

I follow up on this in a thread on the Logback mailing list, where I discuss how it works out in practice: http://www.qos.ch/pipermail/logback-user/2009-August/001220.html

Ceki Gülcü (the creator of log4j) did not think it was a good idea and suggested using Logback instead.

We went ahead and did it anyway, using a custom file appender. See the discussions linked above for more details.
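For illustration, a custom appender along those lines could route each event to a per-request file based on an MDC value. This is only a hedged sketch of the general idea in log4j 1.x, not the appender from the linked discussion:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.FileAppender;
import org.apache.log4j.spi.LoggingEvent;

public class MdcRoutingAppender extends AppenderSkeleton {

    // One delegate FileAppender per request id
    private final Map<String, FileAppender> delegates = new ConcurrentHashMap<>();

    @Override
    protected void append(LoggingEvent event) {
        Object requestId = event.getMDC("requestId");
        String key = requestId == null ? "unknown" : requestId.toString();
        delegates.computeIfAbsent(key, id -> {
            try {
                // Lazily open <requestId>.log the first time that id is seen
                return new FileAppender(getLayout(), id + ".log", true);
            } catch (java.io.IOException e) {
                throw new RuntimeException(e);
            }
        }).doAppend(event);
    }

    @Override
    public void close() {
        delegates.values().forEach(FileAppender::close);
        closed = true;
    }

    @Override
    public boolean requiresLayout() {
        return true;
    }
}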

+4

Look at SiftingAppender, which ships with logback (log4j's successor); it is designed to handle the creation of appenders based on runtime criteria.

If your application needs to create just one log file per session, simply create a discriminator based on the session ID. Writing a discriminator involves 3 or 4 lines of code and should therefore be fairly easy. Shout on the logback-user mailing list if you need help.
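For example, a discriminator keyed on a session id stored in the MDC could look something like this (a minimal sketch; the MDC key name sessionId and the class name are my own assumptions):

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.sift.AbstractDiscriminator;

public class SessionIdDiscriminator extends AbstractDiscriminator<ILoggingEvent> {

    private static final String KEY = "sessionId";

    @Override
    public String getDiscriminatingValue(ILoggingEvent event) {
        // Pick the session id from the MDC; fall back to "unknown" if absent
        String sessionId = event.getMDCPropertyMap().get(KEY);
        return sessionId == null ? "unknown" : sessionId;
    }

    @Override
    public String getKey() {
        return KEY;
    }
}

It can then be referenced from the SiftingAppender configuration via the class attribute on the discriminator element, e.g. <discriminator class="com.example.SessionIdDiscriminator"/> (package name assumed).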

+3

This problem is handled very well by Logback. I suggest opting for it if you have the freedom to do so.

Assuming that is what you need, SiftingAppender is what you are after. It allows you to separate log files according to some runtime value, which means you have a wide choice of ways to split your log files.

To split your files by requestId, you can do something like this:

logback.xml

 <configuration>
   <appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
     <discriminator>
       <key>requestId</key>
       <defaultValue>unknown</defaultValue>
     </discriminator>
     <sift>
       <appender name="FILE-${requestId}" class="ch.qos.logback.core.FileAppender">
         <file>${requestId}.log</file>
         <append>false</append>
         <layout class="ch.qos.logback.classic.PatternLayout">
           <pattern>%d [%thread] %level %mdc %logger{35} - %msg%n</pattern>
         </layout>
       </appender>
     </sift>
   </appender>
   <root level="DEBUG">
     <appender-ref ref="SIFT" />
   </root>
 </configuration>

As you can see (inside the discriminator element), the files used for writing the logs are distinguished by requestId. This means each request will be logged to the file with the corresponding requestId. So if you had two requests with requestId=1 and one request with requestId=2, you would end up with 2 log files: 1.log (2 entries) and 2.log (1 entry).

At this point you may wonder how to set the key. This is done by putting key-value pairs into the MDC (note that the key must match the one defined in the logback.xml file):

RequestProcessor.java

 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.slf4j.MDC;

 public class RequestProcessor {

     private static final Logger log = LoggerFactory.getLogger(RequestProcessor.class);

     public void process(Request request) {
         // The MDC key must match the <key> configured in logback.xml
         MDC.put("requestId", request.getId());
         log.debug("Request received: {}", request);
     }
 }

And that's basically it for simple usage. Every time a request arrives with a new (not yet seen) identifier, a new log file will be created for it.
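One additional point, not covered in the answer above: in a batch server where worker threads are pooled, it is worth clearing the MDC entry once processing finishes, so that a reused thread does not carry the previous requestId into the next task. A hedged sketch of that variant (the class name MdcAwareRequestProcessor is my own):

 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.slf4j.MDC;

 public class MdcAwareRequestProcessor {

     private static final Logger log = LoggerFactory.getLogger(MdcAwareRequestProcessor.class);

     public void process(String requestId) {
         MDC.put("requestId", requestId);
         try {
             log.debug("Request received: {}", requestId);
             // ... actual processing ...
         } finally {
             // Remove the entry so the next task on this pooled thread starts clean
             MDC.remove("requestId");
         }
     }
 }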

+1
