I was asked to feed our log4j log files (NOT using a SocketAppender for now) into Logstash as JSON, which I will then ship to Elasticsearch. Our code uses RollingFileAppender. Here is an example log line:
2016-04-22 16:43:25,172 ERROR :SomeUser : 2 [com.mycompany.SomeClass] AttributeSchema 'Customer |Customer |Individual|Individual|Quarter|Date' : 17.203 The Log Message.
Here is the ConversionPattern value from our log4j configuration:
<param name="ConversionPattern" value="%d{ISO8601} %p %x %X{username}:%t [%c] %m %n" />
Can someone please help me write a Logstash grok filter to parse this line? I have the following:
filter {
  if [type] == "log4j" {
    grok {
      match => ["message", "%{TIMESTAMP_ISO8601:logdate} %{LOGLEVEL:loglevel} %{GREEDYDATA:message}"]
    }
    date {
      match => ["logdate", "yyyy-MM-dd HH:mm:ss,SSS", "ISO8601"]
    }
  }
}
But of course, this captures everything after the priority as the message. I want to further isolate AT LEAST the following fields (as defined in the log4j PatternLayout); my best guess at a fuller pattern follows the list:
- User (%X{username})
- Class name ([%c])
- Thread (%t)
- Nested Diagnostic Context (%x)
- The message itself (%m)
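My attempt at a fuller pattern is below. It matches the sample line as printed, but the field names (ndc, username, thread, class, logmessage) are just my own choices, and because %x is empty in the sample I had to guess at the colon handling, so lines with a non-empty NDC may not match:

filter {
  if [type] == "log4j" {
    grok {
      # ndc/username/thread/class/logmessage are field names I made up;
      # the colon handling is guesswork because %x is empty in the sample line.
      match => ["message", "%{TIMESTAMP_ISO8601:logdate} %{LOGLEVEL:loglevel} %{DATA:ndc}:%{DATA:username} : %{NOTSPACE:thread} \[%{JAVACLASS:class}\] %{GREEDYDATA:logmessage}"]
    }
    date {
      match => ["logdate", "yyyy-MM-dd HH:mm:ss,SSS", "ISO8601"]
    }
  }
}

I capture the tail into logmessage rather than message so grok does not turn the original message field into an array; as I understand it, grok's overwrite option could replace message in place instead. Is this on the right track, and how should the NDC part be handled properly?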