How to do two status checks in logstash and write a better configuration file

I am using logstash 1.4.2.

I have logstash-forwarder.conf on a client log server, like this:

    {
      "network": {
        "servers": [ "xxx.xxx.xxx.xxx:5000" ],
        "timeout": 15,
        "ssl ca": "certs/logstash-forwarder.crt"
      },
      "files": [
        {
          "paths": [ "/var/log/messages" ],
          "fields": { "type": "syslog" }
        },
        {
          "paths": [ "/var/log/secure" ],
          "fields": { "type": "linux-syslog" }
        }
      ]
    }

===========================================================

On the logstash server:

1. filter.conf

    filter {
      if [type] == "syslog" {
        date {
          locale => "en"
          match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
          timezone => "Asia/Kathmandu"
          target => "@timestamp"
          add_field => { "debug" => "timestampMatched" }
        }
        grok {
          match => { "message" => "\[%{WORD:messagetype}\]%{GREEDYDATA:syslog_message}" }
          add_field => [ "received_at", "%{@timestamp}" ]
          add_field => [ "received_from", "%{host}" ]
        }
        syslog_pri { }
      }
      if [type] == "linux-syslog" {
        date {
          locale => "en"
          match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
          timezone => "Asia/Kathmandu"
          target => "@timestamp"
          add_field => { "debug" => "timestampMatched" }
        }
        grok {
          match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
          add_field => [ "received_at", "%{@timestamp}" ]
          add_field => [ "received_from", "%{host}" ]
        }
        syslog_pri { }
        mutate {
          replace => [ "syslog_timestamp", "%{syslog_timestamp} +0545" ]
        }
      }
    }

===========================================================

2. output.conf

    output {
      if [messagetype] == "WARNING" {
        elasticsearch { host => "xxx.xxx.xxx.xxx" }
        stdout { codec => rubydebug }
      }
      if [messagetype] == "ERROR" {
        elasticsearch { host => "xxx.xxx.xxx.xxx" }
        stdout { codec => rubydebug }
      }
      if [type] == "linux-syslog" {
        elasticsearch { host => "xxx.xxx.xxx.xxx" }
        stdout { codec => rubydebug }
      }
    }

===========================================================

I want all the logs from /var/log/secure to be shipped, but only ERROR and WARNING messages from /var/log/messages. I know this is not a very good configuration; I'd like someone to show me a better way to do this.

1 answer

I prefer to make decisions about events in the filter block. My input and output blocks are usually pretty simple. From there I see two options.
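For context, a minimal input block for receiving events from logstash-forwarder might look like the sketch below. The lumberjack input is what logstash-forwarder speaks; the port and certificate paths here are assumptions that must match your forwarder's "network" settings:

    input {
      lumberjack {
        # port and SSL paths are assumptions; match them to the
        # "network" section of your logstash-forwarder.conf
        port => 5000
        ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
        ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
      }
    }

With input and output kept simple like this, all the routing logic lives in the filter block.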

Use a drop filter

The drop filter causes the event to be discarded. It will never reach your outputs:

    filter {
      # other processing goes here
      if [type] == "syslog" and [messagetype] not in ["ERROR", "WARNING"] {
        drop {}
      }
    }

The upside of this is that it is very simple.

The downside is that the event is simply gone. It will not show up anywhere at all, which is fine if that is what you want.

Use a tag

Many filters allow you to add tags, which are useful for passing decisions between plugins. You can add a tag that tells your output block to send the event to ES:

    filter {
      # other processing goes here
      if [type] == "linux-syslog" or [messagetype] in ["ERROR", "WARNING"] {
        mutate { add_tag => "send_to_es" }
      }
    }

    output {
      if "send_to_es" in [tags] {
        elasticsearch {
          # config goes here
        }
      }
    }

The upside of this is that it gives you fine-grained control.

The downside is that it is a bit more work, and your ES data ends up slightly dirty (the tag will be visible and searchable in ES).
