Logstash output to elasticsearch index and mapping

I am trying to get logstash to output to elasticsearch, but I'm not sure how to use the mapping that I defined in elasticsearch ...

In Kibana, I did this:

I created the index and mapping as follows:

    PUT /kafkajmx2
    {
      "mappings": {
        "kafka_mbeans": {
          "properties": {
            "@timestamp": { "type": "date" },
            "@version": { "type": "integer" },
            "host": { "type": "keyword" },
            "metric_path": { "type": "text" },
            "type": { "type": "keyword" },
            "path": { "type": "text" },
            "metric_value_string": { "type": "keyword" },
            "metric_value_number": { "type": "float" }
          }
        }
      }
    }
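For reference, the mapping can be double-checked in Kibana with a quick request like this (just a sanity check against the index above, not part of the setup itself):

    GET /kafkajmx2/_mapping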

I can write data to it like this:

    POST /kafkajmx2/kafka_mbeans
    {
      "metric_value_number": 159.03478490788203,
      "path": "/home/usrxxx/logstash-5.2.0/bin/jmxconf",
      "@timestamp": "2017-02-12T23:08:40.934Z",
      "@version": "1",
      "host": "localhost",
      "metric_path": "node1.kafka.server:type=BrokerTopicMetrics,name=TotalFetchRequestsPerSec.FifteenMinuteRate",
      "type": null
    }

Now my logstash output is as follows:

    input {
      kafka {
        kafka details here
      }
    }
    output {
      elasticsearch {
        hosts => "http://elasticsearch:9050"
        index => "kafkajmx2"
      }
    }

It just writes to the kafkajmx2 index, but doesn't use the mapping. When I query it like this in Kibana:

    GET /kafkajmx2/kafka_mbeans/_search?q=*
    {
    }

it returns this:

  { "_index": "kafkajmx2", "_type": "logs", "_id": "AVo34xF_j-lM6k7wBavd", "_score": 1, "_source": { "@timestamp": "2017-02-13T14:31:53.337Z", "@version": "1", "message": """ {"metric_value_number":0,"path":"/home/usrxxx/logstash-5.2.0/bin/jmxconf","@timestamp":"2017-02-13T14:31:52.654Z","@version":"1","host":"localhost","metric_path":"node1.kafka.server:type=SessionExpireListener,name=ZooKeeperAuthFailuresPerSec.Count","type":null} """ } } 

How do I tell the elasticsearch output to use the kafka_mbeans mapping?

----- EDIT -----

I tried changing my output like this, but I still get the same results:

    output {
      elasticsearch {
        hosts => "http://10.204.93.209:9050"
        index => "kafkajmx2"
        template_name => "kafka_mbeans"
        codec => plain {
          format => "%{message}"
        }
      }
    }

The data in elasticsearch should look like this:

 { "@timestamp": "2017-02-13T14:31:52.654Z", "@version": "1", "host": "localhost", "metric_path": "node1.kafka.server:type=SessionExpireListener,name=ZooKeeperAuthFailuresPerSec.Count", "metric_value_number": 0, "path": "/home/usrxxx/logstash-5.2.0/bin/jmxconf", "type": null } 

-------- EDIT 2 --------------

I got the message to parse as JSON by adding a filter like this:

    input {
      kafka {
        ...kafka details....
      }
    }
    filter {
      json {
        source => "message"
        remove_field => ["message"]
      }
    }
    output {
      elasticsearch {
        hosts => "http://node1:9050"
        index => "kafkajmx2"
        template_name => "kafka_mbeans"
      }
    }

It still doesn't use the template, but at least it parses the JSON correctly ... so now I get the following:

  { "_index": "kafkajmx2", "_type": "logs", "_id": "AVo4a2Hzj-lM6k7wBcMS", "_score": 1, "_source": { "metric_value_number": 0.9967205071482902, "path": "/home/usrxxx/logstash-5.2.0/bin/jmxconf", "@timestamp": "2017-02-13T16:54:16.701Z", "@version": "1", "host": "localhost", "metric_path": "kafka1.kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent.Value", "type": null } } 
2 answers

What you need to change is very simple. First, use the json codec in your kafka input. There is no need for the json filter; you can remove it.

    kafka {
      ...kafka details....
      codec => "json"
    }

Then, in your elasticsearch output, you are not specifying the document type (the document_type parameter below), which is important, because otherwise it defaults to logs (as you can see), and that does not match your kafka_mbeans mapping type. Also, you do not need to use a template, since your index already exists. Make the following changes:

    elasticsearch {
      hosts => "http://node1:9050"
      index => "kafkajmx2"
      document_type => "kafka_mbeans"
    }
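If the change works, a type-scoped search in Kibana (the same style of query used earlier in the question, shown here only as a way to verify, assuming the same index and type names) should return documents whose _type is kafka_mbeans rather than logs:

    GET /kafkajmx2/kafka_mbeans/_search?q=*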

Alternatively, this is controlled by the template_name parameter on the elasticsearch output.

    elasticsearch {
      hosts => "http://elasticsearch:9050"
      index => "kafkajmx2"
      template_name => "kafka_mbeans"
    }

One warning: if you want to start creating time-based indexes, such as one index per week, you will need to take a few more steps to ensure that your mapping stays with each one. You have a couple of options:

  • Create an elasticsearch index template and define it to apply to indexes via a glob, for example kafkajmx2-* (a sketch of this is shown after this list).
  • Use the template parameter on the output, which points to a JSON file defining your mapping; that mapping will then be used for all indexes created through this output.
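For the first option, a minimal sketch might look like the following, assuming Elasticsearch 5.x (where the index pattern field in a template is called template) and reusing the kafka_mbeans mapping from the question; the template name kafka_mbeans_template is just an illustrative choice:

    PUT /_template/kafka_mbeans_template
    {
      "template": "kafkajmx2-*",
      "mappings": {
        "kafka_mbeans": {
          "properties": {
            "@timestamp": { "type": "date" },
            "@version": { "type": "integer" },
            "host": { "type": "keyword" },
            "metric_path": { "type": "text" },
            "type": { "type": "keyword" },
            "path": { "type": "text" },
            "metric_value_string": { "type": "keyword" },
            "metric_value_number": { "type": "float" }
          }
        }
      }
    }

Any index whose name matches kafkajmx2-* (for example a weekly kafkajmx2-2017.07) would then pick up this mapping automatically when it is created.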
