Selective parsing of a CSV file using Logstash

I am trying to load data into Elasticsearch from CSV files via Logstash. The first row of each file contains the column names. Is there a specific way to skip this line when parsing a file? Also, is there a convention or filter I could use so that, if an exception occurs on one line, Logstash moves on to the next line?

My configuration file looks like this:

input {
    file {
        path => "/home/sagnik/work/logstash-1.4.2/bin/promosms_dec15.csv"
        type => "promosms_dec15"
        start_position => "beginning"
        sincedb_path => "/dev/null"
    }
}

filter {
    csv {
        columns => ["Comm_Plan","Queue_Booking","Order_Reference","Generation_Date"]
        separator => ","
    }
    ruby {
        code => "event['Generation_Date'] = Date.parse(event['Generation_Date']);"
    }
}

output {
    elasticsearch {
        action => "index"
        host => "localhost"
        index => "promosms-%{+dd.MM.YYYY}"
        workers => 1
    }
}

The first few lines of my CSV file look like this:

"Comm_Plan","Queue_Booking","Order_Reference","Generation_Date"
"","No","FMN1191MVHV","31/03/2014"
"","No","FMN1191N64G","31/03/2014"
"","No","FMN1192OPMY","31/03/2014"

Is there a way to skip the first line? In addition, if my CSV file ends with a newline (an empty last line), I get an error. How can I skip empty lines, whether they occur at the end of the file or between two data rows?

The simplest way to handle this is to check whether the first field's value is the same as its column name, and drop the event if it is (this goes after your csv and ruby filters):

if [Comm_Plan] == "Comm_Plan" {
  drop { }
}

However, if there is any chance that the field could legitimately contain the same value as its column name, you are better off checking all of the fields:

if [Comm_Plan] == "Comm_Plan" and [Queue_Booking] == "Queue_Booking" and [Order_Reference] == "Order_Reference" and [Generation_Date] == "Generation_Date" {
  drop { }
}

That way, the row is only dropped when every field matches its column name; if just one of them happens to contain that value, the event is still processed.
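
For reference, here is a sketch of how that header check could sit inside the filter block from the question, together with a guard for empty lines and a rescue around the date conversion. The empty-line check and the begin/rescue are illustrative additions to address the second part of the question, not part of the original answer:

filter {
    # drop events whose raw line is empty or only whitespace,
    # e.g. a trailing newline at the end of the file
    if [message] =~ /^\s*$/ {
        drop { }
    }
    csv {
        columns => ["Comm_Plan","Queue_Booking","Order_Reference","Generation_Date"]
        separator => ","
    }
    # drop the header row
    if [Comm_Plan] == "Comm_Plan" and [Queue_Booking] == "Queue_Booking" and [Order_Reference] == "Order_Reference" and [Generation_Date] == "Generation_Date" {
        drop { }
    }
    # cancel the event instead of raising if the date cannot be parsed
    ruby {
        code => "begin; event['Generation_Date'] = Date.parse(event['Generation_Date']); rescue; event.cancel; end"
    }
}

With this in place, empty or malformed rows are silently skipped and processing continues with the next line.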
