Delete lines in a file - Ruby

What is a smart way to delete lines from a CSV file in Ruby where a specific value exists on a specific line?

Here is an example file:

 350 lbs., Outrigger Footprint, 61" x 53", Weight, 767 lbs., 300-2080
 350 lbs., Outrigger Footprint, 61" x 53", Weight, 817 lbs., 300-2580
 350 lbs., Outrigger Footprint, 61" x 53", Weight, 817 lbs., 300-2580
 350 lbs., Outrigger Footprint, 69" x 61", Weight, 867 lbs., 300-3080
 350 lbs., Outrigger Footprint, 69" x 61", Weight, 867 lbs., 300-3080

Ideally, I need a new file created with just this:

 350 lbs., Outrigger Footprint, 61" x 53", Weight, 767 lbs., 300-2080
 350 lbs., Outrigger Footprint, 61" x 53", Weight, 817 lbs., 300-2580
 350 lbs., Outrigger Footprint, 69" x 61", Weight, 867 lbs., 300-3080

or, if only the part numbers are wanted:

 300-2580
 300-3080
 300-2080

I know I can do this in the shell with sort filename | uniq, but I'm trying to learn Ruby (somewhat painfully).

Thanks in advance, M

3 answers

You can use this to get the unique lines of a CSV file as an array:

 File.readlines("file.csv").uniq
 => ["350 lbs., Outrigger Footprint, 61\" x 53\", Weight, 767 lbs., 300-2080\n",
     "350 lbs., Outrigger Footprint, 61\" x 53\", Weight, 817 lbs., 300-2580\n",
     "350 lbs., Outrigger Footprint, 69\" x 61\", Weight, 867 lbs., 300-3080\n"]

To write the result to a new file, open the file in write mode and write the unique lines to it:

 File.open("new_csv", "w+") { |file| file.puts File.readlines("csv").uniq } 

To compare values, you can split each line on the comma character to access the individual columns, for example:

 rows = File.readlines("csv").map(&:chomp) # equivalent to File.readlines("csv").map { |f| f.chomp }
 mapped_columns = rows.map { |r| r.split(",").map(&:strip) }
 => [["350 lbs.", "Outrigger Footprint", "61\" x 53\"", "Weight", "767 lbs.", "300-2080"],
     ["350 lbs.", "Outrigger Footprint", "61\" x 53\"", "Weight", "817 lbs.", "300-2580"],
     .....]
 mapped_columns[0][5]
 => "300-2080"
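Building on that split, the original question asked about deleting lines where a specific value exists. A minimal sketch, assuming the value to match sits in the last column (the inline sample data and the target value "300-2580" are made up for illustration):

```ruby
# Sketch: drop every line whose last column matches a target part number.
# The inline sample data and the target value are assumptions, not from
# the asker's real file.
sample = <<~CSV
  350 lbs., Outrigger Footprint, 61" x 53", Weight, 767 lbs., 300-2080
  350 lbs., Outrigger Footprint, 61" x 53", Weight, 817 lbs., 300-2580
  350 lbs., Outrigger Footprint, 69" x 61", Weight, 867 lbs., 300-3080
CSV

target = "300-2580"

rows = sample.lines.map(&:chomp)
# Keep only the rows whose last comma-separated column differs from target.
kept = rows.reject { |row| row.split(",").map(&:strip).last == target }
```

To write the survivors out, the same File.open("new_csv", "w+") { |f| f.puts kept } pattern from above applies.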

If you need additional functionality, you'd be better off installing the FasterCSV gem.
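For what it's worth, FasterCSV's code was merged into Ruby's standard library as csv in Ruby 1.9, so on modern Rubies you can require it directly. A minimal sketch with made-up inline data:

```ruby
require "csv"

# Parse a small inline CSV string and drop duplicate rows.
# Caveat: the question's data contains stray inch marks (61" x 53"); the
# strict stdlib parser rejects those unless you pass liberal_parsing: true
# (available from Ruby 2.4).
rows = CSV.parse("a,1\nb,2\na,1\n")
unique = rows.uniq
# unique is [["a", "1"], ["b", "2"]]
```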


Well, I don't think this example will get you exactly the answer you're looking for, but it will work...

tmp.txt =>

 350 lbs., Outrigger Footprint, 61" x 53", Weight, 767 lbs., 300-2080
 350 lbs., Outrigger Footprint, 61" x 53", Weight, 817 lbs., 300-2580
 350 lbs., Outrigger Footprint, 61" x 53", Weight, 817 lbs., 300-2580
 350 lbs., Outrigger Footprint, 69" x 61", Weight, 867 lbs., 300-3080
 350 lbs., Outrigger Footprint, 69" x 61", Weight, 867 lbs., 300-3080

File.readlines('tmp.txt').uniq will return this:

 350 lbs., Outrigger Footprint, 61" x 53", Weight, 767 lbs., 300-2080
 350 lbs., Outrigger Footprint, 61" x 53", Weight, 817 lbs., 300-2580
 350 lbs., Outrigger Footprint, 69" x 61", Weight, 867 lbs., 300-3080

You can also sort easily using Array methods. Google "ruby arrays" and I'm sure you can work out how to decide whether to write out a line based on a comparison with the string you want.
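For example, a sketch of both directions, deduplicating like sort file | uniq and collecting only the repeated lines like uniq -d (the sample array is made up):

```ruby
lines = ["b,2", "a,1", "b,2", "c,3", "a,1"]

# Unique lines, sorted -- roughly `sort file | uniq`.
unique_sorted = lines.uniq.sort

# Only the lines that appear more than once -- roughly `sort file | uniq -d`.
duplicates = lines.select { |l| lines.count(l) > 1 }.uniq
```

The count-based duplicate scan is quadratic; for large files a Hash of counts would be the idiomatic choice, as the third answer below hints.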


You can also build a Hash, which will not allow duplicate keys. For example, the following code should help:

 require 'optparse'
 require 'csv'
 require 'pp'

 options = Hash.new
 OptionParser.new do |opts|
   opts.banner = "Usage: remove_extras.rb [options] file1 ..."
   options[:input_file] = ''
   opts.on('-i', '--input_file FILENAME', 'File to have extra rows removed') do |file|
     options[:input_file] = file
   end
 end.parse!

 if File.exists?(options[:input_file])
   p "Parsing: #{options[:input_file]}"
   UniqFile = Hash.new
   File.open(options[:input_file]).each do |row|
     UniqFile.store(row, row.hash)
   end
   puts "please enter the output filename: \n"
   aFile = File.open(gets.chomp, "a+")
   UniqFile.each do |key, value|
     aFile.syswrite("#{key}")
   end
 end
