The fastest way to read a subset of CSV lines

I have a 5 GB CSV with 2 million rows. The header is a row of comma-separated strings, and every other row is comma-separated doubles with no missing or corrupted data. It is rectangular.

My goal is to read a random ~10% of the rows (with or without replacement, it does not matter) into RAM as quickly as possible. A slow solution (though still faster than read.csv) is to read the entire matrix with fread and then keep a random 10% of the rows:

require(data.table)
X <- data.matrix(fread('/home/user/test.csv'))    # read the full data.matrix
X <- X[sample(1:nrow(X))[1:round(nrow(X)/10)], ]  # keep a random 10% of the rows

However, I am looking for the fastest solution; this is slow because it has to read everything first and only then subset it.

Bonus points for a solution that includes system.time() benchmarks of the different alternatives.
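For reference, the baseline above can be timed like this (a sketch; /home/user/test.csv is the same placeholder path as in the snippet, and the timing will of course depend on the machine):

require(data.table)

t_baseline <- system.time({
  X <- data.matrix(fread('/home/user/test.csv'))    # read everything
  X <- X[sample(nrow(X), round(nrow(X) / 10)), ]    # then keep ~10%
})
print(t_baseline)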

Other notes:

  • I am using Linux.
  • I do not need exactly 10% of the rows; roughly 10% is fine.

You can have the shell do the sampling with shuf and pass the command straight to fread:

write.csv(iris,"iris.csv")

fread("shuf -n 5 iris.csv")

    V1  V2  V3  V4  V5         V6
1:  37 5.5 3.5 1.3 0.2     setosa
2:  88 6.3 2.3 4.4 1.3 versicolor
3:  84 6.0 2.7 5.1 1.6 versicolor
4: 125 6.7 3.3 5.7 2.1  virginica
5: 114 5.7 2.5 5.0 2.0  virginica

This gives a random sample of N = 5 rows from the iris file.

To avoid the chance of sampling the header row, a useful modification is to strip it first:

fread("tail -n+2 iris.csv | shuf -n 5", header=FALSE)


Start with a file of 100,000 rows of random numbers that looks like this:

"","a","b","c"
"1",0.825049088569358,0.556148858508095,0.591679535107687
"2",0.161556158447638,0.250450366642326,0.575034103123471
"3",0.676798462402076,0.0854280597995967,0.842135070590302
"4",0.650981109589338,0.204736212035641,0.456373531138524
"5",0.51552157686092,0.420454133534804,0.12279288447462

$ wc -l d.csv 
100001 d.csv

That is 100,000 data rows plus the header. To keep about 10%, draw a uniform random number between 0 and 1 for each row and keep the row whenever it exceeds 0.9, always keeping the header:

$ awk 'NR==1 {print} ; rand()>.9 {print}' < d.csv >sample.csv

A quick look at the result:

$ head sample.csv 
"","a","b","c"
"12",0.732729186303914,0.744814146542922,0.199768838472664
"35",0.00979996216483414,0.633388962829486,0.364802648313344
"36",0.927218825090677,0.730419414117932,0.522808947600424
"42",0.383301998255774,0.349473554175347,0.311060158303007

The sample has 10,027 lines (the header plus roughly 10% of the data rows):

$ wc -l sample.csv 
10027 sample.csv

This took 0.033 s of real time on a four-year-old machine; the hard drive is probably the limiting factor. Since the file is processed strictly line by line, it should scale to much larger files.

You can then read sample.csv into R with read.csv or fread:

> s = fread("sample.csv")
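If you would rather skip the intermediate file, the same awk filter can be piped straight into fread. A sketch, assuming a recent data.table (for the cmd= argument) and awk on the PATH; srand() is added so the sample changes between runs, since many awk implementations otherwise start from the same seed:

library(data.table)

# Keep the header (NR==1) and roughly 10% of the data rows, with no sample.csv on disk.
s <- fread(cmd = "awk 'BEGIN{srand()} NR==1 || rand() > 0.9' d.csv")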

You can use sqldf::read.csv.sql and an SQL command to pull the data in:

library(sqldf)
write.csv(iris, "iris.csv", quote = FALSE, row.names = FALSE) # write a csv file to test with
read.csv.sql("iris.csv","SELECT * FROM file ORDER BY RANDOM() LIMIT 10")
   Sepal_Length Sepal_Width Petal_Length Petal_Width    Species
1           6.3         2.8          5.1         1.5  virginica
2           4.6         3.1          1.5         0.2     setosa
3           5.4         3.9          1.7         0.4     setosa
4           4.9         3.0          1.4         0.2     setosa
5           5.9         3.0          4.2         1.5 versicolor
6           6.6         2.9          4.6         1.3 versicolor
7           4.3         3.0          1.1         0.1     setosa
8           4.8         3.4          1.9         0.2     setosa
9           6.7         3.3          5.7         2.5  virginica
10          5.9         3.2          4.8         1.8 versicolor

It does not work out the 10% for you, but it lets you set an absolute limit on the number of rows returned.
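If you do want roughly 10% without knowing the row count up front, one option is to filter rather than limit. A sketch; it relies on SQLite's RANDOM() returning a random integer, and the resulting row count will only be approximately 10%:

library(sqldf)
# Keep a row whenever a random integer is divisible by 10, i.e. with probability ~0.1.
# abs() is used because SQLite's RANDOM() can return negative values.
s <- read.csv.sql("iris.csv", "SELECT * FROM file WHERE abs(RANDOM()) % 10 = 0")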
