Open selected lines with pandas using "chunksize" and/or "iterator"

I have a large CSV file that I open with pd.read_csv, as follows:

    df = pd.read_csv('path/fileName.csv', sep=' ', header=None)

Since the file is really large, I would like to be able to read it in blocks of 512 lines:

    from 0 to 511
    from 512 to 1023
    from 1024 to 1535
    ...
    from 512*n to 512*(n+1) - 1

where n = 0, 1, 2, ...

If I add chunksize=512 to the read_csv arguments,

    df = pd.read_csv('path/fileName.csv', sep=' ', header=None, chunksize=512)

and print

    df.get_chunk(5)

I can read the first 5 lines. Alternatively, I can split the file into parts of 512 lines with a for loop:

    data = []
    for chunk in df:
        data = data + [chunk]

But this is of no use to me, since the whole file still has to be read, and that takes time. How can I read only lines from 512*n to 512*(n+1)?

Looking around, I often saw chunksize used together with iterator, like this:

    df = pd.read_csv('path/fileName.csv', sep=' ', header=None, iterator=True, chunksize=512)

But after many attempts, I still do not understand what advantage this boolean parameter gives me. Could you please explain it to me?

1 answer

How can I read only lines from 512 * n to 512 * (n + 1)?

    df = pd.read_csv(fn, header=None, skiprows=512*n, nrows=512)
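For convenience, this can be wrapped in a small helper (read_block is a hypothetical name; it assumes a space-separated file with no header row, as in the question):

    import pandas as pd

    def read_block(path, n, block_size=512):
        # Read only rows block_size*n .. block_size*(n+1) - 1.
        # An integer skiprows skips that many rows from the top of the
        # file, which is safe here because there is no header row.
        return pd.read_csv(path, sep=' ', header=None,
                           skiprows=block_size * n, nrows=block_size)

    # e.g. n=2 reads rows 1024 to 1535:
    # block = read_block('path/fileName.csv', n=2)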

You can also process the file chunk by chunk (and this is very useful):

    for chunk in pd.read_csv(fn, sep=' ', header=None, chunksize=512):
        ...  # process your chunk here

Demo:

    In [61]: fn = 'd:/temp/a.csv'

    In [62]: pd.DataFrame(np.random.randn(30, 3), columns=list('abc')).to_csv(fn, index=False)

    In [63]: for chunk in pd.read_csv(fn, chunksize=10):
       ....:     print(chunk)
       ....:
              a         b         c
    0  2.229657 -1.040086  1.295774
    1  0.358098 -1.080557 -0.396338
    2  0.731741 -0.690453  0.126648
    3 -0.009388 -1.549381  0.913128
    4 -0.256654 -0.073549 -0.171606
    5  0.849934  0.305337  2.360101
    6 -1.472184  0.641512 -1.301492
    7 -2.302152  0.417787  0.485958
    8  0.492314  0.603309  0.890524
    9 -0.730400  0.835873  1.313114
              a         b         c
    0  1.393865 -1.115267  1.194747
    1  3.038719 -0.343875 -1.410834
    2 -1.510598  0.664154 -0.996762
    3 -0.528211  1.269363  0.506728
    4  0.043785 -0.786499 -1.073502
    5  1.096647 -1.127002  0.918172
    6 -0.792251 -0.652996 -1.000921
    7  1.582166 -0.819374  0.247077
    8 -1.022418 -0.577469  0.097406
    9 -0.274233 -0.244890 -0.352108
              a         b         c
    0 -0.317418  0.774854 -0.203939
    1  0.205443  0.820302 -2.637387
    2  0.332696 -0.655431 -0.089120
    3 -0.884916  0.274854  1.074991
    4  0.412295 -1.561943 -0.850376
    5 -1.933529 -1.346236 -1.789500
    6  1.652446 -0.800644 -0.126594
    7  0.520916 -0.825257 -0.475727
    8 -2.261692  2.827894 -0.439698
    9 -0.424714  1.862145  1.103926
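If each chunk only needs to be reduced rather than kept around, a minimal sketch of that pattern could look like this (the per-column sum is just an illustrative task, assuming numeric columns):

    import pandas as pd

    total = None
    for chunk in pd.read_csv('path/fileName.csv', sep=' ', header=None,
                             chunksize=512):
        part = chunk.sum()  # reduce each 512-row block on its own
        total = part if total is None else total + part

    print(total)  # per-column sums over the whole file

Only one 512-row chunk is ever held in memory, so this scales to files that do not fit in RAM.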

So when can iterator be useful?

When using only chunksize, all chunks have the same length. With iterator=True you can decide how many rows ( get_chunk(nrows) ) to read at each step:

    In [66]: reader = pd.read_csv(fn, iterator=True)

Read the first 3 rows:

    In [67]: reader.get_chunk(3)
    Out[67]:
              a         b         c
    0  2.229657 -1.040086  1.295774
    1  0.358098 -1.080557 -0.396338
    2  0.731741 -0.690453  0.126648

Now read the next 5 rows:

    In [68]: reader.get_chunk(5)
    Out[68]:
              a         b         c
    0 -0.009388 -1.549381  0.913128
    1 -0.256654 -0.073549 -0.171606
    2  0.849934  0.305337  2.360101
    3 -1.472184  0.641512 -1.301492
    4 -2.302152  0.417787  0.485958

And the next 7 rows:

    In [69]: reader.get_chunk(7)
    Out[69]:
              a         b         c
    0  0.492314  0.603309  0.890524
    1 -0.730400  0.835873  1.313114
    2  1.393865 -1.115267  1.194747
    3  3.038719 -0.343875 -1.410834
    4 -1.510598  0.664154 -0.996762
    5 -0.528211  1.269363  0.506728
    6  0.043785 -0.786499 -1.073502
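A minimal sketch of driving this in a loop until the file is exhausted (the chunk sizes are arbitrary, purely for illustration; get_chunk raises StopIteration at end of file):

    import pandas as pd

    reader = pd.read_csv('path/fileName.csv', sep=' ', header=None,
                         iterator=True)
    sizes = [3, 5, 7]                    # any per-step sizes you need
    try:
        while True:
            for n in sizes:
                chunk = reader.get_chunk(n)
                print(chunk.shape)       # process each variable-sized chunk
    except StopIteration:
        pass                             # reached the end of the file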