For a machine learning task I need to work with data sets that are too large to fit into memory at once, so I have to process them in chunks. Fortunately, pandas.read_csv has a chunksize parameter that lets you specify how many rows to read at a time, and you can then iterate over the data set chunk by chunk with a for loop, which looks like this:
In [120]: reader = pd.read_table('tmp.sv', sep='|', chunksize=4)
In [121]: reader
<pandas.io.parsers.TextFileReader at 0xaa94ad0>
In [122]: for chunk in reader:
.....: print(chunk)
.....:
Unnamed: 0 0 1 2 3
0 0 0.469112 -0.282863 -1.509059 -1.135632
1 1 1.212112 -0.173215 0.119209 -1.044236
2 2 -0.861849 -2.104569 -0.494929 1.071804
3 3 0.721555 -0.706771 -1.039575 0.271860
[4 rows x 5 columns]
Unnamed: 0 0 1 2 3
0 4 -0.424972 0.567020 0.276232 -1.087401
1 5 -0.673690 0.113648 -1.478427 0.524988
2 6 0.404705 0.577046 -1.715002 -1.039268
3 7 -0.370647 -1.157892 -1.344312 0.844885
[4 rows x 5 columns]
Unnamed: 0 0 1 2 3
0 8 1.075770 -0.10905 1.643563 -1.469388
1 9 0.357021 -0.67460 -1.776904 -0.968914
[2 rows x 5 columns]
But my machine learning algorithm needs both a train chunk and a test chunk inside the same for loop to make predictions on pieces of the data, and I don't know how to iterate over the two readers together. I am basically looking for this:
result = []
train = pd.read_csv('train_set', chunksize=some_number)
test = pd.read_csv('test_set', chunksize=some_number)
for chunk in train and test:
    result.append(do_machine_learning(train, test))
save_result(result)
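For what it's worth, iterating two chunked readers in lockstep can be sketched with the built-in zip(), which yields one chunk from each reader per iteration and stops when the shorter file runs out. This is a minimal sketch: the inline CSV data and do_machine_learning below are made-up stand-ins, not the real files or estimator.

```python
import pandas as pd
from io import StringIO

# Hypothetical stand-ins for the real train_set / test_set files.
train_csv = StringIO("x,y\n1,0\n2,1\n3,0\n4,1\n")
test_csv = StringIO("x,y\n5,0\n6,1\n7,0\n8,1\n")


def do_machine_learning(train_chunk, test_chunk):
    # Placeholder for the real estimator; just counts rows seen.
    return len(train_chunk) + len(test_chunk)


result = []
train = pd.read_csv(train_csv, chunksize=2)
test = pd.read_csv(test_csv, chunksize=2)

# zip() advances both TextFileReaders together, one chunk from each
# per loop iteration.
for train_chunk, test_chunk in zip(train, test):
    result.append(do_machine_learning(train_chunk, test_chunk))

print(result)  # [4, 4] -- two chunk pairs of 2 + 2 rows each
```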
Update:
I tried Andy Hayden's solution, but it gives me a new error when I try to access certain chunks of the data:
print("getting train set")
train = pd.read_csv(os.path.join(dir,"Train.csv"),chunksize = 200000)
print("getting test set")
test = pd.read_csv(os.path.join(dir,"Test.csv"),chunksize = 200000)
result = []
for chunk in train:
    print("transforming train,test,labels into numpy arrays")
    labels = np.array(train)[:,3]
    train = np.array(train)[:,2]
    test = np.array(test)[:,2]
    print("getting estimator and predictions")
    result.append(stochastic_gradient(train,test))
    print("got everything")
result = np.array(result)
Traceback:
Traceback (most recent call last):
File "C:\Users\Ano\workspace\final_submission\src\rf.py", line 38, in <module>
main()
File "C:\Users\Ano\workspace\final_submission\src\rf.py", line 18, in main
labels = np.array(train)[:,3]
IndexError: 0-d arrays can only use a single () or a list of newaxes (and a single ...) as an index
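For reference, the IndexError comes from calling np.array() on the TextFileReader itself rather than on a chunk: the reader is an iterator, not array-like, so NumPy wraps the whole object in a 0-dimensional object array, which cannot be indexed with [:,3]. A minimal sketch of the failure and the assumed fix (the inline CSV is a made-up stand-in for Train.csv):

```python
import numpy as np
import pandas as pd
from io import StringIO

# np.array() on the reader does NOT read the rows -- it wraps the
# iterator object itself in a 0-d object array.
reader = pd.read_csv(StringIO("a,b,c,d\n0,1,2,3\n4,5,6,7\n"), chunksize=1)
arr = np.array(reader)
print(arr.ndim)  # 0 -- hence the IndexError on arr[:, 3]

# Assumed fix: index each chunk inside the loop, not the reader.
reader = pd.read_csv(StringIO("a,b,c,d\n0,1,2,3\n4,5,6,7\n"), chunksize=1)
for chunk in reader:
    labels = chunk.values[:, 3]  # works: chunk is a 2-D DataFrame
```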