Using memmap files for batch processing

I have a huge dataset on which I want to perform PCA. I am limited by RAM and by the computational efficiency of PCA, so I switched to using Incremental PCA.

Dataset size: (140000, 3504)

The documentation states that "This algorithm has constant memory complexity, on the order of batch_size, enabling use of np.memmap files without loading the entire file into memory."

This is really good, but I don’t know how to use it.

I tried loading the data as a single memmap, hoping it would be accessed in chunks, but my RAM blew up. My code below ends up using a lot of RAM:

import numpy as np
from sklearn.decomposition import IncrementalPCA

ut = np.memmap('my_array.mmap', dtype=np.float16, mode='w+', shape=(140000, 3504))
clf = IncrementalPCA(copy=False)
X_train = clf.fit_transform(ut)

When I say "my RAM blew up", the Traceback I see is:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python27\lib\site-packages\sklearn\base.py", line 433, in fit_transfo
rm
    return self.fit(X, **fit_params).transform(X)
  File "C:\Python27\lib\site-packages\sklearn\decomposition\incremental_pca.py",
 line 171, in fit
    X = check_array(X, dtype=np.float)
  File "C:\Python27\lib\site-packages\sklearn\utils\validation.py", line 347, in
 check_array
    array = np.array(array, dtype=dtype, order=order, copy=copy)
MemoryError

How can I improve this without compromising accuracy by reducing the batch size?


My ideas for diagnosis:

I looked at the sklearn source, and inside IncrementalPCA's fit() I can see the following. It makes sense to me, but I am still confused about what is going wrong in my case:

for batch in gen_batches(n_samples, self.batch_size_):
    self.partial_fit(X[batch])
return self

Edit: Worst case, I will have to write my own iterative PCA that batch-processes the data by reading and closing .npy files, but that would defeat the purpose of using the existing implementation.

Edit2: If I could somehow delete each batch of the memmap file once it has been processed, that would make sense.

Edit3: Ideally, if IncrementalPCA.fit() really works on batches, it should not blow up my RAM. I am posting the whole code below, just to make sure that I am not making a mistake when flushing the memmap to disk beforehand.

import numpy as np
import pywt
from sklearn.decomposition import IncrementalPCA

# X_train and y are assumed to have been loaded earlier
temp_train_data = X_train[1000:]
temp_labels = y[1000:]
out = np.empty((200001, 3504), np.int64)
for index, row in enumerate(temp_train_data):
    actual_index = index + 1000
    data = X_train[actual_index-1000:actual_index+1].ravel()
    __, cd_i = pywt.dwt(data, 'haar')
    out[index] = cd_i
out.flush()
pca_obj = IncrementalPCA()
clf = pca_obj.fit(out)

Surprisingly, I notice that out.flush does not free my memory. Is there a way to use del out to free the memory completely, and then pass just a reference to the file to IncrementalPCA.fit()?
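For reference, the pattern I have in mind looks roughly like this (an untested sketch; the file name out.mmap is just a placeholder):

import numpy as np
from sklearn.decomposition import IncrementalPCA

# Write the features into a disk-backed array instead of np.empty
out = np.memmap('out.mmap', dtype=np.int64, mode='w+', shape=(200001, 3504))
# ... fill `out` row by row as in the code above ...
out.flush()   # push the dirty pages to disk
del out       # drop the writable memmap object entirely

# Re-open the same file read-only and hand only this reference to fit(),
# hoping it is consumed batch by batch instead of copied into memory.
out_ro = np.memmap('out.mmap', dtype=np.int64, mode='r', shape=(200001, 3504))
clf = IncrementalPCA().fit(out_ro)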


You have hit a problem with sklearn in a 32-bit environment. I presume you are using np.float16 because you are in a 32-bit environment, and you need it to be able to create the memmap object without numpy throwing errors.

In a 64-bit environment (tested with Python 3.3 64-bit on Windows), your code just works out of the box. So, if you have a 64-bit computer available, install 64-bit Python along with 64-bit numpy, scipy and scikit-learn, and you are good to go.

Unfortunately, if you cannot do this, there is no easy fix. I have raised an issue on github, but it is not easy to patch. The fundamental problem is that, within the library, if your dtype is float16, a copy of the whole array into memory is triggered. The details are below.

So, I hope you have access to a 64-bit environment with plenty of RAM. If not, you will have to split up your array yourself and batch-process it, a rather larger task; a rough sketch of what I mean follows.
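To sketch the idea (untested; it assumes my_array.mmap is already written to disk, and n_components=50 and the batch size of 1000 are arbitrary placeholders), something along these lines keeps only one batch in RAM at a time:

import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.utils import gen_batches

n_samples, n_features = 140000, 3504
X_mmap = np.memmap('my_array.mmap', dtype=np.float16, mode='r',
                   shape=(n_samples, n_features))

clf = IncrementalPCA(n_components=50)   # n_components is just an example
for batch in gen_batches(n_samples, 1000):
    # Cast only the current slice to float32, so at most one batch at a
    # time is copied into RAM instead of the whole array.
    clf.partial_fit(X_mmap[batch].astype(np.float32))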

N.B. It is really good to see that you went to the source to diagnose your problem :) However, if you look at the line where the code fails (see the Traceback), you will notice that the for batch in gen_batches loop you found is never reached.


Detailed diagnosis:

Here, again, is the OP's code, together with the actual error it generates:

import numpy as np
from sklearn.decomposition import IncrementalPCA

ut = np.memmap('my_array.mmap', dtype=np.float16, mode='w+', shape=(140000,3504))
clf=IncrementalPCA(copy=False)
X_train=clf.fit_transform(ut)

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python27\lib\site-packages\sklearn\base.py", line 433, in fit_transfo
rm
    return self.fit(X, **fit_params).transform(X)
  File "C:\Python27\lib\site-packages\sklearn\decomposition\incremental_pca.py",
 line 171, in fit
    X = check_array(X, dtype=np.float)
  File "C:\Python27\lib\site-packages\sklearn\utils\validation.py", line 347, in
 check_array
    array = np.array(array, dtype=dtype, order=order, copy=copy)
MemoryError

The call to check_array (in sklearn's validation code) uses dtype=np.float, but the original array has dtype=np.float16. Even though check_array() defaults to copy=False and passes that through to np.array(), it is ignored (as documented) in order to satisfy the dtype conversion; therefore a copy is made by np.array.
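A tiny illustration of that behaviour (just a sketch with a small shape; tiny.mmap is a throwaway file):

import numpy as np

small = np.memmap('tiny.mmap', dtype=np.float16, mode='w+', shape=(4, 3))
# On the NumPy versions of that era, copy=False is silently ignored when a
# dtype conversion is required (newer NumPy raises instead), so this
# allocates a brand-new in-memory float64 array rather than reusing the
# memmap's buffer.
converted = np.array(small, dtype=np.float64, copy=False)
print(converted.dtype)                      # float64
print(np.shares_memory(small, converted))   # False: a full copy was made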

This could be addressed in the IncrementalPCA code by ensuring that the dtype is preserved for arrays with dtype in (np.float16, np.float32, np.float64). However, when I tried that patch, it only pushed the MemoryError further along the chain of execution.

The same copying problem occurs later, when the code calls linalg.svd() from scipy, and this time the error happens during the call to gesdd(), a wrapped native LAPACK function. So I do not think this can be patched (at least not easily; it would at minimum require changing code in core scipy).


Does the following alone crash your RAM?

n_samples, n_features = 140000, 3504  # the dataset shape from the question

X_train_mmap = np.memmap('my_array.mmap', dtype=np.float16,
                         mode='w+', shape=(n_samples, n_features))
clf = IncrementalPCA(n_components=50).fit(X_train_mmap)

If not, then you can use that fitted model to transform (project) your data iteratively to a smaller representation, processing it in batches:

from sklearn.utils import gen_batches

X_projected_mmap = np.memmap('my_result_array.mmap', dtype=np.float16,
                             mode='w+', shape=(n_samples, clf.n_components))
for batch in gen_batches(n_samples, clf.batch_size_):
    X_batch_projected = clf.transform(X_train_mmap[batch])
    X_projected_mmap[batch] = X_batch_projected

I did not test this code, but I hope you get the idea.
