Reading lines of a large CSV file in Python

I have a very large CSV file that I cannot fully load into memory, so I want to read it in parts, convert each part to a numpy array, and then do some processing.

I already checked: Lazy Method for reading a large file in Python?

But the problem is that this is a plain file reader, and I cannot find any option for specifying a size in csv.reader.

Also, since I want to convert the strings to a numpy array, I don't want to read any string in half, so instead of specifying a size in bytes, I want something where I can tell the reader to stop only at whole lines.

Is there a built-in function or an easy way to do this?

2 answers

csv.reader will not read the entire file into memory. It lazily reads the file line by line as you iterate over the reader object. So you can use the reader as usual, but break out of your loop after you have read as many lines as you want. You can see this in the C code used to implement the reader object.

Initializer for the reader object:
static PyObject *
csv_reader(PyObject *module, PyObject *args, PyObject *keyword_args)
{
    PyObject * iterator, * dialect = NULL;
    ReaderObj * self = PyObject_GC_New(ReaderObj, &Reader_Type);

    if (!self)
        return NULL;

    self->dialect = NULL;
    self->fields = NULL;
    self->input_iter = NULL;
    self->field = NULL;
    // stuff we don't care about here
    // ...
    self->input_iter = PyObject_GetIter(iterator);  // here we save the iterator (file object) we passed in
    if (self->input_iter == NULL) {
        PyErr_SetString(PyExc_TypeError,
                        "argument 1 must be an iterator");
        Py_DECREF(self);
        return NULL;
    }

static PyObject *
Reader_iternext(ReaderObj *self)  // This is what gets called when you call `next(reader_obj)` (which is what a for loop does internally)
{
    PyObject *fields = NULL;
    Py_UCS4 c;
    Py_ssize_t pos, linelen;
    unsigned int kind;
    void *data;
    PyObject *lineobj;

    if (parse_reset(self) < 0)
        return NULL;
    do {
        lineobj = PyIter_Next(self->input_iter);  // Equivalent to calling `next(input_iter)`
        if (lineobj == NULL) {
            /* End of input OR exception */
            if (!PyErr_Occurred() && (self->field_len != 0 ||
                                      self->state == IN_QUOTED_FIELD)) {
                if (self->dialect->strict)
                    PyErr_SetString(_csvstate_global->error_obj,
                                    "unexpected end of data");
                else if (parse_save_field(self) >= 0)
                    break;
            }
            return NULL;
        }

As you can see, next(reader_object) calls next(file_object) internally. This way you iterate through the lines without reading them all into memory.
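So, for example, you can combine csv.reader with itertools.islice to pull a fixed number of rows at a time and convert each batch to a numpy array. This is a minimal sketch; the function name and chunk size are my own choices, not part of the csv module:

```python
import csv
from itertools import islice

import numpy as np

def read_csv_in_chunks(path, chunk_size=1000):
    """Yield successive numpy arrays of at most chunk_size rows each."""
    with open(path, newline='') as f:
        reader = csv.reader(f)
        while True:
            # islice pulls at most chunk_size rows from the reader;
            # the rest of the file is never loaded into memory
            chunk = list(islice(reader, chunk_size))
            if not chunk:
                break
            yield np.array(chunk, dtype=float)
```

Because islice stops at row boundaries, no line is ever read "in half", which is exactly what the question asks for.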


I use this function. The basic idea is a generator that yields the numbers in the file one by one.

import numpy as np

def iter_loadtxt(filename, delimiter=',', skiprows=0, read_range=None, dtype=float):
    '''
    Read the file line by line and convert it to a numpy array.
    :param filename: path to the text/CSV file
    :param delimiter: character separating the fields
    :param skiprows: number of header lines to skip
    :param read_range: [start, stop] row indices, or None to read the whole file
    :param dtype: type to convert each field to
    '''
    def iter_func():
        with open(filename, 'r') as infile:
            for _ in range(skiprows):
                next(infile)
            if read_range is None:
                for line in infile:
                    line = line.rstrip().split(delimiter)
                    for item in line:
                        yield dtype(item)
            else:
                counter = 0
                for line in infile:
                    if counter < read_range[0]:
                        counter += 1
                    else:
                        counter += 1
                        # split before converting: the raw line is a single string,
                        # and iterating over it directly would yield characters
                        line = line.rstrip().split(delimiter)
                        for item in line:
                            yield dtype(item)

                    if counter >= read_range[1]:
                        break

        iter_loadtxt.rowlength = len(line)

    data = np.fromiter(iter_func(), dtype=dtype)
    data = data.reshape((-1, iter_loadtxt.rowlength))
    return data
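The np.fromiter call builds a flat 1-D array from the generator and then reshapes it into rows. A standalone sketch of that pattern, with made-up sample data:

```python
import numpy as np

def numbers():
    # yields a flat stream of values, row by row
    for row in [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]:
        for item in row:
            yield item

flat = np.fromiter(numbers(), dtype=float)  # 1-D array of 6 values
data = flat.reshape((-1, 3))                # 2 rows of 3 columns
```

Since np.fromiter consumes the generator lazily, only the final array is held in memory, never a list of intermediate Python objects.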
