Using pandas read_csv with missing data

I am trying to read a csv file where some lines may be missing pieces of data.

This seems to be causing a problem with the pandas read_csv function when you specify the dtype. The problem is that, to convert from str to the value given by dtype, pandas just tries to cast it directly. Therefore, if something is missing, everything breaks.

What follows is an MWE (it uses StringIO instead of a true file; however, the problem also occurs when using a real file):

import pandas as pd
import numpy as np
import io

datfile = io.StringIO("12 23 43| | 37| 12.23| 71.3\n12 23 55|X|   |      | 72.3")

names = ['id', 'flag', 'number', 'data', 'data2']
dtypes = [np.str, np.str, np.int, np.float, np.float]

dform = {name: dtypes[ind] for ind, name in enumerate(names)}

colconverters = {0: lambda s: s.strip(), 1: lambda s: s.strip()}

df = pd.read_table(datfile, sep='|', dtype=dform, converters=colconverters, header=None,
                   index_col=0, names=names, na_values=' ')

The error that I get when I run this is

Traceback (most recent call last):
  File "pandas/parser.pyx", line 1084, in pandas.parser.TextReader._convert_tokens (pandas/parser.c:12580)
TypeError: Cannot cast array from dtype('O') to dtype('int64') according to the rule 'safe'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/aliounis/Repos/stellarpy/source/mwe.py", line 15, in <module>
    index_col=0, names=names, na_values=' ')
  File "/usr/local/lib/python3.5/site-packages/pandas/io/parsers.py", line 562, in parser_f
    return _read(filepath_or_buffer, kwds)
  File "/usr/local/lib/python3.5/site-packages/pandas/io/parsers.py", line 325, in _read
    return parser.read()
  File "/usr/local/lib/python3.5/site-packages/pandas/io/parsers.py", line 815, in read
    ret = self._engine.read(nrows)
  File "/usr/local/lib/python3.5/site-packages/pandas/io/parsers.py", line 1314, in read
    data = self._reader.read(nrows)
  File "pandas/parser.pyx", line 805, in pandas.parser.TextReader.read (pandas/parser.c:8748)
  File "pandas/parser.pyx", line 827, in pandas.parser.TextReader._read_low_memory (pandas/parser.c:9003)
  File "pandas/parser.pyx", line 904, in pandas.parser.TextReader._read_rows (pandas/parser.c:10022)
  File "pandas/parser.pyx", line 1011, in pandas.parser.TextReader._convert_column_data (pandas/parser.c:11397)
  File "pandas/parser.pyx", line 1090, in pandas.parser.TextReader._convert_tokens (pandas/parser.c:12656)
ValueError: invalid literal for int() with base 10: '   '

Is there a way I can fix this? I looked through the documentation but did not see anything that directly addresses this. Is this just a bug that needs to be reported to pandas?

So, after some digging, the underlying problem seems to be that NaN cannot be stored in an int column, and that when a dtype is given pandas tries the cast before it ever checks the token against the na values. The fix I ended up with was to patch my local copy of pandas so that the na check happens first; the change is to lines 1087-1096 of parser.pyx:

        na_count_old = na_count
        print(col_res)
        for ind, row in enumerate(col_res):
            k = kh_get_str(na_hashset, row.strip().encode())
            if k != na_hashset.n_buckets:
                # the stripped token is one of the na values: store NaN instead of casting
                col_res[ind] = np.nan
                na_count += 1
            else:
                # a real value: cast it to the user-requested dtype
                col_res[ind] = np.array(col_res[ind]).astype(col_dtype).item(0)

        if na_count_old == na_count:
            # float -> int conversions can fail the above
            # even with no nans
            col_res_orig = col_res
            col_res = col_res.astype(col_dtype)
            if (col_res != col_res_orig).any():
                raise ValueError("cannot safely convert passed user dtype of "
                                 "{col_dtype} for {col_res} dtyped data in "
                                 "column {column}".format(col_dtype=col_dtype,
                                                          col_res=col_res_orig.dtype.name,
                                                          column=i))

Essentially, this checks each entry in the column against the na values (after stripping whitespace, so blank fields count as missing). If the entry is an na value it is replaced with a double np.nan and the na count is incremented; otherwise it is cast to the requested dtype. One consequence is that a column containing missing values can no longer actually hold ints: NaN is a float, so such columns come back with mixed/object dtypes.

Patching the pandas source like this is obviously a hack rather than a proper fix (and it is probably slow), but it gets the file read the way I need until the behaviour changes upstream.
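
For readers who would rather not patch pandas, the same check-then-cast idea can be sketched in plain Python. This is only an illustration of the logic above; convert_column and its default na set are made-up names, not part of pandas:

import numpy as np

def convert_column(tokens, col_dtype, na_values=frozenset({'', 'NA'})):
    # Hypothetical helper mirroring the patched loop: check each raw string
    # token against the na set first, and only cast the real values.
    out = []
    na_count = 0
    for tok in tokens:
        if tok.strip() in na_values:
            out.append(np.nan)  # missing value -> NaN (forces a float/object column)
            na_count += 1
        else:
            out.append(np.dtype(col_dtype).type(tok.strip()))  # cast the real value
    return out, na_count

# convert_column([' 37', '   '], 'int64') returns ([37, nan], 1)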

Try reading everything that can be missing in as str, then fix the dtypes afterwards:

import pandas as pd
import numpy as np
import io

datfile = io.StringIO(u"12 23 43| | 37| 12.23| 71.3\n12 23 55|X|   |      | 72.3")

names  = ['id', 'flag', 'number', 'data', 'data2']
dtypes = [np.str, np.str, np.str, np.float, np.float] 
dform  = {name: dtypes[ind] for ind, name in enumerate(names)}

colconverters = {0: lambda s: s.strip(), 1: lambda s: s.strip()}

df     = pd.read_table(datfile, sep='|', dtype=dform, converters=colconverters, header=None, na_values=' ')
df.columns = names

Then convert the dtypes after the read:

df["number"] = df["data"].astype('int')
df["data"]   = df["data"].astype('float')

For reference, df.info() right after the read shows everything except data2 coming through as object (i.e. str):

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2 entries, 0 to 1
Data columns (total 5 columns):
id        2 non-null object
flag      2 non-null object
number    2 non-null object
data      2 non-null object
data2     2 non-null float64
dtypes: float64(1), object(4)
memory usage: 152.0+ bytes

Note that data does not come through as np.float (it shows up as object because of the blank field), while data2 is read as np.float without trouble since none of its values are missing.
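
As an aside (my addition, not from either answer): since the blank fields in this file are just runs of spaces, skipinitialspace=True should collapse them to empty strings, which pandas already treats as missing, so the columns that can be missing can be requested as float at read time (NaN cannot live in an int column, hence float for number as well). A sketch under those assumptions:

import pandas as pd
import numpy as np
import io

datfile = io.StringIO(u"12 23 43| | 37| 12.23| 71.3\n12 23 55|X|   |      | 72.3")
names = ['id', 'flag', 'number', 'data', 'data2']
# float for anything that can be missing, since an int column cannot hold NaN
dform = {'id': str, 'flag': str, 'number': np.float64, 'data': np.float64, 'data2': np.float64}

df = pd.read_table(datfile, sep='|', header=None, names=names,
                   dtype=dform, skipinitialspace=True)
# df['number'] and df['data'] come back as float64 with NaN for the blank fields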
