Python Pandas: what is the fastest way to create a datetime index?

My data looks like this:

    TEST
    2012-05-01 00:00:00.203 OFF 0
    2012-05-01 00:00:11.203 OFF 0
    2012-05-01 00:00:22.203 ON 1
    2012-05-01 00:00:33.203 ON 1
    2012-05-01 00:00:44.203 OFF 0
    TEST
    2012-05-02 00:00:00.203 OFF 0
    2012-05-02 00:00:11.203 OFF 0
    2012-05-02 00:00:22.203 OFF 0
    2012-05-02 00:00:33.203 ON 1
    2012-05-02 00:00:44.203 ON 1
    2012-05-02 00:00:55.203 OFF 0

I use pandas read_table after a preliminary pass over the lines (which gets rid of the "TEST" lines), as follows:

    df = pandas.read_table(buf, sep=' ', header=None,
                           parse_dates=[[0, 1]], date_parser=dateParser,
                           index_col=[0])
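
For context, buf is whatever file-like object is left after the preliminary pass that strips the "TEST" lines. The question does not show that step, so the following is only a minimal sketch of one way it might look; the input file name and the StringIO-based filtering are my assumptions, not part of the question:

    from io import StringIO

    # build an in-memory buffer containing only the data rows
    with open('data.txt') as fh:                       # hypothetical file name
        buf = StringIO(''.join(line for line in fh
                               if not line.startswith('TEST')))

The resulting buf can then be handed to the read_table call above exactly as written.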

So far I have tried several date parsers (the alternatives are commented out below); the uncommented one was the fastest:

    def dateParser(s):
        #return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f")
        return datetime(int(s[0:4]), int(s[5:7]), int(s[8:10]),
                        int(s[11:13]), int(s[14:16]), int(s[17:19]),
                        int(s[20:23])*1000)
        #return np.datetime64(s)
        #return pandas.Timestamp(s, "%Y-%m-%d %H:%M:%S.%f", tz='utc')
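
For concreteness, the slice offsets in the uncommented parser line up with the fixed-width "YYYY-MM-DD HH:MM:SS.mmm" timestamps in the sample data. A quick check (my own illustration, reusing the dateParser defined above):

    s = "2012-05-01 00:00:22.203"
    # year s[0:4], month s[5:7], day s[8:10], hour s[11:13],
    # minute s[14:16], second s[17:19], milliseconds s[20:23]
    print(dateParser(s))   # 2012-05-01 00:00:22.203000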

Is there anything else I can do to speed things up? I need to read large amounts of data, several GB per file.

1 answer

Quick answer: the approach you identify as the fastest way to parse your date/time strings into a datetime index really is the fastest one. I timed some of your approaches and a few others, and this is what I get.

First, to get an example DataFrame to work with:

    from datetime import datetime
    import numpy as np
    from pandas import *

    start = datetime(2000, 1, 1)
    end = datetime(2012, 12, 1)
    d = DateRange(start, end, offset=datetools.Hour())
    t_df = DataFrame({'field_1': np.array(['OFF', 'ON'])[np.random.random_integers(0, 1, d.size)],
                      'field_2': np.random.random_integers(0, 1, d.size)},
                     index=d)
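
DateRange and datetools come from the pandas 0.8-era API used here and have since been removed. Purely for readers on a current pandas version, a roughly equivalent setup (my adaptation, not part of the original answer) would be:

    import numpy as np
    import pandas as pd

    # hourly index over the same span, built with the modern date_range API
    d = pd.date_range('2000-01-01', '2012-12-01', freq='H')
    t_df = pd.DataFrame({'field_1': np.array(['OFF', 'ON'])[np.random.randint(0, 2, d.size)],
                         'field_2': np.random.randint(0, 2, d.size)},
                        index=d)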

Where:

    In [1]: t_df.head()
    Out[1]:
                        field_1  field_2
    2000-01-01 00:00:00      ON        1
    2000-01-01 01:00:00     OFF        0
    2000-01-01 02:00:00     OFF        1
    2000-01-01 03:00:00     OFF        1
    2000-01-01 04:00:00      ON        1

    In [2]: t_df.shape
    Out[2]: (113233, 2)

That is roughly a 3.2 MB file if you dump it to disk. Now we need to drop the DateRange type of the Index and turn it into a list of str, to simulate how your data will actually arrive for parsing:

    t_df.index = t_df.index.map(str)

If you use parse_dates=True when reading your data into a DataFrame with read_table, you are looking at a parsing time of 9.5 seconds:

    In [3]: import numpy as np

    In [4]: import timeit

    In [5]: t_df.to_csv('data.tsv', sep='\t', index_label='date_time')

    In [6]: t = timeit.Timer("from __main__ import read_table; read_table('data.tsv', sep='\t', index_col=0, parse_dates=True)")

    In [7]: np.mean(t.repeat(10, number=1))
    Out[7]: 9.5226533889770515

The other strategies rely on parsing your data into a DataFrame first (very little parsing time) and then converting the index into an Index of datetime objects:

    In [8]: t = timeit.Timer("from __main__ import t_df, dateutil; map(dateutil.parser.parse, t_df.index.values)")

    In [9]: np.mean(t.repeat(10, number=1))
    Out[9]: 7.6590064525604244

    In [10]: t = timeit.Timer("from __main__ import t_df, dateutil; t_df.index.map(dateutil.parser.parse)")

    In [11]: np.mean(t.repeat(10, number=1))
    Out[11]: 7.8106775999069216

    In [12]: t = timeit.Timer("from __main__ import t_df, datetime; t_df.index.map(lambda x: datetime.strptime(x, \"%Y-%m-%d %H:%M:%S\"))")
    Out[12]: 2.0389052629470825

    In [13]: t = timeit.Timer("from __main__ import t_df, np; map(np.datetime_, t_df.index.values)")

    In [14]: np.mean(t.repeat(10, number=1))
    Out[14]: 3.8656840562820434

    In [15]: t = timeit.Timer("from __main__ import t_df, np; map(np.datetime64, t_df.index.values)")

    In [16]: np.mean(t.repeat(10, number=1))
    Out[16]: 3.9244711160659791
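
Each of these variants first loads data.tsv with no date parsing at all and only then converts the string index. A minimal sketch of that two-step pattern, using the strptime variant timed above (the format string matches the timestamps written by to_csv, which have no millisecond part):

    from datetime import datetime
    from pandas import read_table

    # step 1: read the file, leaving the timestamps as plain strings
    df = read_table('data.tsv', sep='\t', index_col=0)

    # step 2: convert the string index into datetime objects
    df.index = df.index.map(lambda x: datetime.strptime(x, "%Y-%m-%d %H:%M:%S"))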

And now for the winner:

    In [17]: def f(s):
       ....:     return datetime(int(s[0:4]),
       ....:                     int(s[5:7]),
       ....:                     int(s[8:10]),
       ....:                     int(s[11:13]),
       ....:                     int(s[14:16]),
       ....:                     int(s[17:19]))
       ....: t = timeit.Timer("from __main__ import t_df, f; t_df.index.map(f)")
       ....:

    In [18]: np.mean(t.repeat(10, number=1))
    Out[18]: 0.33927145004272463
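
To close the loop with the question, the winning approach can also be passed straight into read_table as date_parser. The sketch below simply combines the question's own read_table call with the winning parser, extended with the millisecond handling from the question's dateParser; buf is the pre-filtered buffer from the question:

    from datetime import datetime
    import pandas

    def fast_parser(s):    # name is mine; the logic is the question's dateParser
        return datetime(int(s[0:4]), int(s[5:7]), int(s[8:10]),
                        int(s[11:13]), int(s[14:16]), int(s[17:19]),
                        int(s[20:23]) * 1000)

    df = pandas.read_table(buf, sep=' ', header=None,
                           parse_dates=[[0, 1]], date_parser=fast_parser,
                           index_col=[0])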

There are certainly more optimizations to think about when working with the numpy-, pandas- or datetime-based approaches, but it seems to me that staying with the standard CPython library, turning each date/time str into a tuple of int and feeding that to datetime, is the fastest way to get what you want.
