How to read a list of parquet files from S3 as a pandas dataframe using pyarrow?

I have a hacky way to achieve this using boto3 (1.4.4), pyarrow (0.4.1) and pandas (0.20.3).

Firstly, I can read one parquet file locally as follows:

    import pyarrow.parquet as pq

    path = 'parquet/part-r-00000-1e638be4-e31f-498a-a359-47d017a0059c.gz.parquet'
    table = pq.read_table(path)
    df = table.to_pandas()

I can also read a directory of parquet files locally like this:

    import pyarrow.parquet as pq

    dataset = pq.ParquetDataset('parquet/')
    table = dataset.read()
    df = table.to_pandas()

Both work like a charm. Now I want to achieve the same with files stored in an S3 bucket. I was hoping something like this would work:

 dataset = pq.ParquetDataset('s3n://dsn/to/my/bucket') 

But it does not:

OSError: Passed non-file path: s3n://dsn/to/my/bucket

After reading the pyarrow documentation, it seems this is not possible at the moment. So I came up with the following solution:

Reading a single file from S3 and getting a pandas dataframe:

    import io

    import boto3
    import pyarrow.parquet as pq

    buffer = io.BytesIO()
    s3 = boto3.resource('s3')
    s3_object = s3.Object('bucket-name', 'key/to/parquet/file.gz.parquet')
    s3_object.download_fileobj(buffer)
    table = pq.read_table(buffer)
    df = table.to_pandas()

And here is my hacky, not very optimized, solution for creating a pandas dataframe from an S3 folder path:

    import io

    import boto3
    import pandas as pd
    import pyarrow.parquet as pq

    bucket_name = 'bucket-name'

    def download_s3_parquet_file(s3, bucket, key):
        buffer = io.BytesIO()
        s3.Object(bucket, key).download_fileobj(buffer)
        return buffer

    client = boto3.client('s3')
    s3 = boto3.resource('s3')
    objects_dict = client.list_objects_v2(Bucket=bucket_name, Prefix='my/folder/prefix')
    s3_keys = [item['Key'] for item in objects_dict['Contents'] if item['Key'].endswith('.parquet')]
    buffers = [download_s3_parquet_file(s3, bucket_name, key) for key in s3_keys]
    dfs = [pq.read_table(buffer).to_pandas() for buffer in buffers]
    df = pd.concat(dfs, ignore_index=True)

Is there a better way to achieve this? Maybe some kind of pandas connector using pyarrow? I would like to avoid using pyspark, but if there is no other solution, then I would take it.

+22
python pandas dataframe boto3 arrow pyarrow
5 answers

You should use the s3fs module as suggested by yjk21. However, calling ParquetDataset gives you a pyarrow.parquet.ParquetDataset object. To get a pandas DataFrame, you want to apply .read_pandas().to_pandas() to it:

    import pyarrow.parquet as pq
    import s3fs

    s3 = s3fs.S3FileSystem()
    pandas_dataframe = pq.ParquetDataset('s3://your-bucket/', filesystem=s3).read_pandas().to_pandas()
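If you only need a subset of the columns, read_pandas also accepts a columns argument so that only those columns end up in the DataFrame. A minimal sketch, assuming the same bucket as above and placeholder column names:

    import pyarrow.parquet as pq
    import s3fs

    s3 = s3fs.S3FileSystem()
    dataset = pq.ParquetDataset('s3://your-bucket/', filesystem=s3)
    # Only materialize the listed columns (placeholder names) instead of the whole table
    df = dataset.read_pandas(columns=['col_a', 'col_b']).to_pandas()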
+26

You can use s3fs from the dask project, which implements a filesystem interface for S3. Then you can pass it as the filesystem argument of ParquetDataset, like so:

    import pyarrow.parquet as pq
    import s3fs

    s3 = s3fs.S3FileSystem()
    dataset = pq.ParquetDataset('s3n://dsn/to/my/bucket', filesystem=s3)
+5

This can also be done using boto3, without using pyarrow directly:

    import io

    import boto3
    import pandas as pd

    # Download the parquet file from S3 into an in-memory buffer
    buffer = io.BytesIO()
    s3 = boto3.resource('s3')
    obj = s3.Object('bucket_name', 'key')
    obj.download_fileobj(buffer)

    df = pd.read_parquet(buffer)
    print(df.head())
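As a side note, recent pandas versions can read straight from an s3:// URL when s3fs is installed, which avoids the explicit buffer. A minimal sketch, with placeholder bucket and key names:

    import pandas as pd

    # Assumes s3fs is installed; 'bucket_name' and 'key' are placeholders
    df = pd.read_parquet('s3://bucket_name/key')
    print(df.head())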
+4

Thanks! Your question actually taught me a lot. This is how I do it now with pandas (0.21.1), which calls pyarrow under the hood, and boto3 (1.3.1).

    import io

    import boto3
    import pandas as pd

    # Read a single parquet file from S3
    def pd_read_s3_parquet(key, bucket, s3_client=None, **args):
        if s3_client is None:
            s3_client = boto3.client('s3')
        obj = s3_client.get_object(Bucket=bucket, Key=key)
        return pd.read_parquet(io.BytesIO(obj['Body'].read()), **args)

    # Read multiple parquet files from a folder on S3 generated by Spark
    def pd_read_s3_multiple_parquets(filepath, bucket, s3=None, s3_client=None,
                                     verbose=False, **args):
        if not filepath.endswith('/'):
            filepath = filepath + '/'  # Add '/' to the end
        if s3_client is None:
            s3_client = boto3.client('s3')
        if s3 is None:
            s3 = boto3.resource('s3')
        s3_keys = [item.key for item in s3.Bucket(bucket).objects.filter(Prefix=filepath)
                   if item.key.endswith('.parquet')]
        if not s3_keys:
            print('No parquet found in', bucket, filepath)
        elif verbose:
            print('Load parquets:')
            for p in s3_keys:
                print(p)
        dfs = [pd_read_s3_parquet(key, bucket=bucket, s3_client=s3_client, **args)
               for key in s3_keys]
        return pd.concat(dfs, ignore_index=True)

Then you can read multiple parquet files under a folder on S3 with:

 df = pd_read_s3_multiple_parquets('path/to/folder', 'my_bucket') 

(I think you can simplify this code a lot.)
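One possible simplification, assuming s3fs is installed and pandas can read s3:// URLs directly (bucket and prefix below are placeholders), is to glob the keys with s3fs and concatenate:

    import pandas as pd
    import s3fs

    fs = s3fs.S3FileSystem()
    # List all parquet parts under the (placeholder) folder; glob returns
    # keys without the 's3://' scheme, so it is added back for pandas
    keys = fs.glob('s3://my_bucket/path/to/folder/*.parquet')
    df = pd.concat(
        (pd.read_parquet('s3://' + key) for key in keys),
        ignore_index=True,
    )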

+4

Probably the easiest way to read parquet data from the cloud into data frames is to use dask.dataframe as follows:

    import dask.dataframe as dd

    df = dd.read_parquet('s3://bucket/path/to/data-*.parq')

dask.dataframe can read data from Google Cloud Storage, Amazon S3, the Hadoop file system, and more!
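If your AWS credentials are not picked up from the environment, dask should be able to forward them to s3fs via storage_options (the values below are placeholders), and .compute() turns the result into a plain pandas DataFrame:

    import dask.dataframe as dd

    # Placeholder credentials passed through to s3fs
    df = dd.read_parquet(
        's3://bucket/path/to/data-*.parq',
        storage_options={'key': 'MY_ACCESS_KEY', 'secret': 'MY_SECRET_KEY'},
    )
    pandas_df = df.compute()  # materialize as a pandas DataFrame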

+3
