I have a hacky way to achieve this using boto3 (1.4.4), pyarrow (0.4.1) and pandas (0.20.3).
Firstly, I can read one parquet file locally as follows:
```python
import pyarrow.parquet as pq

path = 'parquet/part-r-00000-1e638be4-e31f-498a-a359-47d017a0059c.gz.parquet'
table = pq.read_table(path)
df = table.to_pandas()
```
I can also read a directory of parquet files locally like this:
```python
import pyarrow.parquet as pq

dataset = pq.ParquetDataset('parquet/')
table = dataset.read()
df = table.to_pandas()
```
Both work like a charm. Now I want to get the same with the files stored in the S3 bucket. I was hoping something like this would work:
```python
dataset = pq.ParquetDataset('s3n://dsn/to/my/bucket')
```
But that does not work:
```
OSError: Passed non-file path: s3n://dsn/to/my/bucket
```
After reading the pyarrow documentation, this does not appear to be possible at the moment. So I came up with the following solution.
Reading a single file from S3 and receiving a pandas frame:
```python
import io
import boto3
import pyarrow.parquet as pq

buffer = io.BytesIO()
s3 = boto3.resource('s3')
s3_object = s3.Object('bucket-name', 'key/to/parquet/file.gz.parquet')
s3_object.download_fileobj(buffer)
table = pq.read_table(buffer)
df = table.to_pandas()
```
And here is my hacky, not very optimized solution for creating a pandas data frame from the S3 folder path:
```python
import io
import boto3
import pandas as pd
import pyarrow.parquet as pq

bucket_name = 'bucket-name'

def download_s3_parquet_file(s3, bucket, key):
    buffer = io.BytesIO()
    s3.Object(bucket, key).download_fileobj(buffer)
    return buffer

client = boto3.client('s3')
s3 = boto3.resource('s3')
objects_dict = client.list_objects_v2(Bucket=bucket_name, Prefix='my/folder/prefix')
s3_keys = [item['Key'] for item in objects_dict['Contents'] if item['Key'].endswith('.parquet')]
buffers = [download_s3_parquet_file(s3, bucket_name, key) for key in s3_keys]
dfs = [pq.read_table(buffer).to_pandas() for buffer in buffers]
df = pd.concat(dfs, ignore_index=True)
```
Is there a better way to achieve this? Maybe some kind of pandas connector using pyarrow? I would like to avoid using pyspark, but if there is no other solution, then I would take it.
python pandas dataframe boto3 arrow pyarrow
Diego Mora Cespedes