A bit late, but I found this while I was searching, and it might help someone else...
You can also unpack a list of paths into spark.read.parquet():
paths = ['foo', 'bar']
df = spark.read.parquet(*paths)
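If you're not familiar with the * operator, here's a minimal sketch (plain Python, no Spark needed) of how it expands a list into separate positional arguments, the same way it does for spark.read.parquet(). The read_parquet function below is a stand-in, not Spark's API:

```python
def read_parquet(*paths):
    # Stand-in for a variadic reader like spark.read.parquet():
    # each path arrives as a separate positional argument.
    return list(paths)

paths = ['foo', 'bar']
# *paths unpacks the list, so this is equivalent to read_parquet('foo', 'bar')
print(read_parquet(*paths))
```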
This is useful if you want to pass multiple glob patterns to the path argument:
basePath = 's3://bucket/'
paths = ['s3://bucket/partition_value1=*/partition_value2=2017-04-*',
         's3://bucket/partition_value1=*/partition_value2=2017-05-*']
df = spark.read.option("basePath", basePath).parquet(*paths)
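To see what the wildcards in those paths select, here's a rough sketch using Python's fnmatch as an analogy (Spark does its own glob expansion against the filesystem, this just illustrates the matching; the candidate directory names are made up):

```python
from fnmatch import fnmatch

# Hypothetical partition directories under s3://bucket/
candidates = [
    'partition_value1=a/partition_value2=2017-04-01',
    'partition_value1=b/partition_value2=2017-04-15',
    'partition_value1=a/partition_value2=2017-05-01',
]

# Same shape as the glob passed to spark.read.parquet() above
pattern = 'partition_value1=*/partition_value2=2017-04-*'

# Only the April partitions match; the May one is filtered out
matches = [c for c in candidates if fnmatch(c, pattern)]
print(matches)
```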
This is handy because you don't have to enumerate every file under the basePath, and you still get partition inference.