Is there a way to write each row of my Spark DataFrame as a new item in a DynamoDB table (in PySpark)?
I used the code below with the boto3 library, but I wonder if there is another way that avoids the pandas conversion and the driver-side for loop:
sparkDF_dict = sparkDF.toPandas().to_dict('records')
for item in sparkDF_dict:
    table.put_item(Item=item)
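For context, one approach I was considering is pushing the writes out to the executors with foreachPartition, so nothing is collected on the driver. A rough sketch of what I mean (the table name and region are placeholders, and I believe DynamoDB rejects Python float values, so numeric columns may need converting to Decimal first):

import boto3

def write_partition(rows):
    # boto3 clients are not serializable, so each executor
    # must create its own connection inside this function
    table = boto3.resource('dynamodb', region_name='us-east-1').Table('my_table')  # placeholder name/region
    # batch_writer buffers put_item calls and flushes them
    # as DynamoDB BatchWriteItem requests (up to 25 items each)
    with table.batch_writer() as batch:
        for row in rows:
            batch.put_item(Item=row.asDict())

# Run the writes in parallel on the executors instead of the driver
sparkDF.foreachPartition(write_partition)

Would something like this be the recommended pattern, or is there a more idiomatic connector for Spark-to-DynamoDB writes?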