Spark - creating a nested DataFrame

I am getting started with PySpark and am having trouble creating DataFrames with nested objects.

This is my example.

I have users.

    $ cat user.json
    {"id":1,"name":"UserA"}
    {"id":2,"name":"UserB"}

Users have orders.

    $ cat order.json
    {"id":1,"price":202.30,"userid":1}
    {"id":2,"price":343.99,"userid":1}
    {"id":3,"price":399.99,"userid":2}

And I would like to join them to get a structure where the orders are nested as an array inside each user.

    $ cat join.json
    {"id":1,"name":"UserA","orders":[{"id":1,"price":202.30,"userid":1},{"id":2,"price":343.99,"userid":1}]}
    {"id":2,"name":"UserB","orders":[{"id":3,"price":399.99,"userid":2}]}

How can I do this? Is there some kind of nested join or something similar?

    >>> user = sqlContext.read.json("user.json")
    >>> user.printSchema()
    root
     |-- id: long (nullable = true)
     |-- name: string (nullable = true)

    >>> order = sqlContext.read.json("order.json")
    >>> order.printSchema()
    root
     |-- id: long (nullable = true)
     |-- price: double (nullable = true)
     |-- userid: long (nullable = true)

    >>> joined = sqlContext.read.json("join.json")
    >>> joined.printSchema()
    root
     |-- id: long (nullable = true)
     |-- name: string (nullable = true)
     |-- orders: array (nullable = true)
     |    |-- element: struct (containsNull = true)
     |    |    |-- id: long (nullable = true)
     |    |    |-- price: double (nullable = true)
     |    |    |-- userid: long (nullable = true)

EDIT: I know it is possible to do this using join and foldByKey, but is there an easier way?
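For context, a rough sketch of the RDD route I mean (my own illustration, not tested here; it uses leftOuterJoin and groupByKey rather than foldByKey for brevity):

    # Key users by id and orders by userid, group the orders per user,
    # then left-join so users without orders are kept.
    user_kv = user.rdd.map(lambda r: (r.id, r))
    order_kv = order.rdd.map(lambda r: (r.userid, r.asDict()))

    nested = (user_kv
              .leftOuterJoin(order_kv.groupByKey().mapValues(list))
              .map(lambda kv: {"id": kv[1][0].id,
                               "name": kv[1][0].name,
                               "orders": kv[1][1] or []}))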

EDIT 2: I am using @zero323's solution:

    def joinTable(tableLeft, tableRight, columnLeft, columnRight,
                  columnNested, joinType="left_outer"):
        # Group the right-hand table by its join key; createDataFrame
        # infers the columns _1 (the key) and _2 (the grouped rows).
        tmpTable = sqlCtx.createDataFrame(
            tableRight.rdd.groupBy(lambda r: r.asDict()[columnRight]))
        tmpTable = tmpTable.select(tmpTable._1.alias("joinColumn"),
                                   tmpTable._2.data.alias(columnNested))
        return tableLeft.join(tmpTable,
                              tableLeft[columnLeft] == tmpTable["joinColumn"],
                              joinType).drop("joinColumn")

I add order lines as a second level of nesting:

    >>> lines = sqlContext.read.json(path + "lines.json")
    >>> lines.printSchema()
    root
     |-- id: long (nullable = true)
     |-- orderid: long (nullable = true)
     |-- product: string (nullable = true)

    >>> orders = joinTable(order, lines, "id", "orderid", "lines")
    >>> joined = joinTable(user, orders, "id", "userid", "orders")
    >>> joined.printSchema()
    root
     |-- id: long (nullable = true)
     |-- name: string (nullable = true)
     |-- orders: array (nullable = true)
     |    |-- element: struct (containsNull = true)
     |    |    |-- id: long (nullable = true)
     |    |    |-- price: double (nullable = true)
     |    |    |-- userid: long (nullable = true)
     |    |    |-- lines: array (nullable = true)
     |    |    |    |-- element: struct (containsNull = true)
     |    |    |    |    |-- _1: long (nullable = true)
     |    |    |    |    |-- _2: long (nullable = true)
     |    |    |    |    |-- _3: string (nullable = true)

After this, the column names from the rows will be lost. Any ideas?

EDIT 3: I tried to manually specify the schema.

    from pyspark.sql.types import *

    # Build the schema by hand: _1 is the grouping key,
    # _2 is the array of grouped line rows.
    fields = []
    fields.append(StructField("_1", LongType(), True))
    inner = ArrayType(lines.schema)
    fields.append(StructField("_2", inner))
    new_schema = StructType(fields)
    print new_schema

    grouped = lines.rdd.groupBy(lambda r: r.orderid)
    grouped = grouped.map(lambda x: (x[0], list(x[1])))
    g = sqlCtx.createDataFrame(grouped, new_schema)

Error:

 TypeError: StructType(List(StructField(id,LongType,true),StructField(orderid,LongType,true),StructField(product,StringType,true))) can not accept object in type <class 'pyspark.sql.types.Row'> 
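A possible workaround (my assumption, not confirmed anywhere in this thread) is that createDataFrame with an explicit schema only accepts tuples or lists for struct fields in this Spark version, so each Row would have to be converted first:

    # Hedged sketch: turn each grouped Row into a plain tuple so the
    # manually specified new_schema can accept it.
    grouped = lines.rdd.groupBy(lambda r: r.orderid)
    grouped = grouped.map(lambda x: (x[0], [tuple(r) for r in x[1]]))
    g = sqlCtx.createDataFrame(grouped, new_schema)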
3 answers

This will only work in Spark 2.0 or later.

First we need a couple of imports:

 from pyspark.sql.functions import struct, collect_list 

The rest is a simple aggregation and join:

    orders = spark.read.json("/path/to/order.json")
    users = spark.read.json("/path/to/user.json")

    combined = users.join(
        orders
            .groupBy("userId")
            .agg(collect_list(struct(*orders.columns)).alias("orders"))
            .withColumnRenamed("userId", "id"),
        ["id"])

For the example data, the result is:

 combined.show(2, False) 
    +---+-----+---------------------------+
    |id |name |orders                     |
    +---+-----+---------------------------+
    |1  |UserA|[[1,202.3,1], [2,343.99,1]]|
    |2  |UserB|[[3,399.99,2]]             |
    +---+-----+---------------------------+

with the schema:

 combined.printSchema() 
    root
     |-- id: long (nullable = true)
     |-- name: string (nullable = true)
     |-- orders: array (nullable = true)
     |    |-- element: struct (containsNull = true)
     |    |    |-- id: long (nullable = true)
     |    |    |-- price: double (nullable = true)
     |    |    |-- userid: long (nullable = true)

and the JSON representation:

    for x in combined.toJSON().collect():
        print(x)
 {"id":1,"name":"UserA","orders":[{"id":1,"price":202.3,"userid":1},{"id":2,"price":343.99,"userid":1}]} {"id":2,"name":"UserB","orders":[{"id":3,"price":399.99,"userid":2}]} 

To flatten a nested DataFrame back into a regular (flat) one, use:

    dff = df.select("column with multiple columns.*").toPandas()
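Applied to the combined DataFrame from the accepted answer, that might look like the following sketch (my illustration; the alias avoids a duplicate id column, and the names are assumptions):

    from pyspark.sql.functions import col, explode

    # Illustrative only: explode the nested array into one row per order,
    # then use "order.*" to lift the struct fields to top-level columns.
    flat = (combined
            .withColumn("order", explode("orders"))
            .select(col("id").alias("user_id"), "name", "order.*"))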


First you need to use userid as the join key for the second DataFrame:

 user.join(order, user.id == order.userid) 

Then you can use a map step to convert the resulting records into the desired format.
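The answer leaves that map step unspecified; a minimal sketch (my own, with illustrative aliases to disambiguate the two id columns) could look like this:

    # Sketch only: alias columns to avoid the duplicate "id", then group
    # the joined rows per user and collect their orders into a list.
    rows = (user.join(order, user.id == order.userid)
                .select(user.id.alias("uid"), user.name,
                        order.id.alias("oid"), order.price, order.userid)
                .rdd)

    nested = (rows.groupBy(lambda r: (r.uid, r.name))
                  .map(lambda kv: {"id": kv[0][0],
                                   "name": kv[0][1],
                                   "orders": [{"id": r.oid,
                                               "price": r.price,
                                               "userid": r.userid}
                                              for r in kv[1]]}))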

