For the same reason you cannot use
sqlContext.createDataFrame((1 to 10).map(x => Map(x -> 0)))
If you look at the source of org.apache.spark.sql.SQLContext, you will find two different implementations of the createDataFrame method:
def createDataFrame[A <: Product : TypeTag](rdd: RDD[A]): DataFrame
and
def createDataFrame[A <: Product : TypeTag](data: Seq[A]): DataFrame
As you can see, both require A to be a subclass of Product. When you call toDF on an RDD[(Map[Int,Int], Int)], it works because Tuple2 really is a Product. Map[Int,Int] by itself is not, hence the error.
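To see the constraint in action, here is a minimal sketch (Spark 1.x API; it assumes an existing SparkContext sc and SQLContext sqlContext, with the implicits imported so that toDF is in scope):

import sqlContext.implicits._

// Compiles: Tuple2 is a Product, so a schema can be inferred
val ok = sc.parallelize(1 to 10).map(x => (Map(x -> 0), x)).toDF

// Does not compile: Map[Int,Int] is not a Product, so there is
// no implicit conversion supplying toDF for RDD[Map[Int,Int]]
// val bad = sc.parallelize(1 to 10).map(x => Map(x -> 0)).toDF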
You can get it working by wrapping the Map in a Tuple1:
sc.parallelize(1 to 10).map(x => Tuple1(Map(x -> 0))).toDF
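For illustration, you can also give the resulting column a name (the name "m" here is arbitrary); printSchema should then show a single map column, roughly:

val df = sc.parallelize(1 to 10).map(x => Tuple1(Map(x -> 0))).toDF("m")
df.printSchema()
// root
//  |-- m: map (nullable = true)
//  |    |-- key: integer
//  |    |-- value: integer (valueContainsNull = false)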