I need to save a DataFrame created in SparkR into a Hive table.
# initialized the contexts
sc <- sparkR.init()
sqlContext <- sparkRSQL.init(sc)
hiveContext <- sparkRHive.init(sc)

# created a DataFrame df_test
df_test <- createDataFrame(sqlContext, data.frame(mon = c(1, 2, 3, 4, 5), year = c(2011, 2012, 2013, 2014, 2015)))

# used saveAsTable to save the DataFrame "df_test" as a Hive table named "table_hive"
saveAsTable(df_test, "table_hive")
16/08/24 23:08:36 ERROR RBackendHandler: saveAsTable on 13 failed
Error in invokeJava(isStatic = FALSE, objId$id, methodName, ...) :
  java.lang.RuntimeException: Tables created with SQLContext must be TEMPORARY. Use a HiveContext instead.
    at scala.sys.package$.error(package.scala:27)
    at org.apache.spark.sql.execution.SparkStrategies$DDLStrategy$.apply(SparkStrategies.scala:392)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
    at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:47)
    at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:45)
    at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:52)
    at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:52)
    at org.apache.spark.sql.execution
The saveAsTable call throws the error above. Please help.
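From the message, it sounds like the DataFrame has to go through the HiveContext rather than the plain SQLContext for saveAsTable to create a persistent (non-TEMPORARY) table. A minimal sketch of the change I understand it to be asking for (assuming Spark 1.x SparkR, where sparkRHive.init returns a HiveContext) is below; is this the right approach?

library(SparkR)

sc <- sparkR.init()
hiveContext <- sparkRHive.init(sc)

# create the DataFrame from the HiveContext, not a plain SQLContext,
# so that saveAsTable is allowed to create a persistent Hive table
df_test <- createDataFrame(hiveContext, data.frame(mon = c(1, 2, 3, 4, 5), year = c(2011, 2012, 2013, 2014, 2015)))

saveAsTable(df_test, "table_hive")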