Failed to get data from generated SparkR DataFrame

I have a simple SparkR program, which consists of creating a SparkR DataFrame and extracting/collecting data from it.

 Sys.setenv(HADOOP_CONF_DIR = "/etc/hadoop/conf.cloudera.yarn")
 Sys.setenv(SPARK_HOME = "/home/user/Downloads/spark-1.6.1-bin-hadoop2.6")
 .libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
 library(SparkR)

 sc <- sparkR.init(master = "yarn-client",
                   sparkEnvir = list(spark.shuffle.service.enabled = TRUE,
                                     spark.dynamicAllocation.enabled = TRUE,
                                     spark.dynamicAllocation.initialExecutors = "40"))
 hiveContext <- sparkRHive.init(sc)

 n = 1000
 x = data.frame(id = 1:n, val = rnorm(n))
 xs <- createDataFrame(hiveContext, x)
 xs
 head(xs)
 collect(xs)

I can create the DataFrame and print its schema successfully, but any operation that actually extracts data (such as head or collect) throws the error below.

16/07/25 16:33:59 WARN TaskSetManager: Lost task 0.3 in stage 17.0 (TID 86, wlos06.nrm.minn.seagate.com): java.net.SocketTimeoutException: Accept timed out
    at java.net.PlainSocketImpl.socketAccept(Native Method)
    at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
    at java.net.ServerSocket.implAccept(ServerSocket.java:530)
    at java.net.ServerSocket.accept(ServerSocket.java:498)
    at org.apache.spark.api.r.RRDD$.createRWorker(RRDD.scala:432)
    at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

16/07/25 16:33:59 ERROR TaskSetManager: Task 0 in stage 17.0 failed 4 times; aborting job
16/07/25 16:33:59 ERROR RBackendHandler: dfToCols on org.apache.spark.sql.api.r.SQLUtils failed
Error in invokeJava(isStatic = TRUE, className, methodName, ...) :
  org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 17.0 failed 4 times, most recent failure: Lost task 0.3 in stage 17.0 (TID 86, wlos06.nrm.minn.seagate.com): java.net.SocketTimeoutException: Accept timed out
    at java.net.PlainSocketImpl.socketAccept(Native Method)
    at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
    at java.net.ServerSocket.implAccept(ServerSocket.java:530)
    at java.net.ServerSocket.accept(ServerSocket.java:498)
    at org.apache.spark.api.r.RRDD$.createRWorker(RRDD.scala:432)
    at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPar

If I run the same code from the sparkR shell, started as shown below, it works fine.

 ~/Downloads/spark-1.6.1-bin-hadoop2.6/bin/sparkR --master yarn-client 

But when I run it from plain R with sparkR.init(master = "yarn-client"), it throws the error above.

Can anyone help resolve these errors?

1 answer

Adding this line made the difference:

 Sys.setenv("SPARKR_SUBMIT_ARGS"="--master yarn-client sparkr-shell") 

Here is the complete code:

 Sys.setenv(HADOOP_CONF_DIR = "/etc/hadoop/conf.cloudera.yarn")
 Sys.setenv(SPARK_HOME = "/home/user/Downloads/spark-1.6.1-bin-hadoop2.6")
 .libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
 library(SparkR)

 Sys.setenv("SPARKR_SUBMIT_ARGS" = "--master yarn-client sparkr-shell")

 sc <- sparkR.init(sparkEnvir = list(spark.shuffle.service.enabled = TRUE,
                                     spark.dynamicAllocation.enabled = TRUE,
                                     spark.dynamicAllocation.initialExecutors = "40"))
 hiveContext <- sparkRHive.init(sc)

 n = 1000
 x = data.frame(id = 1:n, val = rnorm(n))
 xs <- createDataFrame(hiveContext, x)
 xs
 head(xs)
 collect(xs)
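The reason this helps appears to be that, when the session is started from plain R rather than bin/sparkR, sparkR.init() launches the JVM backend through spark-submit using SPARKR_SUBMIT_ARGS, so the YARN master needs to be passed there (and in Spark 1.x the string must end with "sparkr-shell"). Below is a minimal sketch of the same idea, assuming the Spark 1.6.1 layout above; the --num-executors flag and the paste() construction are illustrative assumptions, not part of the original answer:

 # Hedged sketch: assemble SPARKR_SUBMIT_ARGS from individual spark-submit
 # flags. In Spark 1.x the string has to end with "sparkr-shell".
 # "--num-executors 40" is only an example value, adjust it to your cluster.
 Sys.setenv("SPARKR_SUBMIT_ARGS" = paste(
   "--master yarn-client",
   "--num-executors 40",
   "sparkr-shell"
 ))

 # With the master supplied through the environment variable,
 # sparkR.init() no longer needs an explicit master argument.
 sc <- sparkR.init()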
