I have a Spark cluster running at
spark://host1:7077, spark://host2:7077, spark://host3:7077
and I connect to it with /bin/spark-shell --master spark://host1:7077. When I try to read a file with
val textFile = sc.textFile("README.md")
textFile.count()
the shell prints the warning:
WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
When I check the web UI at host1:8080, it shows:
Workers: 0
Cores: 0 Total, 0 Used
Memory: 0.0 B Total, 0.0 B Used
Applications: 0 Running, 2 Completed
Drivers: 0 Running, 0 Completed
Status: ALIVE
My question is: how do I specify cores and memory when working against the cluster with spark-shell? Or do I need to package my Scala code into a .jar file and submit the job to Spark instead?
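For reference, this is roughly the kind of invocation I have in mind (just a sketch; I am assuming the --executor-memory and --total-executor-cores options accepted by spark-submit also apply to spark-shell, and the values here are placeholders):

/bin/spark-shell --master spark://host1:7077 --executor-memory 2g --total-executor-cores 4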
Thanks.