I am studying Hadoop and Apache Spark, and I want to know how to get the result of an Apache Spark job back to a web page.
The following is a simple PHP script that runs the Apache Spark job from the web, just to test it:
<?php
// Run the Spark job and print whatever spark-submit writes to stdout
echo shell_exec("spark-submit --class stu.ac.TestProject.App --master spark://localhost:7077 /TestProject-0.0.1-SNAPSHOT.jar");
?>
And here is the Java code for the Spark job (a Monte Carlo estimate of Pi):
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.Function2;

public class App
{
    public static void main(String[] args)
    {
        SparkConf sparkConf = new SparkConf().setAppName("JavaSparkPi");
        sparkConf.setMaster("spark://localhost:7077");
        JavaSparkContext jsc = new JavaSparkContext(sparkConf);

        int slices = (args.length == 1) ? Integer.parseInt(args[0]) : 2;
        int n = 100000 * slices;
        List<Integer> l = new ArrayList<Integer>(n);
        for (int i = 0; i < n; i++) {
            l.add(i);
        }

        JavaRDD<Integer> dataSet = jsc.parallelize(l, slices);

        // For each element, throw a random point into the unit square and
        // map it to 1 if it lands inside the unit circle, 0 otherwise.
        JavaRDD<Integer> countRDD = dataSet.map(new Function<Integer, Integer>() {
            public Integer call(Integer arg0) throws Exception {
                double x = Math.random() * 2 - 1;
                double y = Math.random() * 2 - 1;
                return (x * x + y * y < 1) ? 1 : 0;
            }
        });

        // Sum the hits and estimate Pi from the hit ratio.
        int count = countRDD.reduce(new Function2<Integer, Integer, Integer>() {
            public Integer call(Integer arg0, Integer arg1) throws Exception {
                return arg0 + arg1;
            }
        });

        System.out.println("Pi is roughly " + 4.0 * count / n);
        jsc.stop();
    }
}
I want to capture only the standard output, but after running the PHP code I get an empty result. The Java code is built as a Maven project, and I have already checked that the job runs correctly when I submit it from the command line.
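One variant I am considering (just a sketch, I am not sure it is the right approach) is to redirect stderr into stdout in the PHP call, so that everything spark-submit prints (Spark's own log messages as well as the driver's System.out) becomes visible in the browser:

<?php
// Sketch only: merge stderr into stdout (2>&1) so that Spark's log output
// and the driver's System.out are both returned by shell_exec().
$output = shell_exec(
    "spark-submit --class stu.ac.TestProject.App"
    . " --master spark://localhost:7077"
    . " /TestProject-0.0.1-SNAPSHOT.jar 2>&1"
);
echo "<pre>" . htmlspecialchars((string) $output) . "</pre>";
?>

But even if that shows some output, I am not sure it is the correct way to get the job's result back to a web page.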
How can I solve it?
Thanks in advance for your reply, and sorry for my poor English. If anything in my question is unclear, please leave a comment.